
The tar pit of Red Hat overcomplexity

The differences between RHEL 6 and RHEL 7 are no smaller than those between SUSE and RHEL, which essentially doubles the workload of sysadmins, who now need to administer an "extra" flavor of Linux/Unix. While the situation improved in December 2020 with the retirement of RHEL6, the extra complexity of RHEL7 leads to mental overload and loss of productivity. That's why the most experienced sysadmins have ambivalent feelings towards RHEL 7 and 8. The joke is that "The D in Systemd stands for 'Dammmmit!'"

Version 2.10, Mar 16, 2021

Without that discipline, too often, software teams get lost in what are known in the field as "boil-the-ocean" projects -- vast schemes to improve everything at once. That can be inspiring, but in the end we might prefer that they hunker down and make incremental improvements to rescue us from bugs and viruses and make our computers easier to use.

Idealistic software developers love to dream about world-changing innovations; meanwhile, we wait and wait for all the potholes to be fixed.

Frederick Brooks, Jr.
The Mythical Man-Month, 1975

Acknowledgment. I would like to thank the sysadmins who sent me feedback on version 1.0 of this article. It definitely helped to improve it. Mistakes and errors in the current version are my own.


The key problem with RHEL7 is its adoption of systemd, along with multiple significant changes in standard utilities and daemons. This looks like the Second-system effect: the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence. The troubleshooting skills needed are different from those for RHEL6. It also obsoletes "pre-RHEL 7" books, including most good books from O'Reilly and other publishers.

The distance between RHEL6 and RHEL7 is such that we can view RHEL7 as a new flavor of Linux introduced into the enterprise environment. This increases the workload of a sysadmin who previously supported RHEL 5/6 and SUSE Enterprise 12 by 30% or more. The amount of headache and the number of hours you need to spend on the job increased, but salaries not so much. The staff of IT departments continues to shrink, despite the growth of overcomplexity.

Passing the RHCSA exam is now also more difficult than for RHEL 6 and requires at least a year of hands-on experience along with self-study, even for those who were certified on RHEL6. The "Red Hat engineer" certification became useless, as the complexity is such that no single person can master the whole system. Due to growing overcomplexity, tech support for RHEL7 mutated into a Red Hat Knowledgebase indexing service. Most problems now require extensive research to solve: hours of reading articles in the Red Hat Knowledgebase, documentation, and posts on Stack Overflow and similar sites. In this sense the "Red Hat engineer" certification should probably be renamed "Red Hat detective", or something like that.

Networking in Red Hat 7 is very brittle as soon as you go into complex stuff such as bonding, special modes of running NFS and the like. The usefulness of NetworkManager for stationary rack-mounted servers connected by cables is open to question. It is just another layer of overcomplexity imposed on you without any real need in a server environment. And it really complicates bonding of interfaces. Sometimes you can't make the configuration work without the NM_CONTROLLED="no" directive in the interface file.
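As an illustration, here is a minimal sketch of such an interface file for a bond member, taking the interface out of NetworkManager's hands; the device names (ens2f0, bond0) are hypothetical, and on a real server the rest of the bond configuration lives in the ifcfg-bond0 file:

```ini
# /etc/sysconfig/network-scripts/ifcfg-ens2f0  (sketch; device names are hypothetical)
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
NM_CONTROLLED="no"   # let the legacy network service, not NetworkManager, manage it
```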

This behavior of Red Hat, which amounts to the intentional destruction of the existing server ecosystem by misguided "desktop Linux" enthusiasts within the Red Hat developer community, and by the greed of Red Hat brass, requires some adaptation. The licensing structure of large enterprises regarding Red Hat should probably be changed: tech support should probably be diversified using your hardware vendors such as Dell and HP. If you do not use Oracle Linux, it might make sense to add it to the mix, especially on the low end (self-support) and for servers running databases, where you can get higher quality support from Oracle than from Red Hat. Oracle provides CentOS-style licensing for its distribution and is a natural replacement for CentOS in the enterprise. A conversion script is available and works.

Tech support became increasingly self-support anyway: in-house research over the Red Hat knowledgebase and forums such as Stack Overflow. To save money, it makes sense to switch to wider use of self-support licenses (or even CentOS, if it works on your type of servers) on low-end, non-critical servers too, which allows buying Premium licenses for the critical servers for the same, or even less, money than a uniform licensing structure. Four-socket servers should now be avoided due to higher licensing costs (and with the ability to get 32 cores or more in a two-socket server they are not very critical anyway, as many applications are single-threaded and few can utilize more than 16 cores effectively). Docker should be used more widely instead of licensing virtual instances from Red Hat (the limitation on the number of virtual instances is a typical IBM-style way of extracting money from customers). With the demise of CentOS, only Oracle Linux can fill the gap created by IBM for early Red Hat 8 adopters, which makes wider adoption of Oracle Linux more attractive than before.


Imagine a language in which both grammar and vocabulary change each decade. Add to this that the syntax is complex, the vocabulary is huge, and each verb has a couple of dozen suffixes that often change its meaning, sometimes drastically. Like in Slavic languages ;-).

This is the "complexity trap" situation that we have in enterprise Linux. In this sense, troubles with RHEL7 are just the tip of the iceberg. You can learn some subset when you closely work with a particular subsystem (package installation, networking, nfsd, httpd, sshd, Puppet/Ansible, Nagios, and so on and so forth), only to forget vital details after a couple of quarters, or a year. I have a feeling that RHEL is spinning out of control at an increasingly rapid pace.

Many older sysadmins now have the sense that they can no longer understand the OS they need to manage and have been taken hostage by the desktop-enthusiast faction within Red Hat. They now need to work in a kind of hostile environment. Many complain that they feel overwhelmed by this avalanche of changes, and can't troubleshoot problems that they used to be able to troubleshoot before. In RHEL7 it is easier for an otherwise competent sysadmin to miss a step, or forget something important in the stress and pressure of the moment, even for tasks that he knows perfectly well. That means that SNAFUs become more common. The overcomplexity of RHEL7, of course, increases the importance of checklists and a personal knowledgebase (most often a set of files, or a simple website, or a wiki). Keeping all previous images, and restoration from images, became a vital troubleshooting method.

But creation of your own knowledgebase and creation of images (for example via Relax-and-Recover) before each major change requires time that is difficult to find, and thus is often skipped: it is not uncommon to spend more time on creating documentation for a problem than on its resolution. Here such sites as Softpanorama can help, but the "document everything" mantra definitely increases the overload of sysadmins. At the same time this step simply can't be skipped, as the situation has reached the stage of inability to remember the set of information necessary for productive work. So keeping your own knowledgebase is a survival tactic.

Now even "the basic set" of commands and utilities in Red Hat is way too large for mere mortals, especially for sysadmins who need to maintain one additional flavor of Linux (say, SUSE Enterprise). Any uncommon task becomes a research project: you need to consult man pages, as well as Web pages on the topic such as documents on the Red Hat portal, Stack Overflow discussions, and other sites relevant to the problem at hand. Often you can't repeat a task that you performed a quarter or two ago without consulting your notes and documentation.

Even worse is the "primitivization" of your style of work that results from overcomplexity. Sometimes you discover an interesting and/or more productive way to perform an important task. If you do not perform this task frequently and do not write it up, it will soon be displaced in your memory by other things: you will completely forget about it and degrade to the "basic" scheme of doing things. This is very typical of any environment that is excessively complex.

That's probably why so few enterprise sysadmins these days have their personal .profile and .bashrc files and often simply use the defaults. Similarly, the way Ansible/Puppet/Chef are used is often a sad joke: they are used at such a primitive level that execution of scripts via NFS would do the same thing with less fuss. The same is true for Nagios, which in many cases is "primitivized" to a glorified ping. BTW Ansible, despite all the hype, is just a reincarnation of IBM JCL on a new, parallelized level. As such it is OK for simple "waterfall" tasks, but it doesn't work well for anything more complex. BTW the Unix shell and REXX wiped the floor with JCL, and this is not accidental: they are better ways of doing the same set of tasks.

RHEL7 increased the level of complexity to the point where it became painful to work with, and especially to troubleshoot complex problems. That's probably why the quality of Red Hat support deteriorated so much (it essentially became a referencing service to Red Hat advisories): they are overwhelmed, and can no longer concentrate on a single ticket in the river of RHEL7-related tickets they receive daily. In many cases it is clear from their answers that they did not even try to understand the problem you face and just searched the database for keywords.

The adoption of hugely complex, less secure (with a new security exploit approximately once a quarter), and unreliable systemd instead of init looks like Second-system effect: the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence.


If you patch your server periodically, you will notice that the lion's share of updates now includes updates to systemd.

At the same time other changes were also significant enough to classify RHEL 7 as a new flavor of Linux, distinct from the RHEL4-RHEL6 family: along with systemd, multiple new utilities and daemons were introduced; networking commands and operations are different; and many methods of doing common tasks changed, often dramatically. And old methods of administering the system simply do not work, especially in the area of troubleshooting.

RHEL7 should be viewed as a new flavor of Linux, not as a version upgrade. As such, the productivity of sysadmins juggling multiple flavors of Linux drops. The value of many (or most) high quality books about Linux was destroyed by this distribution, and there is no joy in seeing this vandalism.

It might well be that the resulting complexity crossed some "red line" (as in "the straw that broke the camel's back") and instantly started to be visible and annoying to vastly more people than before. In other words, with the addition of systemd, quantity turned into quality. Look, for example, at the recovery of a forgotten root password in RHEL7 vs. RHEL6. And you feel somewhat like a prisoner of somebody else's decisions: you have essentially no control.
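For comparison, here is a sketch of the RHEL7 procedure, assuming the default GRUB2/SELinux setup. In RHEL6 it was one step: append "single" to the kernel line and run passwd. In RHEL7 these commands are typed one by one in the emergency shell you get after appending rd.break to the linux16 line in GRUB (they are wrapped in a function here only so the sketch can be checked without a live rescue session; chroot starts a new shell, so the lines after it are typed inside that shell):

```shell
# Sketch of RHEL7 root password recovery after booting with rd.break
# appended to the linux16 line in GRUB (interactive steps, not a script).
rhel7_root_password_reset() {
    mount -o remount,rw /sysroot   # initramfs mounts the real root read-only on /sysroot
    chroot /sysroot                # switch into the real root filesystem
    passwd root                    # set the new password (typed inside the chroot)
    touch /.autorelabel            # force SELinux relabel on next boot, or login fails
    exit                           # leave the chroot, then exit again to resume boot
}
```

The /.autorelabel step is the part people forget: without it SELinux contexts on /etc/shadow are wrong and the new password does not work.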

RHEL7 might well have created the situation in which overcomplexity started to be visible and annoying to vastly more sysadmins than before (as in "the straw that broke the camel's back"). In other words, with the addition of systemd, quantity turned into quality.

Still, it would be an exaggeration to say that all sysadmins (or RHEL 7 users in general) resent systemd. Many don't really care. Not much changed for "click-click-click" sysadmins who limit themselves to GUI tools, and for other categories of users who use RHEL7 "as is", such as most Linux laptop users. But even for laptop users (who mostly use Ubuntu, not RHEL), there are doubts that they benefit from faster boot (systemd is at least 10 times larger than classic init, and SSD drives accelerated the loading of daemons dramatically, making the delayed-start approach adopted by systemd obsolete).

It is also easier for entry-level sysadmins who are coming to enterprises right now and for whom RHEL7 is their first system. Most used Ubuntu at home and at university and have never used a system without systemd. Also, they use a much smaller subset of services and utilities, and generally are not involved with more complex problems such as hung boots, botched networking, and crashed daemons. But enterprise sysadmins, who often simultaneously serve as "level 3 support" specialists, were hit hard. Now they understand the internals even less.

Introduction of systemd signifies a new stage of "Microsoftization" of Red Hat

Introduction of systemd signifies a new stage of "Microsoftization" of Red Hat. This daemon replaces init in a way I find problematic: the key idea is to replace the language in which init scripts were written (which provided programmability and had its own flaws, which were still fixable) with a fixed set of all-singing, all-dancing parameters in so-called "unit files", removing programmability.
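For illustration, a minimal sketch of such a unit file (a hypothetical myapp.service; every behavior that an init script would express as code becomes a fixed parameter):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example daemon (sketch)
After=network.target            # ordering, formerly implicit in rc.d symlink numbers

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp  # the only "programmable" part that remains
Restart=on-failure

[Install]
WantedBy=multi-user.target      # rough equivalent of the old default runlevels
```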

Initially there were some segments of the Linux community that did not accept systemd. For example, several veteran Unix sysadmins forked Debian and created Devuan Linux, a distribution without systemd. That was such an obvious slap in Red Hat's face that they allocated more resources to prove that they can dictate their will. So, paradoxically, Devuan had a tremendously positive effect on systemd development, forcing Red Hat to double its efforts ;-)

In any case, after seven or so years of frantic "development" (which mostly means "debugging"), systemd in RHEL 7 got to some kind of semi-stable stage (although patches to systemd are still probably the most frequently issued patches in RHEL 7; each time you patch Red Hat, systemd is also patched). But the major bugs are always connected with weaknesses in the architecture, and they can't be removed. They need to be accepted as features.

One interesting innovation of systemd in the area of obscure bugs is "timing bugs". For example, a lot of people have observed with RHEL7 and CentOS7 that they can't install the OS over a slow link (over VPN). It hangs with strange messages like "Starting Terminate Plymouth Boot Screen." (Plymouth is a project from Fedora providing a flicker-free graphical boot process, introducing subtle errors as a free gift; why it needs to be present on a server is completely unclear.) Search for "installation hangs on plymouth" or "boot hangs on plymouth" for recent cases. I observed this behaviour over VPN with CentOS/RHEL 7.7; because I do not do such things often, at that point I had already forgotten how to use VFLASH and NFS/HTTP access to the full ISO in Dell DRAC (you can't remember all those relevant things, no matter how good a memory you have). I thought I would initiate the install, go to lunch, and return to the Anaconda timezone-selection screen. I was wrong ;-).

In my case the debugging console showed that systemd had entered an infinite loop. But it installed OK from local media using the same ISO. Also, parallel invocation of daemons is bad for debugging, and several debugging methods previously available are disabled in systemd because of that. Access to TTYs is now sporadic: CTRL+ALT+F2 and similar commands often do not work when your boot has problems. The situation is really bad if you are forced to work with DRAC or ILO via VPN, because the deficiencies of DRAC and ILO multiply the deficiencies of systemd.

There are also pretty obscure variations of this bug.

Three points about systemd

This fascinating story of personal and corporate ambition gone awry still waits for its researcher. When we think about big disasters like, for example, the Titanic, what matters is not the outcome (yes, the ship sank, lives were lost), or even the obvious cause (yes, the iceberg), but learning more about WHY. Why were the warnings ignored? Why was such a dangerous course chosen? One additional, and perhaps for our times the most important, question is: why were the lies that were told believed?

Currently customers are assured, as a ruse, that the difficulties with systemd (and RHEL7 in general) are just temporary, until the software can be improved and stabilized. While some progress has been made, that day might never come, due to the architectural flaws of this approach and the resulting increase in complexity, as well as the loss of flexibility, since programmability is now more limited.

If you run the command find /lib/systemd/system | wc -l on a "pristine" RHEL system (just after installation), you get over 300 unit files. Does this number raise questions about the level of complexity of systemd? Yes, it does. With more than three hundred files, it is reasonable to assume that some of them might have a hidden problem in the generation of the correct init logic on the fly.

You can think of systemd as a "universal" init script that is customized on the fly by the parameters supplied in unit files. Previously, part of this functionality was implemented as a PHP-style pseudo-language within the initial comment block of a regular bash script. While the implementation was very weak (it was never written as a specialized interpreter with a formal language definition, diagnostics and such), this approach was not bad at all in comparison with the extreme "everything is a parameter" approach taken by systemd, which eliminated bash from init-file space. And it might make sense to return to such a "mixed" approach in the future on a new level, as in a way systemd provides the foundation for such a language. Parameters used to deal with dependencies and such can be generated and converted into something less complex and more manageable.
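For reference, this is the kind of pseudo-language meant here: metadata embedded in the comment block of a SysV init script, read by chkconfig and the LSB tools rather than by bash itself (a generic sketch, not taken from any particular package):

```shell
#!/bin/bash
# chkconfig: 2345 20 80          <- runlevels, start priority, stop priority
# description: Example daemon
### BEGIN INIT INFO
# Provides:          myapp
# Required-Start:    $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO
```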

Systemd is an implementation of an interpreter of an ad hoc non-procedural language

Systemd is essentially an implementation of an interpreter of an implicit non-procedural language defined by those parameters, written by a person who has never written an interpreter for a programming language in his life, and never studied compiler and interpreter writing as a discipline. Of course, that shows.

It supports over a hundred various parameters, which serve as keywords of this ad hoc language, which we can call the "Poettering init language." You can verify the number of introduced keywords yourself, using a pipe like the following one:

cat /lib/systemd/system/[a-z]* | egrep -v "^#|^ |^\[" | cut -d '=' -f 1 | sort | uniq -c | sort -rn
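To see what this pipe does without a systemd machine at hand, here is a self-contained demo run against a synthetic unit file (the directory /tmp/unitdemo and its content are made up for illustration; on a real RHEL7 box you would point the same pipe at /lib/systemd/system):

```shell
# Build a throwaway directory with one fake unit file, then run the same
# keyword-frequency pipe against it.
mkdir -p /tmp/unitdemo
cat > /tmp/unitdemo/example.service <<'EOF'
[Unit]
Description=Fake unit for the demo
After=network.target
[Service]
ExecStart=/bin/true
EOF
# Strip comments, indented lines and section headers; count keyword frequency.
cat /tmp/unitdemo/[a-z]* | egrep -v "^#|^ |^\[" | cut -d '=' -f 1 | sort | uniq -c | sort -rn
```

Each keyword (Description, After, ExecStart) appears once, so each line of output carries a count of 1; on a real system the counts look like the table below.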
The key idea of systemd is to replace the language in which init scripts were written (which provided programmability and had its own flaws, which were still fixable) with a fixed set of all-singing, all-dancing parameters grouped in so-called "unit files", plus an interpreter of this ad hoc non-procedural language written in C. This is a flawed approach from an architectural standpoint, and no amount of systemd fixes can change that.

Yes, this is "yet another language" (as if sysadmins did not have enough of them already), and a badly designed one, judging from the total number of keywords/parameters introduced. The most popular are the following 35 (in decreasing order of frequency):

    234 Description
    215 Documentation
    167 After
    145 ExecStart
    125 Type
    124 DefaultDependencies
    105 Before
     78 Conflicts
     68 WantedBy
     56 RemainAfterExit
     55 ConditionPathExists
     47 Requires
     37 Wants
     28 KillMode
     28 Environment
     22 ConditionKernelCommandLine
     21 StandardOutput
     21 AllowIsolate
     19 EnvironmentFile
     19 BusName
     18 ConditionDirectoryNotEmpty
     17 TimeoutSec
     17 StandardInput
     17 Restart
     14 CapabilityBoundingSet
     13 WatchdogSec
     13 StandardError
     13 RefuseManualStart
     13 PrivateTmp
     13 ExecStop
     12 ProtectSystem
     12 ProtectHome
     12 ConditionPathIsReadWrite
     12 Alias
     10 RestartSec

Another warning sign about systemd is the outsized attention paid to the subsystem that loads/unloads the initial set of daemons and then manages the working set of daemons, replacing init scripts and runlevels with a different and dramatically more complex alternative. Essentially they replaced a reasonably simple and understandable subsystem with some flaws by a complex and opaque subsystem with a non-procedural language defined by a multitude of parameters, while the logic of application of those parameters is hidden within the systemd code.

Which, due to its size, has a lot more flaws, as well as side effects, because the proliferation of those parameters and sub-parameters is a never-ending process: the answer to any new problem discovered in systemd is the creation of additional parameters or, at best, modification of existing ones. This looks like a self-defeating, never-ending spiral of adding complexity to this subsystem, which requires incorporating into systemd many things that simply do not belong in init, making it an "all-singing, all-dancing" super-daemon. In other words, systemd's expansion might be a perverted attempt to solve problems resulting from the fundamental flaws of the chosen approach.

One simple and educational experiment that shows the brittleness of the systemd approach is to replace, on a freshly installed RHEL7 VM, /etc/passwd, /etc/shadow and /etc/group with files from RHEL 6 and see what happens during the reboot (BTW, such an error can happen in any organization with novice sysadmins who are overworked and want to cut corners during installation of a new system, so this behaviour is not only of theoretical interest).

Overcomplexity means bugs, side effects and insecurity

Now about the complexity: the growth of the codebase (in lines of code) is probably more than tenfold. I have read that systemd has around 100K lines of C code (or 63K if you exclude journald, udevd, etc.). In comparison, sysvinit has about 7K lines of C code. The total number of lines in all systemd-related subsystems is huge, and by some estimates is close to a million (Phoronix Forums). It is well known that the number of bugs grows with the total number of lines of code. At a certain level of complexity "quantity turns into quality": the number of bugs becomes infinite, in the sense that the system can't be fully debugged, due to the intellectual limitations of authors and maintainers in understanding the codebase, as well as the gradual deterioration of conceptual integrity over time.


It is also easier to implant backdoors in complex code, especially in privileged complex code. In this sense a larger init system means a larger attack surface, which at the current level of Linux complexity is already substantial, with never-ending security patches for major subsystems (look at the security patches in CentOS Pulse Newsletter, January 2019 #1901), as well as a parallel stream of zero-day exploits for each major version of Linux, on which such a powerful organization as the NSA works day and night, as it is now part of their toolbox. As the same systemd code is shared by all four major Linux distributions (RHEL, SUSE, Debian, and Ubuntu), systemd represents a lucrative target for zero-day exploits.

As boot time does not matter for servers (which are often rebooted just a couple of times a year), systemd raised the complexity level and made RHEL7 drastically different from RHEL6 while providing nothing constructive in return. The distance between RHEL6 and RHEL7 is approximately the same as the distance between RHEL6 and SUSE, so we can speak of RHEL7 as a new flavor of Linux, and of the introduction of a new flavor of Linux into the enterprise environment. Which, like any new flavor of Linux, raises the cost of system administration (probably by around 20-30%, if the particular enterprise uses mainly RHEL6 with some SLES instances).

Systemd and other changes made RHEL7 as different from RHEL6 as the SUSE distribution is from Red Hat. This, like any new flavor of Linux, raises the cost of system administration (probably by around 20-30%, if the particular enterprise uses RHEL6 and SUSE 12 right now).

Hopefully we did not need to upgrade to RHEL7 before 2020, and had time to explore this new monster. But eventually, when support of RHEL 6 ends, all RHEL enterprise users need either to switch to RHEL7 or to an alternative distribution such as Oracle Linux or SUSE Enterprise. Theoretically the option to abandon RHEL for a distribution that does not use systemd also exists, but corporate inertia is way too high, and such a move is too risky to materialize. Some narrow segments, such as research, can probably use Debian without systemd, but in general the answer is no, because this is a Catch-22: adopting a distribution without systemd raises the level of complexity (via increased diversification of Linux flavors) to the same level as adopting RHEL 7 with systemd.

Still, at least theoretically, this is an opportunity for Oracle to bring the Solaris solution of this problem to Linux, but I doubt they want to spend money on it. They will probably continue their successful attempts to clone Red Hat (under the name of Oracle Linux), improving the kernel and some other subsystems to work better with the Oracle database. They might provide some extension of the useful life of RHEL6, though, as they did with RHEL 5.

The ability of a single developer to alter the course of a large open source project for the worse

Actually, Lennart Poettering is an interesting Trojan horse within the Linux developer community. Of course there were other factors, but in a way this looks like an example of how one talented, motivated and productive Apple (or Windows) desktop-biased C programmer can cripple a large open source project, facing no organized resistance. Of course, his goals align with the goals of Red Hat management, which is to control the Linux environment in a way similar to Microsoft (which can be called the king of software complexity): via complexity, providing lame sysadmins a set of GUI-based tools for "click, click, click" style administration. Also, standardization on GUI-based tools for administration provides more flexibility to change internals without annoying users: the tools' inner workings they do not understand and are not interested in understanding. He single-handedly created a movement, which I would call "Occupy Linux" ;-). See Systemd Humor.

And despite wide resentment, I did not see the slogan "Boycott RHEL 7" too often, and none of the major IT websites joined the "resistance". The key to stuffing systemd down developers' throats was its close integration with Gnome (both SUSE and Debian/Ubuntu adopted systemd). It is evident that the absence of an "architectural council" in projects like Linux is a serious weakness.

It also suggests that developers from the companies behind the major Linux distributions became the uncontrolled elite of the Linux world, a kind of "open source nomenklatura", if you wish. In other words, we see the "iron law of oligarchy" in action here. See also Systemd invasion into Linux distributions.

IBM acquisition of Red Hat made IBM the de-facto owner of Enterprise Linux

Taking into account the dominant market share of RHEL (which became the Microsoft of Linux), finding an alternative ("no-systemd") distribution is a difficult proposition. But the Red Hat acquisition by IBM might change that. Neither HP nor Oracle has any warm feelings toward IBM, and the preexisting Red Hat neutrality toward major hardware and enterprise software vendors has now disappeared, making IBM the de-facto owner of enterprise Linux. As IBM enterprise software directly competes with enterprise software from HP and Oracle, that fact gives IBM a substantial advantage.

The Linux old guard always viewed IBM with suspicion. See Big Business and Linux and Selling Bazaar to Cathedral: Linux Gold Rush.

IBM acquisition of Red Hat made IBM the de-facto owner of Enterprise Linux. In a way, systemd can be viewed as the major step in making Red Hat an IBM product

The first move of IBM in its new role was to kill the CentOS project. IBM brass views CentOS installations as potential Red Hat licenses. The only surprising thing is why it took IBM so long; this is a move I had expected ever since Red Hat was bought by IBM. But the question of whether CentOS actually cuts into the Red Hat revenue stream is very tricky, as it also serves as a testing ground for many who later buy Red Hat licenses. Large enterprises also run their research and testing servers on CentOS, not on Red Hat. Now, with the flexibility of Oracle Linux, where you can seamlessly convert from the paid version to the free one and back, Red Hat has problems it did not initially anticipate, as this move against CentOS was a huge marketing gift to Oracle (which, surprisingly and somewhat unfairly, was a semi-forgotten clone for decades; now it suddenly became No. 1), and I will not be surprised if the total number of licenses they manage to sell drops rather than increases. Oracle Linux also has higher quality than CentOS, which recently suffered from strange errors that were not reproducible on Red Hat Enterprise (especially during installation). The drop will probably not be substantial, as they survived the systemd fiasco without any major hit, so they remain the king of the hill and can dictate the rules of the game for enterprise Linux. But there is now no way they can raise prices. That might be suicidal.

CentOS 7 is not fully compatible with Red Hat 7 anyway. Somebody intentionally changed the default names of network connections, adding the prefix "System" to each (which is, IMHO, an act of vandalism). That means that for compatibility with Red Hat you need to change "System ens2" back to ens2. Of course, this can be done via a script, but still... Also, modification of an interface via nmcli does not save changes, and they are lost on reboot. I did not look into this problem in depth and did not compare it with Red Hat behavior, which might well be the same.
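A minimal sketch of renaming such connections back in bulk. This is hedged: it assumes the stock nmcli on CentOS/RHEL 7, and the exact connection names on your system may differ:

```shell
#!/bin/bash
# Strip the "System " prefix that CentOS 7 puts on connection names
# (kept as a pure string helper so the logic can be sanity-checked)
fix_name() { printf '%s\n' "${1#System }"; }

# Rename every "System xxx" connection back to the bare device name;
# guarded so the sketch is a no-op on machines without NetworkManager
if command -v nmcli >/dev/null 2>&1; then
    nmcli -t -f NAME con show | grep '^System ' | while IFS= read -r name; do
        nmcli con modify "$name" connection.id "$(fix_name "$name")"
    done
fi
```

After the rename, `nmcli con up ens2` (or a reboot) makes the change take effect; whether it then survives reboot is exactly the question raised above.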

In the past, I have had a very good experience with subscriptions for Oracle Linux, which is yet another Red Hat clone, but usually with a better debugged kernel and a higher level of tech support for the same money (or cheaper for the same level of tech support, which in the case of Red Hat is close to zero in any case -- in North America it was outsourced and deteriorated into a database-searching exercise even at the "priority" level). It also has the famous Solaris 10 DTrace ported, which helps in troubleshooting. I think it has ZFS too, but only in the paid subscription version.

Alternatives to distributions based on systemd

There are very few (see A list of non-systemd distributions). Ubuntu can work without systemd (see SystemdForUpstartUsers on the Ubuntu Wiki). Another alternative is Devuan (a clone of Debian created specifically to avoid using systemd). This distribution survived till 2020, so it has some staying power. It probably can be recommended to hobbyists and to customers who do not depend much on commercial software (for example, for genomic computational clusters). There might even be companies providing commercial support for it. It is pretty reliable and can compete with CentOS. The latest versions of Debian also can be installed without systemd and Gnome, but systemd remains the default option.

Alternatives to Red Hat

Most people do not like Oracle, but Oracle Linux has more relaxed licensing terms than Red Hat (it can be used without a license; the "patches only" (aka self-support) subscription is significantly cheaper; etc.). I would consider Oracle to be the major competitor, with some strong points. OpenSUSE is also an option, especially for European users.

But most probably Red Hat will survive this turbulent time, and most organizations will retain a considerable share of RHEL installations, possibly diluting them with Oracle Linux rather than replacing them totally. It is similar to the way you were forced to convert from Windows 7 to Windows 10 on the enterprise desktop. So much for open source as something providing a choice. This is a choice similar to the famous quote from Henry Ford's days: "A customer can have a car painted any color he wants as long as it's black." With the current complexity of Linux, the key question is "Open for whom?" For hackers?

Availability of the source code does not matter much to most enterprise customers, as the main mode of installation of the OS and packages is via binary, precompiled RPMs, not source compiled on the fly as in Gentoo or FreeBSD/OpenBSD. I do not see RHEL7 in this respect as radically different from, say, Solaris 11.4, which is not supplied in source form to most customers. BTW Solaris, while more expensive (and with an uncertain future), is also more secure (due to RBAC and security via obscurity ;-) and has better polished lightweight virtual machines (aka zones, including branded zones), DTrace, as well as a better filesystem (ZFS), although XFS is not bad either, even if it is unable to fully utilize the advantages of SSD disks.

Major flavors of Red Hat

Red Hat is far from the most popular Linux distribution, but it has a solid footprint in large enterprises.

"By W3Tech's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third with 17.5%. RHEL? It's a distant fourth with 1.8%."

Red Hat used to exist in multiple flavors, but IBM will take care of the derivatives soon :-). CentOS is already dead (although rumors are that the original CentOS developers will resurrect it under the name Rocky Linux). As of December 2020 we can see seven major flavors of Red Hat:

  1. RHEL -- the leading commercial distribution of Linux. The lowest-cost option is the HPC compute node license at $100 per year (no GUI, restricted access to package repositories, self-support only -- i.e. patches, no assistance -- and it requires a headnode license). The patches-only regular Enterprise version (self-support license) is $350 per year. The Standard license, which provides patches and "mostly" web-based support, is $800 per server per year (two sockets max; 4 sockets cost extra). The Premium license, which adds phone support for severity 1 and 2 issues (24x7), is more expensive (~$1300 per year). See for example CDW prices.
  2. Oracle Linux. For all practical purposes this is a commercial distribution identical to RHEL, but slightly cheaper. What is important is that Oracle allows you to use it exactly like CentOS: without a subscription you can get patches and rely on self-support. Self-support for the enterprise version is almost three times cheaper (~$120 per year) and as such is highly preferable. It can be used either with the Red Hat stock kernel or with the (free) Oracle-supplied kernel, which supposedly is less buggy and better optimized for running the Oracle database and similar applications (MySQL is one).

    The Oracle move to milk the weaknesses of the GPL was pretty unique, and it proved to be shrewd and highly successful. The version obtained via subscription also has the famous Solaris 10 DTrace ported, which helps in troubleshooting. I think it has ZFS ported too, but again only in the paid subscription version.

    Unlike RHEL, Oracle Linux can be used as a replacement for CentOS -- Oracle provides patches for free:

    Oracle Linux is free to download, use and distribute and is provided in a variety of installation and deployment methods.

    Oracle Unbreakable Linux Network (ULN) is provided to customers with Oracle Linux support subscriptions.

    Here is one comment from CentOS forum:

    Matthew Stier: December 8, 2020 at 8:11 pm

    My office switched the bulk of our RHEL to OL years ago, and find it a great product, and great support, and only needing to get support for systems we actually want support on. Oracle provided scripts to convert EL5, EL6, and EL7 systems, and was able to convert some EL4 systems I still have running. (Its a matter of going through the list of installed packages, use 'rpm -e --justdb' to remove the package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)

  3. [Will be abandoned after 2024] CentOS -- former community-supported distribution that was first embraced and then killed by Red Hat. CentOS 7 users are OK: support will last till June 2024. It is being replaced by a rolling beta release of RHEL called CentOS Stream. But Red Hat left users of CentOS 8 hanging dry, cutting the support period from nine years to one. Such a move is viewed by many on the CentOS forum as a betrayal of trust. That isn't professional at all. Who wants to deal with a company that can pull out of its commitments whenever it feels like it?

    Mitch H says: December 8, 2020 at 8:58 pm

    Yes, lets release a roadmap for Centos 8 support, get a ton of the community to begin migrating over to it, only to have the rug pulled from under them. Beautiful move.

    And there was some petty vandalism on the part of Red Hat even in CentOS 7; for example, nmcli con show displays different names for interfaces on the same machine -- all with the prefix "System". Oracle Linux is a natural replacement for CentOS; patches are free and of reasonable, usable quality. Oracle Linux also suffers less from limited resources, including access to servers for downloading patches (a "tragedy of the commons" type of problem with any free distribution -- in a way CentOS is a victim of its own success). The release of a clone of the most recent Red Hat version is typically delayed by several months, which is actually a good thing, except for beta-addicts.

  4. [Will probably be abandoned after 2024] Scientific Linux is a CentOS clone with some bug fixes and additional packages useful for scientific applications (the last release is version 7.9), produced by the USA Fermi National Accelerator Laboratory in cooperation with the European Organization for Nuclear Research (CERN). The web site is Scientific Linux. While they will support version 7 till its end of life (which means 2024), there will be no version 8 of this distribution (SCIENTIFIC-LINUX-ANNOUNCE):

    Scientific Linux is driven by Fermilab's scientific mission and focused on the changing needs of experimental facilities.

    Fermilab is looking ahead to DUNE[1] and other future international collaborations. One part of this is unifying our computing platform with collaborating labs and institutions.

    Toward that end, we will deploy CentOS 8 in our scientific computing environments rather than develop Scientific Linux 8. We will collaborate with CERN and other labs to help make CentOS an even better platform for high-energy physics computing.

    Fermilab will continue to support Scientific Linux 6 and 7 through the remainder of their respective lifecycles. Thank you to all who have contributed to Scientific Linux and who continue to do so.

    Currently version 7 is a very attractive option for computational nodes of HPC clusters. It also appeals to individual application programmers in such areas as genomics and molecular modeling, as it provides a slightly better environment for software development than stock Red Hat.

    Unfortunately Scientific Linux, which is IMHO a superior but less popular derivative of CentOS, was never popular among major research software developers, and few mention it. It is probably assumed that if software works on CentOS, it will work on Scientific Linux.

    With the Red Hat decision to abandon CentOS and convert it into a RHEL beta, both Fermilab and CERN find themselves hanging in the air, as they run thousands of Scientific Linux installations. They might try to resurrect the project in some shape or form, for example by supporting Rocky Linux.

  5. SUSE Enterprise and OpenSUSE, which at the application level are by and large compatible with RHEL. They are mostly popular in Europe, especially in Germany. While using RPM packages, SUSE uses a different package manager (zypper). It also allows the use of the very elegant AppArmor kernel module for enhanced security (RHEL does not allow the use of this module). But the differences between the two are substantial.
  6. Fedora -- yet another community distribution, which generally can be considered a beta version of RHEL; attractive mostly to beta-addicts. For example, Red Hat Enterprise Linux 6 was forked from Fedora 12 and contains backported features from Fedora 13 and 14. Fedora was the first distribution to adopt systemd, starting with Fedora 15, and that tells you a lot...
  7. Rocky Linux. This is an attempt by the original creator of CentOS, Gregory M. Kurtzer, to resurrect the project under a new name. Little is currently known except the intentions and the GitHub site hpcng/rocky. If things go right, Rocky Linux could be a suitable replacement for CentOS Linux 7 after its support ends.
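The CentOS-to-Oracle conversion mentioned above is scripted by Oracle itself (the centos2ol.sh script; see also the centos2ol_wrapper linked in the header). Here is a hedged sketch, assuming the script's current location in the oracle/centos2ol GitHub repository, a genuine CentOS box, and a backup taken first:

```shell
#!/bin/bash
# Pure helper: does a release string identify CentOS?
# (kept separate so the logic can be checked without a real system)
is_centos() { case "$1" in CentOS*) return 0;; *) return 1;; esac; }

# Only attempt the conversion on a box that really is CentOS
if [ -f /etc/centos-release ] && is_centos "$(cat /etc/centos-release)"; then
    # Script location is an assumption; verify it before running
    curl -fsSLO https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
    sudo bash centos2ol.sh   # repoints yum repositories to Oracle's servers
fi
```

After the conversion you can optionally install Oracle's UEK kernel and reboot; the stock Red Hat-compatible kernel keeps working either way.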

Red Hat 7 and 8 installer behavior over slow links and  the switch to administration from home offices

One year ago today, many companies in the US sent sysadmins home. They were forced to work exclusively with remote software to weather lockdowns, travel bans and other disruptions. Expenses were cut, and migration from Red Hat to CentOS and, starting from approximately October, to Oracle Linux accelerated. Overall, global IT spending declined in 2020 by around 3.2% from 2019 levels, to $3.69 trillion.

The switch to remote work exposed different approaches to managing systems in various companies. For example, companies with DRAC Express on their servers were caught with their pants down, while those who have enterprise licenses with vFlash fared better. The problem is that installation of RHEL 7 and up is very brittle over slow VPN links and usually hangs at some point. This adds to the pre-existing problems due to the adverse influence of systemd and NetworkManager on the stability of the system.

One way to deal with this situation is to use Kickstart and to add the directives

inst.text inst.nokill

to the kernel command line during boot. The most important of those is a little-known parameter that copies Stage2 to RAM, which in many cases allows the installation to run to the end despite the slow link.
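For example (a hedged sketch: only the two directives named above are shown, the paths are as on stock RHEL 7 media, and the kickstart URL is a hypothetical placeholder), the edited boot entry might look like this:

```shell
# Press Tab (BIOS) or 'e' (UEFI) at the installer boot menu and append
# the directives to the kernel line; inst.ks points at your kickstart file
linux /images/pxeboot/vmlinuz inst.text inst.nokill inst.ks=http://deploy.example.com/ks.cfg
initrd /images/pxeboot/initrd.img
```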

Overcomplexity curse

There is no free lunch, and the drive by Red Hat to increase the complexity of Enterprise Linux creates some interesting and unanticipated side effects, most of which are negative.

For example, the quality of Red Hat tech support progressively deteriorated from RHEL 4 to RHEL 6. So problems with RHEL 7 are just a new stage of the same process, caused by the growing overcomplexity of the OS and the corresponding difficulty of providing support for the billion features present in it. But now the deterioration has reached a new level and suddenly became noticeable.

Mutation of Red Hat tech support into a Knowledgebase indexing service: further deterioration of the quality of tech support in comparison with RHEL 6, caused by overcomplexity

While this is nothing new, the drop in the case of RHEL 7 was pretty noticeable and pretty painful. Despite several levels of support included in licenses (with premium supposedly being the higher level), technical support for really complex cases in RHEL 7 is uniformly weak. It has degenerated into a "looking in the database" type of service: an indexing service for Red Hat's vast Knowledgebase.

While the Knowledgebase is a useful tool and, along with Stack Overflow, often provides good tips on where to look, it falls short of the classic notion of tech support. Only in rare cases does the recommended article exactly describe your problem. Usually it is just a starting point for your own research, no more, no less. In some cases the references to articles provided are simply misleading, and it is completely clear that the engineers on the other end have no time or desire to seriously work on your tickets and help you.

I would also dispute the notion that the title of Red Hat Certified Engineer makes sense for RHEL 7. This distribution is so complex that it is clearly beyond the ability of even talented and dedicated professionals to comprehend it. Each area, such as networking, process management, installation of packages and patching, or security, is incredibly complex. The certification exam barely scratches the surface of this complexity.

Of course, much depends on the individual, and the certification probably does serve a useful role by providing some assurance that a particular person can learn further. But in no way does passing it mean that the person has the ability to troubleshoot complex problems connected with RHEL 7 (the usual meaning of the term engineer). This is just an illusion. He might be able to cover "in depth" one or two areas, for example troubleshooting of networking and package installation, or troubleshooting of web servers, or troubleshooting Docker. In all other areas he will be a much less advanced troubleshooter, almost a novice. And in no way can a person possess a "universal" troubleshooting skill for Red Hat. The network stack alone is so complex that it now requires not a vendor certification (like Cisco in the past) but a PhD in computer science from a top university. And even that might not be enough, especially in an environment where a proxy is present and some complex stuff like bonded interfaces is used. Just making Docker pull images from repositories when your network uses a web proxy is a non-trivial exercise ;-).
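The Docker-behind-a-proxy case alone illustrates the point: the usual mechanism on RHEL 7 is a systemd drop-in for the docker service, which is far from obvious to anyone who learned proxies via http_proxy environment variables. A sketch (the proxy host and port below are hypothetical placeholders):

```ini
# File: /etc/systemd/system/docker.service.d/http-proxy.conf
# (proxy host/port and the NO_PROXY list are placeholders -- adjust)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After dropping the file in place, `systemctl daemon-reload && systemctl restart docker` makes the daemon pick it up.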

If you have a complex problem, you are usually stuck, although the premium service provides an opportunity to talk with a live person, which might help. For appliances and hardware you now need to resort to the helpdesk of the OEM (which means that getting an extended 5-year warranty from the OEM is now a must, not an option).

In a way, unless you buy a premium license, the only way to use RHEL 7 is "as is". While possible for most production servers, this is not a very attractive proposition for a research environment.

Of course, this deterioration is only partially connected with RHEL 7 idiosyncrasies. Linux in general became a more complex OS. But the trend from RHEL 4 to RHEL 7 is a clear trend toward a Byzantine OS that nobody actually knows well. (Even the number of utilities is now such that probably nobody knows more than 30% or 50% of them; yum and installation of packages in general represent such a complex maze that you might be learning it your whole professional career.) Other parts of the IT infrastructure are also growing in complexity, and additional security measures throw an additional monkey wrench into the works.

The level of complexity has definitely reached human limits. I have observed several times that even if you learn some utility during a particular case of troubleshooting, you will soon forget it if the problem does not reoccur within, say, a year. It looks like acquired knowledge is displaced by new knowledge and wiped out due to the limited capacity of the human brain to hold information. In this sense the title "Red Hat Engineer" became a sad joke. The title probably should be "Red Hat Detective", as many problems are really mysteries that deserve a new Agatha Christie to tell the world.

With RHEL 7 the "Red Hat Certified Engineer" certification became a sad joke, as any such "engineer's" ability to troubleshoot complex cases has deteriorated significantly. This is clearly visible when you work with certified Red Hat engineers from Dell or HP while using their professional services.

Premium-level support for RHEL 7 is very expensive for small and medium firms. But there is no free lunch, and if you are using the commercial version of RHEL you simply need to pay this annual maintenance for some critical servers, just as insurance. Truth be told, for many servers you might well be OK with just patches (and/or buy the higher-level premium support license only for one server out of a bunch of similar servers). Standard support for RHEL 7 is only marginally better than self-support using Google and access to the Red Hat Knowledgebase.

Generally, the less static your datacenter is, and the more unique types of servers you use, the more premium support licenses you need. But you rarely need more than one for each distinct type of server (aka "server group"). Moreover, for certain types of problems, for example driver-related problems, you now need such a level of support not from Red Hat but from your hardware vendor, as they provide better quality for problems related to hardware, which they know much better.

For database servers, getting a license from Oracle makes sense too, as Oracle engineers are clearly superior in this area. So diversification of licensing, and minimization and strategic placement of the most expensive premium licenses, now makes perfect sense and provides an opportunity to save some money. This approach has long been used for computational clusters, where typically only one node (the headnode) gets the premium support license, and all other nodes get the cheapest license possible or even run Scientific Linux or CentOS. Now it is time to generalize this approach to other parts of the enterprise datacenter.

Even if you learned something important today, you will soon forget it if you do not use it, as there are way too many utilities, applications, configuration files, etc. You name it. Keeping your own knowledgebase as a set of HTML files or a private wiki is now a necessity. Lab-journal-type logs are no longer enough.

Another factor that makes this "selective licensing" necessary is that prices increased from RHEL 5 to RHEL 7 (especially if you use virtual guests a lot; see the discussion in RHEL 6 how much for your package (The Register)). Of course, Xen is preferable for the creation of virtual machines, so usage of the "native" RHEL VM is just a matter of inertia. Also, instead of full virtualization, in many cases you can now use lightweight virtualization (or special packaging, if you wish) via Docker (as long as the major version of the kernel needed is the same). In any case, this Red Hat attempt to milk the cow by charging for each virtual guest is so IBM-like that it can generate nothing but resentment, and the resulting desire to screw Red Hat in return -- a feeling long known to IBM enterprise software customers ;-).

At first I was disappointed with the quality of RHEL 7 support and even complained, but with time I started to understand that they have no choice but to screw customers: the product is way over their heads, and the number of tickets to be resolved is too high, resulting in overload. The fact that the next day another person may work on your ticket adds insult to injury. That's why for a typical ticket their help is now limited to pointing you to relevant (or semi-relevant ;-) articles in the Knowledgebase. Premium support is still above average, and in critical cases they can switch you to a live engineer on a different continent in later hours, so if server downtime matters this is a kind of (weak) insurance. In any case, my impression is that Red Hat support is overwhelmed by problems with RHEL 7, even for subsystems fully developed by Red Hat such as the subscription manager and yum -- unless you are lucky and accidentally get a really knowledgeable guy who is willing to help. An attempt to upgrade the ticket to a higher level of severity sometimes helps, but usually does not.

The other problem (which I already mentioned) is that tickets are now assigned to a new person each day, so if the ticket is not resolved by the first person, you get onto a treadmill, with each new person starting from scratch. That's why it is now important to submit a good write-up and an SOS file with each ticket from the very beginning, as this slightly increases your chances that the first support person will give you valid advice, not just a semi-useless reference to an article in the Knowledgebase. Otherwise their replies often demonstrate a complete lack of understanding of the problem you are experiencing.

If the ticket is important to you, in no way can you just submit a description of the problem. You always need to submit the SOS tarball and, preferably, some results of your initial research. Do not expect that they will be looked at closely, but the very process of organizing all the information you know for submission greatly helps to improve your understanding of the problem, and sometimes leads you to resolve the ticket yourself. As RHEL engineers are used to working with plain-vanilla RHEL installs, you generally need to point out what deviations your install has (a proxy is one example -- it should be mentioned and its configuration explicitly listed; complex cards like Intel four-port 10Gbit cards, or Mellanox Ethernet cards with an Infiniband layer from HP, need to be mentioned too).
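For reference, generating that SOS tarball is a single command. A hedged sketch: `sosreport` is the RHEL 7 command name (on RHEL 8 it became `sos report`), and the case id below is a placeholder:

```shell
#!/bin/bash
# Pick whichever sos client this host actually has; prints the command
# to run, or nothing if the sos package is not installed
sos_cmd() {
    if command -v sosreport >/dev/null 2>&1; then
        echo "sosreport --batch"
    elif command -v sos >/dev/null 2>&1; then
        echo "sos report --batch"
    fi
}

cmd=$(sos_cmd)
if [ -n "$cmd" ]; then
    # --case-id tags the tarball with your (placeholder) ticket number;
    # the path of the resulting tarball is printed when it finishes
    sudo $cmd --case-id 01234567
else
    echo "sos not installed (yum install sos)"
fi
```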

A new role for the sysadmin in RHEL 7: sysadmin as Sherlock Holmes

Sherlock Holmes is a fictional private detective created by the British author Sir Arthur Conan Doyle, referring to himself as a "consulting detective" in the stories. He demonstrated a superhuman ability to solve really intractable cases using observation, deduction, forensic science, and logical reasoning. Often his insights border on the fantastic.

Right now the interactions between various parts of Red Hat are so complex that each sysadmin has to become Sherlock Holmes, a person who really enjoys solving complex cases, because most cases are complex. And unlike you, Sherlock Holmes has unlimited time to solve them and no day-to-day responsibilities.

But there is no other way. You need to use a typical forensic investigation blueprint to solve some cases that arise on regular production systems. The period of transition from RHEL 6 to RHEL 7 is especially painful. For example, if you switch from RHEL 6 to RHEL 7 and use NFS-mounted home directories for users, you can discover that users can't log in via ssh using passwordless login. Passwordless login works with the RHEL 6 server but not with the newly installed RHEL 7 servers. Such a configuration is common and is a required setup on small clusters and similar research installations that submit jobs via a scheduler from one central server (the headnode) to multiple other servers.

When you discover this, you usually spend several hours looking for the common suspects: a group-writable home directory, wrong permissions and/or ownership on the ~/.ssh directory, authorized_keys, other files in it, etc. Nothing works. An attempt to debug this error by running sshd on some high port like 2222 shows no errors (the user can log in), but users still can't log in to the "normal" server on port 22. Here is the post of one user who faced this problem with CentOS and had made no progress after working on it for a month (linux - Passwordless ssh on shared NFS home directory does not work (centos 7) - Stack Overflow):

I have a cluster of 7 nodes (All Centos 7 OS). Master node is maercher5 and the rest are slave nodes. I need to setup passwordless ssh on the master node to the slave nodes to run MPI programs. The home directory is shared by NFS from the master node to all the slave nodes. I followed this tutorial to do a passwordless ssh from master node to slave nodes. I have the same UID and GID on all machines. Since there is only 1 ssh folder shared on all nodes. Permissions for ssh folder is:
$ ls -al  $HOME/.ssh
total 28
drwx------.  2 sarah sarah    76 Apr 16 21:17 .
drwx------. 17 sarah sarah  4096 Apr 17 13:51 ..
-rw-------.  1 sarah sarah 11895 Apr 16 21:17 authorized_keys
-rw-------.  1 sarah sarah  1679 Apr  3 00:55 id_rsa
-rw-r--r--.  1 sarah sarah   411 Apr 10 14:24
-rw-------.  1 sarah sarah  2265 Apr 10 13:58 known_hosts

Nodes can ping each other well. Marcher5 is the master node.

[sarah@marcher5]$ cat /etc/hosts   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 marcher5 marcher7 marcher8 marcher9 marcher10 marcher11 marcher12

On all slave nodes, NFS Mount is as follows:

[sarah@marcher11 ~]$ cat /etc/fstab

/dev/mapper/centos-root /                       xfs     defaults        1 1
UUID=79c2716b-9099-4731-82cc-094ca26eb837 /boot                   xfs     defaults        1 2
#/dev/mapper/centos-home /home                   xfs     defaults        1 2
/dev/mapper/centos-swap swap                    swap    defaults        0 0
marcher5:/home/sge_users /home/sge_users nfs soft,intr,bg,nosuid,timeo=20,retrans=10,async,wsize=8192,rsize=8192  0 0

[sarah@marcher11 ~]$ mount |grep home
    /dev/mapper/centos-home on /home type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    marcher5:/home/sge_users on /home/sge_users type nfs4 (rw,nosuid,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=20,retrans=10,sec=sys,clientaddr=,local_lock=none,addr=

The problem is that passwordless ssh does not work.

[sarah@marcher5 mpi2007]$ ssh -v marcher11
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to marcher11 [] port 22.
debug1: Connection established.
debug1: identity file /home/sge_users/sarah/.ssh/id_rsa type 1
debug1: identity file /home/sge_users/sarah/.ssh/id_rsa-cert type -1
debug1: identity file /home/sge_users/sarah/.ssh/id_dsa type -1
debug1: identity file /home/sge_users/sarah/.ssh/id_dsa-cert type -1
debug1: identity file /home/sge_users/sarah/.ssh/id_ecdsa type -1
debug1: identity file /home/sge_users/sarah/.ssh/id_ecdsa-cert type -1                                                           [29/1894]
debug1: identity file /home/sge_users/sarah/.ssh/id_ed25519 type -1
debug1: identity file /home/sge_users/sarah/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1
debug1: match: OpenSSH_6.6.1 pat OpenSSH_6.6.1* compat 0x04000000
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr [email protected] none
debug1: kex: client->server aes128-ctr [email protected] none
debug1: kex: [email protected] need=16 dh_need=16
debug1: kex: [email protected] need=16 dh_need=16
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 80:81:97:62:dd:9b:fc:e2:76:bc:13:ce:30:07:79:49
debug1: Host 'marcher11' is known and matches the ECDSA host key.
debug1: Found key in /home/sge_users/sarah/.ssh/known_hosts:5
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug1: Unspecified GSS failure.  Minor code may provide more information

debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/sge_users/sarah/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /home/sge_users/sarah/.ssh/id_dsa
debug1: Trying private key: /home/sge_users/sarah/.ssh/id_ecdsa
debug1: Trying private key: /home/sge_users/sarah/.ssh/id_ed25519
debug1: Next authentication method: password
sarah@marcher11's password:
debug1: Authentication succeeded (password).
Authenticated to marcher11 ([]:22).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LC_ALL = C
debug1: Sending env LANG = en_US.UTF-8

I've been gated in this issue for over a month, any help would be appreciated. I tried to do this from root@master to root@slave and it works.


That takes additional hours of deciphering the sshd debug log and the ssh client debug3 log. You will definitely obtain extensive knowledge of ssh logs and debugging modes -- which is mostly useless, because you will have forgotten everything by the time a similar problem strikes you in a year or two.

Later you figure out that the cause is probably SELinux, read the documentation that you previously avoided ;-) and try to figure out how to deal with this situation. You will find posts recommending restorecon -Rv ~/.ssh (such as Passwordless ssh is not working in CentOS 7 - Unix & Linux Stack Exchange), which will send you in the wrong direction for a day or two. After some searching you may even find the directive referenced below (linux - Passwordless ssh on shared NFS home directory does not work (centos 7) - Stack Overflow):

Turns out I ONLY need to run 'setsebool -P use_nfs_home_dirs 1' as root. Then everything worked like a charm. Thank you @user_ABCD

That's close, but still no cigar: this solution does not work for you. Now what?
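When SELinux is the suspect, the relevant state can at least be inspected directly. A minimal sketch using the standard RHEL 7 tools (the boolean name is the one from the Stack Overflow answer above; this obviously requires a system with SELinux enabled):

```shell
getsebool use_nfs_home_dirs        # "off" means SELinux blocks ~/.ssh on NFS homes
setsebool -P use_nfs_home_dirs 1   # -P makes the change persist across reboots
restorecon -Rv ~/.ssh              # relabel; only helps when the homedir is local
ausearch -m AVC -ts recent         # check the audit log for denials involving sshd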

Of course, the next thing you do is disable SELinux (which you hate with a passion anyway). Still no success. After a while you compare the client configurations of various servers and discover that the server from which your users log in to the satellite host contains the line StrictHostKeyChecking=no. It was added not only to simplify user access to satellite nodes, but also because when a server is replaced, or its OS is reinstalled, you would otherwise need to manually correct the ssh known hosts file so that jobs from the cluster scheduler can run on all nodes -- which requires passwordless login to be tested and already available. Even if you do not have this line in the central ssh client configuration file on the headnode, some users may have it in their own .ssh/config file, in which case some users experience difficulties while others do not (although that is an easier case, since the configurations of the users can be compared directly).

You remove the line and everything starts working. Now you have a problem on your hands: the cluster scheduler requires this line to be present, while passwordless login requires it to be absent. You can write a utility that checks all satellite servers for the ability to log in, so that jobs submitted via the scheduler do not fail, or you may find another solution. But your life just became more complex.
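One way out of this conflict (sketched here with hypothetical host names; this is my illustration, not a solution from the threads above) is to scope StrictHostKeyChecking to the scheduler's compute nodes in the client configuration, instead of setting it globally:

```
# /etc/ssh/ssh_config on the headnode (the "node*" pattern is hypothetical)
Host node*
    # scheduler jobs must never hang on a host-key prompt after a node reinstall
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Host *
    # everyone else keeps normal host-key verification
    StrictHostKeyChecking ask
```

Host stanzas are matched first-match-wins, so the permissive settings apply only to compute-node traffic.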

This process is typical for programmers debugging a complex program, so in all this DevOps hype (see Is DevOps a yet another "for profit" technocult?) there might be a grain of truth. You really do need to become a programmer. Or, better, Sherlock Holmes ;-)

Human expert competence vs. excessive automation

Linux is a system whose complexity is far beyond the ability of mere mortals. Even seasoned sysadmins with, say, 20 years under their belt know at best a tiny portion of the system -- the portion which their previous experience allowed them to study and learn. In this sense the introduction of ever more complex subsystems into Linux is a crime. It simply makes an already bad situation worse.

The paradox of complex systems with more "intelligence", which now tend to replace the simpler systems that existed before, is that under "normal circumstances" they accommodate incompetence. As the Guardian noted, they are made easy to operate by automatically correcting mistakes.

Earl Wiener, a cult figure in aviation safety, coined what are known as Wiener's Laws of aviation and human error. One of them was: "Digital devices tune out small errors while creating opportunities for large errors." We might rephrase it as:

Automation will routinely tidy up ordinary messes, but occasionally create an extraordinary mess. It is an insight that applies far beyond aviation.

The behaviour of the Anaconda installer in RHEL7 is a good example of this trend. Its behaviour is infuriating for an expert (constant "hand holding"), but it is a life saver for a novice: most decisions are taken automatically (it even prevents novices from accidentally wiping out existing Linux partitions), and while the resulting system is most probably suboptimal both in disk structure (I think it is wrong to put the root partition under LVM: that complicates troubleshooting and increases the possibility of data loss) and in the selection of packages (which can be corrected later), it probably will boot after the installation.

Because of this, an unqualified operator can function for a long time before his lack of skill becomes apparent: his incompetence is hidden, and if everything is OK this situation can persist almost indefinitely. The problems start as soon as the situation goes out of control, for example when the server crashes or for some reason becomes unbootable. And God forbid if such a system contains valuable data. There is a subtle analogy between what Red Hat did with RHEL7 and what Boeing did with the 737 MAX. Both tried to hide a flaw in the architecture, and the resulting overcomplexity, by introducing additional software to make the system usable by an unqualified operator.

Even if a sysadmin was once an expert, automatic systems erode his skills by removing the need for practice. They also diminish the level of understanding by introducing an additional intermediary in situations where previously the sysadmin operated "directly". This is the case with systemd: it introduces a powerful intermediary for many operations that previously were manual, and makes the boot process much less transparent and more difficult to troubleshoot if something goes wrong. In such instances systemd becomes an enemy of the sysadmin instead of a helper. The situation is similar to HP RAID controllers which, when they lose their configuration, happily wipe out your disk array and create a new one -- just to help you.

But such systems tend to fail either in unusual situations, or in ways that produce unusual situations, requiring a particularly skilful response. The more capable and reliable an automatic system is, the higher the chance that the situation in case of disaster will be worse.

When something does go wrong in such circumstances, the operator has to deal with a situation that is very likely to be bewildering.

As a side note, there is a way to construct an interface so that it can even train the operator in the command line.

Regression to the previous system image as a troubleshooting method

The overcomplexity of RHEL7 also means that you need to spend real money on configuration management (or hire a couple of dedicated guys to create your own system), as comparison of "states" of the system, and regression to a previous state, are now often the only viable methods of resolving, or at least mitigating, the problem you face.

In view of those complications, the ability to return to a "previous system image" is now a must, and software such as ReaR (Relax-and-Recover) needs to be known very well by each RHEL7 sysadmin. The capacity of low-profile USB 3.0 drives (FIT drives) has now reached 256GB (and a Samsung SSD is small enough to hang from a server's USB port without much trouble, which increases this local space to 1TB), so they are now capable of storing at least one system image locally on the server (if you exclude large data directories), providing bare metal recovery to a "known good" state of the system.

The option of booting the last good image is what you might desperately need when you can't resolve the problem, which now happens more and more often, as in many cases troubleshooting leads nowhere. Those who are averse to the USB "solution" as "unclean" can, of course, store images on NFS.
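As an illustration, a minimal ReaR setup that writes a bootable rescue ISO plus a tar backup to an NFS share might look like this (the NFS server name and the excluded paths are hypothetical; OUTPUT, BACKUP, BACKUP_URL and BACKUP_PROG_EXCLUDE are real ReaR configuration variables):

```
# /etc/rear/local.conf -- sketch, adjust to your environment
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backupsrv.example.com/export/rear
# leave large data directories out of the image, as suggested above
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/data/*' '/scratch/*' )
```

After that, running rear mkbackup produces both the rescue ISO and the backup archive, and booting the ISO offers an automated restore to the "known good" state.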

In any case, this is a capability that you will definitely need in case of some "induced" problem such as a botched upgrade. Dell servers have flash cards in the enterprise version of DRAC, which can serve as a perfect local repository of configuration information (via git or any other suitable method) that will survive a crash of the server. The configuration information stored in /etc, /boot, /root and a few other places is the most important information for recovery of the system. What matters is that these repositories be updated continuously, with each change of the state of the system.

Using Git or Subversion (or any other version control system that you are comfortable with) also now makes more sense. Git is not well suited for managing sysadmin changes, as by default it does not preserve attributes and ACLs, but packages like etckeeper add this capability (etckeeper is far from perfect, but at least it can serve as a starting point). This capitalizes on the fact that after installation the OS usually works ;-) It just deteriorates with time. So we are starting to observe a phenomenon well known to Windows administrators: self-disintegration of the OS over time ;-) The problems typically come later, with additional packages, libraries, etc., which often are not critical for system functionality. "Keep it simple, stupid" is a good approach here, although for servers used in research this mantra is impossible to implement.
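The attribute problem is easy to demonstrate: plain git records only the executable bit, not the full file mode or ownership, which is exactly the gap etckeeper tries to close. A small self-contained experiment (safe to run anywhere; the file name is made up):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q .
echo 'root:...' > shadow.test          # stand-in for a sensitive /etc file
chmod 600 shadow.test                  # tight permissions, as in /etc/shadow
git -c user.email=a@b -c user.name=t add shadow.test
git -c user.email=a@b -c user.name=t commit -qm 'baseline'
chmod 644 shadow.test                  # simulate an accidental loosening
git checkout -- shadow.test            # git "restores" the file...
stat -c %a shadow.test                 # ...but the mode stays 644, not 600
```

Git committed the file as a plain non-executable blob, so the original 600 mode was never recorded; etckeeper stores such metadata separately and reapplies it.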

Due to overcomplexity, you should be extremely frugal with the packages that you keep on the server. For example, many servers do not need X11 installed at all (and Gnome is a horrible package, if you ask me; LXDE, which is the default on Knoppix, is smaller and adequate for most sysadmins, and even Cockpit, which has a still smaller footprint, might be adequate). That cuts a lot of packages, cuts the size of the /etc directory, and removes a lot of complexity. Avoiding a complex package in favor of a simpler, leaner alternative is almost always a worthwhile approach.

RHEL 7 fiasco: adding desktop features in a server-oriented distribution

RHEL 7 looks like a strange mix of desktop functionality artificially added to a distribution oriented strictly toward the server space (hence the word "Enterprise" in the name). This increase in complexity does not provide any noticeable return on the substantial investment in learning yet another flavor of Linux -- an investment which is absolutely necessary, as in no way can RHEL6 and RHEL7 be viewed as a single flavor of Linux. The difference is similar to the difference between RHEL6 and SLES 10.

The emergence of Devuan was a strong kick in the chin for the Red Hat honchos, and they doubled their efforts at pushing systemd and resolving existing problems, with some positive results. The improvement of systemd during the life of RHEL 7 is undeniable, and its quality is higher in recent RHEL 7 releases, starting with RHEL 7.5. But architectural problems can't be solved by doubling the effort spent debugging existing problems. They remain, and people who can compare systemd with the Solaris 10 solution instantly see all the defects of the chosen architecture.

Similarly, the Red Hat honchos, either out of envy of Apple's (and/or Microsoft's) success in the desktop space, or for some other reason (such as the ability to dictate the composition of enterprise Linux), made this server-oriented Linux distribution a hostage of the whims of desktop Linux enthusiasts, and by doing so broke way too many things. Systemd is just one, although the most egregious, example of this activity (see, for example, the discussion at Systemd invasion into Linux Server space).

You can browse horror stories about systemd using any search engine you like (they are very educational). Here is a random find from Down The Tech Rabbit Hole systemd, BSD, Dockers (May 31, 2018):

...The comment just below that is telling (why I like the BSD forums: so many clueful people in so compact a space):

Jan 31, 2018
CoreOS _heavily_ relies on and makes use of systemd
and provides no secure multi-tenancy as it only leverages cgroups and namespaces and lots of wallpaper and duct tape (called e.g. docker or LXC) over all the air gaps in-between

Most importantly, systemd can't be even remotely considered production ready (just 3 random examples that popped up first and/or came to mind), and secondly, cgroups and namespaces (combined with all the docker/LXC duct tape) might be a convenient toolset for development and offer some great features for this use case, but all 3 were never meant to provide secure isolation for containers; so they shouldn't be used in production if you want/need secure isolation and multi-tenancy (which IMHO you should always want in a production environment).

SmartOS uses zones, respectively LX-zones, for deployment of docker containers. So each container actually has its own full network stack and is safely contained within a zone. This provides essentially the same level of isolation as running a separate KVM VM for each container (which seems to be the default solution in the linux/docker world today), but zones run on bare-metal without all the VM and additional kernel/OS/filesystem overhead the fully-fledged KVM VM drags along. [...]

The three links have three horror stories of systemd-induced crap. Makes me feel all warm and fuzzy that I rejected it on sight as the wrong idea. Confirmation, what a joy.

First link:

10 June 2016
Postmortem of yesterday's downtime

Yesterday we had a bad outage. From 22:25 to 22:58 most of our servers were down and serving 503 errors. As is common with these scenarios, the cause was cascading failures, which we go into in detail below.

Every day we serve millions of API requests, and thousands of businesses depend on us; we deeply regret downtime of any nature, but it's also an opportunity for us to learn and make sure it doesn't happen in the future. Below is yesterday's postmortem. We've taken several steps to remove single points of failure and ensure this kind of scenario never repeats again.


While investigating high CPU usage on a number of our CoreOS machines we found that systemd-journald was consuming a lot of CPU.

Research led us to which included a suggested fix. The fix was tested and we confirmed that systemd-journald CPU usage had dropped significantly. The fix was then tested on two other machines a few minutes apart, also successfully lowering CPU use, with no signs of service interruption.

Satisfied that the fix was safe it was then rolled out to all of our machines sequentially. At this point there was a flood of pages as most of our infrastructure began to fail. Restarting systemd-journald had caused docker to restart on each machine, killing all running containers. As the fix was run on all of our machines in quick succession all of our fleet units went down at roughly the same time, including some that we rely on as part of our service discovery architecture. Several other compounding issues meant that our architecture was unable to heal itself. Once key pieces of our infrastructure were brought back up manually the services were able to recover...

I would just add that any sysadmin working behind a proxy in an enterprise environment would be happy to smash a cake or two into a systemd developer's face. Here is an explanation of what is involved in adding a proxy to the Docker configuration (Control Docker with systemd - Docker Documentation):


The Docker daemon uses the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environmental variables in its start-up environment to configure HTTP or HTTPS proxy behavior. You cannot configure these environment variables using the daemon.json file.

This example overrides the default docker.service file. If you are behind an HTTP or HTTPS proxy server, for example in corporate settings, you need to add this configuration in the Docker systemd service file.

  1. Create a systemd drop-in directory for the docker service:
    $ sudo mkdir -p /etc/systemd/system/docker.service.d
  2. Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:

    Or, if you are behind an HTTPS proxy server, create a file called /etc/systemd/system/docker.service.d/https-proxy.conf that adds the HTTPS_PROXY environment variable:

  3. If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:
    Environment="HTTP_PROXY=" "NO_PROXY=localhost,,"

    Or, if you are behind an HTTPS proxy server:

    Environment="HTTPS_PROXY=" "NO_PROXY=localhost,,"
  4. Flush changes:
    $ sudo systemctl daemon-reload
  5. Restart Docker:
    $ sudo systemctl restart docker
  6. Verify that the configuration has been loaded:
    $ systemctl show --property=Environment docker

    Or, if you are behind an HTTPS proxy server:

    $ systemctl show --property=Environment docker
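Filled in with a hypothetical proxy host (proxy.example.com:3128 and the NO_PROXY list are my assumptions, not values from the documentation above), the drop-in file from step 2 would contain:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,registry.internal.example.com"
```

followed by systemctl daemon-reload and systemctl restart docker, as in steps 4-5.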

In other words, you now have to deal with various idiosyncrasies that did not exist with regular startup scripts. As everything in systemd is "yet another setting", it adds an opaque and difficult-to-understand layer of indirection, with the problems amplified by poorly written documentation. The fact that you can no longer edit a shell script to make such a simple change as adding a couple of environment variables to set a proxy (some applications need a different proxy than "normal" applications, for example a proxy without authentication) is a somewhat anxiety-producing factor.

Revamping Anaconda in "consumer friendly" fashion in RHEL 7 was another "enhancement" which can well be called a blunder. It can be partially compensated for by the use of kickstart files. There is also an option to modify Anaconda itself, but that requires a lot of work; see the Red Hat Enterprise Linux 7 Anaconda Customization Guide. The road to hell is paved with good intentions: in their excessive zeal to protect users from errors in deleting existing partitions (BTW, RHEL stopped displaying labels for partitions some time ago, and now displays UUIDs only) and to make Anaconda more "fool proof", they made the life of professional sysadmins really miserable. Recently I tried to install RHEL 7.5 on a server with existing partitions (which required a unique install, for which Kickstart was not well suited) and discovered that in the RHEL 7 installer GUI there is no easy way to delete existing partitions. I was forced to exit the installation, boot from a RHEL6 ISO, delete the partitions, and then continue (at this point I realized that choosing RHEL7 for this particular server was a mistake, and corrected it ;-). Later I found the following post from 2014, which describes very well the same situation that I encountered in 2019:

Can not install RHEL 7 on disk with existing partitions - Red Hat Customer Portal

May 12, 2014
Can not install RHEL 7 on disk with existing partitions. Menus for old partitions are grayed out. Can not select mount points for existing partitions. Can not delete them and create new ones. Menus say I can, but the installer says there is a partition error. It shows me a complicated menu that warns me what is about to happen and asks me to accept the changes. It does not seem to accept "accept" as an answer. Booting to the recovery mode on the USB drive and manually deleting all partitions is not a practical solution.

You have to assume that someone who is doing manual partitioning knows what he is doing. Provide a simple tool during install so that the drive can be fully partitioned and formatted. Do not try so hard to make tools that protect us from our own mistakes. In the case of the partition tool, it is too complicated for a common user, so don't assume that the person using it is an idiot.

The common user does not know hardware and doesn't want to know it. He expects security questions during the install, and everything else should be defaults. A competent operator will back up everything before doing a new OS install. He needs the details and doesn't need an installer that tells him he can't do what he has done for 20 years.

RHEL7 does a lot of other things unnecessary in the server space. Most of those changes also increase complexity by hiding "basic" things behind additional layers of indirection. For example, try to remove NetworkManager in RHEL 7. Now explain to me why we need it in a server room full of servers that are attached "forever" by 10 Mbit/sec cables and do not run any virtual machines or Docker.

RHEL 7 denies the usefulness of previous knowledge which makes overcomplexity more painful

But what is really bad is that RHEL7 in general, and systemd in particular, deny the usefulness of previous knowledge of Red Hat and destroy the value of several dozen good or even excellent books on system administration published in 1998-2015. That's a big and unfortunate loss, as many of the authors of excellent Linux books will never update them for RHEL7. In a way, the deliberate desire to break with the past demonstrated by systemd might well be called vandalism.

And Unix is an almost 50-year-old OS (1973-2019). Linux was created around 1991 and in 2019 turned 28 years old. In other words, this is a mature OS with more than two decades of history under its belt. Any substantial changes to an OS of this age are not only costly, they are highly disruptive.

systemd denies the usefulness of previous knowledge of Red Hat and destroys the value of books on system administration published in 1998-2015

This utter disrespect for people who spent years learning Red Hat and writing about it increased my desire to switch to CentOS or Oracle Linux. I do not want to pay Red Hat money any longer, and Red Hat support is now outsourced and has deteriorated to the level where it is an almost completely useless "database query with a human face", unless you buy premium support. And even in that case, much depends on which analyst your ticket is assigned to.

Moreover, the architectural quality of RHEL 7 is low. The services mixed into the distribution are produced by different authors and as such follow no common configuration standards, including for such an important enterprise matter as enabling a web proxy for those services that need to reach the Internet (yum is one example). At the same time the complexity of this distribution is considerably higher than RHEL6, with almost zero return on investment. Troubleshooting is different and much more difficult. A seasoned sysadmin switching to RHEL 7 feels like a novice on a skating rink for several months -- and, for troubleshooting complex problems, much longer than that.

It is clear that RHEL 7's main problem is not systemd by itself, but the fact that the resulting distribution suffers from overcomplexity. It is way too complex to administer, and requires a regular human to remember so many things that they can never fit into one head. This means it is by definition a brittle system, an elephant that nobody understands completely -- much like Microsoft Windows. And I am talking not only about systemd, which was introduced in this version. Other subsystems suffer from overcomplexity as well. Some of them came directly from Red Hat, some did not (biosdevname, which is a perversion, came from Dell).

Troubleshooting has become not only more difficult; in complex cases you are also provided with less information (especially in cases where the system does not boot). Partially this is due to the general complexity of the modern Linux environment (library hell, introduction of redundant utilities, etc.), partially due to the increased complexity of hardware. But one of the key reasons is the lack of architectural vision on the part of Red Hat.

Add to this multiple examples of sloppy programming (dealing with proxies is still not unified, and individual packages have their own settings which can conflict with the environment variable settings) and old warts (the RPM format is now old and has partially outlived its usefulness, creating rather complex issues with patching and installing software -- issues that take a lot of sysadmin time to resolve), and you get the picture.

A classic example of Red Hat's ineptitude is how they handle proxy settings. Even for software fully controlled by Red Hat, such as yum and the subscription manager, they put proxy settings in each and every configuration file. Why not put them in /etc/sysconfig/network and always, consistently, read them from the environment variables first and from this file second? Any well-behaved application should read the environment variables, which should take precedence over settings in configuration files. They do not do it. God knows why.

They even managed to embed the proxy settings from /etc/rhsm/rhsm.conf into the yum file /etc/yum.repos.d/redhat.repo, so the proxy value is taken from that file -- not from your /etc/yum.conf settings, as you would expect. Moreover, this is done without any elementary consistency checks: you can make a pretty innocent mistake and specify the proxy setting in /etc/rhsm/rhsm.conf as


The Red Hat registration manager will accept this and will work fine. But for yum to work properly, the proxy specification in /etc/rhsm/rhsm.conf requires just the DNS name, without the http:// or https:// prefix -- the https prefix will be added blindly (and that's wrong) in redhat.repo, without checking whether you already specified an http:// (or https://) prefix. This SNAFU leads to the generation in redhat.repo of a proxy statement of the form https://

At this point you are in for a nasty surprise: yum will not work with any Red Hat repository, and there are no meaningful diagnostic messages. It looks like RHEL managers are either engaged in binge drinking, or watch too much porn on the job ;-)
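A quick way to see how scattered these settings are is to dump every place a proxy can hide. A sketch (the file list is the set of real RHEL 7 locations discussed above; files that do not exist are simply skipped):

```shell
# Audit the scattered proxy knobs on a RHEL 7 box
for f in /etc/yum.conf /etc/rhsm/rhsm.conf /etc/yum.repos.d/redhat.repo; do
    [ -r "$f" ] && grep -iH '^[[:space:]]*proxy' "$f"
done
# The environment variables that ought to take precedence, but do not:
echo "http_proxy=${http_proxy:-unset} https_proxy=${https_proxy:-unset}"
```

Running this on each server before opening a ticket saves a lot of guessing about which of the conflicting settings yum actually picked up.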

Yum, which started as a very helpful utility, has also gradually deteriorated. It has turned into a complex monster which requires quite a bit of study and introduces its own set of very complex bugs, some of which are almost features.

SELinux was never a transparent security subsystem and has a lot of quirks of its own. Its key idea is far from elegant, unlike the key ideas of AppArmor (per-application profiles restricting access to key directories and config files), which has largely faded from the Linux security landscape. Many sysadmins simply disable SELinux, leaving only the firewall to protect the server. Some applications even require disabling SELinux for proper functioning, and this is specified in their documentation.

The deterioration of architectural vision within Red Hat as a company is clearly visible in the terrible (simply terrible, without any exaggeration) quality of the customer portal, which is probably the worst I have ever encountered. Sometimes I open tickets just to understand how to perform a particular operation on the portal. The old, or "Classic" as they call it, RHEL customer portal was actually OK, and even had some useful features. Then for some reason they tried to introduce something new and completely messed things up. As of August 2017 the quality has somewhat improved, but it still leaves much to be desired. Sometimes I wonder why I am still using the distribution, if the company which produces it (and charges substantial money for it) is so tremendously architecturally inept that it is unable to create a usable customer portal. In view of the existence of Oracle Linux, I do not really know the answer to this question. Although Oracle has its own set of problems, thousands of EPEL packages, signed and built by Oracle, have been added to the Oracle Linux yum server.

Note about moving directories /bin, /lib, /lib64, and /sbin to /usr

In RHEL 7 Red Hat decided to move four system directories (/bin, /lib, /lib64 and /sbin), previously located at /, into /usr. They were replaced with symlinks to preserve compatibility:

[0]d620@ROOT:/ # ll
total 316K
dr-xr-xr-x.   5 root root 4.0K Jan 17 17:46 boot/
drwxr-xr-x.  18 root root 3.2K Jan 17 17:46 dev/
drwxr-xr-x.  80 root root 8.0K Jan 17 17:46 etc/
drwxr-xr-x.  12 root root  165 Dec  7 02:12 home/
drwxr-xr-x.   2 root root    6 Apr 11  2018 media/
drwxr-xr-x.   2 root root    6 Apr 11  2018 mnt/
drwxr-xr-x.   2 root root    6 Apr 11  2018 opt/
dr-xr-xr-x. 125 root root    0 Jan 17 12:46 proc/
dr-xr-x---.   7 root root 4.0K Jan 10 20:55 root/
drwxr-xr-x.  28 root root  860 Jan 17 17:46 run/
drwxr-xr-x.   2 root root    6 Apr 11  2018 srv/
dr-xr-xr-x.  13 root root    0 Jan 17 17:46 sys/
drwxrwxrwt.  12 root root 4.0K Jan 17 17:47 tmp/
drwxr-xr-x.  13 root root  155 Nov  2 18:22 usr/
drwxr-xr-x.  20 root root  278 Dec 13 13:45 var/
lrwxrwxrwx.   1 root root    7 Nov  2 18:22 bin -> usr/bin/
lrwxrwxrwx.   1 root root    7 Nov  2 18:22 lib -> usr/lib/
lrwxrwxrwx.   1 root root    9 Nov  2 18:22 lib64 -> usr/lib64/
-rw-r--r--.   1 root root 292K Jan  9 12:05 .readahead
lrwxrwxrwx.   1 root root    8 Nov  2 18:22 sbin -> usr/sbin/

You would think that RHEL RPMs now use the new locations. Wrong. You can do a simple experiment on a RHEL7 or CentOS7 based VM -- delete those symlinks using the command

rm -f /bin /lib /lib64 /sbin 

and see what happens ;-). Actually, recovery from this situation might make a good interview question for candidates applying for senior sysadmin positions.
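For the curious: the reason this breaks everything is that the ELF interpreter path baked into every dynamically linked binary is /lib64/ld-linux-x86-64.so.2, so once that symlink is gone even ls and ln fail with "No such file or directory". One recovery path, assuming x86_64 and a still-open root shell (the guards make it a no-op on a healthy system; do not run the rm experiment on a machine you care about):

```shell
# Every normal binary now fails, but the loader itself still lives under /usr,
# and invoking it directly bypasses the missing /lib64 interpreter path:
[ -e /lib64 ] || /usr/lib64/ld-linux-x86-64.so.2 /usr/bin/ln -s usr/lib64 /lib64
# With /lib64 back, dynamically linked binaries run again; restore the rest:
[ -e /bin ]  || ln -s usr/bin /bin
[ -e /sbin ] || ln -s usr/sbin /sbin
[ -e /lib ]  || ln -s usr/lib /lib
```

Without an open root shell, the only option left is booting rescue media and recreating the symlinks from there.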

Note on the never-ending stream of security patches

First and foremost not too much zeal
(Talleyrand's advice to young diplomats)

In security patches Red Hat found its El Dorado. I would say that they greatly helped Red Hat become a billion-dollar-revenue company, despite the obvious weakness of the GPL for creating a stable revenue stream. The constantly shifting sands created by over-emphasizing security patches compensate for the weakness of the GPL to the extent that profitability can approach the level of a typical commercial software company.

And Red Hat produces them at the rate of around a dozen a week ;-). Keeping up with patches is an easy metric to implement in the Dilbertized neoliberal enterprise; sometimes it even becomes the key metric by which sysadmins are judged, along with another completely fake metric -- average time to resolve tickets of a given category. So clueless corporate brass often play the role of a Trojan horse for Red Hat by enforcing too strict (and mostly useless) patching policies, which in such cases are often not adhered to in practice, creating a gap between documentation and reality typical of any corporate bureaucracy. If this part of IT is outsourced, everything becomes contract negotiation, including this, and the patching situation becomes even more complicated, with additional levels of indirection and bureaucratic avoidance of responsibility.

Often patching is elevated to such high priority that it consumes the lion's share of sysadmin effort (and is loved by sysadmins who are not good for anything else, as they can report their accomplishments each month ;-), while other important issues linger in obscurity and disrepair.

While not patching servers at all is a questionable policy (bordering on recklessness), too-frequent patching of servers is the other extreme, and extremes meet. As Talleyrand advised young diplomats, "First and foremost, not too much zeal." This is fully applicable to the patching of RHEL servers.

The security of a Linux server (taking into account that RHEL is the Microsoft Windows of the Linux world, and the most attacked flavor of Linux) is mostly an architectural issue -- especially a network architecture issue.

For example, using address translation (a private subnet within the enterprise) is now a must. A proxy for some often-abused protocols is desirable.

As an attack can happen not only via the main interface but also via the management interface, and you do not control the remote management firmware, providing a separate, tightly controlled subnet for DRAC, accessible only from selected IPs belonging to the sysadmin team, improves security far more than patching of DRAC firmware can accomplish (see Linux Servers Most Affected by IPMI Enabled JungleSec Ransomware by Christine Hall).

Similarly, implementing a jumpbox for accessing servers from the DHCP segment (corporate desktops) via ssh also improves ssh security without any patches (in the past OpenSSH was the source of some very nasty exploits, as it is a very complex subsystem). There is no reason why "regular" desktops on the DHCP segment should be able to initiate a login directly (typically IP addresses for sysadmin desktops are made "pseudo-static" by supplying the DHCP server with their MAC addresses).
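On the protected servers this policy can be expressed directly in sshd itself. A sketch (the addresses are hypothetical; AllowUsers patterns of the form user@host, including CIDR hosts, are standard sshd_config syntax):

```
# /etc/ssh/sshd_config fragment: accept logins only from the jumpbox
# and the admins' "pseudo-static" addresses
AllowUsers *@10.10.5.10 *@10.10.5.0/24
```

Any connection from outside those addresses is refused at authentication, regardless of password or key.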

In any case, as soon as you see an email from the top IT honcho congratulating subordinates for achieving 100% compliance with the patching schedule in an enterprise that does not implement those measures, you know what kind of IT and what kind of enterprise it is.

I mentioned two no-nonsense measures (a separate subnet for DRAC/ILO, and a jumpbox for ssh), but there are several other "common sense" measures which affect security far more than any tight patching schedule.

So if those measures are not implemented, creating spreadsheets with the dates of installation of patches is just a perversion of security, not a real security improvement measure -- yet another Dilbertized enterprise activity, one that mainly provides some justification for the existence of the corporate security department. In reality the level of security is by and large defined by the IT network architecture, and no amount of patching will improve a bad enterprise IT architecture.

I think that this never-ending stream of security patches serves a dual purpose, with the secondary goal being to entice customers to keep a continuous Red Hat subscription for the life of the server. For research servers, for example, I am not so sure that updating only on minor releases (say, from CentOS 7.4 to CentOS 7.5) provides a less adequate level of security. And such a schedule creates much less fuss, enabling people to concentrate on more important problems.

And if the server is internet-facing, then a well-thought-out firewall policy and an SELinux enforcing policy block most "remote exploits" even without any patching. Usually only the HTTP port is open (the ssh port in most cases should be served via VPN, or an internal DMZ subnet if the server is on premises), and that limits exposure, patching or no patching, although of course it does not eliminate it. Please understand that the mere existence of the NSA and CIA with their own teams of hackers (and their foreign counterparts) guarantees at any point in time the availability of zero-day exploits for "lucrative" targets at the level of the network stack or lower. Against these zero-day threats, reactive security is helpless (and patching is a reactive security measure). In this sense, the scanning of servers and running of a set of predefined exploits (mainly PHP-oriented) by script kiddies is just noise and does not qualify as an attack, no matter how dishonestly security companies try to hype this threat. The real threat is quite different.
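On RHEL 7 the "only HTTP is open" policy is a few lines of firewalld zone configuration; a minimal sketch of /etc/firewalld/zones/public.xml might look like this:

```
<?xml version="1.0" encoding="utf-8"?>
<!-- minimal public zone: only HTTP/HTTPS exposed; ssh is reachable only
     via VPN / internal subnet, so it is deliberately not listed here -->
<zone>
  <short>Public</short>
  <service name="http"/>
  <service name="https"/>
</zone>
```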

If you need higher security you probably need to switch to a distribution that supports the AppArmor kernel module (SUSE Enterprise is one). Or, if you have competent sysadmins with Solaris certification, to Solaris (on Intel, or even SPARC), which not only provides better overall security, but also implements a "security via obscurity" defense which alone stops 99.95% of Linux exploits dead in their tracks (and outside of encryption algorithms there is nothing shameful in using security via obscurity; even for encryption algorithms it can serve a useful role too ;-). Solaris has RBAC (role-based access control) implemented, which allows compartmentalizing many roles (including root). Sudo is just a crude attempt to "bolt" RBAC onto Linux and FreeBSD. Solaris also has a special hardened version called Trusted Solaris.

Too frequent patching of a server also sometimes introduces subtle changes that break some applications, and these are not easy to detect and debug.

All in all, "too much zeal" in patching servers to keep up with the never-ending stream of RHEL patches often represents a voodoo dance around the fire. Such things happen when the ability to understand a very complex phenomenon is beyond normal human comprehension: an easy "magic" solution, a palliative, is adopted.

Also, as I noted above, in this area the real situation on corporate servers most often does not correspond to the spreadsheets and presentations shown to corporate management.

RHEL licensing

We have two distinct problems with RHEL licensing:

Licensing model: four different types of RHEL licenses

Red Hat is struggling to fence off "copycats" by complicating access to the source of patches, but the problem is that its licensing model is pretty much Byzantine. It is based on a half-dozen different types of subscriptions, and the most useful are pretty expensive. That dictates the necessity of diversification within this particular vendor and of combining/complementing it with licenses from other vendors. With RHEL 7 it no longer makes sense to license all your servers from Red Hat.

As for the Byzantine structure of Red Hat licensing, I resented paying Red Hat for our 4-socket servers to the extent that I stopped using this type of server and completely switched to two-socket servers, which, with Intel's rising core counts, was an easy escape from RHEL restrictions and greed. Currently Red Hat probably has the most complex, most Byzantine system of subscriptions, comparable with IBM (which is probably the leader in "licensing obscurantism" and in the ability to force customers to overpay for its software ;-).

There are at least four different RHEL licenses for real (hardware-based) servers:

  1. Self-support. If you have many identical or almost identical servers or virtual machines, it does not make sense to buy standard or premium licenses for all of them. One can be bought for a single server, and all others can use self-support licenses, which provide access to patches. These are sometimes used for a group of servers, for example a park of webservers, with only one server getting standard or premium licensing.
  2. Standard. This actually means web-only support, although formally you can try chat and phone during business hours (if you manage to get to a support specialist). This level of subscription provides a better set of repositories, but Red Hat is playing games in this area and some of them now require additional payment.
  3. Premium. Web and phone, with phone 24x7 for severity 1 and 2 problems. Here you really can get a specialist on the phone if you have a critical problem, even after hours.
  4. HPC computational nodes. These are the cheapest, but they have a limited number of packages in the distribution and repositories. In this sense, using Oracle Linux self-support licenses is a better deal for computational nodes than this type of RHEL license; sometimes CentOS can be used too, which eliminates the licensing problem completely but adds other problems (access to repositories is sometimes a problem, as too many people are using too few mirrors). In any case, I have positive experience using CentOS for computational clusters that run bioinformatics software. The headnode can be licensed from Red Hat or Oracle. Recently I have found that Dell provides pretty good support for headnode-type problems, better than Red Hat's. The same is probably true for HP's Red Hat support.
  5. No-cost RHEL Developer Subscription, available since March 2016. I do not know much about this license.

There are also several restricted support contracts:

The RHEL licensing scheme is based on so-called "entitlements", which, oversimplifying, is one license for a 2-socket server. In the past they were "mergeable", so if your 4-socket license expired and you had two spare two-socket licenses, Red Hat was happy to accommodate your needs. Now they are not, and they use the IBM approach to this issue. And that's a problem :-).

Now you need the right mix if you have different classes of servers. All is fine until you use a mixture of licenses (for example, some cluster licenses, some patch-only (aka self-support), some premium licenses -- four types of licenses altogether). In the past it was almost impossible to predict where a particular license would land. But if you used a uniform server park, that scheme worked reasonably well (actually better than the current model). Red Hat fixed the unpredictability of where a particular license goes (now you have full control), but created a new problem: if your license expires, it is now your problem -- with subscription manager it is not replaced by a license from the "free license pool".

This path works well, but to cut costs you now need to buy a five-year license with the server, so as to capitalize the cost of the Red Hat license. But a five-year term means that you lose the ability to switch to a different flavor of Linux. Most often this is not a problem, but still.

This is also a good solution for computational cluster licenses -- Dell and HP can install basic cluster software on the enclosure for a minimal fee. They try to force on you additional software which you might not want or need (Bright Cluster Manager in the case of Dell), but that's a solvable issue (you just do not extend the support contract after the evaluation period ends).

And believe me, this HPC architecture is very useful outside computational tasks, so the HPC node license can be used outside the areas where computational clusters are normally deployed, to cut licensing costs. It is actually an interesting paradigm for managing a heterogeneous datacenter. The only problem is that you need to learn to use it :-). For example, SGE is a well-engineered scheduler (originally from Sun, later open sourced) that with some minimal additional software can be used as an enterprise scheduler. While it is free software it beats many commercial offerings, and while it lacks calendar scheduling, any calendar scheduler can be used with it to compensate (even cron -- in this case each cron task becomes an SGE submit script). Another advantage is that it is no longer updated, as updates of commercial software are often directed at milking customers and add nothing to, or even subtract from, the value of the software. Other cluster schedulers can be used instead of SGE as well.
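To illustrate the cron-as-calendar-scheduler point: a crontab entry on the headnode can simply submit an SGE job at the scheduled time, letting SGE handle queueing, resource control and logging (the queue name and script path below are hypothetical examples):

```
# headnode crontab fragment: cron supplies the calendar, SGE the scheduling
# submit the nightly report to the cluster at 02:30
30 2 * * *  /usr/bin/qsub -q batch.q -j y -o /var/log/sge/nightly.log /opt/jobs/nightly_report.sh
```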

Using an HPC-like configuration with a "headnode" and "computational nodes" is an option to lower the fees if you use multiple similar servers which do not need any fancy software (for example, a blade enclosure with 16 blades used as an HTTP server farm, or an Oracle DB farm). It is relatively easy to organize a particular set of servers as a cluster, with SGE (or another similar scheduler) installed on the head node and a common NFS (or GPFS) filesystem exported to all nodes. Such a common filesystem is ideal for complex maintenance tasks, and it alone serves as a poor man's software configuration system. Just add a few simple scripts and parallel execution software, and in many cases this is all you need for configuration management ;-). It is amazing that many aspects/advantages of this functionality are not that easy to replicate with Ansible, Puppet, or Chef.
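A minimal sketch of this "poor man's configuration management" idea, assuming a master tree kept on the shared NFS mount (the paths below are hypothetical and overridable), could be a script that each node runs from cron:

```shell
#!/bin/bash
# Poor man's configuration management over a shared NFS mount: each node
# copies changed files from a master tree into its local filesystem.
# MASTER/TARGET defaults are hypothetical examples.
MASTER=${MASTER:-/nfs/admin/master/etc}
TARGET=${TARGET:-/etc}

apply_config() {
    local rel src dst
    ( cd "$MASTER" && find . -type f ) | while read -r rel; do
        rel=${rel#./}                 # strip the leading ./ from find output
        src=$MASTER/$rel
        dst=$TARGET/$rel
        # copy only when the file is missing or differs from the master copy
        if ! cmp -s "$src" "$dst"; then
            mkdir -p "$(dirname "$dst")"
            cp -p "$src" "$dst"
            echo "updated $dst"
        fi
    done
}
```

Idempotency (a second run changes nothing) is what makes such a script safe to run from cron on every node.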

BTW, Hadoop is now a fashionable thing (while being just a simple case of distributed search), and you can always claim that this is a Hadoop-type service, which justifies calling such an architecture a cluster. In this case you can easily pay for a premium license for the headnode and one node, while all other computational nodes are $100 a year each or so.

As you can mix and match licenses, it is recommended to buy Oracle self-support licenses instead of RHEL self-support licenses (the Oracle license costs $119 per year the last time I checked). It provides a better set of repositories and does not entail any Red Hat restrictions, so, from another point of view, why bother licensing self-support servers from Red Hat at all?

The idea of a Guinea Pig node

Rising costs also created a strong preference for creating server groups in which only one server has an expensive license and is used as a guinea pig for problems, while all the others enjoy self-support licenses. That allows you to get full support for a complex problem by replicating it on the Guinea Pig node. And outside the financial industry, companies are now typically tight on money for IT.

It is natural that at a certain, critical level of complexity, qualified support disappears. Real RHEL gurus are unique people with tremendous talent, and they are exceedingly rare, a dying breed. Now what you get is low-level support, which mostly consists of pointing you to a relevant (or often an irrelevant) article in the Red Hat knowledgebase. With patience you can upgrade your ticket and get to a proper specialist, but the amount of effort might not justify it -- for the same amount of time and money you might be better off using good external consultants, for example from universities, which have high-quality people in need of additional money.

In most cases the level of support can still be viewed as acceptable only if you have a Premium subscription. But at least for me this creates resentment; that is why I am now trying to install CentOS or Scientific Linux instead, if they work OK for a particular application. You can buy a Premium license for only one node out of several similar ones, saving on the rest and using this node as a Guinea Pig for problems.

The new licensing scheme, while improving the situation in many areas, has a couple of serious drawbacks, such as the handling of 4-socket servers and the license expiration problem. We will talk about 4-socket servers later (and who needs them now that Intel packs 16 or more cores into one CPU ;-). But the tight control of licenses by the subscription manager, which Red Hat probably enjoys, is a hassle for me: if a license expires, it is now "your problem", as there is no automatic renewal from the pool of available licenses (which was one of the few positive things about the now discontinued RHN).

Instead, you need to write scripts and deploy them on all nodes to be able to run the patch command on all nodes simultaneously via some parallel execution tool (which is an acceptable way to deploy security patches after testing).
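A minimal sketch of such a parallel execution wrapper, assuming passwordless ssh from the admin node and a node list file on the shared filesystem (the path is a hypothetical example):

```shell
#!/bin/bash
# Run the same command on every node listed in a file, in parallel.
# NODELIST default is a hypothetical path; override as needed.
NODELIST=${NODELIST:-/nfs/admin/nodes.txt}

run_on_all() {
    local node
    while read -r node; do
        case "$node" in ''|\#*) continue ;; esac   # skip blanks and comments
        ssh -o BatchMode=yes "$node" "$@" &
    done < "$NODELIST"
    wait    # block until every node has finished
}

# typical use after testing patches on the guinea pig node:
#   run_on_all yum -y update --security
```

Dedicated tools such as pdsh do the same job with better error reporting, but a dozen lines of shell over the shared filesystem often suffice.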

And please note that large corporations typically overpay Red Hat for licensing, as they have bad or non-operational licensing controls and prefer to err on the side of more licenses than they need. This adds insult to injury -- why on earth can't we use available free licenses to automatically replace a license when it expires and an equivalent pool of licenses sits available and unused?

Diversification of RHEL licensing and tech support providers

Of course, diversifying licensing away from Red Hat is now an option that should be given serious consideration. One important fact is that college graduates now come to the enterprise with knowledge of Ubuntu, and because of that they tend to deploy applications using Docker, as they are more comfortable and more knowledgeable in Ubuntu than in CentOS or Red Hat Enterprise. This is a factor that should be considered.

But there is an obvious Catch-22 with switching to another distribution: adding Debian/Devuan to the mix is not free either -- introducing "yet another Linux distribution" usually carries approximately the same cost as the switch to RHEL7, estimated at around a 20-30% loss of sysadmin productivity. As such, "diversification" of flavors of Linux in an enterprise environment should generally be avoided at all costs.

So while paying Red Hat for continuing systemd development is not the best strategy, switching to an alternative distribution which is not a RHEL derivative, and which typically uses a different package manager, entails substantial costs and substantial uncertainty. Many enterprise applications simply do not support flavors other than Red Hat, and they require a licensed server for installation. So the pressure to conform to the whims of Red Hat brass is high, and most people are not ready to drop Red Hat just because of the problems with systemd. So strategies for mitigating the damage caused by systemd are probably the most valuable avenue of action. One such strategy is diversification of RHEL licensing and of tech support providers, while not abandoning RHEL-compatible flavors.

This diversification strategy should first of all include larger use of CentOS as well as Oracle Linux as more cost-effective alternatives. CentOS is not a panacea, and it has problems with installation on typical enterprise-class servers: often Anaconda does not properly recognize the hardware of even a more or less current (say, less than three years old) server. But if it installs OK and works, why not?

One positive thing is that this allows you to purchase the more costly "premium" license for servers that really matter. Also, if you have 16 identical blades, you can purchase only one premium license and use self-support licenses for the rest. After all, both hardware and OS are identical; in HPC cluster configurations they are absolutely identical. Taking into account the increased complexity of RHEL7, buying such a license is simply sound insurance against increased uncertainty. You do need at least one such license for each given class of servers at the datacenter, plus one for certain critical servers, for example the HPC cluster headnode, if you have computational clusters in your datacenter.

A generalization of this idea is the creation of "license groups", in which only one server is licensed with an expensive premium subscription and all other identical or similar servers run with the minimal self-support license. This plan can be executed with Oracle even better than with Red Hat, as Oracle has a lower price for the self-support subscription.

The second issue is connected with using Red Hat as the sole support provider. For those who use Dell or HP hardware, one attractive alternative is to use them instead, as both Dell and HP provide "in-house" support of RHEL for their hardware. The cost can be capitalized with a five-year support contract at the time of purchase.

Dell or HP engineers who provide support for RHEL know their hardware much better than Red Hat engineers. So in the critical area where it is unclear whether a problem is an OS/driver issue or a hardware issue, they are easier to work with. Dell actually helps you to compile and test a new driver in such cases (I had one case when a 4-port Intel 10Gbit card that came with a Dell blade had a broken driver in the regular RHEL distribution, and it needed to be replaced with a custom-compiled driver). They are also noticeably better at debugging complex cases when the server cannot start normally, and there are some very tricky cases here. For example, a problem on a Dell server can be connected with the BIOS but demonstrate itself at the OS level.

BTW, a less well-known and less popular vendor of enterprise Linux -- SUSE -- provides RHEL support too. In the past, for large customers, SUSE used to provide a "dedicated engineer" who could serve as your liaison to developers and tier III support.

Oracle, in addition to having its own distribution, can also provide support; it is easier to get to an engineer from them in case of a complex problem than is the case with Red Hat.

It is important to understand that, due to the excessive complexity of RHEL7 and the flow of tickets related to systemd, Red Hat tech support has mostly degenerated to the level of "pointing to a relevant Knowledgebase article." Sometimes the article is relevant and helps to solve the problem, but often it is just a "go to hell" type of response; an imitation of support, if you wish.

In the past (in the time of RHEL 4) the quality of support was much better, and you could even discuss your problem with the support engineer. Now it is unclear what we are paying for. My experience suggests that the most complex problems are typically, in one way or another, connected with hardware interaction with the OS. If this observation is true, it might be better to use alternative providers, which in many cases deliver higher-quality tech support because they are more specialized.

So if you have substantial money allocated to support (and here I mean 100 or more systems to support), you should probably be thinking about a third party that suits your needs too.

Note about licensing management system

There are two licensing systems used by Red Hat:

  1. Classic (RHN) -- the old system, phased out in mid-2017 (now of only historical interest).
  2. "New" (RHSM) -- a newer, better system used predominantly on RHEL 6 and 7. Obligatory since August 2017.

RHSM is complex and requires study. Many hours of sysadmin time are wasted on mastering its complexities, while in reality it is just overhead that allows Red Hat to charge money for the product. The fact that they are NOT supporting it well tells us a lot about the level of deterioration of the company. Those Red Hat honchos with high salaries have essentially created a new job in the enterprise environment -- license administrator. Congratulations!

All in all, Red Hat successfully created an almost impenetrable mess of obsolete and semi-obsolete notes, poorly written and incomplete documentation, dismal diagnostics, and poor troubleshooting tools. The level of frustration sometimes reaches such a point that people just abandon RHEL; I did for several non-critical systems. If CentOS or Academic Linux works, there is no reason to suffer from Red Hat licensing issues. That also makes Oracle, surprisingly, a more attractive option :-). Oracle Linux is also cheaper. But usually you are bound by corporate policy here.

The "new" subscription system (RHSM) is slightly better than RHN for large organizations, but it created new problems, for example the problem with 4-socket servers, which are now treated as a distinct entity from two-socket servers. In old RHN, the set of your "entitlements" was treated uniformly as licensing tokens and could cover various numbers of sockets (the default is 2): for a 4-socket server it would just take two 2-socket licenses. This was a pretty logical (albeit expensive) solution. This is not the case with RHSM. They want you to buy a specific license for a 4-socket server, and generally those are tied to the upper levels of RHEL licensing (no self-support for 4-socket servers). In RHN, at least, licenses were eventually converted into a kind of uniform licensing token assigned to unlicensed systems more or less automatically (for example, if you had a 4-socket system, two tokens were consumed). With RHSM this is not true, which creates a set of complex problems for large enterprises. In general, licensing by physical socket (or worse, by number of cores -- an old IBM trick) is a dirty trick by Red Hat which points to its direction in the future.

RHSM allows you to assign a specific license to a specific box and to list the current status of licensing. But like RHN, it requires proxy settings in its configuration file; it does not take them from the environment. If the company has several proxies and you have a mismatch, you can be royally screwed. In general, you need to check the consistency of your environment settings against the conf file settings. The level of understanding of proxy environments by RHEL tech support is basic or worse, so they use the database of articles instead of actually troubleshooting based on sosreport data. Moreover, each day there might be a new person working on your ticket, so there is no continuity.
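A quick sanity check of this kind is easy to script; here is a sketch comparing the proxy host in the environment with the proxy_hostname setting in rhsm.conf (the config path is overridable so the idea can be tried outside a RHEL box):

```shell
#!/bin/bash
# Compare the proxy host in the environment with proxy_hostname in
# rhsm.conf; a mismatch is a classic cause of registration failures.
RHSM_CONF=${RHSM_CONF:-/etc/rhsm/rhsm.conf}

check_proxy() {
    local conf_host env_host
    conf_host=$(awk -F' *= *' '/^proxy_hostname/ {print $2}' "$RHSM_CONF")
    # strip the scheme and port from the environment variable, if any
    env_host=$(printf '%s' "${https_proxy:-$http_proxy}" | sed -e 's|^.*://||' -e 's|:.*$||')
    if [ -n "$env_host" ] && [ "$conf_host" != "$env_host" ]; then
        echo "MISMATCH: rhsm.conf says '$conf_host', environment says '$env_host'"
        return 1
    fi
    echo "OK: proxy settings look consistent"
}
```

Running such a check before opening a ticket saves a round trip with first-level support.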

The RHEL System Registration Guide is weak and does not cover more complex cases and typical mishaps.

One genuine improvement is that RHSM creates a pool of licenses for you and gives you the ability to assign the more expensive licenses to the most valuable servers.


Learn More

The RHEL System Registration Guide outlines the major options available for registering a system (and carefully avoids mentioning bugs and pitfalls, of which there are many). For some reason, migration from RHN to RHSM usually worked well.

Also possibly useful (to the extent any poorly written Red Hat documentation is useful) is the document How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager. At least it tries to answer some of the most basic questions.

Other problems with RHEL 7

Binding customers with a never ending stream or patches

Theoretically, patching should be performed rarely and after considerable testing, as the possibility of breaking something is real. But in the Red Hat world patching became a regular religious ritual which, while devoid of most meaning, is still adhered to by most customers. This is mainly a way to justify the subscription model: no subscription -- no patches. In this sense Oracle plays a fairer game.

Some subsystems, such as systemd, are patched almost every patch cycle, representing a variation on the theme of "a car in permanent repair mode".

Even so-called security patches, which represent a small subset of total patches, are highly questionable: Red Hat does not provide any information about what exact vulnerabilities a particular patch addresses, except for really big SNAFUs when some high-profile customers are affected and a scandal unfolds (typically with "externally accessible" daemons such as sshd). So most probably only a few of them matter. But they are less of a nuisance than the application of all patches, and as such they are preferable. All patches for internal utilities that do not directly affect ("broken") functionality should be viewed as optional, because if an intruder can find a way into one of the user accounts, the game is essentially over, unless this is a script kiddie.

Typically Red Hat releases minor updates about once a year. That means you can expect roughly ten minor (dot) updates over the life of a particular version. And true enough, the last minor update for RHEL6 was RHEL 6.10. RHEL7, which proved to be more fragile and less stable than RHEL6, will probably exceed that. As of December 2020 it had reached 7.9, and it still has four years to go.

Generally, I would recommend against too much zeal in applying patches to Red Hat systems. IMHO it is enough to update only on minor releases (with some lag) and to use only security patches for your quarterly or monthly updates until a new minor release emerges, unless you experience real problems with a particular version. Which now happens more and more often.
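The "security patches only between minor releases" policy maps directly onto yum's security plugin (built into yum on RHEL 7; the separate yum-plugin-security package on RHEL 6). A sketch of a typical session:

```
# show pending security advisories only
yum updateinfo list security

# apply security errata only, leaving everything else at the current version
yum -y update --security

# or, more conservatively, upgrade only to the minimal versions that fix them
yum -y update-minimal --security
```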

The contribution to security provided by the security updates is by and large an illusion, as RHEL is so complex that it is reasonable to assume it contains an infinite number of zero-day exploits. As such, it cannot be made more secure by applying a couple of dozen fixes. So at best this is a ritual, a variation of the classic sysadmin ritual of "waving a dead chicken": security now is mainly an architectural matter, with the firewall being the first and most important line of defense. SELinux can also play a role, but it is very complex, and few sysadmins learn to use it properly.

In other words, no amount of patches can change Red Hat for the better, much like no amount of patches can change Microsoft Windows for the better. It is an inherently insecure system, especially against a qualified and determined attacker with financial resources and/or financial incentives.

In this sense RHEL sucks even more than SUSE and Ubuntu: its SELinux subsystem is very complex and few sysadmins learn to use it correctly, and it does not have the much simpler and more elegant AppArmor subsystem, which solves the problem for packages that use ports to communicate (such as http, ssh, and other servers) by essentially providing a per-application value of umask for each directory.

Red Hat provides urgent bug fixes only for the following 20 packages:

  1. bind,
  2. bash,
  3. chrony,
  4. grub2,
  5. grubby,
  6. glibc,
  7. gnutls,
  8. kernel,
  9. libgcrypt,
  10. libvirt,
  11. nss,
  12. openssh,
  13. openssl,
  14. python 3.6,
  15. qemu-kvm,
  16. rpm,
  17. sudo,
  18. systemd,
  19. wget,
  20. yum/dnf

Urgent bugs in packages not on the list may be addressed at Red Hat's discretion. The policy applies ONLY to currently active minor releases, per the Red Hat Enterprise Linux Life Cycle Policy.

Note: Active minor releases refers to the period of time a specific minor release is maintained. For example, RHEL 8.1 with EUS is active for 24 months from general availability.

Mitigating damage done by systemd

Some  tips:

Sometimes you just need to be inventive and add an additional startup script to mitigate the damage. Here is one realistic example (from "systemd sucks"):

How to umount NFS before killing any processes. or How to save your state before umounting NFS. or The blind and angry leading the blind and angry down a thorny path full of goblins. April 29, 2017

A narrative, because the reference-style documentation sucks.

So, rudderless Debian installed yet another god-forsaken solipsist piece of over-reaching GNOME-tainted garbage on your system: systemd. And you've got some process like openvpn or a userspace fs daemon or so on that you have been explicitly managing for years. But on shutdown or reboot, you need to run something to clean up before it dies, like umount. If it waits until too late in the shutdown process, your umounts will hang.

This is a very very very common variety of problem. To throw salt in your wounds, systemd is needlessly opaque even about the things that it will share with you.

"This will be easy."

Here's the rough framework for how to make a service unit that runs a script before shutdown. I made a file /etc/systemd/system/greg.service (you might want to avoid naming it something meaningful because there is probably already an opaque and dysfunctional service with the same name already, and that will obfuscate everything):

[Unit]
Description=umount nfs to save the world
After=networking.service

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutSec=10
ExecStart=/bin/true
ExecStop=/root/bin/umountnfs


The man pages systemd.unit(5) and systemd.service(5) are handy references for this file format. Roughly, After= indicates which service this one is nested inside -- units can be nested, and this one starts after networking.service and therefore stops before it. The ExecStart is executed when it starts, and because of RemainAfterExit=yes it will be considered active even after /bin/true completes. ExecStop is executed when it ends, and because of Type=oneshot, networking.service cannot be terminated until ExecStop has finished (which must happen within TimeoutSec=10 seconds or the ExecStop is killed).

If networking.service actually provides your network facility, congratulations, all you need to do is systemctl start greg.service, and you're done! But you wouldn't be reading this if that were the case. You've decided already that you just need to find the right thing to put in that After= line to make your ExecStop actually get run before your manually-started service is killed. Well, let's take a trip down that rabbit hole.

The most basic status information comes from just running systemctl without arguments (equivalent to list-units). It gives you a useful triple of information for each service:

greg.service                loaded active exited

loaded means it is supposed to be running. active means that, according to systemd's criteria, it is currently running and its ExecStop needs to be executed some time in the future. exited means the ExecStart has already finished.

People will tell you to put LogLevel=debug in /etc/systemd/system.conf. That will give you a few more clues. There are two important steps about unit shutdown that you can see (maybe in syslog or maybe in journalctl):

systemd[4798]: greg.service: Executing: /root/bin/umountnfs
systemd[1]: rsyslog.service: Changed running -> stop-sigterm

That is, it tells you about the ExecStart and ExecStop rules running. And it tells you about the unit going into a mode where it starts killing off the cgroup (I think cgroup used to be called process group). But it doesn't tell you what processes are actually killed, and here's the important part: systemd is solipsist. Systemd believes that when it closes its eyes, the whole universe blinks out of existence.

Once systemd has determined that a process is orphaned -- not associated with any active unit -- it just kills it outright. This is why, if you start a service that forks into the background, you must use Type=forking, because otherwise systemd will consider any forked children of your ExecStart command to be orphans when the top-level ExecStart exits.

So, very early in shutdown, it transitions a ton of processes into the orphaned category and kills them without explanation. And it is nigh unto impossible to tell how a given process becomes orphaned. Is it because a unit associated with the top level process (like getty) transitioned to stop-sigterm, and then after getty died, all of its children became orphans? If that were the case, it seems like you could simply add to your After rule.


For example, my openvpn process was started from /etc/rc.local, so systemd considers it part of the unit rc-local.service (defined in /lib/systemd/system/rc-local.service). So After=rc-local.service saves the day!

Not so fast! The openvpn process is started from /etc/rc.local on bootup, but on resume from sleep it winds up being executed from /etc/acpi/actions/. And if it fails for some reason, I start it again manually under su.

So the inclination is to just make a longer After= line:

After=networking.service acpid.service

Maybe one of the getty units? Maybe systemd-user-sessions.service? How about adding all the items from After= to Requires= too? Sadly, no. It seems that anyone who goes down this road meets with failure. But I did find something which might help you if you really want to:

systemctl status 1234

That will tell you what unit systemd thinks that pid 1234 belongs to. For example, an openvpn started under su winds up owned by /run/systemd/transient/session-c1.scope. Does that mean if I put After=session-c1.scope, I would win? I have no idea, but I have even less faith. systemd is meddlesome garbage, and this is not the correct way to pay fealty to it.

I'd love to know what you can put in After= to actually run before vast and random chunks of userland get killed, but I am a mere mortal and systemd has closed its eyes to my existence. I have forsaken that road.

I give up, I will let systemd manage the service, but I'll do it my way!

What you really want is to put your process in an explicit cgroup, and then you can control it easily enough. And luckily that is not inordinately difficult, though systemd still has surprises up its sleeve for you.

So this is what I wound up with, in /etc/systemd/system/greg.service:

Description=openvpn and nfs mounts
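The directives themselves might look roughly like this (a sketch reconstructed from the narrative; every line except the Description is an assumption, not the author's actual file):

```ini
# Hypothetical reconstruction of /etc/systemd/system/greg.service
[Unit]
Description=openvpn and nfs mounts
After=networking.service

[Service]
Type=forking
ExecStart=/usr/sbin/openvpn --daemon --config /etc/openvpn/client.conf
# Runs on any stop, including shutdown; the script checks $EXIT_STATUS
# to decide whether the NFS mounts should actually be torn down.
ExecStop=/root/bin/umountnfs
```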


Here's roughly the narrative of how all that plays out:

So, this EXIT_STATUS hack... If I had made the NFS its own service, it might be strictly nested within the openvpn service, but that isn't actually what I desire -- I want the NFS mounts to stick around until we are shutting down, on the assumption that at all other times, we are on the verge of openvpn restoring the connection. So I use the EXIT_STATUS to determine if umountnfs is being called because of shutdown or just because openvpn died (anyways, the umount won't succeed if openvpn is already dead!). You might want to add an export > /tmp/foo to see what environment variables are defined.
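The EXIT_STATUS branch inside umountnfs can be sketched like this (assumed logic, not the author's script; note that systemd only exports EXIT_STATUS to ExecStop commands since v232):

```shell
#!/bin/sh
# Hypothetical sketch of the decision logic in /root/bin/umountnfs.
umountnfs_sketch() {
    if [ "${EXIT_STATUS:-0}" = "0" ]; then
        # Clean stop: we really are shutting down, release the mounts.
        echo "clean stop: unmounting NFS shares"
        # umount -a -t nfs,nfs4    # the real work, elided in this sketch
    else
        # openvpn died on its own; keep the mounts for the restart.
        echo "openvpn exited with status ${EXIT_STATUS}: keeping mounts"
    fi
}

EXIT_STATUS=0 umountnfs_sketch
EXIT_STATUS=9 umountnfs_sketch
```

As the text suggests, an export > /tmp/foo inside the real script is the quickest way to see which variables systemd actually passes in.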

And there is a huge caveat here: if something else in the shutdown process interferes with the network, such as a call to ifdown, then we will need to be After= that as well. And, worse, the documentation doesn't say (and user reports vary wildly) whether systemd will wait until your ExecStop completes before starting the dependent unit's ExecStop. My experiments suggest Type=oneshot will cause that sort of delay...not so sure about Type=forking.

Fine, sure, whatever. Let's sing Kumbaya with systemd.

I have the idea that Wants= vs. Requires= will let us use two services and do it almost how a real systemd fan would do it. So here are my files:

Then I replace the killall -9 openvpn with systemctl stop greg-openvpn.service, and I replace systemctl start greg.service with systemctl start greg-nfs.service, and that's it.

The Requires=networking.service enforces the strict nesting rule. If you run systemctl stop networking.service, for example, it will stop greg-openvpn.service first.

On the other hand, Wants=greg-openvpn.service is not as strict. On systemctl start greg-nfs.service, it launches greg-openvpn.service, even if greg-nfs.service is already active. But if greg-openvpn.service stops or dies or fails, greg-nfs.service is unaffected, which is exactly what we want. The icing on the cake is that if greg-nfs.service is going down anyways, and greg-openvpn.service is running, then it won't stop greg-openvpn.service (or networking.service) until after /root/bin/umountnfs is done.
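Since the unit files themselves are not reproduced above, here is a rough sketch of the pair (the paths and the mountnfs helper are assumptions):

```ini
# Hypothetical /etc/systemd/system/greg-openvpn.service
[Unit]
Description=openvpn link
# Strict nesting: stopping networking.service stops this unit first.
Requires=networking.service
After=networking.service

[Service]
Type=forking
ExecStart=/usr/sbin/openvpn --daemon --config /etc/openvpn/client.conf

# Hypothetical /etc/systemd/system/greg-nfs.service
[Unit]
Description=nfs mounts over the vpn
# Wants= starts greg-openvpn along with us, but its death does not stop us.
Wants=greg-openvpn.service
After=greg-openvpn.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/bin/mountnfs
ExecStop=/root/bin/umountnfs
```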

Exactly the behavior I wanted. Exactly the behavior I've had for 14 years with a couple of readable shell scripts. Great, now I've learned another fly-by-night proprietary system.

GNOME, you're as bad as MacOS X. No, really. In February of 2006 I went through almost identical trouble learning Apple's configd and Kicker for almost exactly the same purpose, and never used that knowledge again -- Kicker had already been officially deprecated before I even learned how to use it. People who will fix what isn't broken never stop.

As an aside - Allan Nathanson at Apple was a way friendlier guy to talk to than Lennart Poettering is. Of course, that's easy for Allan -- he isn't universally reviled.

A side story

If you've had systemd foisted on you, odds are you've got Adwaita theme too.

rm -rf /usr/share/icons/Adwaita/cursors/

You're welcome. Especially if you were using one of the X servers where animated cursors are a DoS. People who will fix what isn't broken never stop.

[update August 10, 2017]

I found out the reason my laptop double-unsuspends and shows other crazy behavior is systemd. I found out systemd has hacks that enable a service to call into it through dbus and tell it not to be stupid, but those hacks have to be done as a service! You can't just run dbus on the command line, or edit a config file. So in a fit of pique I followed the directions for uninstalling systemd.

It worked marvelously and everything bad fixed itself immediately. The coolest part is restoring my hack to run openvpn without systemd didn't take any effort or thought, even though I had not bothered to preserve the original shell script. Unix provides some really powerful, simple, and *general* paradigms for process management. You really do already know it. It really is easy to use.

I've been using sysvinit on my laptop for several weeks now. Come on in, the water's warm!

So this is still a valuable tutorial for using systemd, but the steps have been reduced to one: DON'T.

[update September 27, 2017]

systemd reinvents the system log as a "journal", which is a binary format log that is hard to read with standard command-line tools. This was irritating to me from the start because systemd components are staggeringly verbose, and all that shit gets sent to the console when the services start/stop in the wrong order such that the journal daemon isn't available. (side note, despite the intense verbosity, it is impossible to learn anything useful about why systemd is doing what it is doing)

What could possibly motivate such a fundamental redesign? I can think of two things off the top of my head: The need to handle such tremendous verbosity efficiently, and the need to support laptops. The first need is obviously bullshit, right -- a mistake in search of a problem. But laptops do present a logging challenge. Most laptops sleep during the night and thus never run nightly maintenance (which is configured to run at 6am on my laptop). So nothing ever rotates the logs and they just keep getting bigger and bigger and bigger.

But still, that doesn't call for a ground-up redesign, an unreadable binary format, and certainly not deeper integration. There are so many regular userland hacks that would resolve such a requirement. But nevermind, because.

I went space-hunting on my laptop yesterday and found an 800MB journal. Since I've removed systemd, I couldn't read it to see how much time it had covered, but let me just say, they didn't solve the problem. It was neither an efficient representation where the verbosity cost is ameliorated, nor a laptop-aware logging system.

When people are serious about re-inventing core Unix utilities, like ChromeOS or Android, they solve the log-rotation-on-laptops problem.
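For what it is worth, the classic userland fix is anacron, which runs the daily jobs (log rotation included) whenever the laptop happens to be awake; a stock /etc/anacrontab entry looks like:

```shell
# /etc/anacrontab fragment:
# period(days)  delay(min)  job-identifier  command
1               5           cron.daily      run-parts /etc/cron.daily
```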

Pretty convoluted RPM packaging system which creates problems

The idea of RPM was to simplify installation of complex packages. But it created a set of problems of its own, especially ones connected with libraries (which is not exactly a Red Hat problem; it is a Linux-wide problem called "library hell"). One example is the so-called multilib problem that is detected by YUM:

--> Finished Dependency Resolution

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for libicu which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of libicu of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude libicu.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of libicu installed, but
            yum can only see an upgrade for one of those architectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of libicu installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: libicu-4.2.1-14.el6.x86_64 != libicu-4.2.1-11.el6.i686

The idea of a precompiled package is great until it is not. And that is where we are now. Important packages, such as the R language or the Infiniband drivers from Mellanox, routinely block the ability to patch RHEL 6 systems.
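A common workaround is to pin the offending packages in yum's configuration so that routine patching skips them; the package globs below are illustrative, not a recommendation:

```ini
# /etc/yum.conf fragment: exclude= in [main] makes every yum
# transaction skip matching packages (same as --exclude on the CLI).
[main]
exclude=R-* mlnx-ofa_kernel*
```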

The total number of packages installed is just way too high, with many overlapping packages. Typically it is over one thousand, unless you use the base system or an HPC computational-node distribution; in the latter case it is still over six hundred. So there is no way you can understand the package structure of your system without special tools. RPM became a complex and not very transparent layer of indirection between you and the binaries installed on the system. And the level at which you know this subsystem is now an important indication of the qualification of a sysadmin, along with networking and the LAMP stack or its enterprise variations.

The number of daemons running in a default RHEL installation is also very high, and few sysadmins understand what all those daemons are doing and why they are running after startup. In other words, we have already entered the Microsoft world: RHEL is the Microsoft Windows of the Linux world. And with systemd pushed down the throat of enterprise customers, you will understand even less.

And while Red Hat support is expensive, its help in such cases is marginal. It looks like they are afraid not only of customers, but of their own packages too. All those guys do is look into a database to see whether a similar problem has already been described. That works for some problems, but for the more difficult ones it usually does not. Using a free version of Linux such as CentOS is an escape, but with commercial applications you are asking for trouble: the vendor can easily blame the OS for the problem you are having, and then you are left holding the bag.

No effort is made to consolidate those hundreds of overlapping packages (some of them barely supported or unsupported). This "library hell" is a distinct feature of the modern enterprise Linux distribution.

 When /etc/resolv.conf is no longer a valid DNS configuration file

In RHEL7, NetworkManager is the default configuration tool for the entire networking stack, including DNS resolution. One interesting problem with NetworkManager is that in the default installation it happily overwrites /etc/resolv.conf, putting an end to the Unix era during which you could assume that whatever you wrote into a config file would stay intact, and that any script which generates such a file would detect that it had been changed manually and either re-parse the changes or produce a warning.

In RHEL6 most sysadmins simply deinstalled NetworkManager on servers, and thus never faced its idiosyncrasies. BTW, NetworkManager is part of the Gnome project, which is pretty symbolic taking into account that the word "Gnome" recently became a kind of curse among Linux users (GNOME 3.x has alienated both users and developers ;-).

In RHEL 6 and before, certain installation options excluded NetworkManager by default (Minimal server in RHEL6 is one), and for all others you could deinstall it after the installation. This is no longer recommended in RHEL7, but you can still disable it; see How to disable NetworkManager on CentOS - RHEL 7 for details. Still, as NetworkManager is Red Hat's preferred intermediary for connecting to the network in RHEL7 (it is now present even in the minimal installation), many sysadmins prefer to keep it. People who tried to remove it ran into various types of problems if their setup was more or less non-trivial; see for example How to uninstall NetworkManager in RHEL7. Packages like Docker expect it to be present as well.

A solution not documented by Red Hat exists even with NetworkManager running (CentOS 7 NetworkManager Keeps Overwriting /etc/resolv.conf):

To prevent NetworkManager from overwriting your resolv.conf changes, remove the DNS1, DNS2, ... lines from /etc/sysconfig/network-scripts/ifcfg-*.


...tell NetworkManager to not modify the DNS settings:
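Judging by the user reports quoted below, that boils down to one line in /etc/NetworkManager/NetworkManager.conf:

```ini
# /etc/NetworkManager/NetworkManager.conf
[main]
# "none" tells NetworkManager to leave /etc/resolv.conf alone.
dns=none
```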


After that you need to restart the NetworkManager service. Red Hat does not address this problem properly even in its training courses:

Alex Wednesday, November 16, 2016 at 00:38 - Reply ↓

Ironically, Redhat's own training manual does not address this problem properly.

I was taking a RHEL 7 Sysadmin course when I ran into this bug. I used nmcli thinking it would save me time in creating a static connection. Well, the connection was able to ping IPs immediately, but was not able to resolve any host addresses. I noticed that /etc/resolv.conf was being overwritten and cleared of its settings.

No matter what we tried, there was nothing the instructor and I could do to fix the issue. We finally used the dns=none solution posted here to fix the problem.

Actually the behaviour is more complex and trickier than I described, and hostnamectl produces the same effect of overwriting the file:

Brian Wednesday, March 7, 2018 at 01:08 -

Thank you Ken yes that fix finally worked for me on RHEL 7!

Set dns=none in the [main] section of /etc/NetworkManager/NetworkManager.conf
Tested: with $ cat /etc/resolv.conf both before and after # service network restart and got the same output!

Otherwise I could not find out how to reliably set the search domains list, as I did not see an option in the /etc/sysconfig/network-scripts/ifcfg-INT files.

Brian Wednesday, March 7, 2018 at 01:41 -

Brian again here note that I also had DNS1, DNS2 removed from /etc/sysconfig/network-scripts/ifcfg-INT.

CAUTION: the hostnamectl[1] command will also reset /etc/resolv.conf rather bluntly replacing the default search domain and deleting any nameserver entries. The file will also include the # Generated by NetworkManager header comment.

[1] e.g. #hostnamectl set-hostname newhost.domain static; hostnamectl status
Then notice how that will overwrite /etc/resolv.conf as well

But this is a troubling sign: now, in addition to knowing which configuration file to edit, you need to know which files you can edit and which you can't (or how to disable the default behaviour). Which is a completely different environment, just one step away from the Microsoft Registry.

Moreover, even a minimal installation of RHEL7 has over a hundred *.conf files.
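You can count them on any Linux box with a one-liner:

```shell
# Count the *.conf files under /etc; on even a minimal RHEL7 install
# this comes to well over a hundred.
find /etc -name '*.conf' 2>/dev/null | wc -l
```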

Problems with the architectural vision of Red Hat brass

Both the architectural level of thinking of the Red Hat brass (with daemons like avahi, systemd and NetworkManager installed by default for all types of servers) and clear attempts along the lines of "Not invented here" in virtualization create concerns. It is clear that Red Hat by itself can't become a major virtualization player like VMware; it just does not have enough money for development and marketing. That's why they are now trying to become a major player in the "private cloud" space with Docker.

You would think that the safest bet is to reuse the leader among open source offerings, which at the time was Xen. But the Red Hat brass thinks differently and wants to play a more dangerous poker game: it started promoting KVM, making it an obligatory part of the RHCSA exam. Actually, Red Hat released Enterprise Linux 5 with integrated Xen and then changed its mind around RHEL 5.5 or so. In RHEL 6, Xen is no longer present even as an option; it was replaced by KVM.

What is good is that after ten years they eventually managed, to some extent, to re-implement Solaris 10 zones (without RBAC). In RHEL 7 they are more or less usable.

Security overkill with SELinux

RHEL contains a security layer called SELinux, but in most corporate deployments it is either disabled or operates in permissive mode. The reason is that it is notoriously difficult to configure correctly, and in many cases the game is not worth the candle. This is a classic problem created by overcomplexity: the functionality to make the OS more secure is there, but it is not used or configured properly, because very few sysadmins can operate at the required level of complexity.
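Part of the reason disabling it is so widespread is that it is a one-line change in /etc/selinux/config (plus a reboot for a full mode switch), versus weeks of policy debugging:

```ini
# /etc/selinux/config
# SELINUX= takes enforcing, permissive, or disabled.
SELINUX=permissive
SELINUXTYPE=targeted
```

At runtime, setenforce 0 drops an enforcing system to permissive until the next reboot.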

A firewall, which is a much simpler concept, proved to be tremendously more usable in corporate deployments, especially when you have an obnoxious or incompetent security department (a pretty typical situation in a large corporation ;-). It prevents a lot of stupid questions from utterly incompetent "security gurus" about open ports, and can stop dead the scanning attempts of tools that test for known vulnerabilities, by running which security departments try to justify their miserable existence. Those tools sometimes crash production servers.

Generally it is dangerous to allow the exploits used in such tools, which local script kiddies (aka the "security team") recklessly launch against your production servers (as if checking for a particular vulnerability with an internal script were an inferior solution), without understanding their value (which is often zero) and possible consequences (which are sometimes non-zero ;-).

Another interesting but seldom utilized option is AppArmor, a security module that has been part of the mainstream Linux kernel since 2.6.36. It is considered an alternative to SELinux and is IMHO a more elegant, more understandable solution to the same problem. But you might need to switch to Suse in this case: the Red Hat Enterprise Linux kernel doesn't support AppArmor security modules (Does Red Hat Enterprise Linux support AppArmor).

To get an idea of the level of complexity of SELinux, try to read the RHEL7 Deployment Guide. So it is not accidental that in many enterprise installations SELinux is disabled. Some commercial software packages explicitly recommend disabling it in their installation manuals.

Unfortunately AppArmor, which is/was used in SLES (only by knowledgeable sysadmins ;-), never got traction and never achieved real popularity either (SLES now has SELinux as well, and as such suffers from overcomplexity even more than RHEL ;-). AppArmor is essentially the idea of a "per-application umask" for all major directories, which can stop dead attempts to write to them and/or to read certain sensitive files, even if the application has a vulnerability.

Escaping potential exploits by using Solaris on Intel

As I mentioned above, patching became a high-priority activity for large Red Hat enterprise customers. Guidelines now are strict and usually specify monthly or, at best, quarterly application of security patches. This amount of effort might be better applied elsewhere with a much better return on investment.

Solaris has an interesting security subsystem called RBAC, which allows selectively granting parts of root privileges and can be considered a generalization of sudo. But Solaris is dying in the enterprise environment, as Oracle limits it to its own (rather expensive) hardware. You can, however, run Solaris for x86 in a VM (Xen). IMHO, if you need a really high level of security for a particular server which does not have any fancy applications installed, this might sometimes be the preferable path, despite being a "security via obscurity" solution. There is no point in using Linux in high-security applications, as Linux is the most attacked flavor of Unix, and this situation will not change in the foreseeable future.

This solution is especially attractive if you still have knowledgeable Solaris sysadmin from "the old guard" on the floor. Security via obscurity actually works pretty well in this case; add to this RBAC capabilities and you have a winner.  The question here is why take  additional risks with "zero day" Linux exploits,  if you can avoid them. See Potemkin Villages of Computer Security  for more detailed discussion.

I never understood IT managers who spend additional money on enhancing Linux security, especially via "security monitoring" solutions from third-party providers, which more often than not are a complete, but expensive, fake. That approach was pioneered by ISS more than ten years ago.

Normal hardening scripts are OK, but spending money on some fancy and expensive system to enhance Linux security is a questionable option in my opinion, as Linux will always be the most attacked flavor of Unix, with the greatest number of zero-day exploits. That is especially true for foreign companies operating in the USA. You can be sure that the specialists at the NSA are well ahead of any hackers in zero-day exploits for Linux (and if not Linux, then CISCO is good enough too ;-)

So instead of the Sisyphean task of enhancing Linux security by keeping up with the patching schedule (a typical large-enterprise mantra, as this is the path most obvious to, and best understood by, the corporate IT brass), it makes sense to switch to a different OS for critical servers, especially Internet-facing ones, or to use a security appliance to block most of the available paths to a particular server group. For example, one typical blunder is allocating IP addresses for DRAC/ILO remote controls on the same segment as the main server interface; those "internal appliances" rarely have up-to-date firmware, sometimes have default passwords, and are in general much more hackable, so they need additional protection by a separate firewall that limits access to selected sysadmin desktops with static IP addresses.

My choice would be Oracle Solaris, as this is an OS well architected by Sun Microsystems, with an excellent security record and additional, unique security mechanisms (up to the Trusted Solaris level). And the good thing is that Oracle (so far) did not spoil it with excessive updates, the way Red Hat spoiled its distribution with RHEL 7. Your mileage may vary.

Also important in today's "semi-outsourced" IT environments are the competence and loyalty of the people responsible for selecting and implementing security solutions. For example, the low loyalty of contractor-based technical personnel naturally leads to an increased probability of security incidents and/or of signing contracts that are useless or harmful for the enterprise: security represents the new Eldorado for snake-oil sellers. A good example was the (past) tremendous market success of ISS intrusion-detection appliances, which were as close to snake oil as one can get. Maybe that's why they were bought by IBM for a completely obscene amount of money: a list of fools has tremendous commercial value for such a shrewd player as IBM.

In such an environment, "security via obscurity" is probably the optimal path for increasing the security of both the OS and typical applications. Yes, Oracle is more expensive, but there is no free lunch. I am actually disgusted by the proliferation of security products for Linux ;-)

Current versions and year of end of support

As of October 2018, the supported versions of RHEL are 6.10 and 7.3-7.5. A large enterprise usually runs a mixture of versions, so knowing the end of support for each is a common problem. Compatibility within a single major version of RHEL is usually very good (I would say on par with Solaris), and the risk of upgrading from, say, 6.5 to 6.10 is minimal. There were some broken minor upgrades, like RHEL 6.6, but this is a rare event, and that particular one is in the past.

Problems arise with major version upgrades; usually total reinstallation is the best bet. Formally you can upgrade from RHEL 6.10 to RHEL 7.5, but only for a very narrow class of servers with not much installed (or much de-installed ;-). You need to remove Gnome via yum groupremove gnome-desktop, if that works (how to remove gnome-desktop using yum), plus several other packages.

Again, here your mileage may vary, and reinstallation might be a better option (RedHat 6 to RedHat 7 upgrade):

1. Prerequisites

1. Make sure that you are running the latest minor version (i.e. RHEL 6.8).

2. The upgrade process can handle only the following package groups and packages: Minimal (@minimal),
Base (@base), Web Server (@web-server), DHCP Server, File Server (@nfs-server), and Print Server

3. Upgrading GNOME and KDE is not supported, So please uninstall the GUI desktop before the upgrade and
install it after the upgrade.

4. Backup the entire system to avoid potential data loss.

5. If your system is registered with RHN Classic, you must first unregister from RHN Classic and
register with subscription-manager.

6. Make sure /usr is not on separate partition.

2. Assessment

Before upgrading we need to assess the machine first and check whether it is eligible for the upgrade; this can be
done by a utility called "Preupgrade Assistant".

Preupgrade Assistant does the following:

2.1. Installing dependencies of Preupgrade Assistant:

Preupgrade Assistant needs some dependency packages (openscap, openscap-engine-sce, openscap-utils,
pykickstart, mod_wsgi). All of these can be installed from the installation media, but the
"openscap-engine-sce" package needs to be downloaded from the Red Hat portal.

See Red Hat Enterprise Linux - Wikipedia and Red Hat Enterprise Linux.


In Linux there is no single convention for determining which flavor of Linux you are running. For Red Hat, to determine which version is installed on the server you can use the command

cat /etc/redhat-release

Oracle Linux adds its own release file while preserving the RHEL one, so a more appropriate command would be

cat /etc/*release

End of support issues

See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal:

Installation troubles with RHEL 7 Anaconda: "frozen pane", broken Plymouth subsystem and friends

One of the major attractions of previous versions of Red Hat was the performance of the Red Hat installer, called Anaconda. It was an interesting piece of software in the sense that it gave the sysadmin enough control to select components, provided for unattended installation (via kickstart), and was pretty tolerant of various hardware. You were able to install it on a large variety of systems: practically any Dell desktop, laptop or server was OK, and all HP models were OK too. With version 7 that changed. Kickstart supported various combinations of the KS file and the ISO file; they can be retrieved from HTTP (you need to mount them so that the tree is visible via the web server), NFS 3 (just the ISO file, or the tree) and USB drives. The KS file can be embedded in the ISO file, or be standalone; in the latter case, you need to point to its location via an option on the kernel boot command line.

In other words, in RHEL6 Anaconda with kickstart was a very useful tool for installing the OS on multiple systems, and/or for unattended installation in general. It was rather simple to use; even sysadmins of medium qualification could use it for installation from HTTP or NFS on many systems.

In RHEL7, both Anaconda and especially its kickstart mode became very brittle. Sometimes it works, but in most cases it does not, which means this is not even beta software; it is alpha. And meaningful diagnostics are absent, which is also typical for alpha software; instead you get mainly systemd spam. When an error happens, at best you are greeted with a "frozen pane" message, but in most cases the system just hangs with no messages. Sometimes a console is available and you can explore what is happening, but often it is not. This is not how high-quality production systems should be programmed. This is a perfect example of amateurish, incompetent programming.

What I have found is that in complex cases where the version 6 Anaconda used to work, the version 7 Anaconda simply hangs. Diagnostics are so bad that I would not recommend using it unless you have a week or two to experiment and at least a dozen servers to install the OS on. I spent a week learning the intricacies of the Anaconda implementation in RHEL 7 until I finally found the combination of boot parameters that reproduced what used to work in RHEL6. So it took me over 40 hours and probably a hundred reboots to get the functionality that I was accustomed to in RHEL 6. BTW, each reboot on an HP blade takes from 5 to 20 minutes depending on RAM size, and can be even more if you have over 128GB of memory; those sadists put a message on the boot screen "Starting drivers. Please wait, this may take a few moments." Such a few moments ;-). That fact alone kills all the rationale behind systemd. And it is unclear why systemd is necessary during installation at all.

In RHEL 7 they changed almost all the critical kernel boot parameters that affect kickstart, and it does not look like an improvement. For example, if you think that the new ip parameter also works as ksdevice (because it allows you to specify the interface, for example ip=ens2:dhcp), you are up for a big disappointment and a lot of lost hours.
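For reference, a RHEL7 boot line for a kickstart install over NFS looks roughly like this (hostnames and paths are placeholders):

```shell
# Appended to the installer's kernel command line (all one line):
inst.ks=nfs:ks.example.com:/exports/ks.cfg ip=ens2:dhcp inst.repo=nfs:ks.example.com:/exports/rhel7-dvd.iso
```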

There are also subtle changes in the kickstart DSL (domain-specific language). For example, in RHEL 6 the following two directives were enough to wipe out partitions on the install hard drive (in this case /dev/sda):

ignoredisk --only-use=sda
clearpart --all --initlabel 

This combination of directives does not work in RHEL7. You need to specify the disk explicitly in the clearpart directive:

ignoredisk --only-use=sda
clearpart --all --initlabel --drives=sda

Which is semantically redundant, as we already specified that the only disk we will touch is sda in the directive "ignoredisk --only-use=sda".

As usual, Red Hat support was not that helpful, although I was surprised that they offered me an interactive session to troubleshoot the issue (though the first response was a typical "database search" type of answer).

And there are a lot of landmines in this new Anaconda that a person used to RHEL 6 inevitably steps on. For example, if you are working over VPN and use inst.repo=nfs:<location_of_your_iso> to redirect fetching of the ISO to a local source, you now need to specify the interface explicitly as well (for example ip=ens2:dhcp, or the equivalent for a static IP address); otherwise you can do this only in the GUI. With multiple interfaces this directive does not work alone on the command line unless the needed interface is the first one, which is strange.

In any case, the reality is that if the new Anaconda does not like something, you either get a crash or a "frozen pane" message. And then you can bang your head against the wall trying to guess what exactly it does not like...

The reality is that if the new Anaconda does not like something, you either get a crash or a "frozen pane" message.

For example, on HP ProLiant BL460c Gen9 blades (which are not that old; HP was still selling them in 2017 and probably 2018), I observed the following: versions up to RHEL 7.5 launched Anaconda in GUI mode OK, but versions from RHEL 7.7 up to 7.9 simply freeze when Anaconda starts. And this is pretty popular, mainstream enterprise hardware. The "Pane is dead" message is the result of some bug in switching from the GUI console to the text console when X11 can't start (but, again, it was starting OK in older versions). But the question is why.

X11 did start OK on previous versions of RHEL 7 installed on the same hardware, so the X11 driver was available. BTW, what I have found is that if you specify the VESA driver (inst.xdriver=vesa) as a boot option, Anaconda switches to text mode OK, but still refuses to work in GUI mode. It is unclear why; GUI mode is now the recommended mode (8.3. Installing in Text Mode, Red Hat Enterprise Linux 7 - Red Hat Customer Portal):

If your system has a graphical display, but graphical installation fails, try booting with the inst.xdriver=vesa option - see Chapter 23, Boot Options.

Google for "Anaconda  pane is dead" and you will see that this type of errors started with RHEL 7 beta (874906 anaconda pane is dead when hitting some issue):

Xiaowei Li 2012-11-09 03:16:28 UTC

Description of problem:

anaconda hung and panel is dead when hitting some issue.

In the previous anaconda, we can use install OS manually and go to the shell to debug if hitting any issues on the anaconda GUI.

But in the current anaconda of RHEL7, we cannot go to the shell to check anything. This is very bad user experience.

Chris Lumens, 2012-11-09 15:10:36 UTC

You can still switch to a shell. anaconda now runs in tmux, which is a screen-like program and you can tell this by the green bar on the bottom. To switch to the shell, you just hit ctrl-b <N>, where N is the number of the window you want to switch to.

And this error in some form is still present in RHEL 8. See also:

It even got into the documentation (Chapter 60. Kernel, Red Hat Enterprise Linux 7 - Red Hat Customer Portal):

 The problem is characterized by errors starting X11, followed by a Pane is dead message in the Anaconda screen. The workaround is to append inst.text to the kernel command line and install in text mode.
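Following that advice, the workaround boils down to appending one of these options at the boot prompt (shown here as a sketch of the two options the documentation and my own experiments point to):

```
# Force text-mode install when X11 fails ("Pane is dead")
inst.text

# Or try the generic VESA X driver first
inst.xdriver=vesa
```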

And "Pane is dead"  and hanging on install is not the only problem. On Dell M620 blades in my case anaconda actually work (even with kickstart), but the resulting OS on boot went  indefinite (created by systemd ;-) loop.  The last message was "Starting wait for Plymouth boot screen to Quit" I experienced this problem with RHEL 7.7, See also

How to deactivate plymouth boot screen - Ask Ubuntu

Easiest quick fix is to edit the grub line as you boot. Hold down the shift key so you see the menu. Hit the e key to edit. Edit the 'linux' line: remove 'quiet' and 'splash'.

To disable it in the long run

Edit /etc/default/grub

Change the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT=""

And then update grub

sudo update-grub
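The quote above is for Ubuntu. On RHEL/CentOS 7 the equivalent steps are the following sketch (the other kernel parameters shown in GRUB_CMDLINE_LINUX are placeholders from a typical install, and the grub.cfg path assumes a BIOS system; UEFI systems keep it under /boot/efi):

```
# /etc/default/grub: drop "rhgb quiet" from GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root"

# then regenerate the grub configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
```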

The road to hell is paved with good intentions: the biosdevname package on Dell servers

Loss of architectural integrity of Unix is now very pronounced in RHEL. Both RHEL 6 and RHEL 7, although 7 is definitely worse. And this is not only the systemd fiasco. For example, recently I spent a day troubleshooting an interesting and unusual problem: one out of 16 identical (both hardware- and software-wise) blades in a small HPC cluster (and only one) failed to enable the bonded interface on boot and thus remained offline. Most sysadmins would think that something is wrong with the hardware, for example the Ethernet card on the blade and/or the switch port, or even the internal enclosure interconnect. I also initially thought this way. But this was not the case ;-)

Tech support from Dell was not able to locate any hardware problem, although they diligently upgraded the CMC on the enclosure and the BIOS and firmware on the blade. BTW, this blade had similar problems in the past, and Dell tech support once even replaced the Ethernet card in it, thinking that it was the culprit. Now I know that this was a completely wrong decision on their part, and a waste of both time and money :-). They came to this conclusion by swapping the blade to a different slot and seeing that the problem migrated into the new slot. Bingo -- the card is the root cause. The problem is that it was not. What is especially funny is that replacing the card did solve the problem for a while. After reading the information provided below you will be as puzzled as me as to why that happened.

To make a long story short, the card was changed, but after yet another power outage the problem returned. This time I started to suspect that the card had nothing to do with the problem. After closer examination I discovered that in its infinite wisdom Red Hat introduced in RHEL 6 a package called biosdevname. The package was developed by Dell (a fact which seriously undermined my trust in Dell hardware ;-). This package renames interfaces to a new set of names, supposedly consistent with their etching on the case of rack servers. It is useless (or, more correctly, harmful) for blades. The package is primitive and does not understand whether the server is a rack server or a blade. Moreover, while doing this supposedly useful renaming, this package introduces into the 70-persistent-net.rules file a stealth rule:

KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"

I did not look at the code, but from the observed behavior it looks like in some cases in RHEL 6 (and most probably in RHEL 7 too) the package adds a "stealth" rule to the END (not the beginning, but the end !!!) of the /etc/udev/rules.d/70-persistent-net.rules file, which means that if a similar rule in 70-persistent-net.rules exists, it is overridden. Or something similar to this effect.

If you look at the Dell knowledge base, there are dozens of advisories related to this package (just search for biosdevname). Which suggests that there is something deeply wrong with its architecture.

What I observed is that on some blades (the key word is some, converting the situation into an Alice in Wonderland environment) the rules for interfaces listed in the 70-persistent-net.rules file simply do not work if this package is enabled. For example, Dell Professional Services in their infinite wisdom renamed the interfaces back to eth0-eth3 for the Intel X710 4-port 10Gb Ethernet card that we have on some blades. On 15 out of 16 blades in the Dell enclosure this absolutely wrong solution works perfectly well. But on blade 16 sometimes it does not. As a result this blade does not boot after a power outage or reboot. When this happens is unpredictable. Sometimes it boots, but sometimes it does not. And you can't understand what is happening, no matter how hard you try, because of the stealth nature of the changes introduced by the biosdevname package.

Two interfaces on this blade (as you now suspect, eth0 and eth1) were bonded. After around 6 hours of poking around the problem I discovered that, despite the presence of the rules for eth0-eth3 in the 70-persistent-net.rules file, RHEL 6.7 still renames all four interfaces on boot to the em naming scheme, and naturally bonding fails, as the eth0 and eth1 interfaces do not exist.

First I decided to uninstall the biosdevname package and see what would happen. That did not work (see below why -- the de-installation script in this RPM is incorrect and contains a bug: it is not enough to remove the files, you also need to rebuild the initramfs with update-initramfs -u (hat tip to Oler; on RHEL the equivalent is dracut -f)). Searching for "Renaming em to eth" I found a post in which the author recommended disabling this "feature" by adding biosdevname=0 to the kernel parameters in /etc/grub.conf. That worked. So two days of my life were lost finding a way to disable this, completely unnecessary for blades, RHEL "enhancement".
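For reference, the fix on RHEL 6 is a one-line change to the kernel line in /etc/grub.conf. The sketch below shows where the parameter goes; the kernel version and root device are placeholders, not values from the blades described above:

```
# /etc/grub.conf -- append biosdevname=0 to the kernel line
kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet biosdevname=0
```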

Here is some information about this package

Copyright (c) 2006, 2007 Dell, Inc.     

Licensed under the GNU General Public License, Version 2.

biosdevname in its simplest form takes a kernel device name as an argument, and returns the BIOS-given name it "should" be.  This is
necessary on systems where the BIOS name for a given device (e.g. the label on the chassis is "Gb1") doesn't map directly
and obviously to the kernel name (e.g. eth0).
The distro-patches/sles10/ directory contains a patch needed to integrate biosdevname into the SLES10 udev ethernet
naming rules. This also works as a straight udev rule.  On RHEL4, that looks like:
KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"
This makes use of various BIOS-provided tables:
PCI Configuration Space
PCI IRQ Routing Table ($PIR)
PCMCIA Card Information Structure
SMBIOS 2.6 Type 9, Type 41, and HP OEM-specific types
therefore it's likely that this will only work well on architectures that provide such information in their BIOS.

To add insult to injury, this behaviour was demonstrated on only one of 16 absolutely identically configured Dell M630 blades with identical hardware and absolutely identical (cloned) OS instances. Which makes RHEL a real "Alice in Wonderland" system. And this is just one example. I have more similar stories to tell.

I would like to repeat that while the idea is not completely wrong and sometimes might even make sense, the package itself is very primitive, and the utility included in this package does not understand that the target for installation is a blade (NOTE to DELL: there are no etchings on blade network interfaces ;-)

If you look at this topic using your favorite search engine (which should not be Google anymore ;-), you will find dozens of posts in which people try to resolve this problem with various levels of competency and success. Such a tremendous waste of time and effort. Among the best that I have found were:

But this is not the only example of harmful packages being installed. They also install audio packages on servers that have no audio card ;-)

Avoiding useless daemons during installation

While the new version of Anaconda sucks, you can still improve the typical RHEL situation of a lot of useless daemons being installed, by carefully selecting packages and then reusing the generated kickstart file. That can be done via the advanced menu for one box, and then using this kickstart file for all other boxes with minor modifications.

Kickstart still works, despite the trend toward overcomplexity in other parts of the distribution. They have not managed to screw it up yet ;-)

What's new in RHEL7

RHEL 7 was released in June 2014. With it we see a hard push toward systemd exclusivity. Runlevels are gone. The release of RHEL 7 with systemd as the only option for system and process management has reignited the old debate whether Red Hat is trying to establish a Microsoft-style monopoly over enterprise Linux and move Linux closer to Windows: a closed but user-friendly system.

Scripting languages version in RHEL7

RHEL7 uses Bash 4.2, Perl 5.16.3, PHP 5.4.16, Python 2.7.5. 

XFS filesystem

The high-capacity, 64-bit XFS file system, which was available in RHEL 6, is now the default file system. It originated in the Silicon Graphics IRIX operating system. It can scale up to 500 TB of addressable file system capacity. In comparison, previous file systems, such as ext4, were limited to 16 TB.
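Day-to-day XFS operations are analogous to ext4, with one caveat worth remembering: xfs_growfs takes a mount point rather than a device, and XFS cannot be shrunk. A short sketch (device and mount point names are placeholders):

```shell
mkfs.xfs /dev/sdb1      # create the filesystem
mount /dev/sdb1 /data
xfs_growfs /data        # grow to fill the (enlarged) underlying device
xfs_info /data          # show filesystem geometry
```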

For more packages version information see Red Hat Enterprise Linux


For server sysadmins systemd is a massive, fundamental change to core Linux administration for no perceivable gain. So while there is a high level of support for systemd from Linux users who run Linux on their laptops and maybe as a home server, there is a strong backlash against systemd from Linux system administrators who are responsible for a significant number of Linux servers in enterprise environments.

After all, runlevels were used in production environments, if only to run the system with or without X11. Please read an interesting essay on systemd (ProSystemdAntiSystemd):

Often initiated by opponents, they will lament the horrors of PulseAudio and point out their scorn for Lennart Poettering. This later became a common canard for proponents to dismiss criticism as Lennart-bashing. Futile to even discuss, but it's a staple.

Lennart's character is actually, at times, relevant. Trying to have a large discussion about systemd without ever invoking him is like discussing glibc in detail without ever mentioning Ulrich Drepper. Most people take it overboard, however.

A lot of systemd opponents will express their opinions regarding a supposed takeover of the Linux ecosystem by systemd, as its auxiliaries (all requiring governance by the systemd init) expose APIs, which are then used by various software in the desktop stack, creating dependency chains between it and systemd that the opponents deem unwarranted. They will also point out the udev debacle and occasionally quote Lennart. Opponents see this as anti-competitive behavior and liken it to embrace, extend, extinguish. They often exaggerate and go all out with their vitriol though, as they start to contemplate shadowy backroom conspiracies at Red Hat (admittedly it is pretty fun to pretend that anyone defending a given piece of software is actually a shill who secretly works for it, but I digress), leaving many of their concerns to be ignored and deemed ridiculous altogether.

... ... ...

In addition, the Linux community is known for reinventing the square wheel over and over again. Chaos is both Linux's greatest strength and its greatest weakness. Remember HAL? Distro adoption is not an indicator of something being good, so much as something having sufficient mindshare.

... ... ...

The observation that sysinit is dumb and heavily flawed with its clunky inittab and runlevel abstractions, is absolutely nothing new. Richard Gooch wrote a paper back in 2002 entitled Linux Boot Scripts, which criticized both the SysV and BSD approaches, based on his earlier work on simpleinit(8). That said, his solution is still firmly rooted in the SysV and BSD philosophies, but he makes it more elegant by supplying primitives for modularity and expressing dependencies.

Even before that, DJB wrote the famous daemontools suite which has had many successors influenced by its approach, including s6, perp, runit and daemontools-encore. The former two are completely independent implementations, but based on similar principles, though with significant improvements. An article dated to 2007 entitled Init Scripts Considered Harmful encourages this approach and criticizes initscripts.

Around 2002, Richard Lightman wrote depinit(8), which introduced parallel service start, a dependency system, named service groups rather than runlevels (similar to systemd targets), its own unmount logic on shutdown, arbitrary pipelines between daemons for logging purposes, and more. It failed to gain traction and is now a historical relic.

Other systems like initng and eINIT came afterward, which were based on highly modular plugin-based architectures, implementing large parts of their logic as plugins, for a wide variety of actions that software like systemd implements as an inseparable part of its core. Initmacs, anyone?

Even Fefe, anti-bloat activist extraordinaire, wrote his own system called minit early on, which could handle dependencies and autorestart. As is typical of Fefe's software, it is painful to read and makes you want to contemplate seppuku with a pizza cutter.

And that's just Linux. Partial list, obviously.

At the end of the day, all comparing to sysvinit does is show that you've been living under a rock for years. What's more, it is no secret to a lot of people that the way distros have been writing initscripts has been totally anathema to basic software development practices, like modularizing and reusing common functions, for years. Among other concerns such as inadequate use of already leaky abstractions like start-stop-daemon(8). Though sysvinit does encourage poor work like this to an extent, it's distro maintainers who do share a deal of the blame for the mess. See the BSDs for a sane example of writing initscripts. OpenRC was directly inspired by the BSDs' example. Hint: it's in the name - RC.

The rather huge scope and opinionated nature of systemd leads to people yearning for the days of sysvinit. A lot of this is ignorance about good design principles, but a good part may also be motivated by an inability to properly convey desires of simple and transparent systems. In this way, proponents and opponents get caught in feedback loops of incessantly going nowhere with flame wars over one initd implementation (that happened to be dominant), completely ignoring all the previous research on improving init, as it all gets left to bite the dust. Even further, most people fail to differentiate init from rc scripts, and sort of hold sysvinit to be equivalent to the shoddy initscripts that distros have written, and all the hacks they bolted on top like LSB headers and startpar(2). This is a huge misunderstanding that leads to a lot of wasted energy.

Don't talk about sysvinit. Talk about systemd on its own merits and the advantages or disadvantages of how it solves problems, potentially contrasting them to other init systems. But don't immediately go "SysV initscripts were way better and more configurable, I don't see what systemd helps solve beyond faster boot times", or from the other side "systemd is way better than sysvinit, look at how clean unit files are compared to this horribly written initscript I cherrypicked! Why wouldn't you switch?"

... ... ...

Now that we've pointed out how most systemd debates play out in practice and why it's usually a colossal waste of time to partake in them, let's do a crude overview of the personalities that make this clusterfuck possible.

The technically competent sides tend to largely fall in these two broad categories:

a) Proponents are usually part of the modern Desktop Linux bandwagon. They run contemporary mainstream distributions with the latest software, use and contribute to large desktop environment initiatives and related standards like the *kits. They're not necessarily purely focused on the Linux desktop. They'll often work on features ostensibly meant for enterprise server management, cloud computing, embedded systems and other needs, but the rhetoric of needing a better desktop and following the example set by Windows and OS X is largely pervasive amongst their ranks. They will decry what they perceive as integration failures, fragmentation and are generally hostile towards research projects and anything they see as toy projects. They are hackers, but their mindset is largely geared towards reducing interface complexity, instead of implementation complexity, and will frequently argue against the alleged pitfalls of too much configurability, while seeing computers as appliances instead of tools.

b) Opponents are a bit more varied in their backgrounds, but they typically hail from more niche distributions like Slackware, Gentoo, CRUX and others. They are largely uninterested in many of the Desktop Linux advancements, value configuration, minimalism and care about malleability more than user friendliness. They're often familiar with many other Unix-like environments besides Linux, though they retain a fondness for the latter. They have their own pet projects and are likely to use, contribute to or at least follow a lot of small projects in the low-level system plumbing area. They can likely name at least a dozen alternatives to the GNU coreutils (I can name about 7, I think), generally favor traditional Unix principles and see computers as tools. These are the people more likely to be sympathetic to things like the suckless philosophy.

It should really come as no surprise that the former group dominates. They're the ones that largely shape the end user experience. The latter are pretty apathetic or even critical of it, in contrast. Additionally, the former group simply has far more manpower in the right places. Red Hat's employees alone dominate much of the Linux kernel, the GNU base system, GNOME, NetworkManager, many projects affiliated with standards (including Polkit) and more. There's no way to compete with a vast group of paid individuals like those.


The Year of the Linux Desktop has become a meme at this point, one that is used most often sarcastically. Yet there are still a lot of people who deeply hold onto it and think that if only Linux had a good abstraction engine for package manager backends, those Windows users will be running Fedora in no time.

What we're seeing is undoubtedly a cultural clash by two polar opposites that coexist in the Linux community. We can see it in action through the vitriol against Red Hat developers, and conversely the derision against Gentoo users on the part of Lennart Poettering, Greg K-H and others. Though it appears in this case "Gentoo user" is meant as a metonym for Linux users whose needs fall outside the mainstream application set. Theo de Raadt infamously quipped that Linux is for people who hate Microsoft, but that quote is starting to appear outdated.

Many of the more technically competent people with views critical of systemd have been rather quiet in public, for some reason. Likely it's a realization that the Linux desktop's direction is inevitable, and thus trying to criticize it is a futile endeavor. There are people who still think GNOME abandoning Sawfish was a mistake, so yes.

The non-desktop people still have their own turf, but they feel threatened by systemd to one degree or another. Still, I personally do not see them dwindling down. What I believe will happen is that they will become even more segregated than they already are from mainstream Linux and that using their software will feel more otherworldly as time goes on.

There are many who are predicting a huge renaissance for BSD in the aftermath of systemd, but I'm skeptical of this. No doubt there will be increased interest, but as a whole it seems most of the anti-systemd crowd is still deeply invested in sticking to Linux.

Ultimately, the cruel irony is that in systemd's attempt to supposedly unify the distributions, it has created a huge rift unlike any other and is exacerbating the long-present hostilities between the desktop Linux and minimalist Linux sides at rates that are absolutely atypical. What will end up of systemd remains unknown. Given Linux's tendency for chaos, it might end up the new HAL, though with a significantly more painful aftermath, or it might continue on its merry way and become a Linux standard set in stone, in which case the Linux community will see a sharp ideological divide. Or perhaps it won't. Perhaps things will go on as usual, on an endless spiral of reinvention without climax. Perhaps we will be doomed to flame on systemd for all eternity. Perhaps we'll eventually get sick of it and just part our own ways into different corners.

Either way, I've become less and less fond of politics for uselessd and see systemd debates as being metaphorically like car crashes. I likely won't help but chime in at times, though I intend uselessd to branch off into its own direction with time.


A very controversial subsystem, systemd, is now implemented. systemd is a suite of system management daemons, libraries, and utilities designed for Linux and programmed exclusively for the Linux API. There are no more runlevels. For servers systemd makes little sense. Sysadmins now need to learn new systemd commands for starting and stopping various services. The service command is still included for backwards compatibility, but it may go away in future releases. See CentOS 7 - RHEL 7 systemd commands (Linux Brigade).
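For those coming from RHEL 6, the most common command translations look like this (httpd is just an example service name):

```shell
service httpd start     # ->  systemctl start httpd.service
service httpd status    # ->  systemctl status httpd.service
chkconfig httpd on      # ->  systemctl enable httpd.service
chkconfig --list        # ->  systemctl list-unit-files --type=service
telinit 3               # ->  systemctl isolate multi-user.target
```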

From Wikipedia (systemd)

In a 2012 interview, Slackware's founder Patrick Volkerding expressed the following reservations about the systemd architecture, which are fully applicable to the server environment:

Concerning systemd, I do like the idea of a faster boot time (obviously), but I also like controlling the startup of the system with shell scripts that are readable, and I'm guessing that's what most Slackware users prefer too. I don't spend all day rebooting my machine, and having looked at systemd config files it seems to me a very foreign way of controlling a system to me, and attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well.

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong."[42] The article also characterizes the architecture of systemd as more similar to that of Microsoft Windows software:[42]

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system from authentication to mounting shares to network configuration to syslog to cron.


Some 10 years after Solaris 10 introduced zones, Linux at last got containers.

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat.

Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.


With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.


Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.


Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes: Open VM Tools bundled open source virtualization utilities. 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering. Fast communication mechanisms between VMware ESX and the virtual machine.


The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the Snapper section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.


Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.


Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.

What's new in RHEL8

 Python 3.6 is now the default version of Python (see below). Wayland is the default display server for GNOME instead of the X.org server, which was used with the previous major version of RHEL. If you upgrade to RHEL 8 from a RHEL 7 system where you used the GNOME session, your system continues to use X.org.

In Red Hat Enterprise Linux 8, the GCC toolchain is based on the GCC 8.2 

nftables replaces iptables as the default network packet filtering framework. firewalld uses nftables by default

Red Hat Enterprise Linux 7 supported two implementations of the NTP protocol: ntp and chrony. In Red Hat Enterprise Linux 8, only chrony is available.

In ssh the Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration.

Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running.

If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command:

~]# yum install network-scripts
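If you prefer not to pull in the deprecated scripts, the nmcli equivalents are roughly the following (the connection name eth0 is a placeholder for whatever nmcli connection show lists on your system):

```shell
nmcli connection up eth0      # replaces: ifup eth0
nmcli connection down eth0    # replaces: ifdown eth0
nmcli device status           # quick overview of all interfaces
```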

With Red Hat Enterprise Linux 8, all packages related to KDE Plasma Workspaces (KDE) have been removed, and it is no longer possible to use KDE as an alternative to the default GNOME desktop environment.

The Btrfs file system has been removed in Red Hat Enterprise Linux 8.

virt-manager has been deprecated

  • The support for rootless containers is available as a technology preview in RHEL 8 Beta.

    Rootless containers are containers that are created and managed by regular system users without administrative permissions.

Chapter 4. New features - Red Hat Customer Portal

YUM performance improvement and support for modular content

On Red Hat Enterprise Linux 8, installing software is handled by the new version of the YUM tool, which is based on the DNF technology.

YUM based on DNF has the following advantages over the previous YUM v3 used on RHEL 7:

  • Increased performance
  • Support for modular content
  • Well-designed stable API for integration with tooling

For detailed information about differences between the new YUM tool and the previous version YUM v3 from RHEL 7, see

YUM based on DNF is compatible with YUM v3 when used from the command line, and when editing or creating configuration files.

For installing software, you can use the yum command and its particular options in the same way as on RHEL 7. Packages can be installed under the previous names using Provides. Packages also provide compatibility symlinks, so the binaries, configuration files and directories can be found in usual locations.
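In practice the day-to-day commands are unchanged from RHEL 7; module streams are the main visible addition. A sketch (the package and module names below are illustrative examples, not a required set):

```shell
yum install httpd             # same syntax as RHEL 7 (yum is now a symlink to dnf)
yum module list python36      # new: inspect modular content
yum module install perl:5.26  # new: install a specific module stream
```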

4.6. Shells and command-line tools

The nobody user replaces nfsnobody

In Red Hat Enterprise Linux 7, there was:

  • the nobody user and group pair with the ID of 99, and
  • the nfsnobody user and group pair with the ID of 65534, which is the default kernel overflow ID, too.

Both of these have been merged into the nobody user and group pair, which uses the 65534 ID in Red Hat Enterprise Linux 8. New installations no longer create the nfsnobody pair.

This change reduces the confusion about files that are owned by nobody but have nothing to do with NFS.


4.7. Web servers, databases, dynamic languages

Database servers in RHEL 8

RHEL 8 provides the following database servers:

  • MySQL 8.0, a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon, mysqld, and many client programs.
  • MariaDB 10.3, a multi-user, multi-threaded SQL database server. For all practical purposes, MariaDB is binary-compatible with MySQL.
  • PostgreSQL 10 and PostgreSQL 9.6, an advanced object-relational database management system (DBMS).
  • Redis 4.0, an advanced key-value store. It is often referred to as a data structure server because keys can contain strings, hashes, lists, sets, and sorted sets. Redis is provided for the first time in RHEL.

Note that the NoSQL MongoDB database server is not included in RHEL 8.0 Beta because it uses the Server Side Public License (SSPL).


Notable changes in MySQL 8.0

RHEL 8 is distributed with MySQL 8.0, which provides, for example, the following enhancements:

  • MySQL now incorporates a transactional data dictionary, which stores information about database objects.
  • MySQL now supports roles, which are collections of privileges.
  • The default character set has been changed from latin1 to utf8mb4.
  • Support for common table expressions, both nonrecursive and recursive, has been added.
  • MySQL now supports window functions, which perform a calculation for each row from a query, using related rows.
  • InnoDB now supports the NOWAIT and SKIP LOCKED options with locking read statements.
  • GIS-related functions have been improved.
  • JSON functionality has been enhanced.
  • The new mariadb-connector-c packages provide a common client library for MySQL and MariaDB. This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8.

In addition, the MySQL 8.0 server distributed with RHEL 8 is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in RHEL 8 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version.

To change the default authentication plug-in to caching_sha2_password, edit the /etc/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows:
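The file content itself is not reproduced above; a minimal sketch of the setting in question (the option name is the standard MySQL 8.0 one):

```ini
[mysqld]
default_authentication_plugin=caching_sha2_password
```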


(BZ#1649891, BZ#1519450, BZ#1631400)

Notable changes in MariaDB 10.3

MariaDB 10.3 provides numerous new features over the version 5.5 distributed in RHEL 7. Some of the most notable changes are:

  • MariaDB Galera Cluster, a synchronous multi-master cluster, is now a standard part of MariaDB.
  • InnoDB is used as the default storage engine instead of XtraDB.
  • Common table expressions
  • System-versioned tables
  • FOR loops
  • Invisible columns
  • Sequences
  • Instant ADD COLUMN for InnoDB
  • Storage-engine independent column compression
  • Parallel replication
  • Multi-source replication

In addition, the new mariadb-connector-c packages provide a common client library for MySQL and MariaDB. This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8.

See also Using MariaDB on Red Hat Enterprise Linux 8.

(BZ#1637034, BZ#1519450)

Python scripts must specify major version in hashbangs at RPM build time

In RHEL 8, executable Python scripts are expected to use hashbangs (shebangs) specifying explicitly at least the major Python version.

The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package, and attempts to correct hashbangs in all executable files. The BRP script will generate errors when encountering a Python script with an ambiguous hashbang, such as:

  • #! /usr/bin/python
  • #! /usr/bin/env python

To modify hashbangs in the Python scripts causing these build errors at RPM build time, use the pathfix.py script from the platform-python-devel package: pathfix.py -pn -i %{__python3} PATH ...

Multiple PATHs can be specified. If a PATH is a directory, pathfix.py recursively scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.py$, not only those with an ambiguous hashbang. Add this command to the %prep section or at the end of the %install section.
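A minimal sketch of what counts as ambiguous (the file name and path here are made up for illustration):

```shell
#!/bin/sh
# Create a script with a versionless hashbang, then detect it the way a
# reviewer might before brp-mangle-shebangs rejects the build.
printf '#!/usr/bin/env python\nprint("hi")\n' > /tmp/demo_ambig.py
if head -n1 /tmp/demo_ambig.py | grep -Eq '(^#! */usr/bin/python$|env python$)'; then
    echo "ambiguous hashbang: needs python2 or python3"
fi
```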

For more information, see Handling hashbangs in Python scripts.


Python 3 is the default Python implementation in RHEL 8

Red Hat Enterprise Linux 8 is distributed with Python 3.6. The package is not installed by default. To install Python 3.6, use the yum install python3 command.

Python 2.7 is available in the python2 package. However, Python 2 will have a shorter life cycle; its aim is to facilitate a smoother transition to Python 3 for customers.

Neither the default python package nor the unversioned /usr/bin/python executable is distributed with RHEL 8. Customers are advised to use python3 or python2 directly. Alternatively, administrators can configure the unversioned python command using the alternatives command.
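For example (a sketch; the alternatives line applies to RHEL 8 specifically and must be run as root, so it is shown only in a comment):

```shell
#!/bin/sh
# Call the versioned interpreter explicitly rather than relying on an
# unversioned /usr/bin/python, which RHEL 8 does not ship:
python3 -c 'import sys; print(sys.version_info.major)'
# To register an unversioned "python" command on RHEL 8 (as root):
#   alternatives --set python /usr/bin/python3
```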

For details, see Using Python in Red Hat Enterprise Linux 8.


Notable changes in Ruby

RHEL 8 provides Ruby 2.5, which introduces numerous new features and enhancements over Ruby 2.0.0 available in RHEL 7. Notable changes include:

  • Incremental garbage collector has been added.
  • The Refinements syntax has been added.
  • Symbols are now garbage collected.
  • The $SAFE=2 and $SAFE=3 safe levels are now obsolete.
  • The Fixnum and Bignum classes have been unified into the Integer class.
  • Performance has been improved by optimizing the Hash class, improved access to instance variables, and the Mutex class being smaller and faster.
  • Certain old APIs have been deprecated.
  • Bundled libraries, such as RubyGems, Rake, RDoc, Psych, Minitest, and test-unit, have been updated.
  • Other libraries, such as mathn, DL, ext/tk, and XMLRPC, which were previously distributed with Ruby, are deprecated or no longer included.
  • The SemVer versioning scheme is now used for Ruby versioning.


Notable changes in PHP

Red Hat Enterprise Linux 8 is distributed with PHP 7.2. This version introduces the following major changes over PHP 5.4, which is available in RHEL 7:

  • PHP uses FastCGI Process Manager (FPM) by default (safe for use with a threaded httpd).
  • The php_value and php_flag variables should no longer be used in the httpd configuration files; they should be set in the pool configuration instead: /etc/php-fpm.d/*.conf.
  • PHP script errors and warnings are logged to the /var/log/php-fpm/www-error.log file instead of /var/log/httpd/error.log.
  • When changing the PHP max_execution_time configuration variable, the httpd ProxyTimeout setting should be increased to match.
  • The user running PHP scripts is now configured in the FPM pool configuration (the /etc/php-fpm.d/www.conf file; the apache user is the default).
  • The php-fpm service needs to be restarted after a configuration change or after a new extension is installed.
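For instance, per-pool PHP settings that would previously have lived in httpd configuration now move into the pool file (a sketch; the [www] pool and the apache user are the RHEL 8 defaults, the timeout value is illustrative):

```ini
; /etc/php-fpm.d/www.conf
[www]
user = apache
group = apache
php_value[max_execution_time] = 60
```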

The following extensions have been removed:

  • aspell
  • mysql (note that the mysqli and pdo_mysql extensions are still available, provided by php-mysqlnd package)
  • zip
  • memcache


Notable changes in Perl

Perl 5.26, distributed with RHEL 8, introduces the following changes over the version available in RHEL 7:

  • Unicode 9.0 is now supported.
  • New op-entry, loading-file, and loaded-file SystemTap probes are provided.
  • Copy-on-write mechanism is used when assigning scalars for improved performance.
  • The IO::Socket::IP module for handling IPv4 and IPv6 sockets transparently has been added.
  • The Config::Perl::V module to access perl -V data in a structured way has been added.
  • A new perl-App-cpanminus package has been added, which contains the cpanm utility for getting, extracting, building, and installing modules from the Comprehensive Perl Archive Network (CPAN) repository.
  • The current directory . has been removed from the @INC module search path for security reasons.
  • The do statement now returns a deprecation warning when it fails to load a file because of the behavioral change described above.
  • The do subroutine(LIST) call is no longer supported and results in a syntax error.
  • Hashes are randomized by default now. The order in which keys and values are returned from a hash changes on each perl run. To disable the randomization, set the PERL_PERTURB_KEYS environment variable to 0.
  • Unescaped literal { characters in regular expression patterns are no longer permissible.
  • Lexical scope support for the $_ variable has been removed.
  • Using the defined operator on an array or a hash results in a fatal error.
  • Importing functions from the UNIVERSAL module results in a fatal error.
  • The find2perl, s2p, a2p, c2ph, and pstruct tools have been removed.
  • The ${^ENCODING} facility has been removed. The encoding pragma's default mode is no longer supported. To write source code in an encoding other than UTF-8, use the encoding pragma's Filter option.
  • The perl packaging is now aligned with upstream. The perl package installs also core modules, while the /usr/bin/perl interpreter is provided by the perl-interpreter package. In previous releases, the perl package included just a minimal interpreter, whereas the perl-core package included both the interpreter and the core modules.


Notable changes in Apache httpd

RHEL 8 is distributed with the Apache HTTP Server 2.4.35. This version introduces the following changes over httpd available in RHEL 7:

  • HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module.
  • Automated TLS certificate provisioning and renewal using the Automatic Certificate Management Environment (ACME) protocol is now supported with the mod_md package (for use with certificate providers such as Let's Encrypt)
  • The Apache HTTP Server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate in the SSLCertificateKeyFile and SSLCertificateFile directives.
  • The multi-processing module (MPM) configured by default with the Apache HTTP Server has changed from a multi-process, forked model (known as prefork) to a high-performance multi-threaded model, event. Any third-party modules that are not thread-safe need to be replaced or removed. To change the configured MPM, edit the /etc/httpd/conf.modules.d/00-mpm.conf file. See the httpd.service(8) man page for more information.
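A sketch of the relevant file (only one LoadModule line should be active; the commented-out prefork line is an assumption about the stock file layout, shown to illustrate switching back for non-thread-safe modules):

```apache
# /etc/httpd/conf.modules.d/00-mpm.conf
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_event_module modules/mod_mpm_event.so
```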

For more information about httpd, see Setting up the Apache HTTP web server.

(BZ#1632754, BZ#1527084, BZ#1581178)

The nginx web server new in RHEL 8

RHEL 8 introduces nginx 1.14, a web and proxy server supporting HTTP and other protocols, with a focus on high concurrency, performance, and low memory usage. nginx was previously available only as a Software Collection.

The nginx web server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, an nginx configuration can use PKCS#11 URLs to identify the TLS private key in the ssl_certificate_key directive.
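A sketch of such a configuration (the token and object names in the PKCS#11 URL are made up, and the engine: prefix syntax assumes the RHEL 8 nginx build):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/pki/tls/certs/example.crt;
    # Private key fetched from a hardware token via a PKCS#11 URL:
    ssl_certificate_key "engine:pkcs11:pkcs11:token=MyToken;object=server-key";
}
```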

OpenSSH rebased to version 7.8p1

The openssh packages have been upgraded to upstream version 7.8p1. Notable changes include:

  • Removed support for the SSH version 1 protocol.
  • Removed support for the hmac-ripemd160 message authentication code.
  • Removed support for RC4 (arcfour) ciphers.
  • Removed support for Blowfish ciphers.
  • Removed support for CAST ciphers.
  • Changed the default value of the UseDNS option to no.
  • Disabled DSA public key algorithms by default.
  • Changed the minimal modulus size for Diffie-Hellman parameters to 2048 bits.
  • Changed semantics of the ExposeAuthInfo configuration option.
  • The UsePrivilegeSeparation=sandbox option is now mandatory and cannot be disabled.
  • Set the minimal accepted RSA key size to 1024 bits.


RSA-PSS is now supported in OpenSC

This update adds support for the RSA-PSS cryptographic signature scheme to the OpenSC smart card driver. The new scheme enables a secure cryptographic algorithm required for the TLS 1.3 support in the client software.


rsyslog rebased to version 8.37.0

The rsyslog packages have been upgraded to upstream version 8.37.0, which provides many bug fixes and enhancements over the previous versions. Most notable changes include:

  • Enhanced processing of rsyslog internal messages; possibility of rate-limiting them; fixed possible deadlock.
  • Enhanced rate-limiting in general; the actual spam source is now logged.
  • Improved handling of oversized messages - the user can now set how to treat them both in the core and in certain modules with separate actions.
  • mmnormalize rule bases can now be embedded in the config file instead of creating separate files for them.
  • The user can now set the GnuTLS priority string for imtcp that allows fine-grained control over encryption.
  • All config variables, including variables in JSON, are now case-insensitive.
  • Various improvements of PostgreSQL output.
  • Added a possibility to use shell variables to control config processing, such as conditional loading of additional configuration files, executing statements, or including a text in config. Note that an excessive use of this feature can make it very hard to debug problems with rsyslog.
  • 4-digit file creation modes can be now specified in config.
  • Reliable Event Logging Protocol (RELP) input can now also bind to a specified address only.
  • The default value of the enable.body option of mail output is now aligned with the documentation.
  • The user can now specify insertion error codes that should be ignored in MongoDB output.
  • Parallel TCP (pTCP) input now has a configurable backlog for better load-balancing.
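For example, the oversized-message handling mentioned above is driven by a global setting (a sketch in RainerScript syntax; the 8k value is illustrative):

```
# /etc/rsyslog.conf fragment
global(maxMessageSize="8k")
```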

Packages in base version

Appendix B. Packages in BaseOS - Red Hat Customer Portal



Old News ;-)

[Jul 23, 2021] Major Linux RPM problem uncovered

Jul 22, 2021

Dmitry Antipov, a Linux developer at CloudLinux, AlmaLinux OS's parent company, first spotted the problem in March 2021. Antipov found that RPM would work with unauthorized RPM packages. This meant that unsigned packages, or packages signed with revoked keys, could silently be patched or updated without a word of warning that they might not be kosher.

Why? Because RPM had never properly checked revoked certificate key handling. Specifically, as lead RPM developer Panu Matilainen explained: "Revocation is one of the many unimplemented things in rpm's OpenPGP support. In other words, you're not seeing a bug as such; it's just not implemented at all, much like expiration is not."

Antipov, wearing his hat as a TuxCare (CloudLinux's KernelCare and Extended Lifecycle Support) team member, has submitted a patch to fix this problem. As Antipov explained in an interview: "The problem is that both RPM and DNF [a popular software package manager that installs, updates, and removes packages on RPM-based Linux distributions] do a check to see if the key is valid and genuine but not expired, but not for revocation. As I understand it, all the distribution vendors have just been lucky enough to never have been hit by this."

They have indeed been lucky. Armed with an out-of-date key, an attacker could find it child's play to sneak malware into a Linux desktop or server.

[Jul 23, 2021] NVD - CVE-2021-33909

A vulnerability (CVE-2021-33909) in the Linux kernel's filesystem layer that may allow local, unprivileged attackers to gain root privileges on a vulnerable host has been unearthed by researchers.
"Qualys security researchers have been able to independently verify the vulnerability, develop an exploit, and obtain full root privileges on default installations of Ubuntu 20.04, Ubuntu 20.10, Ubuntu 21.04, Debian 11, and Fedora 34 Workstation. Other Linux distributions are likely vulnerable and probably exploitable," said Bharat Jogi, Senior Manager, Vulnerabilities and Signatures, Qualys.
Jul 23, 2021

fs/seq_file.c in the Linux kernel 3.16 through 5.13.x before 5.13.4 does not properly restrict seq buffer allocations, leading to an integer overflow, an out-of-bounds write, and escalation to root by an unprivileged user, aka CID-8cae8cd89f05.

[Jul 23, 2021] Nasty Linux systemd security bug revealed - ZDNet

Jul 22, 2021

Qualys has found an ugly Linux systemd security hole that can enable any unprivileged user to crash a Linux system. The patch is available, and you should deploy it as soon as possible.

... ... ...

It works by enabling attackers to misuse the alloca() function in a way that would result in memory corruption. This, in turn, allows a hacker to crash systemd and hence the entire operating system. Practically speaking, this can be done by a local attacker mounting a filesystem on a very long path. This causes too much memory space to be used in the systemd stack, which results in a system crash.

[Jun 14, 2021] Patch Released for 7-Year-Old Privilege Escalation Bug In Linux Service Polkit - Slashdot

Jun 14, 2021

Not in this house. (Score: 5, Funny) by Ostracus on Saturday June 12, 2021:

7 year old? No no no, that's something that happens to that other OS.

Re: Not in this house. (Score: 5, Funny) by ArchieBunker:

I guess the many eyes missed this one.

My systemd-free Gentoo is just fine (Score: 5, Informative) by AcidFnTonic:

No issue with my opted out systemd-free Gentoo install....

[Jun 12, 2021] Seven-year-old make-me-root bug in Linux service polkit patched

Highly recommended!
Linux systems that have polkit version 0.113 or later installed – like Debian (unstable), RHEL 8, Fedora 21+, and Ubuntu 20.04 – are affected.
Jun 12, 2021

A seven-year-old privilege escalation vulnerability that's been lurking in several Linux distributions was patched last week in a coordinated disclosure.

In a blog post on Thursday, GitHub security researcher Kevin Backhouse recounted how he found the bug (CVE-2021-3560) in a service called polkit associated with systemd, a common Linux system and service manager component.

Introduced in commit bfa5036 seven years ago and initially shipped in polkit version 0.113, the bug traveled different paths in different Linux distributions. For example, it missed Debian 10 but made it into the unstable version of Debian, upon which other distros like Ubuntu are based.

Formerly known as PolicyKit, polkit is a service that evaluates whether specific Linux activities require higher privileges than those currently available. It comes into play if, for example, you try to create a new user account.


Backhouse says the flaw is surprisingly easy to exploit, requiring only a few commands using standard terminal tools like bash, kill, and dbus-send.

"The vulnerability is triggered by starting a dbus-send command but killing it while polkit is still in the middle of processing the request," explained Backhouse.

Killing dbus-send – an interprocess communication command – in the midst of an authentication request causes an error that arises from polkit asking for the UID of a connection that no longer exists (because the connection was killed).

"In fact, polkit mishandles the error in a particularly unfortunate way: rather than rejecting the request, it treats the request as though it came from a process with UID 0," explains Backhouse. "In other words, it immediately authorizes the request because it thinks the request has come from a root process."

This doesn't happen all the time, because polkit's UID query to the dbus-daemon occurs multiple times over different code paths. Usually, those code paths handle the error correctly, said Backhouse, but one code path is vulnerable – and if the disconnection happens when that code path is active, that's when the privilege elevation occurs. It's all a matter of timing, which varies in unpredictable ways because multiple processes are involved.

The intermittent nature of the bug, Backhouse speculates, is why it remained undetected for seven years.

Linux systems that have polkit version 0.113 or later installed – like Debian (unstable) , RHEL 8 , Fedora 21+ , and Ubuntu 20.04 – are affected.

"CVE-2021-3560 enables an unprivileged local attacker to gain root privileges," said Backhouse. "It's very simple and quick to exploit, so it's important that you update your Linux installations as soon as possible." ®

[Jun 12, 2021] Seven years old bug in Polkit gives unprivileged users root access

Highly recommended!
The polkit service is used by systemd. Linux systems that have polkit version 0.113 or later installed like Debian (unstable), RHEL 8, Fedora 21+, and Ubuntu 20.04 are affected. "CVE-2021-3560 enables an unprivileged local attacker to gain root privileges," said Backhouse. "It's very simple and quick to exploit, so it's important that you update your Linux installations as soon as possible."
See Red Hat Advisory ...
Jun 12, 2021

Ancient Linux bugs provide root access to unprivileged users

Security researchers have discovered a roughly seven-year-old vulnerability in several Linux distributions that can be used by unprivileged local users to bypass authentication and gain root access.

The bug patched last week exists in Polkit System Service, a toolkit used to assess whether a particular Linux activity requires higher privileges than currently available. Polkit is installed by default on some Linux distributions, allowing unprivileged processes to communicate with privileged processes.

Linux distributions that use systemd also use Polkit because the Polkit service is associated with systemd.

The vulnerability is tracked as CVE-2021-3560 and has a CVSS score of 7.8. It was discovered by Kevin Backhouse, a security researcher at GitHub, who states that the issue was introduced in 2013 with code commit bfa5036.

Initially shipped in Polkit version 0.113, the bug has made its way into various Linux distributions over the last seven years.

"If the requesting process disconnects from dbus-daemon just before the call to polkit_system_bus_name_get_creds_sync begins, the process cannot get a unique uid and pid of the process, and it cannot verify the privileges of the requesting process," notes the Red Hat advisory.

"The biggest threats from this vulnerability are data confidentiality and integrity, and system availability."

According to Backhouse's blog post, exploiting this vulnerability is very easy and requires only a few commands using standard terminal tools such as bash, kill, and dbus-send.

This flaw affects Polkit versions between 0.113 and 0.118. Red Hat's Cedric Buissart said it will also affect Debian-based distributions based on Polkit 0.105.

Among the popular Linux distributions affected are Debian "Bullseye", Fedora 21 (or later), Ubuntu 20.04, RHEL 8.

Polkit v0.119, released on June 3rd, addresses this issue. We recommend that you update your Linux installation as soon as possible to prevent threat actors from exploiting the bug.

CVE-2021-3560 is the latest in a series of long-standing vulnerabilities affecting Linux distributions.

In 2017, Positive Technologies researcher Alexander Popov discovered a flaw in the Linux kernel introduced in the code in 2009. Tracked as CVE-2017-2636, this flaw was finally patched in 2017.

Another old Linux security flaw indexed as CVE-2016-5195 was introduced in 2007 and patched in 2016. This bug, also known as the "dirty COW" zero-day, was used in many attacks before the patch was applied.


[Jun 12, 2021] 7 'dmesg' Commands for Troubleshooting and Collecting Information of Linux Systems

Jun 09, 2021

List all Detected Devices

To discover which hard disks have been detected by the kernel, you can search for the keyword "sda" with "grep", as shown below.

# dmesg | grep sda

[    1.280971] sd 2:0:0:0: [sda] 488281250 512-byte logical blocks: (250 GB/232 GiB)
[    1.281014] sd 2:0:0:0: [sda] Write Protect is off
[    1.281016] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.281039] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.359585]  sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
[    1.360052] sd 2:0:0:0: [sda] Attached SCSI disk
[    2.347887] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   22.928440] Adding 3905532k swap on /dev/sda6.  Priority:-1 extents:1 across:3905532k FS
[   23.950543] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[   24.134016] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[   24.330762] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[   24.561015] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)

NOTE: "sda" is the first SATA hard drive, "sdb" is the second SATA hard drive, and so on. Search with "hda" or "hdb" in the case of IDE hard drives.

[Jun 12, 2021] How to manage systemd units with systemctl

Jun 09, 2021

June 9, 2021 - by Magesh Maruthamuthu

In this article, we'll show you how to manage the systemd units using systemctl command.

What's systemd?

systemd is a system and service manager for modern Linux operating systems, which is backward compatible with SysV and LSB init scripts.

It provides numerous features, such as parallel startup of system services at boot time and on-demand activation of daemons.

systemd introduces the concept of systemd units, which are located under /usr/lib/systemd/system, whereas legacy init scripts were located under /etc/rc.d/init.d.

systemd is the first process that starts after the system boots and holds PID 1.

systemd unit types

There are different types of unit files that represent system resources and services. Each unit file type comes with its own extension; below are the commonly used systemd unit types.

Unit files are plain-text files that can be created or modified by a privileged user.

Run the following command to see all unit types:

$ systemctl -t help
  • Service unit (.service): A service on the system, including instructions for starting, restarting, and stopping the service.
  • Target unit (.target): Replaces SysV init run levels that control system boot.
  • Device unit (.device): A device file recognized by the kernel.
  • Mount unit (.mount): A file system mount point.
  • Socket unit (.socket): A network socket associated with a service.
  • Swap unit (.swap): A swap device or a swap file.
  • Timer unit (.timer): A systemd timer.
What's systemctl?

The systemctl command is the primary tool to manage or control systemd units. It combines the functionality of SysVinit's service and chkconfig commands into a single command.
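The old-to-new mapping looks roughly like this (a sketch; sshd is just an example unit, and echo keeps it side-effect free):

```shell
#!/bin/sh
# SysVinit command            -> systemctl equivalent
echo "service sshd restart    -> systemctl restart sshd.service"
echo "chkconfig sshd on       -> systemctl enable sshd.service"
echo "chkconfig --list        -> systemctl list-unit-files --type=service"
```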

It comes with a long list of options for different functionality; the most commonly used options are starting, stopping, restarting, masking, or reloading a daemon.

Listing all Units

To list all loaded units regardless of their state, run the following command in your terminal. It lists all units, including service, target, mount, socket, etc.

$ systemctl list-units --all
Listing Services

To list all currently loaded service units, run:

$ systemctl list-units --type service
$ systemctl list-units --type=service

Details about the column headers in the above output:

  • UNIT = The name of the service unit
  • LOAD = Reflects whether the unit file has been loaded.
  • ACTIVE = The high-level unit file activation state.
  • SUB = The low-level unit file activation state.
  • DESCRIPTION = Short description of the unit file.

By default, the systemctl list-units command displays only active units. If you want to list all loaded units regardless of their state, run:

$ systemctl list-units --type service --all

  UNIT                                                LOAD      ACTIVE   SUB     DESCRIPTION                                                                  
  accounts-daemon.service                             loaded    active   running Accounts Service                                                             
● acpid.service                                       not-found inactive dead    acpid.service
  after-local.service                                 loaded    inactive dead    /etc/init.d/after.local Compatibility                                        
  alsa-restore.service                                loaded    active   exited  Save/Restore Sound Card State                                                
  alsa-state.service                                  loaded    inactive dead    Manage Sound Card State (restore and store)                                  
● amavis.service                                      not-found inactive dead    amavis.service
  apparmor.service                                    loaded    active   exited  Load AppArmor profiles                                                       
  appstream-sync-cache.service                        loaded    inactive dead    Synchronize AppStream metadata from repositories into AS-cache               
  auditd.service                                      loaded    active   running Security Auditing Service                                                    
  avahi-daemon.service                                loaded    active   running Avahi mDNS/DNS-SD Stack                                                      
  backup-rpmdb.service                                loaded    inactive dead    Backup RPM database                                                          
  backup-sysconfig.service                            loaded    inactive dead    Backup /etc/sysconfig directory                                              
  bluetooth.service                                   loaded    active   running Bluetooth service                                                            
  btrfs-balance.service                               loaded    inactive dead    Balance block groups on a btrfs filesystem                                   
  btrfs-scrub.service                                 loaded    inactive dead    Scrub btrfs filesystem, verify block checksums                               
  btrfs-trim.service                                  loaded    inactive dead    Discard unused blocks on a mounted filesystem                                
  btrfsmaintenance-refresh.service                    loaded    inactive dead    Update cron periods from /etc/sysconfig/btrfsmaintenance                     
  ca-certificates.service                             loaded    inactive dead    Update system wide CA certificates                                           
  check-battery.service                               loaded    inactive dead    Check if mainboard battery is Ok                                             
  colord.service                                      loaded    active   running Manage, Install and Generate Color Profiles                                  
  cron.service                                        loaded    active   running Command Scheduler                                                            
  cups.service                                        loaded    active   running CUPS Scheduler

To list only the running services, run:

$ systemctl list-units --type=service --state=running

To list all service units installed on the file system, not only the loaded ones, run:

$ sudo systemctl list-unit-files --type=service

To list only enabled service units, run:

$ systemctl list-unit-files --type=service --state=enabled

To display the unit file contents that systemd has loaded into memory, run:

$ systemctl cat sshd.service

# /etc/systemd/system/sshd.service
[Unit]
Description=OpenSSH Daemon

[Service]
ExecStartPre=/usr/sbin/sshd -t $SSHD_OPTS
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/bin/kill -HUP $MAINPID


To view a list of properties that are set for the specified unit, run:

$ systemctl show sshd.service

TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
WatchdogTimestamp=Thu 2021-06-03 06:28:04 IST

To display a single property, use the -p flag with the property name.

$ systemctl show sshd.service -p ControlGroup


To show a unit's dependencies (recursing through target units by default), run the list-dependencies action. For instance, to show the dependencies of the sshd service:

$ systemctl list-dependencies sshd.service

 --  ""system.slice
 --  """
 --    ""detect-part-label-duplicates.service
 --    ""dev-hugepages.mount
 --    ""dev-mqueue.mount
 --    ""dracut-shutdown.service
 --    ""haveged.service
 --    ""kmod-static-nodes.service
 --    ""lvm2-lvmpolld.socket
 --    ""lvm2-monitor.service
 --    ""plymouth-read-write.service
 --    ""plymouth-start.service
Listing Sockets

To list socket units currently in memory, run one of the following (the output below is from the first form; list-sockets instead shows the listening addresses and the units they activate):

$ systemctl list-units --type=socket
$ systemctl list-sockets

UNIT                            LOAD   ACTIVE SUB       DESCRIPTION
avahi-daemon.socket             loaded active running   Avahi mDNS/DNS-SD Stack Activation Socket        
cups.socket                     loaded active running   CUPS Scheduler                                   
dbus.socket                     loaded active running   D-Bus System Message Bus Socket                  
dm-event.socket                 loaded active listening Device-mapper event daemon FIFOs                 
iscsid.socket                   loaded active listening Open-iSCSI iscsid Socket                         
lvm2-lvmpolld.socket            loaded active listening LVM2 poll daemon socket                          
pcscd.socket                    loaded active listening PC/SC Smart Card Daemon Activation Socket        
syslog.socket                   loaded active running   Syslog Socket                                    
systemd-initctl.socket          loaded active listening /dev/initctl Compatibility Named Pipe            
systemd-journald-dev-log.socket loaded active running   Journal Socket (/dev/log)                        
systemd-journald.socket         loaded active running   Journal Socket                                   
systemd-rfkill.socket           loaded active listening Load/Save RF Kill Switch Status /dev/rfkill Watch
systemd-udevd-control.socket    loaded active running   udev Control Socket                              
systemd-udevd-kernel.socket     loaded active running   udev Kernel Socket
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

14 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Listing Mounts

To list mount units currently loaded, run:

$ systemctl list-units --type=mount

UNIT                           LOAD   ACTIVE SUB     DESCRIPTION
-.mount                        loaded active mounted Root Mount                                   
\x2esnapshots.mount            loaded active mounted /.snapshots                                  
boot-grub2-i386\x2dpc.mount    loaded active mounted /boot/grub2/i386-pc                          
boot-grub2-x86_64\x2defi.mount loaded active mounted /boot/grub2/x86_64-efi                       
dev-hugepages.mount            loaded active mounted Huge Pages File System                       
dev-mqueue.mount               loaded active mounted POSIX Message Queue File System              
home.mount                     loaded active mounted /home                                        
opt.mount                      loaded active mounted /opt                                         
proc-sys-fs-binfmt_misc.mount  loaded active mounted Arbitrary Executable File Formats File System
root.mount                     loaded active mounted /root                                        
run-media-linuxgeek-DATA.mount loaded active mounted /run/media/linuxgeek/DATA                    
run-user-1000-gvfs.mount       loaded active mounted /run/user/1000/gvfs                          
run-user-1000.mount            loaded active mounted /run/user/1000                               
srv.mount                      loaded active mounted /srv                                         
sys-fs-fuse-connections.mount  loaded active mounted FUSE Control File System                     
sys-kernel-debug-tracing.mount loaded active mounted /sys/kernel/debug/tracing                    
sys-kernel-debug.mount         loaded active mounted Kernel Debug File System                     
tmp.mount                      loaded active mounted /tmp                                         
usr-local.mount                loaded active mounted /usr/local                                   
var.mount                      loaded active mounted /var                                         

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

20 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Listing Timers

To list timer units currently loaded, run:


$ systemctl list-timers

NEXT                         LEFT                LAST                         PASSED       UNIT                         ACTIVATES
Fri 2021-06-04 17:00:00 IST  8min left           Fri 2021-06-04 16:00:03 IST  51min ago    snapper-timeline.timer       snapper-timeline.service
Fri 2021-06-04 21:38:01 IST  4h 46min left       Thu 2021-06-03 12:10:13 IST  1 day 4h ago snapper-cleanup.timer        snapper-cleanup.service
Fri 2021-06-04 21:42:54 IST  4h 51min left       Thu 2021-06-03 12:15:06 IST  1 day 4h ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sat 2021-06-05 00:00:00 IST  7h left             Fri 2021-06-04 00:00:23 IST  16h ago      logrotate.timer              logrotate.service
Sat 2021-06-05 00:00:00 IST  7h left             Fri 2021-06-04 00:00:23 IST  16h ago      mandb.timer                  mandb.service
Sat 2021-06-05 00:43:15 IST  7h left             Fri 2021-06-04 01:52:04 IST  14h ago      check-battery.timer          check-battery.service
Sat 2021-06-05 00:48:48 IST  7h left             Fri 2021-06-04 00:05:23 IST  16h ago      backup-rpmdb.timer           backup-rpmdb.service
Sat 2021-06-05 01:41:30 IST  8h left             Fri 2021-06-04 00:57:23 IST  15h ago      backup-sysconfig.timer       backup-sysconfig.service
Mon 2021-06-07 00:00:00 IST  2 days left         Tue 2021-06-01 03:16:20 IST  3 days ago   btrfs-balance.timer          btrfs-balance.service
Mon 2021-06-07 00:00:00 IST  2 days left         Mon 2021-05-31 12:08:22 IST  4 days ago   fstrim.timer                 fstrim.service
Thu 2021-07-01 00:00:00 IST  3 weeks 5 days left Tue 2021-06-01 03:16:20 IST  3 days ago   btrfs-scrub.timer            btrfs-scrub.service

11 timers listed.
Pass --all to see loaded but inactive timers, too.
Service Management

A service is one of the unit types in systemd; its unit files carry the ".service" suffix. Six common actions can be performed on a service, falling into two major groups.

  • Boot-time actions: These are enable and disable, which are used to control a service at boot time.
  • Run-time actions: These are start, stop, restart, and reload, which are used to control a service on-demand.
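The two groups can also be combined: recent systemd versions accept a --now flag on enable/disable that performs the matching run-time action in the same call. A minimal sketch (the wrapper functions below are illustrative, not part of systemctl; any unit name works):

```shell
# "enable --now" = enable + start in one call;
# "disable --now" = disable + stop.
activate_service() {
    local unit="$1"
    sudo systemctl enable --now "$unit"
}

deactivate_service() {
    local unit="$1"
    sudo systemctl disable --now "$unit"
}

# Usage: activate_service sshd.service
```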
Start a service

To start a systemd service, run one of the following. UNIT_NAME can be any service name, such as sshd, httpd, or mariadb; the ".service" suffix is optional.

$ sudo systemctl start UNIT_NAME.service
$ sudo systemctl start UNIT_NAME
Stop a service

To stop a currently running service, use the stop action. For instance, to stop the Apache httpd service:

$ sudo systemctl stop httpd.service
Restart and reload a service

To restart a running service, run:

$ sudo systemctl restart UNIT_NAME.service

You may need to reload a service after changing its configuration file, so that the new parameters take effect without a full restart. To do so, run:

$ sudo systemctl reload UNIT_NAME.service
Enabling and disabling a service at boot

To start a service automatically at boot, run the following. This creates a symlink under "/etc/systemd/system/" (typically in the wanting target's directory, e.g. "multi-user.target.wants/") pointing to the unit file in "/usr/lib/systemd/system/" or "/etc/systemd/system/".

$ sudo systemctl enable UNIT_NAME.service

You can double-check that the service is enabled with the following command.

$ systemctl is-enabled UNIT_NAME.service

To disable the service at boot, run the following. This removes the symlink that was created earlier for the service unit.

$ sudo systemctl disable UNIT_NAME.service
Checking the status of service

To check the status of a service, run the following. This prints detailed information about the service unit.

$ systemctl status UNIT_NAME.service

# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           """limit_nofile.conf, respawn.conf
   Active: active (running) since Fri 2021-05-28 03:23:54 IST; 1 weeks 3 days ago
     Docs: man:httpd(8)
  Process: 19226 ExecReload=/usr/sbin/httpd $OPTIONS -k graceful (code=exited, status=0/SUCCESS)
 Main PID: 25933 (httpd)
   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"
    Tasks: 187
   Memory: 479.6M
   CGroup: /system.slice/httpd.service
           ""12161 /usr/sbin/httpd -DFOREGROUND
           ""19283 /usr/sbin/httpd -DFOREGROUND
           ""19284 /usr/sbin/httpd -DFOREGROUND
           ""19286 Passenger watchdog
           ""19289 Passenger core
           ""19310 /usr/sbin/httpd -DFOREGROUND
           ""19333 /usr/sbin/httpd -DFOREGROUND
           ""19339 /usr/sbin/httpd -DFOREGROUND
           ""19459 /usr/sbin/httpd -DFOREGROUND
           ""20564 /opt/plesk/php/5.6/bin/php-cgi -c /var/www/vhosts/system/
           ""21821 /usr/sbin/httpd -DFOREGROUND
           """25933 /usr/sbin/httpd -DFOREGROUND
Jun 06 12:19:11 systemd[1]: Reloading The Apache HTTP Server.
Jun 06 12:19:12 systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:18:06 systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:18:07 systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:18:26 systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:18:27 systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:19:09 systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:19:10 systemd[1]: Reloaded The Apache HTTP Server.
Jun 07 04:10:25 systemd[1]: Reloading The Apache HTTP Server.
Jun 07 04:10:26 systemd[1]: Reloaded The Apache HTTP Server.

To check whether a service unit is currently active (running), execute the command below.

$ systemctl is-active UNIT_NAME.service
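Because is-active returns exit status 0 only when the unit is running, it is handy in scripts. A minimal sketch, assuming a bash-like shell (the ensure_running helper and the sshd example are illustrative, not part of systemctl):

```shell
# Start a unit only if it is not already active.
# "is-active --quiet" prints nothing and just sets the exit status.
ensure_running() {
    local unit="$1"
    if systemctl is-active --quiet "$unit"; then
        echo "$unit is running"
    else
        echo "$unit is down; starting"
        sudo systemctl start "$unit"
    fi
}

# Usage: ensure_running sshd
```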
Masking and Unmasking Units

To prevent a service unit from being started, whether manually or by another service, mask it. Masking completely disables the service; it cannot be started until it is unmasked.

$ sudo systemctl mask UNIT_NAME.service

If you try to start the masked service, you will see the following message:

$ sudo systemctl start UNIT_NAME.service
Failed to start UNIT_NAME.service: Unit UNIT_NAME.service is masked.

To unmask a unit, run:

$ sudo systemctl unmask UNIT_NAME.service
Creating and modifying systemd unit files

In this section, we will show you how to create and modify systemd unit files. There are three main directories where unit files are stored on the system.

  • /usr/lib/systemd/system/ — unit files installed by packages.
  • /run/systemd/system/ — unit files created at run time.
  • /etc/systemd/system/ — unit files created by the "systemctl enable" command, as well as unit files added to extend a service.
Modifying existing systemd unit file

In this example, we will show how to modify an existing unit file. The "/etc/systemd/system/' directory is reserved for unit files created or customized by the system administrator.

For example, to edit "httpd.service' unit file, run:

$ sudo systemctl edit httpd.service

This creates an override snippet at "/etc/systemd/system/httpd.service.d/override.conf" and opens it in your text editor. Add new parameters there; when the file is saved, they are merged with the existing service file.
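For example, an override snippet might look like the following (the directives shown are illustrative, not taken from the original article; any [Service] settings can go here):

```ini
# /etc/systemd/system/httpd.service.d/override.conf
# Illustrative override: restart the service automatically on failure
# and raise its file-descriptor limit.
[Service]
Restart=on-failure
RestartSec=5
LimitNOFILE=8192
```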

Restart the httpd service to load the new configuration (a running unit must be restarted after you modify its unit file).

$ sudo systemctl restart httpd

If you want to edit the full unit file, run:

$ sudo systemctl edit --full httpd.service

This will load the current unit file into the editor. When the file is saved, systemctl writes it to "/etc/systemd/system/httpd.service".

Make a note: Any unit file in /etc/systemd/system will override the corresponding file in /lib/systemd/system.

To revert your changes and return the unit to its default configuration, delete the custom configuration files:

To remove a snippet, run:

$ sudo rm -r /etc/systemd/system/httpd.service.d

To remove a full modified unit file, run:

$ sudo rm /etc/systemd/system/httpd.service

To apply unit-file changes without rebooting the system, execute the following. The daemon-reload action reloads all unit files and recreates the entire dependency tree.

$ sudo systemctl daemon-reload
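Putting the pieces together, a brand-new service can be created by dropping a unit file into /etc/systemd/system/ and running daemon-reload. The unit below is a hypothetical example (the "myapp" name and binary path are made up for illustration):

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, run "sudo systemctl daemon-reload" followed by "sudo systemctl enable --now myapp.service" to register and start it.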
systemd Targets

Targets are specialized unit files that describe a state of the system. systemd uses targets to group other units together through chains of dependencies, serving the same purpose as SysV runlevels.

Targets are referred to by name rather than by number (for example, runlevel 3 roughly corresponds to multi-user.target and runlevel 5 to graphical.target). A unit can be wanted by several targets, and multiple targets can be active simultaneously.

Listing Targets

To view a list of the available targets on your system, run:

$ systemctl list-units --type=target
$ systemctl list-unit-files --type=target

UNIT                   LOAD   ACTIVE SUB    DESCRIPTION  
-----------------------------------------------------------------------------
basic.target           loaded active active Basic System
bluetooth.target       loaded active active Bluetooth
cryptsetup.target      loaded active active Local Encrypted Volumes
getty.target           loaded active active Login Prompts
graphical.target       loaded active active Graphical Interface
local-fs-pre.target    loaded active active Local File Systems (Pre)
local-fs.target        loaded active active Local File Systems
multi-user.target      loaded active active Multi-User System
network-online.target  loaded active active Network is Online
network-pre.target     loaded active active Network (Pre)
network.target         loaded active active Network
nss-lookup.target      loaded active active Host and Network Name Lookups
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target           loaded active active Paths
remote-fs-pre.target   loaded active active Remote File Systems (Pre)
remote-fs.target       loaded active active Remote File Systems
slices.target          loaded active active Slices
sockets.target         loaded active active Sockets
sound.target           loaded active active Sound Card
swap.target            loaded active active Swap
sysinit.target         loaded active active System Initialization
time-sync.target       loaded active active System Time Synchronized
timers.target          loaded active active Timers

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

23 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Displaying the default target

By default, the systemd process uses the default target when booting the system. To view the default target on your system, run:

$ systemctl get-default

To set a different target as the default, run set-default with the target name. For instance, to make multi-user.target the default, run:

$ sudo systemctl set-default multi-user.target
Changing the current active target

To change the current active target immediately, use isolate. For example, to switch from the current graphical target (GUI) to the multi-user target (command-line interface), run:

$ sudo systemctl isolate multi-user.target
Booting the system with Single User mode

If your computer does not boot due to an issue, you can boot the system into rescue (single-user) mode for further troubleshooting.

$ sudo systemctl rescue
Booting the system with Emergency mode

Similarly, you can boot the system into emergency mode to repair it. This provides a very minimal environment, useful when the system cannot even enter rescue mode.

$ sudo systemctl emergency
Power management

systemctl also allows users to halt, shut down, and reboot a system.

To halt a system, run:

$ sudo systemctl halt

To shutdown a system, run:

$ sudo systemctl poweroff

To reboot a system, run:

$ sudo systemctl reboot

In this guide, we have shown how to use the systemctl command in Linux, with examples for managing and controlling systemd units.

If you have any questions or feedback, feel free to comment below.

[Jun 08, 2021] Increasing Network Connections in Centos7

Feb 24, 2016 |

I had a client who was losing network connectivity intermittently recently, and it turned out they needed to raise the upper limit on tracked network connections. CentOS 7 renamed some kernel variables from previous versions, so here are some helpful tips on how to increase the limits.

In older Centos you might have seen these error messages:

ip_conntrack version 2.4 (8192 buckets, 65536 max) – 304 bytes per conntrack

In newer versions, something like:

localhost kernel: nf_conntrack: table full, dropping packet

The steps below are for CentOS versions that renamed ip_conntrack to nf_conntrack.

To get a list of network parameters:

sysctl -a | grep netfilter

This shows current value for the key parameter:

/sbin/sysctl net.netfilter.nf_conntrack_max

This shows your system current load:

/sbin/sysctl net.netfilter.nf_conntrack_count
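Comparing the count against the max tells you how close you are to the limit. A small sketch, not from the original article, that computes the utilization percentage from the two values:

```shell
# Percentage of the conntrack table currently in use,
# given the count ($1) and the configured max ($2).
conntrack_pct() {
    awk -v c="$1" -v m="$2" 'BEGIN { printf "%d\n", c * 100 / m }'
}

# Usage with the live values:
# conntrack_pct "$(sysctl -n net.netfilter.nf_conntrack_count)" \
#               "$(sysctl -n net.netfilter.nf_conntrack_max)"
```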

So now to update the value in the kernel to triple the limit, of course make sure your RAM has room with what you choose:

/sbin/sysctl -w net.netfilter.nf_conntrack_max=196608

To make the change permanent across reboots, add the value to /etc/sysctl.conf:

net.netfilter.nf_conntrack_max = 196608
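On systemd-based systems, an alternative to editing /etc/sysctl.conf directly is a drop-in file under /etc/sysctl.d/ (the file name below is arbitrary); it is applied at boot, or immediately with "sysctl -p /etc/sysctl.d/50-conntrack.conf":

```ini
# /etc/sysctl.d/50-conntrack.conf  (file name is arbitrary)
net.netfilter.nf_conntrack_max = 196608
```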
Hope this helps!

  • Tags

[Jun 08, 2021] Recovery LVM Data from RAID

May 24, 2021 |


We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned out.

Since these drives were in a RAID configuration, you could not hook any one of them up to a Windows box or a Linux box to see the data. You have to hook them all up to one box and reassemble the RAID.

We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS. You can use parted to see what drives are now being seen by your linux system:

parted -l | grep 'raid\|sd'

Then using that output, we assembled the drives into a software array:

mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2

If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.

If the last command went well, you can see the built array like so:

root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding

Note that the personality shows the RAID type; in our case it was linear, which is probably the most fragile RAID level, since if any one drive fails your data is lost. So good thing these drives outlasted the power supply! Now we find the physical volume:

pvdisplay /dev/md0

Gives us:

-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3

Then we find the logical volume:

lvdisplay /dev/vg0

Gives us:

-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384

-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384

We want to focus on the lv0 volume. You cannot mount it yet; first check it with lvscan:

lvscan

This shows the volumes are currently inactive:

inactive '/dev/vg0/syslv' [1.00 GB] inherit
inactive '/dev/vg0/lv0' [928.00 GB] inherit

So we set them active with:

vgchange vg0 -a y

And doing lvscan again shows:

ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit

Now we can mount with:

mount /dev/vg0/lv0 /mnt

And voilà! We have our data up and accessible in /mnt to recover. Of course, your setup will most likely look different from what I have shown above, but hopefully this gives you some helpful information to recover your own data.

[Jun 08, 2021] Too many systemd Created slice messages !

Aug 04, 2015 |

Installing a recent Linux version seems to come with a default setting that floods /var/log/messages with entirely annoying duplicate messages like:

systemd: Created slice user-0.slice.
systemd: Starting Session 1013 of user root.
systemd: Started Session 1013 of user root.
systemd: Created slice user-0.slice.
systemd: Starting Session 1014 of user root.
systemd: Started Session 1014 of user root.

Here is how I got rid of these:

vi /etc/systemd/system.conf

Then uncomment LogLevel and set it to LogLevel=notice:

# This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See systemd-system.conf(5) for details.

[Manager]
LogLevel=notice
#LogTarget=journal-or-kmsg

Then restart rsyslog and apply the new log level to the running systemd instance without a reboot:

systemctl restart rsyslog
systemd-analyze set-log-level notice

[Jun 02, 2021] The Poetterisation of GNU-Linux

10, 2013 |

I've found a disturbing trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes Ubuntu Unity, Gnome Shell, KDE 4, the /usr partition, SELinux, PolicyKit, Systemd, udev and PulseAudio, to name a few.

I hereby dub this phenomenon the " Poetterisation of GNU/Linux ".

The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain Free Software projects, is a very serious problem, and I am seriously opposed to it. I abandoned Windows to get away from that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.

Clearly this situation is untenable.

The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers, or at least more of the existing developers to adopt the right methods.

This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users, with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a large influx of users who were, by and large, purely consumers, not contributors.

As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated, consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their belligerently hostile attitudes toward the GPL, FSF, GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy predilection for proprietary software, meaningless aesthetics and hype.

Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is only too happy to oblige, eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous monolithic blob, and the latter of whom successfully managed to undermine 40 years of UNIX security in a single stroke, by obliterating the principle that unprivileged users should not be allowed to install software system-wide.

GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations, misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.

Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently set about stuffing GNU/Linux with as much bloated, patent encumbered garbage as they could muster.

I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight, and we were largely successful. Some have even suggested that my diatribes, articles and debates (with Miguel de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.

Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for something that's free (in the commercial sense).

Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform, for example).

What GNU/Linux needs is a small number of competent developers producing software to a high technical standard, who respect the well-established UNIX principles of security, efficiency, code correctness, logical semantics, structured programming, modularity, flexibility and engineering simplicity (a.k.a. the KISS Principle), just as any scientist or engineer in the field of computer science and software engineering should.

What it doesn't need is people who shrug their shoulders and bleat "disks are cheap".

[Jun 02, 2021] The Basics of the Unix Philosophy - programming

Jun 02, 2021 |

Gotebe 3 years ago

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

By now, and to be frank in the last 30 years too, this is complete and utter bollocks. Feature creep is everywhere, typical shell tools are chock-full of spurious additions, from formatting to "side" features, all half-assed and barely, if at all, consistent.

Nothing can resist feature creep.

not_perfect_yet 3 years ago

It's still a good idea. It's become very rare though. Many problems we have today are a result of not following it.

name_censored_ 3 years ago
· edited 3 years ago

By now, and to be frank in the last 30 years too, this is complete and utter bollocks.

There is not one single other idea in computing that is as unbastardised as the unix philosophy - given that it's been around fifty years. Heck, Microsoft only just developed PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what is.

In that same time, we've vacillated between thick and thin computing (mainframes, thin clients, PCs, cloud). We've rebelled against at least four major schools of program design thought (structured, procedural, symbolic, dynamic). We've had three different database revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins ('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros ('10s "startup culture").

It's a small miracle that iproute2 only has formatting options and grep only has --color . If they feature-crept anywhere near the same pace as the rest of the computing world, they would probably be a RESTful SaaS microservice with ML-powered autosuggestions.

badsectoracula 3 years ago

This is because adding a new features is actually easier than trying to figure out how to do it the Unix way - often you already have the data structures in memory and the functions to manipulate them at hand, so adding a --frob parameter that does something special with that feels trivial.

GNU and their stance to ignore the Unix philosophy (AFAIK Stallman said at some point he didn't care about it) while becoming the most available set of tools for Unix systems didn't help either.


ILikeBumblebees 3 years ago
· edited 3 years ago

Feature creep is everywhere

No, it certainly isn't. There are tons of well-designed, single-purpose tools available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps, well, that's your prerogative, and I don't begrudge you it, but just because you're not aware of alternatives doesn't mean they don't exist.

typical shell tools are choke-full of spurious additions,

What does "feature creep" even mean with respect to shell tools? If they have lots of features, but each function is well-defined and invoked separately, and still conforms to conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is BusyBox bloatware because it has lots of discrete shell tools bundled into a single binary? nirreskeya 3 years ago

Zawinski's Law :)

waivek 3 years ago

The (anti) foreword by Dennis Ritchie -

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!

[May 27, 2021] Pen testing with Linux security tools -

May 27, 2021 |

Pen testing with Linux security tools: Use Kali Linux and other open source tools to uncover security gaps and weaknesses in your systems. 25 May 2021, Peter Gervase (Red Hat). Image credits: Lewis Cowles, CC BY-SA 4.0

The multitude of well-publicized breaches of large consumer corporations underscores the critical importance of system security management. Fortunately, there are many different applications that help secure computer systems. One is Kali , a Linux distribution developed for security and penetration testing. This article demonstrates how to use Kali Linux to investigate your system to find weaknesses.

Kali installs a lot of tools, all of which are open source, and having them installed by default makes things easier.


(Peter Gervase, CC BY-SA 4.0 )

The systems that I'll use in this tutorial are:

  1. : This is the system where I'll launch the scans and attacks. It has 30GB of memory and six virtualized CPUs (vCPUs).
  2. : This is a Red Hat Enterprise Linux 8 system that will be the target. It has 16GB of memory and six vCPUs. This is a relatively up-to-date system, but some packages might be out of date.
  3. This system also includes httpd-2.4.37-30.module+el8.3.0+7001+0766b9e7.x86_64, mariadb-server-10.3.27-3.module+el8.3.0+8972+5e3224e9.x86_64, tigervnc-server-1.9.0-15.el8_1.x86_64, vsftpd-3.0.3-32.el8.x86_64, and WordPress version 5.6.1.

I included the hardware specifications above because some of these tasks are pretty demanding, especially for the target system's CPU when running the WordPress Security Scanner ( WPScan ).

Investigate your system

I started my investigation with a basic Nmap scan on my target system. (You can dive deeper into Nmap by reading Using Nmap results to help harden Linux systems.) An Nmap scan is a quick way to get an overview of which ports and services are visible from the system initiating the scan.


(Peter Gervase, CC BY-SA 4.0 )

This default scan shows that there are several possibly interesting open ports. In reality, any open port is possibly interesting because it could be a way for an attacker to breach your network. In this example, ports 21, 22, 80, and 443 are nice to scan because they are commonly used services. At this early stage, I'm simply doing reconnaissance work and trying to get as much information about the target system as I can.

I want to investigate port 80 with Nmap, so I use the -p 80 argument to look at port 80 and -A to get information such as the operating system and application version.


(Peter Gervase, CC BY-SA 4.0 )

Some of the key lines in this output are:

80/tcp open http Apache httpd 2.4.37 ((Red Hat Enterprise Linux))
|_http-generator: WordPress 5.6.1

Since I now know this is a WordPress server, I can use WPScan to get information about potential weaknesses. A good investigation to run is to try to find some usernames. Using --enumerate u tells WPScan to look for users in the WordPress instance. For example:

┌── ( root💀kali ) - [ ~ ]
└─ # wpscan --url --enumerate u
__ _______ _____
\ \ / / __ \ / ____ |
\ \ / \ / /| | __ ) | ( ___ ___ __ _ _ __ ®
\ \ / \ / / | ___ / \___ \ / __ |/ _ ` | '_ \
\ /\ / | | ____) | (__| (_| | | | |
\/ \/ |_| |_____/ \___|\__,_|_| |_|

WordPress Security Scanner by the WPScan Team
Version 3.8.10
Sponsored by Automattic -
@_WPScan_, @ethicalhack3r, @erwan_lr, @firefart

[+] URL: []
[+] Started: Tue Feb 16 21:38:49 2021

Interesting Finding(s):
[i] User(s) Identified:

[+] admin
| Found By: Author Posts - Display Name (Passive Detection)
| Confirmed By:
| Author Id Brute Forcing - Author Pattern (Aggressive Detection)
| Login Error Messages (Aggressive Detection)

[+] pgervase
| Found By: Author Posts - Display Name (Passive Detection)
| Confirmed By:
| Author Id Brute Forcing - Author Pattern (Aggressive Detection)
| Login Error Messages (Aggressive Detection)

This shows there are two users: admin and pgervase . I'll try to guess the password for admin by using a password dictionary, which is a text file with lots of possible passwords. The dictionary I used was 37G and had 3,543,076,137 lines.
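A password dictionary is just a newline-delimited text file, so its size can be checked with wc -l before committing to a long attack. A tiny stand-in (the file name and contents below are made up for illustration):

```shell
# Build a tiny stand-in dictionary (real ones run to billions of lines)
printf '%s\n' redhat password 123456 admin letmein > passwords.txt

# Count the candidate passwords, as you would to size a real wordlist
wc -l < passwords.txt
```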

Just as there are multiple text editors, web browsers, and other applications to choose from, there are multiple tools available for launching password attacks. Here are two example commands using Nmap and WPScan:

# nmap -sV --script http-wordpress-brute --script-args userdb=users.txt,passdb=/path/to/passworddb,threads=6
# wpscan --url --passwords /path/to/passworddb --usernames admin --max-threads 50 | tee nmap.txt

This Nmap script is one of many possible scripts I could have used, and scanning the URL with WPScan is just one of many possible tasks this tool can do. You can decide which you would prefer to use.

This WPScan example shows the password at the end of the file:

┌── ( root💀kali ) - [ ~ ]
└─ # wpscan --url --passwords passwords.txt --usernames admin
__ _______ _____
\ \ / / __ \ / ____ |
\ \ / \ / /| | __ ) | ( ___ ___ __ _ _ __ ®
\ \ / \ / / | ___ / \___ \ / __ |/ _ ` | '_ \
\ /\ / | | ____) | (__| (_| | | | |
\/ \/ |_| |_____/ \___|\__,_|_| |_|

WordPress Security Scanner by the WPScan Team
Version 3.8.10
Sponsored by Automattic -
@_WPScan_, @ethicalhack3r, @erwan_lr, @firefart

[+] URL: []
[+] Started: Thu Feb 18 20:32:13 2021

Interesting Finding(s):


[+] Performing password attack on Wp Login against 1 user/s
Trying admin / redhat Time: 00:01:57 <==================================================================================================================> (3231 / 3231) 100.00% Time: 00:01:57
[SUCCESS] - admin / redhat

[!] Valid Combinations Found:
| Username: admin, Password: redhat

[!] No WPVulnDB API Token given, as a result vulnerability data has not been output.
[!] You can get a free API token with 50 daily requests by registering at

[+] Finished: Thu Feb 18 20:34:15 2021
[+] Requests Done: 3255
[+] Cached Requests: 34
[+] Data Sent: 1.066 MB
[+] Data Received: 24.513 MB
[+] Memory used: 264.023 MB
[+] Elapsed time: 00:02:02

The Valid Combinations Found section near the end contains the admin username and password. It took only two minutes to go through 3,231 lines.

I have another dictionary file with 3,238,659,984 unique entries, which would take much longer and leave a lot more evidence.

Using Nmap produces a result much faster:

┌── ( root💀kali ) - [ ~ ]
└─ # nmap -sV --script http-wordpress-brute --script-args userdb=users.txt,passdb=password.txt,threads=6
Starting Nmap 7.91 ( https:// ) at 2021-02-18 20:48 EST
Nmap scan report for ( )
Host is up (0.00015s latency).
Not shown: 995 closed ports
21/tcp   open  ftp     vsftpd 3.0.3
22/tcp   open  ssh     OpenSSH 8.0 (protocol 2.0)
80/tcp   open  http    Apache httpd 2.4.37 ((Red Hat Enterprise Linux))
|_http-server-header: Apache/2.4.37 (Red Hat Enterprise Linux)
| http-wordpress-brute:
|   Accounts:
|     admin:redhat - Valid credentials <<<<<<<
|     pgervase:redhat - Valid credentials <<<<<<<
|_  Statistics: Performed 6 guesses in 1 seconds, average tps: 6.0
111/tcp  open  rpcbind 2-4 (RPC #100000)
| rpcinfo:
|   program version    port/proto  service
|   100000  2,3,4      111/tcp     rpcbind
|   100000  2,3,4      111/udp     rpcbind
|   100000  3,4        111/tcp6    rpcbind
|_  100000  3,4        111/udp6    rpcbind
3306/tcp open  mysql   MySQL 5.5.5-10.3.27-MariaDB
MAC Address: 52:54:00:8C:A1:C0 (QEMU virtual NIC)
Service Info: OS: Unix

Service detection performed. Please report any incorrect results at https:// /submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.68 seconds

However, running a scan like this can leave a flood of HTTPD logging messages on the target system:

- - [18/Feb/2021:20:14:01 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:02 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:02 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
- - [18/Feb/2021:20:14:02 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "" "WPScan v3.8.10 ("
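As a quick sketch of how a defender could spot this flood in the logs, the entries can be summarized per client with awk; the sample log lines, IP addresses, and file name below are fabricated for illustration:

```shell
# Create a small sample access log (contents are made up for the demo)
cat > access_log.sample <<'EOF'
10.0.0.5 - - [18/Feb/2021:20:14:00 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "-" "WPScan v3.8.10"
10.0.0.5 - - [18/Feb/2021:20:14:01 -0500] "POST /wp-login.php HTTP/1.1" 200 7575 "-" "WPScan v3.8.10"
10.0.0.9 - - [18/Feb/2021:20:14:02 -0500] "GET /index.php HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
EOF

# Count wp-login.php POSTs per client IP; a large count suggests a brute-force run
awk '$6 == "\"POST" && $7 == "/wp-login.php" {hits[$1]++} END {for (ip in hits) print ip, hits[ip]}' access_log.sample
```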

To get information about the HTTPS server found in my initial Nmap scan, I used the sslscan command:

┌── ( root💀kali ) - [ ~ ]
└─ # sslscan
Version: 2.0.6-static
OpenSSL 1.1.1i-dev xx XXX xxxx

Connected to

Testing SSL server on port 443 using SNI name

SSL / TLS Protocols:
SSLv2 disabled
SSLv3 disabled
TLSv1.0 disabled
TLSv1.1 disabled
TLSv1.2 enabled
TLSv1.3 enabled
< snip >

This shows information about the enabled SSL protocols and, further down in the output, information about the Heartbleed vulnerability:

TLSv1.3 not vulnerable to heartbleed
TLSv1.2 not vulnerable to heartbleed

Tips for preventing or mitigating attackers

There are many ways to defend your systems against the multitude of attackers out there. A few key points are:
  • Know your systems: This includes knowing which ports are open, what ports should be open, who should be able to see those open ports, and what is the expected traffic on those services. Nmap is a great tool to learn about systems on the network.
  • Use current best practices: What is considered a best practice today might not be a best practice down the road. As an admin, it's important to stay up to date on trends in the infosec realm.
  • Know how to use your products: For example, rather than letting an attacker continually hammer away at your WordPress system, block their IP address and limit the number of times they can try to log in before getting blocked. Blocking the IP address might not be as helpful in the real world because attackers are likely to use compromised systems to launch attacks. However, it's an easy setting to enable and could block some attacks.
  • Maintain and verify good backups: If an attacker compromises one or more of your systems, being able to rebuild from known good and clean backups could save lots of time and money.
  • Check your logs: As the examples above show, scanning and penetration commands may leave lots of logs indicating that an attacker is targeting the system. If you notice them, you can take preemptive action to mitigate the risk.
  • Update your systems, their applications, and any extra modules: As NIST Special Publication 800-40r3 explains, "patches are usually the most effective way to mitigate software flaw vulnerabilities, and are often the only fully effective solution."
  • Use the tools your vendors provide: Vendors have different tools to help you maintain their systems, so make sure you take advantage of them. For example, Red Hat Insights , included with Red Hat Enterprise Linux subscriptions, can help tune your systems and alert you to potential security threats.
Learn more

This introduction to security tools and how to use them is just the tip of the iceberg. To dive deeper, you might want to look into the following resources:

[May 16, 2021] Kickstart driven CentOS 7 install from USB

May 16, 2021 |

Kickstart driven CentOS 7 install from USB

None of what is written below is particularly original; however, at the time of writing I was unable to find a documented method on the internet that successfully created a kickstart-driven CentOS 7 USB installer.

My interest was in doing this manually as I require this USB (image) to be created from a script. Therefore, I did not look into using ISO to USB applications - in addition, these typically do not allow custom kickstart files to be used.


Much of the process described below was found on the CentOS Wiki page on Installing from USB key, and from the Softpanorama page on the same subject. I thoroughly recommend reading all of the latter as it highlights the shortcomings/dangers associated with the steps below.

USB key preparation Partition USB

This can probably be done as a disk image too, though I haven't tried this yet. Below I will use /dev/sdX for the USB device.

  • Create two partitions: one of type W95 FAT32 (LBA) (assigned code "c" in fdisk) of ~250MB, marked bootable, and an ext3 partition from the remaining space.
sudo fdisk /dev/sdX
n (create partition, accept defaults for type, number, and first sector)
+250M (defined size as 250MB)
c (change type to W95 FAT32 (LBA) - other FAT types may work, but I have not tried)
a (make bootable)
n (create partition, accept defaults for type, number, first sector, and size)
w (write changes to device)
  • Format partitions
sudo mkfs -t vfat -n "BOOT" /dev/sdX1
sudo mkfs -L "DATA" /dev/sdX2
  • Write MBR data to device
sudo dd conv=notrunc bs=440 count=1 if=/usr/share/syslinux/mbr.bin of=/dev/sdX
  • Install syslinux to the first partition
sudo syslinux /dev/sdX1
Copy files to USB
  • Mount the partitions
mkdir BOOT && sudo mount /dev/sdX1 BOOT
mkdir DATA && sudo mount /dev/sdX2 DATA
mkdir DVD && sudo mount /path/to/centos/dvd.iso DVD
  • Copy DVD isolinux contents to BOOT
sudo cp DVD/isolinux/* BOOT
  • Rename isolinux.cfg to syslinux.cfg
sudo mv BOOT/isolinux.cfg BOOT/syslinux.cfg
  • I also deleted a few bits from BOOT I didn't think were required, e.g. isolinux.bin , TRANS.TBL , upgrade.img , grub.conf .
  • I then copied my kickstart file to the BOOT directory and the CentOS 7 ISO to the DATA partition.

The final file structure looked something like this:

├── boot.msg
├── initrd.img
├── ks.cfg
├── ldlinux.sys
├── memtest
├── splash.png
├── syslinux.cfg
├── upgrade.img
├── vesamenu.c32
└── vmlinuz
└── CentOS-7.0-1406-x86_64-Minimal.iso
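Taken together, the preparation and copy steps above can be sketched as a single script. The device and ISO paths are placeholders, and every command is echoed rather than executed (a dry run), so nothing touches a real disk until you deliberately change run():

```shell
#!/bin/sh
# Dry-run sketch of the USB preparation steps; adjust DEVICE and ISO first.
DEVICE=/dev/sdX                  # placeholder - your USB key
ISO=/path/to/centos/dvd.iso      # placeholder - the CentOS 7 ISO

run() { echo "+ $*"; }           # change to: run() { "$@"; }  to execute for real

run mkfs -t vfat -n BOOT "${DEVICE}1"
run mkfs -L DATA "${DEVICE}2"
run dd conv=notrunc bs=440 count=1 if=/usr/share/syslinux/mbr.bin of="$DEVICE"
run syslinux "${DEVICE}1"
run mount "${DEVICE}1" BOOT
run mount "${DEVICE}2" DATA
run mount "$ISO" DVD
run cp -r DVD/isolinux/. BOOT
run mv BOOT/isolinux.cfg BOOT/syslinux.cfg
run cp ks.cfg BOOT
run cp "$ISO" DATA
```

The partitioning itself is still done interactively in fdisk as shown earlier.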
Edit the syslinux.cfg

So that it points to the ISO and the kickstart

Here is the install CentOS 7 entry from the Minimal ISO isolinux.cfg (which we renamed syslinux.cfg ):

label linux                                                                     
  menu label ^Install CentOS 7                                                  
  kernel vmlinuz                                                                
  append initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet

The append line is changed to read the following:

append initrd=initrd.img inst.stage2=hd:sdb2:/ ks=hd:sdb1:/ks.cfg

I suspect LABEL could be used here rather than the enumerated device, which would make it safer, but I haven't tried this yet. Assuming the system you are installing on has only a single HD, the USB key will be enumerated as sdb; more information about this can be found in the Softpanorama article.
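Following that LABEL idea (untested, as noted above), the append line would use the partition labels created during formatting instead of the enumerated device:

```
append initrd=initrd.img inst.stage2=hd:LABEL=DATA:/ ks=hd:LABEL=BOOT:/ks.cfg
```

This avoids guessing whether the key enumerates as sdb, since the BOOT and DATA labels travel with the partitions.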

When you boot from the USB and select Install CentOS 7, it now installs the system as described by your kickstart.

[Mar 30, 2021] CloudLinux Launches AlmaLinux, CentOS Linux clone

It is out. Now the question is: "Will it survive?"
Mar 30, 2021 |

... Now, under a new name, AlmaLinux OS is here with its first release.

The company also announced the formation of a non-profit organization: AlmaLinux Open Source Foundation . This group will take over managing the AlmaLinux project going forward. CloudLinux has committed a $1 million annual endowment to support the project.

Jack Aboutboul, former Red Hat and Fedora engineer and architect, will be AlmaLinux's community manager. Altogether, Aboutboul brings over 20 years of experience in open-source communities as a participant, manager, and evangelist.

He'll be helped by the AlmaLinux governing board. Currently, this includes Jesse Asklund, global head of customer experience for WebPros at cPanel; Simon Phipps, open-source advocate and a former president of the Open Source Initiative (OSI); Igor Seletskiy, CloudLinux CEO; and Eugene Zamriy, CloudLinux director of release engineering. Two additional members of the governing board for the 501(c)(6) non-profit organization will be selected by the AlmaLinux community.

"In an effort to fill the void soon to be left by the demise of CentOS as a stable release, AlmaLinux has been developed in close collaboration with the Linux community," said Aboutaboul in a statement. "These efforts resulted in a production-ready alternative to CentOS that is supported by community members."

... ... ...

Since its original CentOS announcements, Red Hat has announced free RHEL releases for small production workloads and development teams and open-source, non-profit groups . That, however, doesn't answer the needs of businesses, which were using CentOS and relying on their own in-house support teams rather than Red Hat's support.

This first release of AlmaLinux is a one-to-one binary-compatible fork of RHEL 8.3. Looking ahead, AlmaLinux will seek to stay in step with future RHEL releases. RHEL 8.x, CentOS 8.x, and Oracle Linux 8.x migration instructions are available today.

The GitHub page has already been published and the completed source code has been published in the main download repository . The CloudLinux engineering team has also published FAQ on AlmaLinux Wiki .

[Mar 28, 2021] How to Disable SELinux on RHEL 8 by Josphat Mutai

Jan 26, 2019 |

... ... ...

Before disabling SELinux, first check its mode of operation using the sestatus command.

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31

The default mode in RHEL 8 is Enforcing . In this mode, SELinux policy is enforced and it denies access based on SELinux policy rules.

The other available mode for running SELinux in enabled state is Permissive. In this mode, SELinux policy is not enforced and access is not denied but denials are logged for actions that would have been denied if running in enforcing mode.

To permanently disable SELinux, edit its main configuration file /etc/selinux/config and set:

SELINUX=disabled

The change takes effect after a reboot. To postpone the reboot, set the current mode to permissive at runtime:

sudo setenforce 0
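The config edit can also be done non-interactively with sed. The sketch below works on a local copy of the file so it can be run safely; on a real system you would point it (with sudo) at /etc/selinux/config, and the sample file contents here are an assumption:

```shell
# Work on a copy of /etc/selinux/config (sample content is an assumption)
CONFIG=./selinux-config.sample
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONFIG"

# Flip whatever mode is set (enforcing or permissive) to disabled;
# the anchored ^SELINUX= pattern leaves SELINUXTYPE= untouched
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CONFIG"

grep '^SELINUX=' "$CONFIG"
```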

[Feb 21, 2021] Systemd-Free Devuan GNU-Linux 3.1 Distro Released for Freedom Lovers - 9to5Linux

Feb 21, 2021 |

The Devuan GNU/Linux project announced today the release and general availability of Devuan GNU/Linux 3.1 as the first update in the latest Devuan GNU/Linux 3.0 "Beowulf" operating system series. Devuan GNU/Linux 3.1 comes nine months after the release of the Devuan GNU/Linux 3.0 series to provide the freedom loving community with up-to-date ISO images in case they need to reinstall the system or deploy the systemd-free distribution on new computers.

While Devuan GNU/Linux 4.0 "Chimaera" is still in the works, Devuan GNU/Linux 3.1 brings updated desktop-live, server, and minimal-live ISO images powered by the Linux 4.19 LTS kernel from Debian GNU/Linux 10 "Buster" operating system series.

The biggest change in this release is the availability of the runit init scheme in the installer, alongside existing OpenRC and SysVinit options. runit was already available as an alternative to /sbin/init since the release of the Devuan GNU/Linux 3.0 "Beowulf" series. In addition, the installer now recommends the as default mirror for fetching packages, and lets you use an alternate bootloader to GRUB, such as LILO, along with the ability to exclude non-free firmware, from the Expert install options.

This release also ships with a new package (debian-pulseaudio-config-override) that promises to address issues with the PulseAudio sound system being off by default, the Mozilla Firefox 78.7 ESR web browser, LightDM 1.26 login manager, and many other updated core components and apps.

You can download Devuan GNU/Linux 3.1 right now from the official website as Desktop Live, Minimal Live, Server, and Netinstall images for both 32-bit and 64-bit architectures. Unfortunately, the ARM and virtual images have not been updated in this release. Existing Devuan GNU/Linux 3.0 users don't need to download the new ISOs; they can keep their installations up to date by running sudo apt update && sudo apt full-upgrade in a terminal emulator.

Read more at Systemd-Free Devuan GNU/Linux 3.1 Distro Released for Freedom Lovers

[Feb 05, 2021] The Unofficial Way To Migrate To AlmaLinux From CentOS 8 - OSTechNix

Feb 05, 2021 |

The Unofficial Way To Migrate To AlmaLinux From CentOS 8. Written by Sk, February 3, 2021

AlmaLinux beta is already out! You can read the details in our previous post. I hope you are all exploring the beta version. Some of you might be wondering when the AlmaLinux developers will release a tool to migrate CentOS to AlmaLinux. While there is no news from the AlmaLinux team yet, I came across an unofficial way to migrate to AlmaLinux from CentOS 8 on Reddit.

A Reddit user has provided a simple workaround for impatient users who want to migrate to AlmaLinux. I followed the steps, and it worked! I was able to successfully convert CentOS 8 to the AlmaLinux beta version using the steps provided below. The migration process was smooth and straightforward!

A word of caution:

Before migrating to AlmaLinux, back up all important data from your CentOS system. I tested it on a freshly installed CentOS 8 virtual machine. My CentOS VM has no data and is a minimal installation. I would not recommend this method for migrating production systems. I strongly suggest you test this method on a test machine first and then decide whether you want to proceed with the migration.

If you're not sure what to do, it is really better to wait for the official script from AlmaLinux developers.

Migrate To AlmaLinux From CentOS 8

First, update your CentOS 8 system using the following command as root or a sudo user:

$ sudo dnf update -y

Reboot your CentOS system after the update is completed.

$ sudo reboot

Next, remove all CentOS GPG keys, repositories, and branding details such as backgrounds, logos, etc.

If it is a CentOS desktop system, run the following command to remove all aforementioned details:

$ sudo rpm -e --nodeps centos-backgrounds centos-indexhtml centos-gpg-keys centos-linux-release centos-linux-repos centos-logos

If it is a CentOS server with no GUI, run this command:

$ sudo rpm -e --nodeps centos-gpg-keys centos-linux-release centos-linux-repos

Next, download and install AlmaLinux release package:

$ sudo rpm -ivh

Sample output:

 warning: /var/tmp/rpm-tmp.R3ZO5W: Header V4 RSA/SHA256 Signature, key ID c21ad6ea: NOKEY
 Verifying...                          ################################# [100%]
 Preparing...                          ################################# [100%]
 Updating / installing...
    1:almalinux-release-8.3-2.el8      ################################# [100%]
Install AlmaLinux release package

Finally, migrate to AlmaLinux from the CentOS 8 system using the following command:

$ sudo dnf distro-sync -y
Migrate To AlmaLinux From CentOS 8

This command will install some new packages, upgrade and downgrade some existing packages, reinstall a few, and delete some others. It will take a while depending on the internet connection speed and the total number of installed packages on your CentOS system. Please be patient. For me, it took around 20 minutes.

After the migration is completed, reboot your system:

$ sudo reboot

Now your system will boot to the newly migrated AlmaLinux system:

Boot to AlmaLinux

Check if the migration process is successful:

$ cat /etc/redhat-release 
AlmaLinux release 8.3 Beta (Purple Manul)
Check AlmaLinux release version

There it is! Congratulations! We have successfully migrated from CentOS 8 to AlmaLinux 8 beta version.
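Before running the steps above on another machine, a small guard can confirm it really is CentOS 8. The sketch below takes the release-file path as an argument so it can be exercised on a copy; the sample file content is an assumption (on a real system you would pass /etc/centos-release):

```shell
# Return success only if the given release file identifies CentOS 8
is_centos8() {
    grep -q '^CentOS Linux release 8' "$1"
}

# Exercise the check on a sample file (real path: /etc/centos-release)
printf 'CentOS Linux release 8.3.2011\n' > release.sample
if is_centos8 release.sample; then
    echo "CentOS 8 detected - safe to attempt the migration"
else
    echo "not CentOS 8 - aborting" >&2
fi
```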

[Feb 03, 2021] Red Hat Expands Free RHEL to Quell CentOS Kerfuffle

How that can beat Oracle's offer is unclear. Also, at least two additional alternatives to CentOS are in the pipeline. AlmaLinux, due to be released in Q2 2021, is derived from CloudLinux, an existing commercial downstream RHEL distribution. Rocky Linux, from CentOS co-founder Gregory Kurtzer, is also making progress and will be released this year.
Feb 03, 2021 |

Last week, Red Hat announced it will now allow you to run Red Hat Enterprise Linux in production on up to 16 servers for free. The program, which begins on February 1, doesn't include technical support, but does include security patches and bug fixes. It's a free RHEL offering meant to appease CentOS users, who were unhappy upon learning in December 2020 that Red Hat will end support for the popular free RHEL alternative at the end of this year . (Previously, users had been promised support through 2029.)

... ... ...

One group that Red Hat already knows is deploying millions of CentOS installs is web hosting companies, which use CentOS because they have in-house RHEL expertise and therefore don't require support. Their hosting plans typically default to CentOS, while including options for other free Linux distributions, such as Ubuntu or Debian, for those who want them.

[Feb 02, 2021] Oracle does provide an equivalent to CentOS free of charge.

Red Hat knowingly allowed tens of thousands (or more) of people to upgrade from CentOS 7 to CentOS 8, while knowing they were going to pull the plug on CentOS 8. It's a huge breach of trust, borderline fraud... So it would be fair if they lose some licenses to Oracle, which still provides an equivalent to CentOS free of charge.
Notable quotes:
"... I'll be damned if OEL isn't a stable equivalent of RHEL. In many cases, it feels like it's more stable. ..."
"... And they have a CentOS -> OEL migration script that you could run and then you could buy their service. RedHat did not support that for a long time, so they were just leaving money/customers on the table. That seems to have changed recently, but too little too late. ..."
Dec 10, 2020 |

10 hours ago

...I'll be damned if OEL isn't a stable equivalent of RHEL. In many cases, it feels like it's more stable. Download it and try it yourself:

... ... ...

orev 1 point 8 hours ago

And they have a CentOS -> OEL migration script that you could run and then you could buy their service. RedHat did not support that for a long time, so they were just leaving money/customers on the table. That seems to have changed recently, but too little too late.

hawaiian717 1 point 10 minutes ago

VirtualBox itself is GPLv2, so there's not a lot Oracle can do. The problem is the Extension Pack which is free only for personal/evaluation use; for commercial use it must be purchased.

[Feb 02, 2021] RedHat local repository and offline updates

Aug 03, 2018 |

My company just bought two Red Hat licenses for two physical machines. The machines won't be accessible via the internet, so we have an issue regarding updates, patches, etc.

I am thinking of configuring a local repository that is accessible via the internet and holds all the necessary updates, but there is a problem: I have only two licenses. Can I activate the local repository for updates in addition to one of my two service machines? Or is there any other way, such as some sort of offline package that I can download separately from Red Hat and use to update my machines without internet access?

thanks in advance


You have several options:
  • Red Hat Satellite
  • Download the updates on a connected system (using reposync )
  • Update with new minor release media
  • Manually downloading and installing or updating packages
  • Create a Local Repository

See How can we regularly update a disconnected system (A system without internet connection)? for details.

[Feb 02, 2021] How can we regularly update a disconnected system (A system without internet connection)

May 02, 2019 |

Solution Verified - Updated August 10 2017 at 12:12 PM -


Depending on the environment and circumstances, there are different approaches for updating an offline system.

Approach 1: Red Hat Satellite

For this approach a Red Hat Satellite server is deployed. The Satellite receives the latest packages from Red Hat repositories. Client systems connect to the Satellite and install updates. More details on Red Hat Satellite are available here: . Please also refer to the document Update a Disconnected Red Hat Network Satellite .

  • Pros:
    • Installation of updates can be automated.
    • Completely supported solution.
    • Provides selective granularity regarding which updates get made available and installed
    • Satellite can provide repositories for different major versions of Red Hat products
  • Cons:
    • Purchase of Satellite subscription required, setup and maintenance of the Satellite server.

Approach 2: Download the updates on a connected system

If a second, similar system exists

  • which has the same packages installed (the same package profile)
  • and if this second system can be activated/connected to the RHN

then the second system can download applicable errata packages. After downloading the errata packages can be applied to other systems. More documentation: How to update offline RHEL server without network connection to Red Hat Network/Proxy/Satellite? .

  • Pros:
    • No additional server required.
  • Cons:
    • Updating procedure is hard to automate, will probably be performed manually each time.
    • A new system is required for each architecture / major version of RHEL (such as 6.x)

Approach 3: Update with new minor release media

DVD media of new RHEL minor releases (e.g., RHEL 6.1) are available from RHN. These media images can be used directly on the system for updating, or offered to other systems (e.g., via HTTP) as a yum repository for updating. For more details refer to:

Approach 4: Manually downloading and installing or updating packages

It is possible to download and install errata packages. For details refer to this document: How do I download security RPMs using the Red Hat Errata Website? .

  • Pros:
    • No additional server required.
  • Cons:
    • Consumes a lot of time.
    • Difficult to automate.
    • Dependency resolution can become very complicated and time consuming.

Approach 5: Create a Local Repository

This approach is applicable to RHEL 5/6/7. It requires a registered server that is connected to Red Hat repositories and runs the same major version. The connected system can use reposync to download all the RPMs from a specified repository into a local directory. That directory, served via HTTP, NFS, or FTP, or targeted as a local directory (file://), can then be configured as a repository which yum can use to install packages and resolve dependencies.

How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?

  • Pros:
    • Automation possible.
    • For Development and testing environments, this allows a static (unchanging) repository for the Dev systems to verify updates before the Prod systems update.
  • Cons:
    • Advanced features that Satellite provides are not available in this approach.
    • Does not provide selective granularity as to which errata get made available and installed.
    • A new system is required for each architecture / major version of RHEL (such as 6.x)
    • Clients can not version lock to a minor version using a local repository. The repository must version lock before the reposync to collect only the specified version packages.
    • Clients will not see any new updates until the local repository runs reposync and createrepo --update to download new packages and create new metadata
      • The clients will likely have to run yum clean all to clear out old metadata and collect the new repo metadata
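Under the stated assumptions, the Approach 5 workflow can be sketched as a dry-run shell script that only prints the commands it would run; the repository id and destination directory below are illustrative, not Red Hat defaults:

```shell
#!/bin/sh
# Dry-run sketch of Approach 5 (local repository via reposync).
# Remove the run() indirection to actually execute the commands.
run() { echo "+ $*"; }

REPOID=rhel-7-server-rpms        # hypothetical repository id
DEST=/var/www/html/repos         # hypothetical directory served over HTTP

# On the connected, registered system: mirror the repository.
run reposync --download-metadata --repoid="$REPOID" -p "$DEST"

# Rebuild the repository metadata after each sync.
run createrepo --update "$DEST/$REPOID"

# On each disconnected client: discard stale metadata after the mirror changes.
run yum clean all
```

Clients then point a .repo file at the HTTP/NFS/FTP location of $DEST/$REPOID.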
Checking the security errata

To check the security errata on a system that is not connected to the internet, download a copy of the updateinfo.xml.gz file from an identical registered system. The detailed steps are described in the knowledge base article How to update security Erratas on system which is not connected to internet ?.
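As a sketch of how that metadata can be attached to a local mirror (the paths are assumptions; see the knowledge base article for the authoritative steps), modifyrepo from the createrepo package injects the updateinfo file into the repository's repodata:

```shell
#!/bin/sh
# Dry-run sketch: attach errata metadata to a local repository.
# Paths are illustrative assumptions; run() only prints the commands.
run() { echo "+ $*"; }

REPO=/var/www/html/repos/rhel-7-server-rpms   # hypothetical local mirror

# updateinfo.xml.gz copied over from an identical registered system:
run modifyrepo /tmp/updateinfo.xml.gz "$REPO/repodata"

# Afterwards, clients can list pending security errata:
run yum updateinfo list security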

Root Cause

Without a connection to the RHN/RHSM the updates have to be transferred over other paths. These are partly hard to implement and automate.

This solution is part of Red Hat's fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.

1 February 2012 3:56 PM Randy Zagar

"Approach 3: update with new minor release media" will not work with RHEL 6. Many packages (over 1500) in the "optional" channels simply are not present on any ISO images. There is an open case, but the issue will not be addressed before RHEL 6 Update 4 (and possibly never).

22 May 2014 9:27 AM Umesh Susvirkar

I agree with Randy Zagar: "optional" packages should be available offline, along with the other channels which are not available in ISOs.

12 July 2016 7:20 PM Adrian Kennedy

Can Approach 5 "additional server, reposync fetching" be applied with RHEL 7 servers?

4 August 2016 8:55 AM Dejan Cugalj


Yes. You need to:
- subscribe the server to Red Hat
- synchronize repositories with the reposync utility
- plan for up to 40 GB per major release of RHEL.
16 August 2016 10:54 PM Michael White

However, won't I need to stand up another RHEL 7 server in addition to the RHEL 6 server?

8 August 2017 7:01 PM John Castranio

Correct. When using an external server to reposync updates, you will need one system for each Major Version of RHEL that you want to sync packages from.

RHEL 7 does not have access to RHEL 6 repositories just as RHEL 6 can't access RHEL 7 repositories

15 January 2019 10:14 PM BRIAN KEYES

what I am looking for is the instructions on the reposync install AND how to update offline clients

do I have to manually install apache?

16 January 2019 10:50 PM Randy Zagar

You will need: a RH Satellite or RH Proxy server, an internal yum server, and a RHN client for each OS variant (and architecture) you intend to support "offline". E.g. supporting 6Server, 6Client, and 6Workstation for i686 and x86_64 would normally require 6 RHN clients, but only three RHN clients would be necessary for RHEL7, as there's no support for i686 architecture

Yum clients can (according to the docs) use nfs resource paths in the baseurl statement, so apache is not strictly necessary on your yum server, but most people do it that way...

Each RHN client will need: local storage to store packages downloaded via reposync (e.g. "reposync -d -g -l -n -t -p /my/storage --repoid=rhel-i686-workstation-optional-6"). You'll need to run "createrepo" on each repository that gets updated, and you'll need to create an rsync service that provides access to each clients' /my/storage volume

Your internal yum server will need a cron script to run rsync against your RHN clients so you can collect all these software channels in one spot.

You'll also need to create custom yum repo files for your client systems (e.g. redhat-6Workstation.repo) that will point to the correct repositories on your yum server.

I'd recommend you NOT run these cron scripts during normal business hours... your sys-admins will want a stable copy so they can clone things for other offline networks.

If you're clever, you can convince one RHN client system to impersonate the different OS variants, reducing the number of systems you need to deploy.

You'll also most likely want to run "hardlink" on your yum server pretty regularly as there's lots of redundant packages across each OS variant.
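The rsync-over-cron step described above might look like this hypothetical /etc/cron.d fragment on the internal yum server (the host name, rsync module name, and schedule are assumptions for illustration):

```
# /etc/cron.d/yum-mirror -- pull mirrored channels from an RHN client
# nightly at 02:30, outside business hours as recommended above.
30 2 * * *  root  rsync -a --delete rhn-client1::repos/ /srv/yum/repos/
```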

[Jan 30, 2021] CloudLinux Hopes to Release CentOS Replacement AlmaLinux This Week

It is unclear whether it will be competitive with Oracle Linux or not... The amount of funds CloudLinux has is much smaller than in Oracle's case. As they already market a RHEL clone, they have substantial synergy, although not to the extent Oracle has: Oracle needs to tune its clone to the needs of its database division, and it sells a commercial version of the clone, which helps to recoup the costs.
Jan 30, 2021 |

The Linux server operating system also now has a proper name: AlmaLinux. It was originally dubbed "Lenix" as a placeholder. Alma is Latin for "soul."

While the exact number of servers running CentOS is an unknown, Seletskiy is in a unique position to make an educated guess..."

"I cannot say the total number, but I'm sure that in enterprise use it's to the tune of five to ten million CentOS servers."

..."I don't know how much it will cost, to be honest," Seletskiy told us. "I know that it will definitely be at least to the tune of half a million or more."

The money will be spent in part to hire developers to maintain support for JBoss and other software that is essential to enterprise workloads, he said, in addition to the cost of creating a nonprofit organization that will hold the project's trademarks and assure members that the project will be community controlled.

[Jan 29, 2021] Sudo vulnerability allows attackers to gain root privileges on Linux systems (CVE-2021-3156) - Help Net Security

Jan 29, 2021 |

Sudo vulnerability allows attackers to gain root privileges on Linux systems (CVE-2021-3156)

A vulnerability ( CVE-2021-3156 ) in sudo, a powerful and near-ubiquitous open-source utility used on major Linux and Unix-like operating systems, could allow any unprivileged local user to gain root privileges on a vulnerable host (without authentication).


"This vulnerability is perhaps the most significant sudo vulnerability in recent memory (both in terms of scope and impact) and has been hiding in plain sight for nearly 10 years," said Mehul Revankar, Vice President of Product Management and Engineering for VMDR at Qualys, who noted that there are likely millions of assets susceptible to it.

About the vulnerability (CVE-2021-3156)

Also dubbed Baron Samedit (a play on Baron Samedi and sudoedit), the heap-based buffer overflow flaw is present in sudo legacy versions (1.8.2 to 1.8.31p2) and all stable versions (1.9.0 to 1.9.5p1) in their default configuration.

"When sudo runs a command in shell mode, either via the -s or -i command line option, it escapes special characters in the command's arguments with a backslash. The sudoers policy plugin will then remove the escape characters from the arguments before evaluating the sudoers policy (which doesn't expect the escape characters) if the command is being run in shell mode," sudo maintainer Todd C. Miller explained .

"A bug in the code that removes the escape characters will read beyond the last character of a string if it ends with an unescaped backslash character. Under normal circumstances, this bug would be harmless since sudo has escaped all the backslashes in the command's arguments. However, due to a different bug, this time in the command line parsing code, it is possible to run sudoedit with either the -s or -i options, setting a flag that indicates shell mode is enabled. Because a command is not actually being run, sudo does not escape special characters. Finally, the code that decides whether to remove the escape characters did not check whether a command is actually being run, just that the shell flag is set. This inconsistency is what makes the bug exploitable."

Qualys researchers, who unearthed and reported CVE-2021-3156, have provided additional technical details and instructions on how users can verify whether they have a vulnerable version.
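As a rough sketch (not Qualys's official check; trust your vendor's advisory over this pattern match), a shell helper can classify a sudo version string against the affected ranges named above. On a live system you would feed it the version field from the first line of `sudo -V`:

```shell
#!/bin/sh
# Classify a sudo version string against the affected ranges:
# legacy 1.8.2 - 1.8.31p2 and stable 1.9.0 - 1.9.5p1.
is_affected() {
  case "$1" in
    1.8.[2-9]|1.8.[2-9]p*|1.8.1[0-9]|1.8.1[0-9]p*)                    echo "in affected range" ;;
    1.8.2[0-9]|1.8.2[0-9]p*|1.8.30|1.8.30p*|1.8.31|1.8.31p1|1.8.31p2) echo "in affected range" ;;
    1.9.[0-4]|1.9.[0-4]p*|1.9.5|1.9.5p1)                              echo "in affected range" ;;
    *)                                                                echo "not in affected range" ;;
  esac
}

# Check the locally installed sudo, if any (prints "not in affected
# range" when sudo is absent, since the version string is empty).
is_affected "$(sudo -V 2>/dev/null | awk 'NR==1 {print $NF}')"
```

A version "in affected range" only means it falls inside the ranges listed by Qualys; distributions backport fixes without bumping the upstream version, so confirm against your vendor's errata.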

They developed several exploit variants that work on Ubuntu 20.04, Debian 10, and Fedora 33, but won't be sharing the exploit code publicly. "Other operating systems and distributions are also likely to be exploitable," they pointed out.

Fixes are available

The bug has been fixed in sudo 1.9.5p2, downloadable from the sudo project website.

Patched vendor-supported versions have been provided by Ubuntu, Red Hat, Debian, Fedora, Gentoo, and others.

Though it only allows escalation of privilege and not remote code execution, CVE-2021-3156 could be leveraged by attackers who look to compromise Linux systems and have already managed to get access (e.g., through brute force attacks).

[Jan 22, 2021] IBM Plunges After Reporting Lowest Q4 Revenue This Century, Slowdown In Cloud And Another Grotesque EPS Fudge

Jan 22, 2021 |



There was some hope last year that IBM was finally turning things around: after all, after 5 consecutive quarters of declining revenues, the company had just managed to grow its top-line for the first time since Q2 2018 - when revenue grew by a paltry 0.1% - and only for the 4th time in the past 8 years. Alas it was not meant to be, and moments ago IBM revealed that revenue declined again in Q4, dropping for the third consecutive quarter, sliding a whopping 6.5%, the biggest decline since 2015 - and while Red Hat revenue rose by 19%, boosting cloud revenue by 10% (including $738MM in internal revenue), total external cloud and cognitive revenues of $6.8 billion once again missed expectations of $7.3BN, and more ominously, were a decline of 4.5% from last year.

Then again "boosted" may be using the term loosely: at $20.4BN in total revenue, and once again missing consensus expectations of a $20.6BN print, IBM's Q4 2020 was its worst fourth quarter for sales this century.

Some more Q4 revenue details, which missed across all key categories, including cloud and cognitive:

  • Cloud and cognitive software revenue $6.84 billion, estimate $7.26 billion
  • Global business services revenue $4.17 billion, estimate $4.17 billion
  • Global technology services revenue $6.57 billion, estimate $6.79 billion
  • Systems revenue $2.50 billion, estimate $2.48 billion
  • Adjusted gross margin 52.5%, estimate 51.2%
  • Total cloud revenue of $7.5 billion, up 10%
  • Red Hat revenue up 19%, normalized for historical comparability


And while IBM's Q4 adjusted, non-GAAP EPS of $2.07 beat expectations of $1.79, if down a whopping 56% Y/Y, as usual this was the product of lots of "artificial intelligence" and aggressive accounting magic because the unadjusted EPS was $1.41, or 32% below the adjusted number. Oh, and the only reason why EPS was this high: IBM reverted to its grotesque "accounting trick" of slashing its effective tax rate, which in Q4 tumbled to just 1.9% down from 8.1% a year ago.

But wait there's more, because the GAAP to non-GAAP bridge was, as usual, ridiculous and a continuation of an "one-time, non-recurring" addback trend that started so many years ago we can't even remember when, but one thing is certain: none of IBM's multiple-time, recurring charges are either one-time, or non-recurring.




We have said it before, but we'll say it again: here are IBM's "one-time, non-recurring" items in Q3...

... and in Q2 ...

.... and in Q1 ...

... and Q4 2019...

And here is the actual "beat" in context:

"We made progress in 2020 growing our hybrid cloud platform as the foundation for our clients' digital transformations while dealing with the broader uncertainty of the macro environment," said Arvind Krishna, IBM chairman and chief executive officer. "The actions we are taking to focus on hybrid cloud and AI will take hold, giving us confidence we can achieve revenue growth in 2021."

Maybe... and yet just like the past three quarters, IBM did not have enough "visibility" into the future to give any guidance for 2021.

There was some good news: in Q4, when IBM's free cash flow was $6.1 billion, the company did not return all of that to shareholders; instead it handed out just $1.5 billion in dividends.

So where did the remaining cash go? "In 2020 we increased investment in our business across R&D and CAPEX, and since October, announced the acquisition of seven companies focused on hybrid cloud and AI," said James Kavanaugh, IBM senior vice president and chief financial officer. "With solid cash generation, steadily expanding gross profit margins, disciplined financial management and ample liquidity, we are well positioned for success as the leading hybrid cloud platform company."

And speaking of cash flow, IBM ended the second quarter with $14.3 billion of cash on hand which includes marketable securities, up $1.3 billion from Q2. Debt, including Global Financing debt of $20.9 billion, totaled $65.4 billion, up from $64.7 billion.

And some more good news: it appears that IBM is finally paying down its debt, which, including Global Financing debt of $21.2 billion, totaled $61.5 billion, down $3.9 billion since the end of the third quarter, and down $11.5 billion since closing the Red Hat acquisition.

Bottom line: while IBM's core business remains a melting ice cube, the bigger concern was the slowdown in Cloud growth, which led to another dismal quarter for revenue and (unadjusted) EPS. Worse, now that IBM is in cash paydown mode, it means little to no growth opportunities, and after algos read through the boilerplate, it was enough to send IBM stock tumbling over 3%, erasing all gains for 2021.


[Jan 05, 2021] "Goldmanization" of IBM and Red Hat: Former Goldman Exec, Trump Advisor Gary Cohn Joins IBM As Vice Chairman

The vampire squid does not take prisoners in its pursuit of money ;-) It seems fitting that IBM brass hired a financial predator like Cohn in a bid to increase profitability at all costs. Red Hat users should probably take note.
Jan 05, 2021 |

Gary Cohn, the onetime No. 2 at Goldman Sachs who left the vampire squid (and cashed out hundreds of millions in performance-based incentives, tax free) back in 2017 for what turned out to be a brief, but tumultuous, stint in the Trump Administration, is returning to the boardroom and the c-suite.

After launching a SPAC, Cohn is headed to IBM, where he will serve as vice chairman and a member of the executive leadership team.

Cohn recently made headlines for refusing to return some $10MM in compensation paid out by Goldman Sachs. Cohn was the lone executive among a group of current and former Goldman leaders who stiffed the bank, which tried to claw back the bonus money as a kind of penance for Goldman's involvement in the 1MDB scandal.

Then again, hiring Cohn makes sense in at least one respect. As Big Blue scrambles to open up new markets and business lines...

stockmarketpundit 32 minutes ago

In the timeless wisdom of George Carlin, "It's a big club and you ain't in it."

J J Pettigrew 29 minutes ago (Edited)

Its a small club...the rotating board member game..

You sit on my board, I'll sit on Jim's (any name) board, Jim sits on your board...and we will all vote for heavy compensations and stock options...

see you in West Palm....

BlueLightning 10 minutes ago (Edited)

These parasites just go from one gravy train job to another. Just one big club!

Five_Black_Eyes_Intel_Agency 23 minutes ago

My all time favourite revolving door pathway is when execs jump from corporations to regulatory bodies, and back to corporations again.

Wonders get achieved, like tax evasion, forcing Americans to pay the highest drug prices in the OECD, and fantastic free lunches sponsored by US taxpayers.

[Jan 02, 2021] How to convert from CentOS or Oracle Linux to RHEL

convert2rhel is an RPM package which contains a Python 2.x script written in a completely incomprehensible, over-modularized manner. Python obscurantism in action ;-)
It looks like a "black box" tool unless you know Python well. As such it is dangerous to rely upon.
Jan 02, 2021 |
  • Ensure that you have an access to RHEL packages through custom repositories configured in the /etc/yum.repos.d/ directory and pointing, for example, to RHEL ISO , FTP, or HTTP. Note that the OS will be converted to the version of RHEL provided by these repositories. Make sure that the RHEL minor version is the same or later than the original OS minor version to prevent downgrading and potential conversion failures. See instructions on how to configure a repository .

  • Recommended: Update packages from the original OS to the latest version that is available in the repositories accessible from the system, and restart the system:

    # yum update -y
    # reboot

    Without performing this step, the rollback feature will not work correctly, and exiting the conversion in any phase may result in a dysfunctional system.

    Before starting the conversion process, back up your system.
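A custom repository in /etc/yum.repos.d/ pointing at mounted RHEL ISO content might look like the following sketch; the repo id, mount point, and gpgkey path are assumptions for illustration, so adjust them to wherever your RHEL packages actually live:

```
# /etc/yum.repos.d/rhel7.repo -- hypothetical custom repository for the
# conversion; the baseurl assumes a RHEL 7 ISO mounted at /mnt/rhel7.
[rhel-7-server-rpms]
name=RHEL 7 Server (local ISO)
baseurl=file:///mnt/rhel7
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```

The section name in brackets is the repository ID you would then pass to convert2rhel's --enablerepo option.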

    Converting the system
    1. Start Convert2RHEL using custom repositories:

      # convert2rhel --disable-submgr --enablerepo <RHEL_RepoID> --debug

      Replace RHEL_RepoID with your custom repository configured in the /etc/yum.repos.d/ directory, for example, rhel-7-server-rpms .

      To display all available options, use the -h , --help option:

      # convert2rhel -h

      NOTE: Packages that are available only in the original distribution and do not have corresponding counterparts in RHEL repositories, or third-party packages, which originate neither from the original Linux distribution nor from RHEL, are left unchanged.

    2. Before Convert2RHEL starts replacing packages from the original distribution with RHEL packages, the following warning message is displayed:

      The tool allows rollback of any action until this point. 
      By continuing all further changes on the system will need to be reverted manually by the user, if necessary.

      Changes made by Convert2RHEL up to this point can be automatically reverted. Confirm that you wish to proceed with the conversion process.

    3. Wait until Convert2RHEL installs the RHEL packages.

      NOTE: After a successful conversion, the utility prints out the convert2rhel command with all arguments necessary for running non-interactively. You can copy the command and use it on systems with a similar setup.

    4. At this point, the system still runs with the original distribution kernel loaded in RAM. Reboot the system to boot into the newly installed RHEL kernel.

      # reboot
    5. Remove third-party packages from the original OS that remained unchanged (typically packages that do not have a RHEL counterpart). To get a list of such packages, use:

      # yum list extras --disablerepo="*" --enablerepo=<RHEL_RepoID>
    6. If necessary, reconfigure system services after the conversion.

    Troubleshooting Logs

    The Convert2RHEL utility stores the convert2rhel.log file in the /var/log/convert2rhel/ directory. Its content is identical to what is printed to the standard output.

    The output of the rpm -Va command, which is run automatically unless the --no-rpm-va option is used, is stored in the /var/log/convert2rhel/rpm_va.log file for debugging purposes.

    Stefan Vtr 1 July 2020 6:24 AM

    The Link to "instructions on how to configure a repository." is not working (404). Also it would be great if the tool installs the repos that are needed for the conversion itself.

    Michal Bocek 1 July 2020 8:30 AM

    Thanks, Stefan, for pointing that out. Before we fix that, you can use this link:

    Regarding the second point of yours - this article explains how to use convert2rhel with custom repositories. Since Red Hat does not have the RHEL repositories public, we leave it up to the user where they obtain the RHEL repositories. For example, when they have a subscribed RHEL system in their company, they can create a mirror of the RHEL repositories available on that system by following this guide:

    However, convert2rhel is also able to connect to Red Hat Subscription Management (RHSM), and for that you need to provide the subscription-manager package and pass the subscription credentials to convert2rhel. Then the convert2rhel chooses the right repository to use for the conversion. You can find the step by step guide for that in

    We are working on improving the user experience related to the use of RHSM.

    Ari Lemmke 10 September 2020 12:31 AM

    • This system could have been done much much much much better.
    • I do not see any point for this utility if it does not work .. i.e. is "working" like this.
    • Nice that it rollbacks everything. For rollbacking feature it gets 1 out of 10 points.

[Jan 02, 2021] Linux sysadmin basics- Start NIC at boot

Nov 14, 2019 |

If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.

Posted: | (Red Hat)

"Fast Ethernet PCI Network Interface Card SN5100TX.jpg" by Jana.Wiki is licensed under CC BY-SA 3.0

It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:

Setting the NIC to the ON position during installation.
More Linux resources

But don't worry: in this article I'll show you how to set the NIC to connect on every boot, and how to disable/enable your NIC on demand.

If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr command:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet brd scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:

$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:


Save and exit the file.
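For reference, the resulting file for a DHCP-configured interface might look like this sketch; the values other than ONBOOT are illustrative, so keep whatever your installer generated and change only the ONBOOT line:

```
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 (illustrative sketch)
TYPE="Ethernet"
BOOTPROTO="dhcp"
NAME="enp0s3"
DEVICE="enp0s3"
ONBOOT="yes"    # was "no"; this is the line to change
```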

You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.

To enable the NIC, use the ifup command:

ifup enp0s3

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Now the ip addr command displays the enp0s3 device with an IP address:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic noprefixroute enp0s3
       valid_lft 86266sec preferred_lft 86266sec
    inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute 
       valid_lft 3467sec preferred_lft 3467sec
    inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet brd scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

To disable a NIC, use the ifdown command. Please note that issuing this command from a remote system will terminate your session:

ifdown enp0s3

Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

That's a wrap

It's frustrating to encounter a Linux system that has no network connection. It's more frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy to miss the switch during installation; I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the ifup command to see if that's your easy fix.

Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name

[Jan 02, 2021] Looking forward to Linux network configuration in the initial ramdisk (initrd)

Nov 24, 2020 |
The need for an initrd

When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader . The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an init process.

This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.

First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the passphrase to the user and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configured IP addresses and routes on a network interface.

[ You might also like: Linux networking: 13 uses for netstat ]

To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.

The dracut infrastructure

On Fedora and RHEL, the initrd is built through dracut . From its home page , dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in /usr/lib/dracut/modules.d ."

A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.

Kernel command-line

Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the root=dhcp argument.

The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux .

The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains root=dhcp . After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path. Dracut then mounts the NFS share at this location and proceeds with the boot.

If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:

root=nfs: ip=

Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.

There are two syntaxes to specify network configuration for an interface (as documented in the dracut.cmdline man page):

  • ip=<method>
  • ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:<method>[:[<mtu>][:<macaddr>]]

The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here are some examples (addresses are illustrative):

  • ip=dhcp
  • ip=ibft
  • ip=192.0.2.10::192.0.2.1:255.255.255.0:myhost:ens2:none

Note that if you pass an ip= option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1 to the command line.

You probably noticed that among automatic configuration methods, there is also ibft . iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing ip=ibft to the kernel, the network configuration is retrieved from the firmware.

Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.

Network modules

The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command dracut --list-modules .

At the moment, there are two modules to configure the network: network-legacy and network-manager . You might wonder why different modules provide the same functionality.

network-legacy is older and uses shell scripts calling utilities like iproute2 , dhclient , and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module did or of the current state of each interface, which can lead to problems maintaining state across the root-switch boundary.

A prominent example of state to be kept is the DHCP lease. If an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.

To ensure a seamless transition, there is a need for a mechanism to pass the state between the two environments. However, passing the state between services having different configuration models can be a problem.

The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command-line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root.

Fedora 31 was the first distribution to switch to network-manager in initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.

Enabling a different network module

While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.

To rebuild the initrd using a specific network module, use one of the following commands:

# dracut --add network-legacy  --force --verbose
# dracut --add network-manager --force --verbose

Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:

# echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
# dracut --regenerate-all --force --verbose

The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.

The network-manager dracut module

As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).

The first stage parses the kernel command-line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections . The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.

When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.

The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.

Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root so that the new NetworkManager process running there knows exactly what to do.


Troubleshooting

If you have network issues in dracut, this section contains some suggestions for investigating the problem.

The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.

If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is an rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:
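The breakpoint name is appended to rd.break ; for the stage right after command-line parsing this is the following (as recalled from dracut.cmdline(7); other documented hook names include pre-udev, pre-trigger, initqueue, pre-mount, mount, and pre-pivot):

```
rd.break=cmdline
```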


The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:

# dracut -f  --install "ping tcpdump"

and then optionally verify that they were included successfully:

# lsinitrd | grep "ping\|tcpdump"
Arguments: -f --install 'ping tcpdump'
-rwxr-xr-x   1 root     root        82960 May 18 10:26 usr/bin/ping
lrwxrwxrwx   1 root     root           11 May 29 20:35 usr/sbin/ping -> ../bin/ping
-rwxr-xr-x   1 root     root      1065224 May 29 20:35 usr/sbin/tcpdump
The generator

If you are familiar with NetworkManager configuration, you might want to know how a given kernel command-line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command-line without having to boot the machine.

The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:

$ /usr/libexec/nm-initrd-generator --stdout -- \
          ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0

802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC

In this example, the generator reports an error because the MTU field after dhcp is missing: '00' is consumed as the MTU, leaving '99:88:77:66:55', which is not a valid MAC address. Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:

$ /usr/libexec/nm-initrd-generator --stdout -- \
        ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0

*** Connection 'enp1s0' ***






Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found on the nm-settings man page.
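The generated profile is an ordinary NetworkManager keyfile. A sketch of what such a profile typically contains (the keys below are standard nm-settings keyfile keys; the exact output of nm-initrd-generator may differ) looks like:

```
[connection]
id=enp1s0
type=ethernet
interface-name=enp1s0

[ethernet]
cloned-mac-address=00:99:88:77:66:55

[ipv4]
method=auto
ignore-auto-dns=true

[ipv6]
method=auto
ignore-auto-dns=true
```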



Conclusion

The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.

While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be run in the same way as when it's run in the real root, reducing the code to be maintained and tested.

To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it as it is needed to implement some new features.

With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.

[Jan 01, 2021] Looks like Oracle can potentially pick up as much as 65% of CentOS users

Jan 01, 2021 |

What do you think of the recent Red Hat announcement about CentOS Linux/Stream?

I can use either CentOS Linux or Stream and it makes no difference to me
I will switch reluctantly to CentOS Stream but I'd rather not
I depend on CentOS Linux 8 and its stability and now I need a new alternative
I love the idea of CentOS Stream and can't wait to use it
I'm off to a different distribution before CentOS 8 sunsets at the end of 2021
I feel completely betrayed by this decision and will avoid Red Hat solutions in future
Total votes: 54

[Jan 01, 2021] Oracle Linux DTrace

Jan 01, 2021 |

... DTrace gives the operational insights that have long been missing in the data center, such as memory consumption, CPU time or what specific function calls are being made.

  • Designed for use on production systems to troubleshoot performance bottlenecks
  • Provides a single view of the software stack - from kernel to application - leading to rapid identification of performance bottlenecks
  • Dynamically instruments kernel and applications with any number of probe points, improving the ability to service software
  • Enables maximum resource utilization and application performance, as well as precise quantification of resource requirements
  • Fast and easy to use, even on complex systems with multiple layers of software

Developers can learn about and experiment with DTrace on Oracle Linux by installing the appropriate RPMs:

  • For Unbreakable Enterprise Kernel Release 5 (UEK5) on Oracle Linux 7: dtrace-utils and dtrace-utils-devel .
  • For Unbreakable Enterprise Kernel Release 6 (UEK6) on Oracle Linux 7 and Oracle Linux 8: dtrace and dtrace-devel .
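In command form (a sketch based only on the package names listed above; verify the package set for your UEK release before running):

```
# yum install dtrace-utils dtrace-utils-devel   <- Oracle Linux 7 with UEK5
# yum install dtrace dtrace-devel               <- Oracle Linux 7/8 with UEK6
```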

[Jan 01, 2021] Oracle Linux vs. Red Hat Enterprise Linux by Jim Brull

Jan 05, 2019 |

... ... ...

Here's what we found.

  • Stability
    It's well known that Red Hat Enterprise Linux is created from the most stable and tested Fedora innovations. But since Oracle Linux was grown from the RHEL framework yet includes additional built-in integrations and optimizations specifically tailored for Oracle products, our comparison showed that Oracle Linux is actually more stable for enterprises running Oracle systems , including Oracle databases.
  • Flexibility
    As an industry leader, RHEL provides a wide range of integrated applications and tools that help tailor the Red Hat Enterprise Linux system to highly specific business needs. However, once again Oracle Linux was found to excel over RHEL because OL offers the Red Hat Compatible Kernel (RHCK) option, which enables any RHEL-certified app to run on Oracle Linux . In addition, OL offers its own network of ISVs / third-party solutions, which can help personalize your Linux setup even more while integrating seamlessly with your on-premises or cloud-based Oracle systems.

[Jan 01, 2021] Consider looking at openSUSE (still run out of Germany)

Jan 01, 2021 |

If you are on CentOS-7 then you will probably be okay until Red Hat pulls the plug on 2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (~365), because this OS will shift from major-minor point updates to a streaming model at the end of 2021. Let's look at two early founders: SUSE started in Germany in 1991, whilst Red Hat started in America a year later. SUSE sells support for SLE (SUSE Linux Enterprise), which means you need a license to install-run-update-upgrade it. Likewise, Red Hat sells support for RHEL (Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy). A couple of days ago I installed "openSUSE Leap" onto an old HP-Compaq 6000 desktop just to try it out (the installer actually had a few features I liked better than the CentOS-7 installer). When I get back to the office in two weeks, I'm going to try installing "openSUSE Leap" onto an HP-DL385p_gen8. I'll work with this for a few months, and if I am comfortable, I will migrate my employer's solution over to "openSUSE Leap".

Parting thoughts:

  1. openSUSE is run out of Germany. IMHO switching over to a European distro is similar to those database people who preferred MariaDB to MySQL when Oracle was still hoping that MySQL would die from neglect.

  2. Someone cracked off to me the other day that now that IBM is pulling strings at "Red Hat", that the company should be renamed "Blue Hat"


general-noob 4 points · 3 days ago

I downloaded and tried it last week and was actually pretty impressed. I have only ever tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still impressed. I'd recommend people take a look.

servingwater 2 points · 3 days ago

I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW also is pretty sweet and if I were to use a rolling release, it would be among the top contenders.

One thing I don't like with openSUSE is that you can't really (or are not supposed to, I guess) disable the root account. You can't do it at install: if you leave the root password blank, SUSE will just assign the password of the user you created to it.
Of course, afterwards you can disable it with the proper commands, but it becomes a pain with YaST, as YaST insists on being opened by root.

neilrieck 2 points · 2 days ago

Thanks for that "heads up" about root

gdhhorn 1 point · 2 days ago

One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install, if you leave the root account blank suse, will just assign the password for the user you created to it.

I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply deselected the option for the account used to be an administrator; this required me to set a different password for administrative purposes.

Perhaps I'm misunderstanding your comment.

servingwater 1 point · 2 days ago

I think you might.
My point is/was that if I select to choose my regular user to be admin, I don't expect for the system to create and activate a root account anyways and then just assign it my password.
I expect the root account to be disabled.

gdhhorn 2 points · 2 days ago

I didn't realize it made a user, 'root,' and auto generated a password. I'd always assumed if I said to make the user account admin, that was it.

TIL, thanks.

servingwater 1 point · 2 days ago

I was surprised, too. I was bit "shocked" when I realized, after the install, that I could login as root with my user password.
At the very least, IMHO, it should then still have you set the root password, even if you choose to make your user admin.
It for one lets you know that OpenSUSE is not disabling root and two gives you a chance to give it a different password.
But other than that subjective issue I found OpenSUSE Leap a very solid distro.

[Jan 01, 2021] What about the big academic labs? (Fermilab, CERN, DESY, etc)

Jan 01, 2021 |

The big academic labs (Fermilab, CERN, and DESY, to name only three of many) used to run something called Scientific Linux, which was also maintained by Red Hat. Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).

phil_g 14 points · 2 days ago

To clarify, as a user of Scientific Linux:

Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly a community distribution, Scientific Linux was an independent rebuild of the RHEL source code published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if you have issues with that, it's more up to you to fix them than the SL maintainers.)

I don't know anything about Red Hat convincing institutions to stop using Scientific Linux; the first I heard about the topic was in April 2019, when Fermilab announced there would be no Scientific Linux 8. (They may reverse that decision. At the moment, they're "investigating the best path forward", with a decision to be announced in the first few months of 2021.)

neilrieck 4 points · 2 days ago

I fear you are correct. I just stumbled onto this article. Even the Wikipedia article states: "This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them." But it does seem that Scientific Linux was created as a replacement for Fermilab Linux. I've also seen references to CC7 meaning "CERN CentOS 7". CERN is keeping their Linux page up to date, because what I am seeing there today is not what I saw two weeks ago.

There are

Niarbeht 16 points · 2 days ago

There are

Uh oh, guys, they got him!

deja_geek 9 points · 2 days ago

RedHat didn't convince them to stop using Scientific Linux, Fermilab no longer needed to have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they needed to (though I don't really think they needed to)

meat_bunny 10 points · 2 days ago

Maintaining your own distro is a pain in the ass.

My crystal ball says they'll just use whatever RHEL rebuild floats to the top in a few months like the rest of us.

carlwgeorge 2 points · 2 days ago

SL has always been an independent rebuild. It has never been maintained, sponsored, or owned by Red Hat. They decided on their own to not build 8 and instead collaborate on CentOS. They even gained representation on the CentOS board (one from Fermi, one from CERN).

I'm not affiliated with any of those organizations, but my guess is they will switch to some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).


[Jan 01, 2021] CentOS HAS BEEN CANCELLED !!!

Jan 01, 2021 |


Post by whoop » 2020/12/08 20:00:36

Is anybody considering switching to RHEL's free non-production developer subscription? As I understand it, it is free and receives updates.
The only downside as I understand it is that you have to renew your license every year (and that you can't use it in commercial production).

[Dec 30, 2020] Switching from CentOS to Oracle Linux: a hands-on example

In view of such effective and free promotion of Oracle Linux by the IBM/Red Hat brass as the top replacement for CentOS, the script can probably be slightly enhanced.
The script works well for simple systems, but it still has some sharp edges, and checks for common bottlenecks should be added. For example, free space in /boot should be checked if /boot is a separate filesystem; this is not done. Also, if the script is invoked a second time after a failure of the step "Installing base packages for Oracle Linux...", it can remove hundreds of system RPMs (including sshd, cron, and several other vital packages ;-).
Failures at this step are probably the most common type of failure in a conversion. Inexperienced sysadmins, or even experienced sysadmins in a hurry, often make this blunder by running the script a second time.
This probably happens due to the line 'yum remove -y "${new_releases[@]}"' in the function remove_repos (line 65 in the current version of the script): in their excessive zeal to restore the system after an error, the programmers did not account for the fact that in certain situations the packages they want to delete via yum have dependencies, and a lot of them. Yum blindly deletes over 300 packages, including such vital ones as sshd and cron. For this reason, execution of the script should probably be blocked if Oracle repositories are already present; this check is absent.
After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of the deleted packages are in the yum log, so you can reinstall them, and sometimes that helps. In other cases the system remains unbootable and restoring from backup is the only option.
Due to the sudden surge in popularity of Oracle Linux after the Red Hat CentOS8 fiasco, the script definitely could benefit from better diagnostics; the current diagnostics are very rudimentary. It might also make sense to make the steps modular, in the classic /etc/init.d fashion, and to make the initial steps skippable so that the script can be resumed after an error. Most of the steps have few dependencies, which can be resolved by saving variables during the first run and sourcing them if the first step is not step 1.
Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is a separate filesystem. The script requires approximately 100MB of free space in this filesystem. Failure to write a new kernel to it due to lack of free space leads to a "half-baked" installation, which is difficult to recover from without senior sysadmin skills.
See additional considerations about how to enhance the script at
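A minimal sketch of the /boot pre-flight check suggested above (illustrative only, not part of the official conversion script; the 100MB threshold is the approximate requirement mentioned here, and the logic is written as a function taking the free-space figure as an argument so it can be tested):

```shell
# Illustrative pre-flight check: a separate /boot filesystem needs roughly
# 100 MB (102400 KB) free for the new kernel, or the conversion ends up
# "half-baked".
check_boot_space() {
  # $1 = available space in /boot, in KB (passed in so the logic is testable)
  if [ "$1" -lt 102400 ]; then
    echo "insufficient"
  else
    echo "ok"
  fi
}

# In real use, feed it the actual figure:
#   check_boot_space "$(df --output=avail -k /boot | tail -n 1)"
check_boot_space 51200    # 50 MB free
# -> insufficient
```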
Dec 15, 2020 Simon Coter Blog

... ... ...

We published a blog post earlier this week that explains why, but here is the TL;DR version:

  • Oracle Linux is free to download, distribute and use (even in production) and has been since its release over 14 years ago
  • Installation media, updates and source code are all publicly available on the Oracle Linux yum server with no login or authentication requirements
  • Since its first release in 2006, Oracle Linux has been 100% application binary compatible with the equivalent RHEL version. In that time, we have never had a compatibility bug logged.

For these reasons, we created a simple script to allow users to switch from CentOS to Oracle Linux about five years ago. This week, we moved the script to GitHub to allow members of the CentOS community to help us improve and extend the script to cover more CentOS respins and use cases.

The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. Let's take a look at just how simple the process is.

Download the script from GitHub

The simplest way to get the script is to use curl :

$ curl -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10747 100 10747 0 0 31241 0 --:--:-- --:--:-- --:--:-- 31241

If you have git installed, you could clone the git repository from GitHub instead.

Run the script to switch to Oracle Linux

To switch to Oracle Linux, just run the script as root using sudo :

$ sudo bash

Sample output of the script run.

As part of the process, the default kernel is switched to the latest release of Oracle's Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible Kernel (RHCK), which may be required by any specific hardware or application that imposes strict kernel version restrictions.

Switching the default kernel (optional)

Once the switch is complete, but before rebooting, the default kernel can be changed back to the RHCK. First, use grubby to list all installed kernels:

[demo@c8switch ~]$ sudo grubby --info=ALL | grep ^kernel
[sudo] password for demo:

In the output above, the first entry (index 0) is UEK R6, based on mainline kernel version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by the switch process, the third is the kernel that was installed by CentOS, and the final entry is the rescue kernel.

Next, use grubby to verify that UEK is currently the default boot option:

[demo@c8switch ~]$ sudo grubby --default-kernel

To replace the default kernel, you need to specify either the path to its vmlinuz file or its index. Use grubby to get that information for the replacement:

[demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
args="ro crashkernel=auto resume=/dev/mapper/cl-swap rhgb quiet $tuned_params"
initrd="/boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img $tuned_initrd"
title="Oracle Linux Server (4.18.0-240.1.1.el8_3.x86_64) 8.3"

Finally, use grubby to change the default kernel, either by providing the vmlinuz path:

[demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Or its index:

[demo@c8switch ~]$ sudo grubby --set-default-index 1
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Changing the default kernel can be done at any time, so we encourage you to take UEK for a spin before switching back.

It's easy to access, try it out.

For more information visit .

[Dec 30, 2020] HPE ClearOS

Dec 30, 2020 |

The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's in-house distro, ClearOS . Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line, and the company offers a free Community version to all comers.

ClearOS is an open source software platform that leverages the open source model to deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the integration of free open source technologies making it easier to use. By not charging for open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for the products and services they need and value.

ClearOS is mostly notable here for its association with industry giant HPE and its availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a bit outdated compared with other options, it also appears to be a rolling release -- more comparable to CentOS Stream than to the CentOS Linux that came before it.

ClearOS is probably most interesting to small business types who might consider buying ProLiant servers with RHEL-compatible OEM Linux pre-installed later.

[Dec 30, 2020] Where do I go now that CentOS Linux is gone- Check our list - Ars Technica

Dec 30, 2020 |

Springdale Linux

I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor of CentOS. However, Springdale Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for the academic scientific community. Unlike Scientific Linux, it's still actively maintained!

Springdale Linux is maintained and made available by Princeton and Rutgers universities, who use it for their HPC projects. It has been around for quite a long time. One Springdale Linux user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS -- Princeton University Institute for Advanced Study) as a 10-year ride.

Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good seat-of-the-pants overview in a recent mailing list discussion:

The School of Mathematics at the Institute for Advanced Study has been using Springdale (formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations (yes, workstations) are running Springdale. On the server side, everything "just works", as is expected from a RHEL clone. On the workstation side, most of the issues we run into have to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc), but most issues have been resolved or have a workaround in place.

... Springdale is a community project, and [it] mostly comes down to the hours (mostly Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some are like us and use the whole thing. Others use a different OS and use Springdale just for its computational repositories.

Springdale Linux should be a natural fit for universities and scientists looking for a CentOS replacement. It will likely work for most anyone who needs it -- but its relatively small community and firm roots in academia will probably make it the most comfortable for those with similar needs and environments.

[Dec 30, 2020] GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

Dec 30, 2020 |

64"best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United States)
@62 dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

[Dec 30, 2020] Scientific Linux website states that they are going to reconsider (in 1st quarter of 2021) whether they will produce a clone of rhel version 8. Previously, they stated that they would not.

Dec 30, 2020 |

Centos (by David on 2020-12-22 04:29:46 GMT from United States)
I was using CentOS 8.2 on an older desktop home computer. When CentOS dropped long-term support on version 8, I was a little peeved, but not a whole lot, since it is free, anyway. Out of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than CentOS 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!

Previously, when I had tried to install CentOS 8 on the laptop, an old Dell Inspiron 1501, the graphics were garbage -- the screen displayed kind of a color mosaic -- and the keyboard/everything else was locked up. I also tried CentOS 7.9 on it, and installation from the minimal DVD produced a bunch of errors and then froze partway through.

I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to migrate to. Note: the Scientific Linux website states that they are going to reconsider (in the 1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they stated that they would not.

[Dec 30, 2020] Springdale vs. CentOS

Dec 30, 2020 |

52Springdale vs. CentOS (by whoKnows on 2020-12-23 05:39:01 GMT from Switzerland)

@51 • Personal opinion only. (by R. Cain)

"Personal opinion only. [...] After all the years of using Linux, and experiencing first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which has all the earmarks of [...] being developed AND MAINTAINED by a professional organization."

Yeah, your answer is exactly what I expected it to be.

The thing with Springdale is as follows: it's maintained by the very professional team of IT specialists at the Institute for Advanced Study (Princeton University) for their own needs. That's why there's no fancy website, RHEL wiki, live ISOs and such.

They also maintain several other repositories for add-on packages (computing, unsupported [with audio/video codecs] ...).

In other words, if you're a professional who needs an RHEL clone, you'll be fine with it; if you're a hobbyist who needs a how-to for everything and anything, you can still use the knowledge base of RHEL/CentOS/Oracle ...

If you're a small business that needs professional support, you'd get RHEL -- unlike RHEL, Springdale is not a commercial distribution selling you support and schooling. Springdale is made by professionals, for professionals.

[Dec 29, 2020] Migrating from CentOS to Oracle Linux: a little feedback (Le blog technique de Microlinux)

Highly recommended!
Google translation
Notable quotes:
"... Free to use, free to download, free to update. Always ..."
"... Unbreakable Enterprise Kernel ..."
"... (What You Get Is What You Get ..."
Dec 30, 2020 |

In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and training mission carried out on behalf of ASF (Autoroutes du Sud de la France), now called Vinci Autoroutes. I had just published Linux aux petits oignons at Eyrolles, and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3 under the hood, I had been chosen by ASF to train their future Linux administrators.

All these years, I knew that Oracle Linux existed, as did a whole series of other Red Hat clones like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS project, etc. I didn't pay it much attention, since CentOS perfectly met all my server needs.

Following the disastrous announcement of the CentOS project, I had a discussion with my compatriot Michael Kofler, a Linux guru who has published a series of excellent books on our favorite operating system, and who has migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the University of Graz. This was not our first discussion on the subject, as the CentOS project had already accumulated a series of rather worrying delays in the version 8 updates. In comparison, Oracle Linux does not suffer from these structural problems, so I had kept this option in a corner of my head.

A problematic reputation

Oracle suffers from a problematic reputation within the free software community, for a variety of reasons. It is the company that ruined OpenOffice and Java, got its hooks into MySQL, and let Solaris sink. Oracle CEO Larry Ellison has made headlines with his unhinged support for Donald Trump. As for the company's commercial policy, it has been marked by notorious aggressiveness in the hunt for patents.

On the other hand, we have applications like VirtualBox, libre and gratis, which run perfectly on millions of developer workstations all over the world. And then there is the very discreet Oracle Linux, which has been working perfectly and without making any noise since 2006, and which is also a libre and gratis operating system.

Install Oracle Linux

For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my workstation. Since it is a Red Hat Enterprise Linux-compatible clone, the installation procedure is identical to that of RHEL and CentOS, apart from a few small details.

Oracle Linux Installation

Normally, I never pay attention to the banner ads that scroll by in graphical installers. This time, the slogan "Free to use, free to download, free to update. Always" did catch my attention.

An indestructible kernel?

Oracle Linux provides its own Linux kernel, newer than the one shipped by Red Hat and named the Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the older upstream kernel on versions 7 and 8. Here's what it looks like on Oracle Linux 7.9.

$ uname -a
Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48 
PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
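One quick way to tell whether a machine is running the UEK rather than the Red Hat compatible kernel is to look for the "uek" tag in the kernel release string. A minimal sketch, checked here against the sample string from the output above rather than a live `uname -r`:

```shell
# Sample kernel release string taken from the `uname -a` output above;
# on a live system you would use: krel=$(uname -r)
krel="5.4.17-2036.100.6.1.el7uek.x86_64"
case "$krel" in
  *uek*) kernel_type="UEK"  ;;  # Oracle's Unbreakable Enterprise Kernel
  *)     kernel_type="RHCK" ;;  # Red Hat Compatible Kernel
esac
echo "$kernel_type"
```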
Well-organized package repositories

At first glance, the organization of the official and semi-official package repositories seems much clearer and better organized than under CentOS. For details, I refer you to the respective explanatory pages for the 7.x and 8.x versions.

Well-structured documentation

Like the repositories, Oracle Linux's documentation is worth mentioning here, because it is simply exemplary. The main index refers to the different versions of Oracle Linux, and from there you can access a whole series of documents in HTML and PDF formats that explain in detail the peculiarities of the system and its day-to-day management. As I work through this documentation, I discover a multitude of pleasant little details, such as the fact that Oracle packages ship metadata for security updates, which is not the case for CentOS packages.
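The security metadata mentioned here lives in each repository's updateinfo file, which is referenced from the repository's repomd.xml index; a repo that carries errata metadata has an updateinfo entry there. A minimal sketch of that check, run against an illustrative XML fragment rather than a live repository (the snippet below is an assumption about the shape of the entry, not copied from an actual Oracle repo):

```shell
# Illustrative fragment of a repository's repodata/repomd.xml;
# repos that ship errata/security metadata list an "updateinfo" data entry.
repomd_snippet='<data type="updateinfo"><location href="repodata/updateinfo.xml.gz"/></data>'
if printf '%s\n' "$repomd_snippet" | grep -q 'type="updateinfo"'; then
  has_errata=yes
else
  has_errata=no
fi
echo "$has_errata"
```

On a real system the same grep would be pointed at the downloaded repomd.xml of the repository in question.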

Migrating from CentOS to Oracle Linux

The Switch your CentOS systems to Oracle Linux web page identifies a number of reasons why Oracle Linux is a better choice than CentOS when you want a company-grade, free-as-in-beer operating system that provides low-risk updates for each version over a decade. This page also features a script that transforms an existing CentOS system into an Oracle Linux system on the fly, in two commands.

So I tested this script on a CentOS 7 server from Online/Scaleway.

# curl -O
# chmod +x
# ./

The script grinds away for about twenty minutes; we restart the machine and end up with a clean Oracle Linux system. To tidy up, just remove the deactivated repository files.

# rm -f /etc/yum.repos.d/*.repo.deactivated
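The cleanup command above relies on the fact that the migration script renames the old CentOS repo files to *.repo.deactivated rather than deleting them. A minimal sketch of that step, run against a scratch directory instead of the real /etc/yum.repos.d so it is safe to try (the file names are illustrative):

```shell
# Simulate the post-migration state of /etc/yum.repos.d in a scratch dir
repodir=$(mktemp -d)
touch "$repodir/CentOS-Base.repo.deactivated" \
      "$repodir/CentOS-Extras.repo.deactivated" \
      "$repodir/oracle-linux-ol7.repo"

# The actual cleanup: drop only the deactivated CentOS repo files
rm -f "$repodir"/*.repo.deactivated

ls "$repodir"   # only the Oracle repo file should remain
```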
Migrating a CentOS 8.x server?

At first glance, the script only provided for the migration of CentOS 7.9 to Oracle Linux 7.9. On a whim, I sent an email to the address at the bottom of the page, asking if support for CentOS 8.x was expected in the near future.

A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle Linux 8.x.

So I tested it with a fresh installation of a CentOS 8 server at Online/Scaleway.

# yum install git
# git clone
# cd centos2ol/
# chmod +x
# ./

Again, it grinds away for a good twenty minutes, and after the reboot we end up with a machine running Oracle Linux 8.


I will probably have a lot more to say about this. For my part, I find this first experience with Oracle Linux rather conclusive, and if I decided to share it here, it is because it will probably solve a problem common to a lot of production-server admins who cannot tolerate their system becoming a moving target overnight.

Post Scriptum for the chilly purists

Finally, for all of you who want to use a libre and gratis clone of Red Hat Enterprise Linux without selling your soul to the devil, know that Springdale Linux is a solid alternative. It is maintained by Princeton University in the United States according to the WYGIWYG principle (What You Get Is What You Get); it is delivered rough around the edges and without any documentation, but it works just as well.


[Dec 29, 2020] Oracle Linux is "CentOS done right"

Notable quotes:
"... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
"... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
"... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
Dec 21, 2020 |


And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)

If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject:

Currently Rocky Linux is not much more than a README file on Github and a handful of Slack (ew!) discussion channels.

Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers.


9@Jesse on CentOS: (by dragonmouth on 2020-12-21 13:11:04 GMT from United States)

"There is no rush and I recommend waiting a bit for the dust to settle on the situation before leaping to an alternative. "

For private users there may be plenty of time to find an alternative. However, corporate IT departments are not like jet skis, able to turn on a dime. They are more like supertankers or aircraft carriers that take miles to make a turn. By the time all the committees meet and come to some decision, by the time all the upper managers who don't know what the heck they are talking about expound their opinions, and by the time the CentOS replacement is deployed, a year will be gone. For corporations, maybe it is not time to PANIC yet, but it is high time to start looking for the O/S that will replace CentOS.


"This looks like the vendor equivalent..." (by Ricardo on 2020-12-21 18:06:49 GMT from Argentina)

[Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on."

Jesse, I couldn't have articulated it better. I'm stealing that phrase :)

Cheers and happy holidays to everyone!

[Dec 28, 2020] Time to move to Oracle Linux

Dec 28, 2020 |
  • TuxRuffian Dec 8, 2020 @ 23:43

    Does this mean no more SIGs too? OEL 8 is about to see a giant surge in utilization!

  • Just a geek Dec 8, 2020 @ 23:45

    Time to move to Oracle Linux. One of their partners is always talking about it, and given that it is free and tracks RHEL with 100% binary compatibility, it's a good fit for us. Also looked at their support costs, and it's a fraction of RHEL pricing!

Kyle Dec 9, 2020 @ 2:13

It's an IBM money grab. It's a shame; I use CentOS to develop and host web applications on my Linode. Obviously, at a small scale like that I can't afford Red Hat, but I use it at work. CentOS allowed me to come home, keep developing in my free time, and apply those skills at work.

I also use Ubuntu, but it looks like the shift to Ubuntu will be greater now.

Noname Dec 9, 2020 @ 4:20

As others said here, this is a money grab. Methinks IBM was the worst thing that happened to Linux since systemd...

Yui Dec 9, 2020 @ 4:49

Hello CentOS users,

I also work for a non-profit (cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year support cycle, and CentOS goes well with HPC clusters. We also wanted every single penny to go to research purposes and not waste our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.

Holmes Dec 9, 2020 @ 5:06

Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now IBM+Red Hat have found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like the "EEE" (embrace, extend, and extinguish) model?

Petr Dec 9, 2020 @ 5:08

For me it's simple.
I will keep my openSUSE Leap and expand its footprint.
If I need a RHEL-compatible distro for testing, I will use Oracle Linux with the RHEL-compatible kernel until another RHEL clone is out.
openSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications as well.

Someone Dec 9, 2020 @ 5:23

While I like Debian, and better still Devuan (no systemd), some RHEL/CentOS features like kickstart and delta RPMs don't seem to be there (or to be as good). Debian preseeding is much more convoluted than kickstart, for example.

Vonskippy Dec 10, 2020 @ 1:24

That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're now a happy and successful FreeBSD shop.

[Dec 28, 2020] This quick and dirty hack worked fine to convert centos 8 to oracle linux 8

Notable quotes:
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 28, 2020 |

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

# repobase must point to the Oracle Linux 8 BaseOS package repository
# (the actual URL was elided in the original comment)
wget \
${repobase}/redhat-release-8.3- \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm
# remove the CentOS release package, then install the Oracle release packages
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm
# blank out the OCI region variable used in Oracle's repo URLs
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
dnf remove kernel
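After a conversion like the one above, a common sanity check is to look at what /etc/os-release now reports; Oracle Linux identifies itself with ID="ol". A minimal sketch, run here against a sample line rather than a live system (verify the value on your own machine):

```shell
# Sample ID line as found in /etc/os-release on Oracle Linux 8;
# on a real system you would instead source the file: . /etc/os-release
os_release_line='ID="ol"'
eval "$os_release_line"   # sets ID=ol
if [ "$ID" = "ol" ]; then
  echo "system now identifies as Oracle Linux"
else
  echo "conversion incomplete: ID=$ID"
fi
```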

[Dec 28, 2020] Red Hat interpretation of the CentOS 8 fiasco

Highly recommended!
" People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary compatibility to RHEL 8 and security updates until 2029."
One of the immanent features of the GPL is that it allows clones to exist. This means that Oracle Linux, Rocky Linux, or even Lenin Linux will simply take CentOS's place, and Red Hat will be at a disadvantage, now unable to control the clone to the extent it managed to co-opt and control CentOS. The "embrace and extinguish" charge will now hang on Red Hat, and will probably continue to hang for years. That may not be what the Red Hat brass wanted: reputational damage with no corresponding gain in the revenue stream. I suppose the majority of the CentOS community will eventually migrate to the emerging RHEL clones. If that was the Red Hat/IBM goal -- well, they will reach it.
Notable quotes:
"... availability gap ..."
"... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in the absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
"... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY chose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to have to deal with whatever consequences come from the loss of goodwill in the community. ..."
"... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
"... You've alienated a few hundred thousand sysadmins who started upgrading to 8 this year, and you've thrown the Scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
"... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
"... They should have announced this at the START of CentOS 8.0. Instead they launched CentOS 8 with the belief that it would, like CentOS 7, have a long supported life cycle. ..."
"... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
"... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that in the most horrible and bizarre way imaginable. ..."
"... As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH? ..."
Dec 28, 2020 |

Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company with an open source development methodology.

To do so Red Hat made a hard decision and in 2003 split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized, and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a single project. After that, each distribution flourished within its intended audiences.

But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving, stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap -- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability gap, but they were closed to contributions to the core Linux distro itself.

In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from. At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates in Fedora led to breakage in these layered projects.

We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project core team, Red Hat and the CentOS Project agreed to " join forces ." We said joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever before -- a partial solution to the openness gap.

Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.

Over the past six years since finally joining forces, we made good progress on those goals. We started Special Interest Groups (SIGs) to manage the layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a governance structure where there hadn't been one before. We brought RHEL source code to be housed at . We designed and built out a significant public build infrastructure and CI/CD system in a project that had previously been sealed-boxes all the way down.

cmdrlinux says: December 19, 2020 at 2:36 pm

"This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."

Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in the absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.

Mark Danon says: December 19, 2020 at 4:14 pm

Redhat has no obligation to maintain both CentOS 8 and CentOS stream. Heck, they have no obligation to maintain CentOS either. Maintaining both will only increase the workload of CentOS maintainers. I don't suppose you are volunteering to help them do the work? Be thankful for a distribution that you have been using so far, and move on.

Dave says: December 20, 2020 at 7:16 am

We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY chose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to have to deal with whatever consequences come from the loss of goodwill in the community.

If they were going to pull this stunt they shouldn't have gone ahead with CentOS 8 at all and fulfilled any lifecycle expectations for CentOS 7.

Konstantin says: December 21, 2020 at 12:24 am

Sorry, but that's a BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to any people actually knowing the intrinsics of how CentOS Linux was being developed.

If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.

No, it is just a primitive, direct and lame way to force "free users" to either pay or become your free-to-use beta testers (CentOS Stream *is* beta, whatever you say).

I predict you will be somewhat amazed at the actual results.

Not talking about the breach of trust. Now how much would cost all your (RH's) further promises and assurances?

Chris Mair says: December 20, 2020 at 3:21 pm

To: [email protected]
To: [email protected]



you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving the EOL of CentOS Linux 8 from 2029 to 2021.

You've alienated a few hundred thousand sysadmins who started upgrading to 8 this year, and you've thrown the Scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions.

Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later.

The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the changes.

I earned my RHCE in 2003 (yes, that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS to the clients I do freelance work for. Just a few weeks ago I was asked for an opinion on six CentOS 7 boxes about to be deployed into a research system and upgraded to 8. I gave my go-ahead. Well, that didn't last long.

What do you expect me to recommend now? Buying RHEL licenses, which may or may not have a predictable cost per year and may or may not be supported until a given date? Once you grant yourself the freedom to retract whatever information you have published, how can I trust you? What added value do I get over any of the community-supported distributions (given that I can support myself)?

And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview of what's next in RHEL".

I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I can trust to get updates until the given EOL date.

You've made me look elsewhere for that.

-- Chris

Chip says: December 20, 2020 at 6:16 pm

I guess my biggest issue is that they should have announced this at the START of CentOS 8.0. Instead they launched CentOS 8 with the belief that it would, like CentOS 7, have a long supported life cycle. What they did was basically a bait and switch. Not cool. Especially not cool for those running multiple nodes in high-performance computing clusters.

Alex says: December 21, 2020 at 12:51 am

I have over 300,000 CentOS nodes that require long-term support, as it's impossible to turn them over rapidly. I also have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Red Hat just made the dumbest decision I've seen, short of letting IBM acquire them. Whitehurst, how could you let this happen? Nothing like millions in lost revenue from a single customer.

Nika jous says: December 21, 2020 at 1:43 pm

Just migrated to openSUSE. Rather than crying over a dead OS, it's better to act yourself. Red Hat is a sinking ship; it probably won't last the next decade. A legendary failure like IBM will never have the upper hand in the Linux world. It's too competitive now. Customers have more options to choose from. I think the person who made this decision is probably ignorant of the current market, or a top-grade fool.

Ang says: December 22, 2020 at 2:36 am

IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say you are reading them, but you choose to ignore them, and that is even worse!

People still don't understand why CentOS stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you put 2029 date or why did you even release CentOS 8 in the first place?

Hell, you could at least have had the goodwill with the community to make CentOS 8 last until the end of CentOS 7's life! But no, you discontinued CentOS 8, giving people only 1 year to respond, and timed it right after the EOL of CentOS 6.

Why didn't you even bother asking the community first and come to a compromise or something?

Again, not a single person had a problem with CentOS Stream; the problem was having the rug pulled out from under their feet! So stop pretending and address it properly!

Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent in support of becoming a contributor-focused open source project"

And you FAILED! Where was the transparency?!

Ang says: December 22, 2020 at 2:36 am

A link to the issue:

AP says: December 22, 2020 at 6:55 am

What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that in the most horrible and bizarre way imaginable.

Len Inkster says: December 22, 2020 at 4:13 pm

As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH?

Anyone who wants to continue with CentOS? Fork the project and maintain it yourselves. That's how we got to CentOS from Linus Torvalds' original Linux.

Peter says: December 22, 2020 at 5:36 pm

I can only describe this as a disappointment, if not a betrayal, of the whole CentOS user base. This decision was clearly made without considering its impact on the majority of CentOS community use cases.

If you need an upstream contributions channel for RHEL, create it; do not destroy the stable downstream. Clear and simple. All other 'explanations' are cover-ups for the real purpose of this action.

This stinks of politics within IBM/RH meddling with CentOS. I hope, Rocky will bring the desired stability, that community was relying on with CentOS.

Goodbye CentOS, it was nice 15 years.

Ken Sanderson says: December 23, 2020 at 1:57 pm

We've just agreed to cancel our RHEL subscriptions and will be moving them, and our CentOS boxes, away as well. It was a nice run; while it will be painful, it is a chance to move far, far away from the terrible decisions made here.

[Dec 28, 2020] Red Hat Goes Full IBM and Says Farewell to CentOS - ServeTheHome

Dec 28, 2020 |

The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase and is looking to convert a percentage to paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...

[Dec 28, 2020] Call our sales people and open your wallet if you use CentOS in prod

Dec 28, 2020 |

It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:

If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.

That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod." ( cue evil mustache-stroking capitalist villain ).

... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release .

... ... ...

I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, for example powering my home router. Generally speaking, I want my router to be as boring as absolutely possible. That said, even that may not stay true forever if, for example, CentOS gets good WireGuard support.

Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.


[Dec 27, 2020] Why Red Hat dumped CentOS for CentOS Stream by Steven J. Vaughan-Nichols

Red Hat always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses, so this "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves, it will backfire.
The hiring of the CentOS developers in 2014 gave Red Hat unprecedented control over the project. Why on Earth would they now want independent projects like Rocky Linux to re-emerge to fill the vacuum? They can't avoid this side effect of using the GPL: it allows clones. Why it is better to face projects that are hostile to Red Hat than to keep an "in-house", domesticated project is unclear to me. As many large enterprises deploy a mix of Red Hat and CentOS, the initial reaction might be in the opposite direction to what the Red Hat brass expected: they will get fewer licenses, not more, by adopting the "One IBM way".
Dec 21, 2020 |

On Hacker News , the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

Over at Reddit/Linux , another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

A popular tweet from The Best Linux Blog In the Unixverse, nixcraft, an account with over 200-thousand subscribers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstations, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null. Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."

Many others joined this chorus of annoyed CentOS users, saying it was IBM's fault that their favorite Linux was being taken away from them. Still others screamed that Red Hat was betraying open source itself.

... ... ...

Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."

... ... ...

[Dec 27, 2020] There are now countless Internet servers out there that run CentOS. This is why the Debian project is so important.

Dec 27, 2020 |

There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.

Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.

This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. For those who want to use Ubuntu, the fork of Debian, that is probably also a good idea.

[Dec 23, 2020] How do you guys handle Linux Updates-Patches

Dec 23, 2020 |

Do you use scripts? Configuration management? Satellite/Spacewalk? Or do you practice immutable infrastructure, and simply replace old instances with new ones that have updates pre-baked (update during provisioning)?
I also see the likes of Katello and RH CloudForms System Engine from a Google search.
On top of that, what is your methodology of determining what gets updated and what doesn't?

  • thomas_ (June 2016): I'm not really a sysadmin, so you probably don't really care about my answer. I run a few websites using DigitalOcean droplets with CentOS. I don't do any of what you mentioned. Every once in a while I will SSH in, do a "yum makecache" and then a "yum update", and just update all of the packages. The blogs are low traffic and don't get a lot of visitors, so I'm not too concerned with them breaking. Maybe when they are higher traffic and make more money I'll look at doing those things. I haven't had anything break yet (that I know about). I would rather mitigate against known issues than worry about potential unknown ones caused by new updates.
  • Mike7 (June 2016): I automated patching of my CentOS VMs via yum-cron. You can configure it to download patches only, or apply patches after downloading.
    The VMs are backed up daily, so I can roll back if necessary. Works well for me so far.
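As a sketch of the setup described above: on CentOS/RHEL 7, yum-cron is configured through /etc/yum/yum-cron.conf (the option names below are those shipped in the stock yum-cron package; the sed edits assume the default file layout):

```shell
# Install yum-cron and start its daily update job (CentOS/RHEL 7)
yum -y install yum-cron
systemctl enable yum-cron
systemctl start yum-cron

# /etc/yum/yum-cron.conf -- the two options that matter for this workflow:
#   download_updates = yes   -> fetch available updates on each run
#   apply_updates    = no    -> download only; set to 'yes' to auto-apply
sed -i 's/^download_updates = .*/download_updates = yes/' /etc/yum/yum-cron.conf
sed -i 's/^apply_updates = .*/apply_updates = no/'        /etc/yum/yum-cron.conf
```

With apply_updates = no the host only stages packages; a later manual `yum update` then applies what was already downloaded.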
  • brombulec (June 2016): Configuration management = Puppet/Chef/Ansible
    Patch management = Satellite/Katello/Spacewalk

    End of story :)

  • DoubleNNs (June 2016): @Mike7
    I didn't know yum-cron existed until now. Do you think you could point me to good resources about it?

    I'm particularly interested in what benefits yum-cron has over "manually" managing yum via a regular cronjob. Also how does the download only mechanism work? And if I download-only patches on say 6/15 can I then apply only the previously downloaded patches (without downloading new ones) on 6/30?

    Where's the SaltStack love? :)

    If an environment is mostly RHEL (as opposed to CentOS or SuSE/Debian-based) do you recommend Satellite over Spacewalk? Additionally it seems like Katello and CloudForms have tons of features not in Satellite or Spacewalk. Do you recommend those?

    Even more important, what benefits does having one of those Patch Management systems provide as opposed to simply scheduling yum or Config Mgmt to update?

    Sorry for the barrage of questions, I've just never really thought too much about package management, until now.

[Dec 23, 2020] Red Hat and GPL: the uneasy romance ended long ago, but Red Hat still depends on the GPL, as it does not develop many components itself and gets them for free from the community and other vendors

It's all about money and executive bonuses: shortsighted executives want more and more, as if the current huge revenue were not enough...
Dec 23, 2020 |

A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."

Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."

Yet another Red Hat staffer snapped, "Look at the CentOS FAQ . It says right there:

CentOS Linux is NOT supported in any way by Red Hat, Inc.

CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.

CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.

CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.

We don't owe you anything."

[Dec 23, 2020] Where do I go now that CentOS Linux is gone

Dec 23, 2020 |

... ... ...

CloudLinux OS is a RHEL rebuild distro designed for shared hosting providers. CloudLinux OS itself probably isn't the free replacement for CentOS anyone is looking for -- it's more akin to RHEL itself, with subscription fees necessary for production use.

However, the CloudLinux OS maintainers have announced that they'll be releasing a 1:1 replacement for CentOS in Q1 2021. The new fork will be a "separate, totally free OS that is fully compatible with RHEL 8 and future versions."

There are a few upsides to this upcoming fork. CloudLinux OS has been around for a while, and it has a pretty solid reputation. The new fork they're announcing won't be a big challenge for CloudLinux -- they're already forking RHEL regularly and tracking changes to maintain the full CloudLinux OS.

All they really need to do is make certain they separate out their own branding and additional, license-only premium features.

This should also be a very easy upgrade for CentOS 8 users -- there's already a very easy one-script migration path from CentOS to the full CloudLinux OS. Converting from CentOS to "the new fork" should be just as simple, without the registration step necessary for the full CloudLinux OS.

[Dec 10, 2020] Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

Dec 10, 2020 |

Ward Mundy says: December 9, 2020 at 3:12 am

Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.

Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

[Dec 10, 2020] Does Oracle Linux have staying power against Red Hat

Notable quotes:
"... If you need official support, Oracle support is generally cheaper than RedHat. ..."
"... You can legally run OL free and have access to patches/repositories. ..."
"... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
"... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
"... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
Dec 10, 2020 |

Matthew Stier says: December 8, 2020 at 8:11 pm

My office switched the bulk of our RHEL to OL years ago, and find it a great product, and great support, and only needing to get support for systems we actually want support on.

Oracle provided scripts to convert EL5, EL6, and EL7 systems, and I was able to convert some EL4 systems I still have running. (It's a matter of going through the list of installed packages, using 'rpm -e --justdb' to remove each package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)
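A minimal sketch of that per-package swap (the package name and ISO mount point below are illustrative; a real conversion loops over the full `rpm -qa` output):

```shell
# Replace one package's RPM database entry with the Oracle Linux build of the
# same package; the files on disk are left alone until the reinstall.
PKG=bash-4.2.46-35.el7_9.x86_64          # illustrative package name-version
rpm -e --justdb --nodeps "$PKG"          # drop from the rpmdb only
rpm -ivh --nodeps "/mnt/ol-iso/Packages/${PKG}.rpm"   # reinstall from mounted OL ISO
```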

art_ok 1 point· 5 minutes ago

We have been using Oracle Linux exclusively last 5-6 years for everything - thousands of servers both for internal use and hundred or so customers.

Not a single time regretted, had any issues or were tempted to move to RedHat let alone CentOS.

I found Oracle Linux has several advantages over RedHat/CentOS:

• If you need official support, Oracle support is generally cheaper than RedHat.
• You can legally run OL free and have access to patches/repositories.
• Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.
• It is very easy to switch between supported and free setups (say, you have a proof-of-concept setup running free OL, but then it is promoted to production status: just a matter of registering the box with Oracle, no need to reinstall/reconfigure anything).
• You can easily move a license/support from one box to another, so you always run the same OS and do not have to think and decide (RedHat for production / CentOS for dev/test).
• You have a choice to run the good old RedHat kernel or use the newer Oracle kernel (which is pretty much a vanilla kernel with minimal modifications, just newer). We generally run Oracle kernels on all boxes unless we have to support a particularly pedantic customer who insists on using the old RedHat kernel.
• Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.

Overall, it is pleasure to work and support OL.


On the downside:
• I found the RedHat knowledge base / documentation is much better than Oracle's.
• Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers customers to use them.
• Some complain about the quality of Oracle's support. Can't really comment on that. I had little exposure to RedHat support; maybe used it a couple of times and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over the last few years support quality for Linux has improved noticeably; I guess Oracle pushes their cloud very aggressively and as a result invests in Linux support (as Oracle Cloud aka OCI runs on Oracle Linux).
art_ok 1 point· just now

Forgot to mention that converting RedHat Linux to Oracle is very straightforward: just a matter of updating the yum/dnf config to point to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle).
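Oracle's centos2ol.sh script automates this, but the manual equivalent the commenter describes is roughly the following (the repo stanza follows the public yum.oracle.com layout for OL7 and is meant as an illustration, not a verbatim copy of Oracle's shipped .repo file):

```shell
# Back up the CentOS repo definitions and point yum at Oracle's public mirror.
cd /etc/yum.repos.d
mkdir -p centos-backup && mv CentOS-*.repo centos-backup/

cat > oracle-ol7.repo <<'EOF'
[ol7_latest]
name=Oracle Linux 7 Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/latest/$basearch/
gpgkey=https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7
gpgcheck=1
enabled=1
EOF

# Install Oracle's release package, then resync installed packages to OL builds.
yum -y install oraclelinux-release-el7
yum -y distro-sync
```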

[Dec 10, 2020] Backlash against Red Hat management started

In the end IBM/Red Hat might even lose money, as powerful organizations, such as universities, might abandon Red Hat as the platform. Or maybe not. Red Hat managed to push systemd down the throat without any major hit to revenue. Why not repeat the trick with CentOS? In any case, IBM owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are just a symptom that development is now completely driven by the corporate brass, and all key decisions belong to them.
Community wise, this is plain bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you do not matter. And an organized minority always beats a disorganized majority. Actually, most large organizations will probably stick with a Red Hat compatible OS, probably moving to Oracle Linux or Rocky Linux, if it materializes, not to Debian.
What is interesting is that most people here believe that when security patches stop, that's the end of life for the particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are, and how far from the BSD folk they actually are. Security is an architectural thing first and foremost. Patches are just a band-aid, and they can't change the general security situation in Linux no matter how hard anyone tries. But they now serve as a powerful tool of corporate mind control over the user population. Fear is a powerful instrument of mind control.
In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network, only applications that have open ports matter (that's why ssh should be restricted to the subnets actually used, not opened to the whole world).
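As a sketch of that last point, on RHEL/CentOS 7 firewalld can restrict ssh to a management subnet (10.0.0.0/8 below is a placeholder; substitute the subnets actually used):

```shell
# Stop accepting ssh from everywhere in the default zone...
firewall-cmd --permanent --remove-service=ssh

# ...and accept it only from the management subnet (placeholder address).
firewall-cmd --permanent \
  --add-rich-rule='rule family="ipv4" source address="10.0.0.0/8" service name="ssh" accept'

firewall-cmd --reload
```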
Notable quotes:
"... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
"... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
"... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
"... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
"... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
"... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
"... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
"... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
"... More likely to drive people entirely out of the RHEL ecosystem. ..."
"... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' ..."
"... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
"... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
"... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
"... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
"... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
"... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
"... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
"... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
"... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
"... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
"... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
"... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
"... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
"... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
"... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
"... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
"... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
"... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
Dec 10, 2020 |

Internet User says: December 8, 2020 at 5:13 pm

This is a pretty clear indication that you people are completely out of touch with your users.

Joel B. D. says: December 8, 2020 at 5:17 pm

Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?

"If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?

Michael says: December 8, 2020 at 8:31 pm

That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.

OS says: December 8, 2020 at 6:20 pm

First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.

JD says: December 8, 2020 at 6:35 pm

Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.

ShameOnIBM says: December 8, 2020 at 7:07 pm

IBM is declining, hence they need more profit from "useless" product line. So disgusting

MLF says: December 8, 2020 at 7:15 pm

An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping centos8 is final?

MM says: December 8, 2020 at 7:28 pm

Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms.

Already existing functioning distribution ecosystem, can probably do good with influx of resources to enhance the missing bits, such as further improving SELinux support and expanding Debian security team.

I say this without any official or unofficial involvement with the Debian project, other than being a user.

And we have just launched hundreds of CentOS 8 servers.

Faisal Sehbai says: December 8, 2020 at 7:32 pm

Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!


William Smith says: December 8, 2020 at 7:39 pm

This is a big mistake. RedHat did this with RedHat Linux 9, the market-leading Linux, and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid converting from earlier versions to 8, and now will have to review that work with my customer.

Daniele Brunengo says: December 8, 2020 at 7:48 pm

I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason why I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.

David Potterveld says: December 8, 2020 at 8:08 pm

What a colossal blunder: a pivot from the long-standing mission of an OS providing stability, to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.

a says: December 8, 2020 at 9:08 pm

From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is run in the first place is... troubling.

From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody

I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.

Ralf says: December 8, 2020 at 9:08 pm

Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."'

I'm a current user of old school CentOS, so keep your promise, Mr CTO.

Tamas says: December 8, 2020 at 10:01 pm

That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."

Konstantin says: December 9, 2020 at 3:36 pm

From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'

Read again: CentOS Stream is not a production operating system. 'Nuff said.

Samuel C. says: December 8, 2020 at 10:53 pm

This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to go with Puppet or Chef. IBM did what I thought they would: screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.

Brendan says: December 9, 2020 at 12:15 am

Ironic, and it puts those of us who have recently migrated many of our development servers to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen.

vinci says: December 8, 2020 at 11:45 pm

I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. Ansible Tower). You're going against everything that stands for open source. And on top of that you choose to stop offering support for CentOS 8, all of a sudden! What a horrid move on your part. The only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...

Peter Vonway says: December 8, 2020 at 11:56 pm

What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.

Scott says: December 9, 2020 at 8:38 am

This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers, while not taking into account the fact that Red Hat's strong adoption into the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.

Having an open source, non support contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.

They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.

OSLover says: December 9, 2020 at 12:09 am

So sad. Not only breaking the support promise but so quickly (2021!)

Business wise, a lot of business software is providing CentOS packages and support. Like hosting panels, backup software, virtualization, Management. I mean A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.

Community wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours on something that will in the end benefit only IBM "subscribers"? I don't think they are.

What a sad way to end 2020.

technick says: December 9, 2020 at 12:09 am

I don't want to give up on CentOS but this is a strong life changing decision. My background is Linux engineering with over 15+ years of hardcore experience. CentOS has always been my go-to when an organization didn't have the appetite for RHEL and the $75 a year license fee per instance. I successfully fought off Ubuntu takeovers at 2 of the last 3 organizations I've been with. I can't, won't fight off any more, and will start advocating for Ubuntu or pure Debian moving forward.

RIP CentOS. Red Hat killed a great project. I wonder if Ansible will be next?

ConcernedAdmin says: December 9, 2020 at 12:47 am

Hoping that stabbing the Open Source community in the back will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they are from reality and consumed by greed, and it will simply backfire on them when we switch to Debian or any other LTS alternative. I can't imagine moving everything I so caressed and loved to a mess like Ubuntu.

John says: December 9, 2020 at 1:32 am

Asinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently with more to go. We also have a RHEL subscription for outward facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. This is garbage, saying one year from now when it was supposed to be until 2029. A complete betrayal. One year to move everything??? Stupid.

Now I'm going to be looking at a couple of other options, but it won't be RHEL after this type of move. This has destroyed my trust in RHEL, as I'm sure IBM pushed for this. You will be losing my RHEL money once I choose and migrate. I get that companies exist to make money and that's fine. This, though, is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe, saying "f you, customers." You will no longer get my money for doing that to me.

Concerned Fren says: December 9, 2020 at 1:52 am

In hindsight, it's clear to see that the only reason RHEL took over CentOS was to kill the competition.

This is also highly frustrating as I just completed new CentOS8 and RHEL8 builds for non-production and production servers and had already begun deployments. Now I'm left in the situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no-go; their enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up by Microsoft in the next few years.

Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of admins that used CentOS at home and in the enterprise also advocated for and drove sales towards RedHat. Now with this announcement I'm afraid the damage is done, and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is, however, an opportunity for another corporation such as SUSE, which is owned by Micro Focus, to capitalize on this epic blunder simply by announcing an LTS version of openSUSE Leap. This would in turn move people/corporations to the SUSE platform, which in turn would drive sales for SLES.

William Ashford says: December 9, 2020 at 2:02 am

So the inevitable has come to pass, what was once a useful Distro will disappear like others have. Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora.

Christian Reiss says: December 9, 2020 at 6:28 am

This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.

For the software (CentOS) I hope for a quick death that will not drag on for years.

Ian says: December 9, 2020 at 2:10 am

This is a bit sad. There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest.

There is a genuine benefit associated with the existence of Centos for Redhat however it would appear that that benefit isn't great enough and some arse clown thought that by forcing users to migrate it will increase Redhat's revenue.

The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle.

cody says: December 9, 2020 at 4:53 am

Everyone predicted this when Red Hat bought CentOS. And when IBM bought Red Hat, it cemented everyone's notion.

Ganesan Rajagopal says: December 9, 2020 at 5:09 am

Thankfully we had just started our migration from CentOS 7 to 8, and this surely puts a stop to that. Even if CentOS backtracks on this decision because of community backlash, the reality is that the trust is lost. You've just given a huge leg up to Ubuntu/Debian in the enterprise. Congratulations!

Bomel says: December 9, 2020 at 6:22 am

I am a senior system admin in my organization, which spends millions of dollars a year on RH & IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM and start looking for alternatives to replace existing RH & IBM products under my watch.

Steve says: December 9, 2020 at 8:57 am

IBM are seeing every CentOS install as a missed RHEL subscription...

Ralf says: December 9, 2020 at 10:29 am

Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?

Michel-André says: December 9, 2020 at 5:18 pm

Hi all,

Remember when RedHat, around RH-7.x, wanted to charge for the distro? The community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (which had 85% of the market), disappearing into darkness!


PeteVM says: December 9, 2020 at 5:27 pm

As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.

CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.

JadeK says: December 9, 2020 at 6:36 pm

I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.

Congrats fellas, you did it. You perfected the transition to Debian from CentOS.

Godimir Kroczweck says: December 9, 2020 at 8:21 pm

I find it kind of funny, I find it kind of sad. The dreams in which I'm moving 1.5K+ machines to whatever distro I have yet to find fitting as a replacement are the...

Wait. How could anyone, with all seriousness, consider cutting short an already published EOL a good idea?

I literally had to convince people to move from Ubuntu and Debian installations to CentOS for the sake of stability and longer support, only to end up looking like a clown now, because with a single move the distro was deprived of both.

Paul R says: December 9, 2020 at 9:14 pm

Happy to donate and be part of the revolution away from the corporate vampire squid that is IBM.

Nicholas Knight says: December 9, 2020 at 9:34 pm

Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.

[Dec 10, 2020] GPL bites Red Hat in the butt: they might face the emergence of a CentOS alternative due to the wave of support for such a distro

Dec 10, 2020 |

Sam Callis says: December 8, 2020 at 3:58 pm

I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is changing to the beta testing ground for RHEL 8.

And instead of 10 years of support, you need to update to the latest dot release. This has me very concerned.

Sieciowski says: December 9, 2020 at 11:19 am

Well, 10 years. Have you ever contributed anything to the CentOS community, or paid them a wage, or at least donated some decent hardware for development? Or were you maybe just a parasite all the time, and now you are surprised that someone has to buy your lunches for a change?

If you think you could have done it even better, why not take the RH sources and make your own FreeRHos-whatever distro, then support, maintain and patch all the subsequent versions for free?

Joe says: December 9, 2020 at 11:47 am

That's ridiculous. RHEL has benefitted from the free testing and corner-case usage of CentOS users and made money hand over fist on RHEL. Shed no tears over people using CentOS for free. That is the benefit of opening the core of your product.

Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm

You are missing a very important point. The goal of the CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations, and it would have been clear whether there can be financial support for the rebuild or not.

Putting the entire community in front of a done deal is disheartening, and no one will trust Red Hat to be pro-community anymore. Not to mention the Red Hat employees who sit on the CentOS board: who can trust their integrity after this fiasco?

Matt Phelps says: December 8, 2020 at 4:12 pm

This is a breach of trust of the already published timeline for CentOS 8, where the EOL was May 2029. One year's notice for such a massive change is unacceptable.

Move this approach to CentOS 9

fahrradflucht says: December 8, 2020 at 5:37 pm

This! People have already started deploying CentOS 8 with the expectation of 10 years of updates. Even a migration to RHEL 8 would imply completely reprovisioning the systems, which is a big ask for systems deployed in the field.

Gregory Kurtzer says: December 8, 2020 at 4:27 pm

I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng Slack (link on the website).

Greg (original founder of CentOS)

A says: December 8, 2020 at 7:11 pm

Not a programmer, but I'd certainly use it. I hope you get it off the ground.

Michael says: December 8, 2020 at 8:26 pm

This sounds like a great idea and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?

Bond Masuda says: December 8, 2020 at 11:53 pm

Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for some private projects that used CentOS as foundation. I can contribute compute and storage resources. I can program in a few different languages.

Rex says: December 9, 2020 at 3:46 am

Dear Greg,

Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.

The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.

dovla091 says: December 9, 2020 at 10:47 am

Wikipedia. The so-called organization that gets tons of money from tech oligarchs and yet whines about needing money and support? (If you don't believe me, just check their biggest donors.) They also tend to be insanely biased and allow whoever pays the most to write on their site... Seriously, find another organisation to donate your money to.

dan says: December 9, 2020 at 4:00 am

Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.

Chad Gregory says: December 9, 2020 at 7:21 pm

Not sure what I could do, but I will keep an eye out for things I could help with. This change to CentOS really pisses me off, as I have stood up 2 CentOS servers for my work's production environment in the last year.

Vasile M says: December 8, 2020 at 8:43 pm

LOL... CentOS has been RH from 2014 to date. What did you expect? As long as CentOS is so good and stable that it cuts into RHEL sales... RH, and now IBM, just think of profit. It was expected; search the net for comments back in 2014.

[Dec 10, 2020] Amazon Linux 2

Dec 10, 2020 |

Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.

Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.

[Dec 10, 2020] A letter to IBM brass

Notable quotes:
"... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
"... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
"... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
"... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
"... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
"... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
"... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
Dec 10, 2020 |

Orsiris de Jong says: December 9, 2020 at 9:41 am

Dear IBM,

As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
Reasons of that choice were stability, long term support and good hardware vendor support.

Like many others, I've built much of my skills upon this linux flavor for years, and have been implicated into the community for numerous bug reports, bug fixes, and howto writeups.

Using CentOS was the good alternative to RHEL on a lot of non critical systems, and for smaller companies like the one I work for.

The moral contract has always been a rock solid "Community Enterprise OS" in exchange of community support, bug reports & fixes, and growing interest from developers.

Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.

Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.

Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds.

This will make RHEL less and less widely used by startups, enthusiasts and others.

CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.

Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.

As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.

Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.

This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle.

I think you destroyed a large part of the RHEL / CentOS community with this move today.

Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.

... ... ...

[Dec 10, 2020] CentOS will be RHEL's beta, but CentOS denies this

IBM has a history of taking over companies and turning them into junk, so I am not that surprised. I am surprised that it took IBM brass so long to kill CentOS after the Red Hat acquisition.
Notable quotes:
"... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
"... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
Dec 10, 2020 |

I'm far from alone. By W3Techs' count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third with 17.5%. RHEL? It's a distant fourth with 1.8%.

If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.

Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle. That means if you're using CentOS 7, you'll see support through June 30, 2024.

[Dec 10, 2020] Time to bring back Scientific Linux

Notable quotes:
"... I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. ..."
Dec 10, 2020 |

I bet Fermilab are thrilled; back in 2019 they announced that they wouldn't develop Scientific Linux 8 and would focus on CentOS 8 instead.

clickwir 19 points· 1 day ago

Time to bring back Scientific Linux.

[Dec 10, 2020] CentOS Project: Embraced, extended, extinguished.

Notable quotes:
"... My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that. ..."
Dec 10, 2020 |

KugelKurt 18 points 1 day ago

I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.

The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands, but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL without removing all trademarks first.

I don't think a rolling release distribution is what BMD wants.

My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.

[Dec 10, 2020] Oracle Linux -- A better alternative to CentOS

Currently limited to CentOS 6 and CentOS 7.
Dec 10, 2020 |
Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things:

  • We've made the Oracle Linux software available free of charge
  • We've created a simple script to switch your CentOS systems to Oracle Linux

We think you'll like what you find, and we'd love for you to give it a try.


Wait, doesn't Oracle Linux cost money?
Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
Is this just another CentOS?
Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
How is this better than CentOS?
Well, for one, you're getting the exact same bits our paying enterprise customers are getting. So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:

[Chart: delay in kernel security advisories since January 2018, CentOS vs. Oracle Linux; CentOS shows large fluctuations in delays]

So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.

What about the code quality?
Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
What if I want support?
If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.

Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.

Why are you doing this?
This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.

At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.

How do I make the switch?
Run the following as root:

curl -O

What versions of CentOS can I switch? The script can convert your CentOS 6 and 7 systems to Oracle Linux.
What does the script do?
The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages, and it installs the latest Oracle Unbreakable Enterprise Kernel (UEK). That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
Is it safe?
The script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing [email protected] .

[Dec 10, 2020] The demise of CentOs and independent training providers

Dec 10, 2020 |

Anthony Mwai says: December 8, 2020 at 8:44 pm

IBM is messing up RedHat after the take over last year. This is the most unfortunate news to the Free Open-Source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.

We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.

Joe says: December 9, 2020 at 1:03 pm

You can use Oracle Linux in exactly the same way as you did CentOS except that you have the option of buying support without reinstalling a "commercial" variant.

Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from

TechSmurf says: December 9, 2020 at 12:38 am

Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.

Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base if they don't do anything Oracle-ish, myself included.

I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.

David Anderson says: December 8, 2020 at 7:16 pm

"OEL is literally a rebranded RH."

So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - - "Oracle Linux is easy to download and completely free to use, distribute, and update."

Bill Murmor says: December 9, 2020 at 5:04 pm

So, what's the problem?

IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.

k1 says: December 9, 2020 at 7:58 pm

Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.

[Dec 10, 2020] Oracle has a converter script for CentOS 7. And here is a quick hack to convert CentOs8 to Oracle Linux

You can use Oracle Linux exactly like CentOS, only better
Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more trustworthy than RedHat/IBM. But I guess such things do happen with time..."
Notable quotes:
"... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 10, 2020 |

Charlie F. says: December 8, 2020 at 6:37 pm

Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it:

It would be nice if Oracle would update that for CentOS 8.

David Anderson says: December 8, 2020 at 7:15 pm

The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option.

Max Grü says: December 9, 2020 at 2:05 pm

Oracle Linux is free. The only thing that costs money is support for it. I quote "Yes, we know that this is Oracle, but it's actually free. Seriously."

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

# NOTE: ${repobase} (the Oracle Linux 8 yum repo URL) and the full name of
# the redhat-release RPM were elided in the original comment; set repobase
# to the appropriate Oracle Linux 8 package URL before running.
wget \
  ${repobase}/redhat-release-8.3- \
  ${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
  ${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*.rpm
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
dnf remove kernel

[Dec 10, 2020] Disable Boot Splash screen in Centos - smtsang

Dec 10, 2020 |

To make CentOS 6 display the details of what it is doing while it boots, first make a backup of /etc/grub.conf in case something goes wrong. Then open /etc/grub.conf in an editor and look for the line(s) that begin with 'kernel'.

At the end of each such line are the words 'rhgb' and 'quiet'. Remove both of those words from grub.conf. After saving the changes, reboot the server and you will see everything it is doing when it starts up.
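The edit described above can also be scripted. The following is a minimal sketch run against a scratch copy of grub.conf; the kernel version and root device in the demo file are illustrative only, and on a real CentOS 6 box the file is /etc/grub.conf.

```shell
# Create a scratch grub.conf with a typical CentOS 6 'kernel' line
# (kernel version and root device are made up for this demo).
cat > grub.conf.demo <<'EOF'
title CentOS (2.6.32-754.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-754.el6.x86_64 ro root=/dev/mapper/vg-root rhgb quiet
        initrd /initramfs-2.6.32-754.el6.x86_64.img
EOF

cp -a grub.conf.demo grub.conf.demo.bak          # always keep a backup first
sed -i 's/ rhgb//g; s/ quiet//g' grub.conf.demo  # strip both words in place
grep 'kernel' grub.conf.demo                     # inspect the edited line
```

Against the real file, the same two commands would be `cp -a /etc/grub.conf /etc/grub.conf.bak` followed by the identical `sed -i` invocation; reboot afterwards to see the verbose boot messages.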

[Dec 09, 2020] Is Oracle A Real Alternative To CentOS

Notable quotes:
"... massive amount of extra packages and full rebuild of EPEL (same link): ..."
Dec 09, 2020 |

Is Oracle A Real Alternative To CentOS? December 8, 2020, by Frank Cox (33 comments)

Is Oracle a real alternative to CentOS? I'm asking because I genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.

But today I've seen a couple of the folks here mention Oracle Linux, and I see that Oracle even offers a script to convert CentOS 7 to Oracle. Nothing about CentOS 8 in that script, though.

That page seems to say that Oracle Linux is everything that CentOS was prior to today's announcement.

But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.

So, for people who know a lot more about these things than I do: what's the downside of using Oracle Linux versus CentOS? I assume that things like EPEL/RPM Fusion/etc. will work just as they do under CentOS, since it's supposed to be bit-for-bit compatible like CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?

Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.

But is it, really?

Karl Vogel says: December 9, 2020 at 3:05 am

... ... ..

Go to , poke around a bit, and you end up here:

I just went to the ISO page and I can grab whatever I like without signing up for anything, so nothing's changed since I first used it.

... ... ...

Gianluca Cecchi says: December 9, 2020 at 3:30 am


Only to point out that while in CentOS (8.3, but the same in 7.x) the situation is like this:

[g.cecchi@skull8 ~]$ ll /etc/redhat-release /etc/centos-release
-rw-r--r-- 1 root root 30 Nov 10 16:49 /etc/centos-release
lrwxrwxrwx 1 root root 14 Nov 10 16:49 /etc/redhat-release -> centos-release
[g.cecchi@skull8 ~]$ cat /etc/centos-release
CentOS Linux release 8.3.2011

in Oracle Linux (eg 7.7) you get two different files:

$ ll /etc/redhat-release /etc/oracle-release
-rw-r--r-- 1 root root 32 Aug  8  2019 /etc/oracle-release
-rw-r--r-- 1 root root 52 Aug  8  2019 /etc/redhat-release
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ cat /etc/oracle-release
Oracle Linux Server release 7.7

This is generally done so that software which is officially certified only on the upstream enterprise vendor's distribution, and which tests the contents of the redhat-release file, is satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:

# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 7.6
Release:        7.6
Codename:       n/a
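The distinction between these release files matters because installers and support scripts often branch on them. A hedged sketch of such a probe follows; `detect_flavor` is a hypothetical helper written for illustration, not part of any vendor tool.

```shell
# Report which RHEL-family flavor a system is, by probing the
# vendor-specific release files (Oracle first, then CentOS, then RHEL).
# The optional argument is a fake root directory, so the function
# can be exercised without touching the real /etc.
detect_flavor() {
    root="${1:-}"
    if [ -f "$root/etc/oracle-release" ]; then
        cat "$root/etc/oracle-release"
    elif [ -f "$root/etc/centos-release" ]; then
        cat "$root/etc/centos-release"
    elif [ -f "$root/etc/redhat-release" ]; then
        cat "$root/etc/redhat-release"
    else
        echo "unknown"
    fi
}

# Demo against a fake root mimicking the Oracle Linux 7.7 listing above
mkdir -p fakeroot/etc
echo 'Oracle Linux Server release 7.7' > fakeroot/etc/oracle-release
echo 'Red Hat Enterprise Linux Server release 7.7 (Maipo)' > fakeroot/etc/redhat-release
detect_flavor fakeroot
```

On Oracle Linux both files exist, so the probe reports the Oracle identity first; on CentOS, where /etc/redhat-release is only a symlink to centos-release, it falls through to the CentOS branch.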


Rainer Traut says: December 9, 2020 at 4:18 am

On 08.12.20 at 18:54, Frank Cox wrote:

Yes, it is better than CentOS and in some aspects better than RHEL:

  • faster security updates than CentOS, directly behind RHEL
  • better kernels than RHEL and CentOS (UEKs) with more features
  • free to download (no subscription needed)
  • free to use
  • massive amount of extra packages and a full rebuild of EPEL (same link)

Rainer Traut says: December 9, 2020 at 4:26 am


On 08.12.20 at 19:03, Jon Pruente wrote:

KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell you to use the Oracle Cloud. There's other things, but that stood out immediately.

Oracle Linux FAQ (PDF):

There is no subscription needed. All needed repositories for the oVirt based virtualization are freely available.

Rainer Traut says: December 10, 2020 at 4:40 am

On 09.12.20 at 17:52, Frank Cox wrote:

I'll try to answer best to my knowledge.

  • No Account needed.

I have an oracle account but never used it for/with Oracle linux. There are oracle communities where you need an oracle account:

Niki Kovacs says: December 10, 2020 at 10:22 am

On 10/12/2020 at 17:18, Frank Cox wrote:

That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle Linux system.



Microlinux, Solutions informatiques durables
7, place de l'église, 30730 Montpezat. Mail : [email protected] Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12

Ljubomir Ljubojevic says: December 10, 2020 at 12:53 pm

There is always Springdale Linux made by Princeton University:

Johnny Hughes says: December 10, 2020 at 4:10 pm

On 10.12.20 at 19:53, Ljubomir Ljubojevic wrote:

I did a conversion of a test webserver from C8 to Springdale. It went smoothly.

Niki Kovacs says: December 12, 2020 at 11:29 am

On 08/12/2020 at 18:54, Frank Cox wrote:

I spent the last three days experimenting with it. Here's my take on it:

tl;dr: Very nice if you don't have any qualms about the company.




Frank Cox says: December 12, 2020 at 11:52 am

That's a really excellent article, Nicholas. Thanks ever so much for posting about your experience.

Peter Huebner says: December 15, 2020 at 5:07 am

On Tuesday, 15.12.2020 at 10:14 +0100, Ruslanas Gibovskis wrote:

According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license cost, no need for a contract, and no usage audits."

Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." Only optional, not a mandatory requirement. see:

No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra commercial license. Same as with CentOS.

So Oracle Linux can currently be used free as in "free beer" for any system, even for commercial purposes. Nevertheless, Oracle can change those license terms in the future, but this applies to all other company-backed Linux distributions as well.
Peter Huebner

[Nov 28, 2017] Sometimes the Old Ways Are Best by Brian Kernighan

Notable quotes:
"... Sometimes the old ways are best, and they're certainly worth knowing well ..."
Nov 01, 2008 | IEEE Software, pp.18-19

As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.

  • One involves a forensic analysis of over 100,000 lines of old C and assembly code from about 1990, and I have to work on Windows XP.
  • The other is a hack to translate code written in weird language L1 into weird language L2 with a program written in scripting language L3, where none of the L's even existed in 1990; this one uses Linux. Thus it's perhaps a bit surprising that I find myself relying on much the same toolset for these very different tasks.

... ... ...

There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time.

But the tools I use today are mostly the same old ones: grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past.

On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.
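The kind of quick job Kernighan means is easy to illustrate: a throwaway word-frequency count built from nothing but the classic toolset (the sample file here is invented for the demo).

```shell
# Most frequent word in a file, using only the old standbys.
printf 'the cat sat on the mat the end\n' > sample.txt

# one word per line -> group duplicates -> count -> sort by count
tr ' ' '\n' < sample.txt | sort | uniq -c | sort -rn | head -1
```

No IDE, no project setup: the pipeline is typed faster than most editors launch, which is exactly his point.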

[Aug 13, 2017] Is Modern Linux Becoming Too Complex

One man's variety is another man's hopelessly confusing goddamn mess.

Feb 11, 2015 | Slashdot

An anonymous reader writes: Debian developer John Goerzen asks whether Linux has become so complex that it has lost some of its defining characteristics. "I used to be able to say Linux was clean, logical, well put-together, and organized. I can't really say this anymore. Users and groups are not really determinative for permissions, now that we have things like polkit running around. (Yes, by the way, I am a member of plugdev.) Error messages are unhelpful (WHY was I not authorized?) and logs are nowhere to be found. Traditionally, one could twiddle who could mount devices via /etc/fstab lines and perhaps some sudo rules. Granted, you had to know where to look, but when you did, it was simple; only two pieces to fit together. I've even spent time figuring out where to look and STILL have no idea what to do."
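Goerzen's "only two pieces" can be sketched concretely: an /etc/fstab line plus an optional sudo rule (device names, mount point, and group are hypothetical examples, not taken from his system):

# /etc/fstab -- hypothetical line letting any ordinary user mount a USB stick
/dev/sdb1   /media/usb   vfat   noauto,user   0 0

# /etc/sudoers fragment -- alternative: let members of group 'plugdev'
# run this exact mount command without a password
%plugdev ALL=(root) NOPASSWD: /bin/mount /dev/sdb1 /media/usb

With just these two mechanisms, the answer to "why was I not authorized?" was always discoverable by reading two small text files.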

Lodragandraoidh (639696) on Wednesday February 11, 2015 @11:21AM (#49029667)

Re:So roll your own. (Score:5, Insightful)

I think you're missing the point. Linux is the kernel - and it is very stable, and while it has modern extensions, it still keeps the POSIX interfaces consistent to allow inter-operation as desired. The issue here is not that forks and new versions of Linux distros are an aberration, but how the major distributions have changed and the article is a symptom of those changes towards homogeneity.

The Linux kernel is by definition identically complex on any distro using a given version of the kernel (the variances created by compilation switches notwithstanding). The real variance is in the distros - and I don't think variety is a bad thing, particularly in this day and age when we are having to focus more and more on security, and small applications on different types of devices - from small ARM processor systems, to virtual cluster systems in data centers.

Variety creates a strong ecosystem that is more resilient to security exploitation as a whole; variety is needed now more than ever given the security threats we are seeing. If you look at the history of Linux distributions over time - you'll see that from the very beginning it was a vibrant field with many distros - some that bombed out - some that were forked and then died, and forks and forks of forks that continued on - keeping the parts that seemed to work for those users.

Today - I think people perceive what is happening with the major distros as a reduction in choice (if Redhat is essentially identical to Debian, Ubuntu, et al - why bother having different distros?) - a bottleneck in variability; from a security perspective, I think people are worried that a monoculture is emerging that will present a very large and crystallized attack surface after the honeymoon period is over.

If people don't like what is available, if they are concerned about the security implications, then they or their friends need to do something about it. Fork an existing distro, roll your own distro, or if you are really clever - build your own operating system from scratch to provide an answer, and hopefully something better/different in the long run. Progress isn't a bad thing; sitting around doing nothing and complaining about it is.

NotDrWho (3543773) on Wednesday February 11, 2015 @11:28AM (#49029739)

Re: So roll your own. (Score:5, Funny)

One man's variety is another man's hopelessly confusing goddamn mess.

Anonymous Coward on Wednesday February 11, 2015 @09:31AM (#49028605)

Re: Yes (Score:4, Insightful)

Systemd has been the most divisive force in the Linux community lately, and perhaps ever. It has been foisted upon many unwilling victims. It has torn apart the Debian community. It has forced many long-time Linux users to the BSDs, just so they can get systems that boot properly.

Systemd has harmed the overall Linux community more than anything else has. Microsoft and SCO, for example, couldn't have dreamed of harming Linux as much as systemd has managed to do, and in such a short amount of time, too.

Amen. It's sad, but a single person has managed to kill the momentum of GNU/Linux as an operating system. Microsoft should give the guy a medal.

People are loath to publish new projects because keeping them running with systemd and all its dependencies in all possible permutations is a full time job. The whole "do one thing only and do it well" concept has been flushed down the drain.

I know that I am not the only sysadmin who refuses to install Red Hat Enterprise Linux 7, but install new systems with RHE

gmack (197796) <[email protected] minus caffeine> on Wednesday February 11, 2015 @11:55AM (#49030073) Homepage Journal

(Score:4, Informative)

Who modded this up?

SystemD has put in jeopardy the entire presence of Linux in the server room:

1: AFAIK, as there has been zero mention of this, SystemD appears to have had -zero- formal code testing, auditing, or other assurance that it is stable. It was foisted on people in RHEL 7 and downstreams with no ability to transition to it.

Formal code testing is pretty much what Redhat brings to the table.

2: It breaks applications that use the init.d mechanism to start. This is very bad, since some legacy applications cannot be upgraded. Contrast that to AIX, where in some cases programs written back in 1991 will run without issue on AIX 7.1. Similar with Solaris.

At worst it breaks their startup scripts, and since they are shell scripts they are easy to fix.

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program. Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Do you really understand the architecture of either SystemD or sendmail? Sendmail was a single binary written in a time before anyone cared about security. I don't recall sendmail being a bundle of programs, but then it's been a decade since I stopped using it, precisely because of its poor security track record. Contrary to your FUD, Systemd runs things as separate daemons, with each component using the least amount of privileges needed to do its job. On top of that, many of the network services (ntp, dhcpd) that people complain about are completely optional add-ons; since they seem designed around the single purpose of Linux containers, I have not installed them. This is a basic FAQ entry on the systemd web site, so I really don't get how you didn't know this.

4: SystemD cannot be worked around. The bash hole, I used busybox to fix. If SystemD breaks, since it encompasses everything including the bootloader, it can't be replaced. At best, the system would need major butchery to work. In the enterprise, this isn't going to happen, and the Linux box will be "upgraded" to a Windows or Solaris box.

Unlikely. It is a minority of malcontents upset about SystemD who have created an echo chamber of half-truths and outright lies. Anyone who needs to get work done will not even notice the transition.

5: SystemD replaces many utilities that have stood 20+ years of testing, and takes a step back in security by the monolithic userland and untested code. Even AIX with its ODM has at least seen certification under FIPS, Common Criteria, and other items.

Again you use the word "monolithic" without having a shred of knowledge about how SystemD works. The previous init system, despite all of its testing, was a huge mess. There is a reason multiple projects came before SystemD that tried to clean up the horrific mess that was the previous init.

6: SystemD has no real purpose, other than ego. The collective response justifying its existence is, "because we say so. Fuck you and use it." Well, this is no way to treat enterprise customers. Enterprise customers can easily move to Solaris if push comes to shove, and Solaris has a very good record of security, without major code added without actual testing being done, and a way to be compatible. I can turn Solaris 11's root role into a user, for example.

Solaris has already transitioned to its own equivalent daemon that does roughly what SystemD does.

As for SystemD: It allows booting on more complicated hardware. Debian switched because they were losing market share on larger systems that the old init system only handled under extreme protest. As a side effect of the primary problem it was meant to solve, it happens to be faster, which is great for desktops, and uses a lot less memory, which is good for embedded systems.

So, all in all, SystemD is the worst thing that has happened to Linux, its reputation, and potentially its future in 10 years, since the ACTA treaty was put to rest. SystemD is not production ready, and potentially puts every single box in jeopardy of a remote root hole.

Riight.. Meanwhile in the real world, none of my desktops or servers have any SystemD related network services running so no root hole here.

Dragonslicer (991472) on Wednesday February 11, 2015 @12:26PM (#49030407)

(Score:5, Insightful)

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program.

Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Because of course it's been years since anyone found any security holes in well-tested software like Bash or OpenSSL.

Anonymous Coward on Wednesday February 11, 2015 @08:24AM (#49028117)

Score:5, Interesting)

I was reading through the article's comments and saw this thread of discussion. Well, it's hard to call it a thread of discussion, because John apparently put an end to it right away.

The first comment in that thread is totally right though. It is systemd and Gnome 3 that are causing so many of these problems with Linux today. I don't use Debian, but I do use another distro that switched to systemd, and it is in fact the problem here. My workstation doesn't work anywhere as well as it did a couple of years ago, before systemd got installed. So when somebody blames systemd for these kinds of problems, that person is totally correct. I don't get why John would censor the discussion like that. I also don't get why he'd label somebody who points out the real problem as being a 'troll'.

John needs to admit that the real problem here is not the people who are against systemd. These people are actually the ones who are right, and who have the solution to John's problems!

The comment I linked to says 'Systemd needs to be removed from Debian immediately.', and that's totally right. But I think we need to expand it to 'Systemd needs to be removed from all Linux distros immediately.'

If we want Linux to be usable again, systemd does need to go. It's just as simple as that. Censoring any and all discussion of the real problem here, systemd, sure isn't going to get these problems resolved any quicker!

Re:Why does John shut down all systemd talk? (Score:5, Insightful)

[Aug 08, 2017] Unattended Installation of Red Hat Enterprise Linux 7 Operating System on Dell PowerEdge Servers Using iDRAC With Lifecycle Controller

The OS Deployment feature available in Lifecycle Controller enables you to deploy standard and custom operating systems on the managed system. You can also configure RAID before installing the operating system if it is not already configured. You can deploy the operating system using any of the following methods:

  • Manual installation
  • Unattended installation

The unattended installation feature requires an OS configuration or answer file. During unattended installation, the answer file is provided to the OS loader. This activity requires minimal or no user intervention. Currently, the unattended installation feature is supported only for Microsoft Windows and Red Hat Enterprise Linux 7 operating systems from Lifecycle Controller.

Note: This paper only covers unattended installation of the Red Hat Enterprise Linux 7 operating system from Lifecycle Controller. For more information about unattended installation of Microsoft Windows operating systems, see "Unattended Installation of Windows Operating Systems".
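For RHEL, the answer file handed to the OS loader is a kickstart file. A minimal sketch of a fully unattended layout follows (all values are hypothetical placeholders; verify each option against the documentation for the target RHEL 7 release):

# minimal-ks.cfg -- hypothetical RHEL 7 kickstart for an unattended install
install
lang en_US.UTF-8
keyboard us
timezone America/New_York --isUtc
rootpw --plaintext changeme
zerombr
clearpart --all --initlabel
autopart --type=lvm
reboot

%packages
@core
%end

With a file like this supplied through Lifecycle Controller, the installer runs to completion with no user intervention.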

[Jul 15, 2017] Red Hat 6.9 is now available

After Phase 2, RHEL 6 will only receive security updates until 2020, under Phase 3, which commences on May 10, 2017. Red Hat's current development focus is the RHEL 7 platform, which was updated to RHEL 7.3 last year.
Mar 21, 2017

Release Notes

Red Hat Enterprise Linux 6 is now in Production Phase 2, and Red Hat Enterprise Linux 6.9 therefore provides a stable release focused on bug fixes. Red Hat Enterprise Linux 6 enters Production Phase 3 on May 10, 2017. Subsequent updates will be limited to qualified critical security fixes and business-impacting urgent issues. Please refer to the Red Hat Enterprise Linux Life Cycle page for more information.

Migration to RHEL 7 is now supported

In-place Upgrade

As Red Hat Enterprise Linux subscriptions are not tied to a particular release, existing customers can update their Red Hat Enterprise Linux 6 infrastructure to Red Hat Enterprise Linux 7 at any time, free of charge, to take advantage of recent upstream innovations. To simplify the upgrade to Red Hat Enterprise Linux 7, Red Hat provides the Preupgrade Assistant and Red Hat Upgrade Tool. For more information, see Chapter 2, General Updates.

Red Hat Insights
Since Red Hat Enterprise Linux 6.7, the Red Hat Insights service is available. Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators.

The service is hosted and delivered through the Red Hat Customer Portal or through Red Hat Satellite. To register your systems, follow the Getting Started Guide for Insights, which also documents data security and limits.

Red Hat Customer Portal Labs

Red Hat Customer Portal Labs is a set of tools available in a section of the Customer Portal. The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and rapidly deploy and configure complex applications.
NetworkManager now supports manual DNS configuration with dns=none

With this update, the user has the option to prevent NetworkManager from modifying the /etc/resolv.conf file. This is useful for manual management of DNS settings. To protect the file from being modified, add the dns=none option to the /etc/NetworkManager/NetworkManager.conf file. (BZ#1308730)
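A minimal sketch of that configuration, assuming the stock file location named above (any other keys in the [main] section are left as shipped):

# /etc/NetworkManager/NetworkManager.conf
[main]
dns=none

After the change, restart NetworkManager (for example, service NetworkManager restart) and manage /etc/resolv.conf by hand; NetworkManager will no longer overwrite it.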

Red Hat Software Collections

Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is included as a separate Software Collection.

Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Since Red Hat Software Collections 2.3, the Eclipse development platform is provided as a separate Software Collection.

Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time.

See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections.

See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Apr 25, 2016] Red Hat Enterprise Linux 7.2 Beta Now Available

Red Hat

With a nod to the importance of continuously maintaining stable and secure enterprise environments, the beta release of Red Hat Enterprise Linux 7.2 includes several new and enhanced security capabilities. The introduction of a new SCAP module in the installer (anaconda) allows enterprise customers to apply SCAP-based security profiles during installation. Another new capability allows for the binding of data to local networks. This allows enterprises to encrypt systems at scale with centralized management. In addition, Red Hat Enterprise 7.2 beta introduces support for DNSSEC for DNS zones managed by Identity Management (IdM) as well as federated identities, a mechanism that allows users to access resources using a single set of digital credentials.

Given the complexity and necessary due diligence required to efficiently and effectively manage the modern datacenter at scale, the beta release of Red Hat Enterprise Linux 7.2 includes new and improved tools to facilitate a more streamlined system administration experience. These new features and enhancements include:

  • Better processor allocation for tasks that require dedicated processor time. Modern systems have multiple processors and certain demanding workloads typically prioritize a dedicated processor at all times rather than efficiently sharing processor time with other applications and services. The introduction of on-demand vmstat workers in the kernel helps achieve more efficient CPU sharing and resource balancing.
  • The ability to remotely manage local disk data security based on network identity, making the task easier and more secure.
  • Enhancements to the storage management API (libStorageMgmt) now provide a vendor agnostic mechanism to query disk health and RAID configuration management.
  • The introduction of conntrack-tools for better network connection tracking
  • Enhancements to NetworkManager, facilitating better integration with external programs
  • A new web-based user interface for monitoring network and system performance
  • New tooling that aids in diagnostics and allows system administrators to easily gather key I/O metrics for device mapper devices.

As always, leveraging work in the Fedora community, Red Hat continuously monitors upstream developments and systematically incorporates select enterprise-ready features and technologies into Red Hat Enterprise Linux. The beta release of Red Hat Enterprise Linux 7.2 accomplishes this through the rebasing of the GNOME 3 desktop, the inclusion of GNOME Software, and the addition of new tuned profiles (inclusive of a profile for Red Hat Enterprise Linux for Real Time).

For more information on Red Hat Enterprise Linux 7.2, you can read the full release notes or, as an existing Red Hat customer, take Red Hat Enterprise Linux 7.2 beta for a test drive yourself via the Red Hat Customer Portal.

[Apr 25, 2016] What's Coming in Red Hat Enterprise Linux 7.2

DNSSEC for DNS zones managed by Red Hat Identity Management

RHEL 7.2 will also bring live kernel patching to RHEL, which Dumas sees as a critical security measure. Using elements of the KPATCH technology that recently landed in the upstream Linux 4.0 kernel, RHEL users will be able to patch their running kernels dynamically.

...Dumas is particularly excited about the performance gains that RHEL 7.2 introduces. In particular, she noted that core network packet performance is being accelerated by 35 percent in RHEL 7.2.

...With RHEL 7.2, Red Hat is refreshing the desktop with GNOME 3.14, which includes the GNOME software package manager and improvements to multi-monitor deployment capabilities.

[Apr 25, 2016] Red Hat Enterprise Linux 7 What's New

Jun 10, 2014 | InformationWeek

Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with embedded support for Docker containers and support for direct use of Microsoft's Active Directory. The update uses XFS as its new file system.

"[Use of XFS] opens the door to a new class of data warehouse and big data analytics applications," said Mark Coggin, senior director of product marketing, in an interview before the announcement.

The high-capacity, 64-bit XFS file system, now the default file system in Red Hat Enterprise Linux, originated in the Silicon Graphics IRIX operating system. It can scale up to 500 TB of addressable storage. In comparison, previous file systems, such as ext4, typically supported 16 TB.

RHEL 7's support for Linux containers amounts to a Docker container format integrated into the operating system so that users can begin building a "layered" application. Applications in the container can be moved around and will be optimized to run on Red Hat Atomic servers, which are hosts that use the specialized Atomic version of Enterprise Linux to manage containers.

[Want to learn more about Red Hat's commitment to Linux containers? Read Red Hat Containers OS-Centric: No Accident.]

RHEL 7 will also work with Active Directory, using cross-realm trust. Since both Linux and Windows are frequently found in the same enterprise data centers, cross-realm trust lets Linux use Active Directory as either a secondary check on a primary identity management system, or simply as a trusted source to identify users, Coggin says.

RHEL 7 also has more built-in instrumentation and tuning for optimized performance based on a selected system profile. "If you're running a compute-bound workload, you can select a profile that's better geared to it," Coggin notes.

[Nov 08, 2015] Getting Service or Asset Tags on Linux by Nick Geoghegan

Jul 3, 2015

At one point in time, you will need to find out your service or asset tag. Maybe you need to find out when your machine goes out of vendor warranty, or discover what is actually in the machine. Popping the service tag into the Dell support site will tell you this. But what if you don't have it written down?

The Dell "tools", it should be pointed out, require restarting the machine with a CD in the drive or using a COM file. There is no way in hell that I'm digging out a DOS disk to try to run a COM file just to get the service tag. The CD, as it turns out, is just a rebadged Ubuntu CD. Success!

So I mounted the Dell ISO, which was rather fiddly, and took a look around. A program called serviceTag was the first thing I noticed. Was this a specific Dell tool? What would happen if I ran it?

Being paranoid, I decided to see what was linked to this binary.

$ ldd serviceTag
        linux-gate.so.1 =>  (0xf773f000)
        libsmbios.so.2 => not found
        libstdc++.so.6 => /usr/lib32/libstdc++.so.6 (0xf7635000)
        libm.so.6 => /lib32/libm.so.6 (0xf760b000)
        libgcc_s.so.1 => /usr/lib32/libgcc_s.so.1 (0xf75ed000)
        libc.so.6 => /lib32/libc.so.6 (0xf7473000)
        /lib/ld-linux.so.2 (0xf7740000)

Hmmmm. Never heard of libsmbios before. A quick Google vision quest led me here.

The SMBIOS Specification addresses how motherboard and system vendors present management information about their products in a standard format by extending the BIOS interface on x86 architecture systems.


Debian (and RHEL) have these tools in their standard repos! For Debian, it's just a matter of

apt-get install libsmbios-bin

You can then, simply, run

[root@calculon /home/nick]$ /usr/sbin/getSystemId
Libsmbios version:      2.2.28
Product Name:           Gazelle Professional
Vendor:                 System76, Inc.
BIOS Version:           4.6.5
System ID:              XXXXXXXXXXXXXX
Service Tag:            XXXXXXXXXXXXXX
Express Service Code:   0
Asset Tag:              XXXXXXXXXXXXXX
Property Ownership Tag:
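On recent kernels much of the same SMBIOS data is also exported under /sys/class/dmi/id, so when libsmbios is not available a short script can read the fields directly. This is a sketch, not a replacement for getSystemId: field availability varies by vendor, and some fields (such as product_serial) are readable only by root.

```python
import os

# SMBIOS/DMI fields the kernel exposes under /sys/class/dmi/id
# (availability varies by vendor and platform)
DMI_FIELDS = ("sys_vendor", "product_name", "product_serial", "chassis_asset_tag")

def read_dmi(base="/sys/class/dmi/id"):
    """Return a dict of whatever DMI fields are readable under `base`."""
    info = {}
    for field in DMI_FIELDS:
        try:
            with open(os.path.join(base, field)) as f:
                info[field] = f.read().strip()
        except OSError:
            pass  # field absent, or requires root (e.g. product_serial)
    return info

if __name__ == "__main__":
    for key, value in read_dmi().items():
        print(f"{key}: {value}")
```

On a Dell box run as root, product_serial corresponds to the service tag and chassis_asset_tag to the asset tag.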

[May 07, 2015] Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal

* The life cycle dates are subject to adjustment.

In Red Hat Enterprise Linux 4, EUS was available for the following minor releases:

  • 4.5 (ends January 31, 2009)
  • 4.7 (ends August 31, 2011)

In Red Hat Enterprise Linux 5, EUS is available for the following minor releases:

  • 5.2 (ends March 31, 2010)
  • 5.3 (ends November 30, 2010)
  • 5.4 (ends July 31, 2011)
  • 5.6 (ends July 31, 2013)
  • 5.9 (ends March 31, 2015)

In Red Hat Enterprise Linux 6, EUS is available for all minor releases released during the Production 1 Phase, but not for the minor release marking transition to Production 2 or any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 6 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 6, EUS is available for the following minor releases:

  • 6.0 (ends November 30, 2012)
  • 6.1 (ends May 31, 2013)
  • 6.2 (ends January 7, 2014)
  • 6.3 (ends June 30, 2014)
  • 6.4 (ends March 3, 2015)
  • 6.5 (ends November 30, 2015)
  • 6.6 (ends October 31, 2016)

Future Red Hat Enterprise Linux 6 releases for which EUS is available will be added to the above list upon their release.

In Red Hat Enterprise Linux 7, EUS will be available for all minor releases during the Production 1 Phase, but not for 7.0 or the minor release marking the transition to Production 2, or for any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 7 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 7, EUS is available for the following releases:

  • 7.0 (N/A)
  • 7.1 (ends March 31, 2017)

Future Red Hat Enterprise Linux 7 releases for which EUS is available will be added to the above list upon their release.

Please see this Knowledgebase Article for more details on EUS.

[Jun 27, 2014] What's new in Red Hat Enterprise Linux 7

Red Hat


...Red Hat Enterprise Linux 7 delivers dramatic improvements in reliability, performance, and scalability. A wealth of new features provides the architect, system administrator, and developer with the resources necessary to innovate and manage more efficiently.


Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat. Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes: Open VM Tools - bundled open source virtualization utilities. 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering. Fast communication mechanisms between VMware ESX and the virtual machine.
The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the "Snapper" section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.

Red Hat Red Hat Enterprise Linux 7 Setting World Records At Launch

June 10, 2014

Today's announcement of general availability of Red Hat Enterprise Linux 7 marks a significant milestone for Red Hat. The culmination of a multi-year effort by Red Hat's engineering team and our partners, the latest major release of our flagship platform redefines the enterprise operating system, and is designed to power the spectrum of enterprise IT: applications running on physical servers, containerized applications, and also cloud services.

Since its introduction more than a decade ago, Red Hat Enterprise Linux has become the world's leading enterprise Linux platform, setting industry standards for performance along the way, with Red Hat Enterprise Linux 7 continuing this trend. On its first day of general availability, Red Hat Enterprise Linux 7 already claims multiple world record-breaking benchmark results running on HP ProLiant servers, including:

SPECjbb2013 Multi-JVM Benchmark
One processor world record for both max-jOPS (16,252) and critical-jOPS (4,721) metrics
Two processor world record for both max-jOPS (119,517) and critical-jOPS (36,411) metrics
Four processor world record for both max-jOPS (202,763) and critical-jOPS (65,950) metrics

The SPECjbb2013 benchmark is an industry-standard measurement of Java-based application performance developed by the Standard Performance Evaluation Corporation (SPEC). Application performance remains an important attribute for many customers, and this set of results demonstrates Red Hat Enterprise Linux's continued ability to deliver world-class performance, alongside support from our ecosystem of partners and OEMs. With these impressive results to its name already, we like to think that this is only the tip of the iceberg for Red Hat Enterprise Linux 7's achievements, especially since the platform is designed to power a broad spectrum of enterprise IT workloads.

SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 10, 2014. See for more information.

For further details on SPECjbb2013 benchmark results achieved on HP ProLiant XL220a Gen8 v2 (1P), HP ProLiant DL580 Gen8 (2P), and HP ProLiant DL580 Gen8 (4P) servers, see

[Jun 27, 2014] Red Hat Enterprise Linux 7 in evaluation for Common Criteria certification

June 19, 2014

Security is a crucial component of the technology Red Hat provides for its customers and partners, especially those who operate in sensitive environments, including the military.

[Jun 27, 2014] Oracle Announces OpenStack Support for Oracle Linux and Oracle VM

A technology preview of an OpenStack distribution that allows Oracle Linux and Oracle VM to work with the open source cloud software is now available. Users can install this OpenStack technology preview in their test environments with the latest version of Oracle Linux and the beta release of Oracle VM 3.3.

Read the Press Release
Read More from Oracle Senior Vice President of Linux and Virtualization Wim Coekaerts
Read More from Oracle Product Management Director Ronen Kofman

Oracle Linux Free as in Speech AND Free as in Beer by Monica Kumar

Jan 08, 2014 | Oracle's Linux Blog

One of the biggest benefits of Oracle Linux is that binaries, patches, errata, and source are always free. Even if you don't have a support subscription, you can download and run exactly the same enterprise-grade distribution that is deployed in production by thousands of customers around the world. You can receive binaries and errata reliably and on schedule, and take advantage of the thousands of hours Oracle spends testing Oracle Linux every day. And, of course, Oracle Linux is completely compatible with Red Hat Enterprise Linux, so switching to Oracle Linux is easy.

CentOS is another Linux distribution that offers free binaries with Red Hat compatibility. Traditionally, CentOS has been used for Linux systems which do not require support, in order to reduce or avoid expensive Red Hat Enterprise Linux subscription costs. Recently, Red Hat announced it was "joining forces" with the CentOS project, hiring many of the key CentOS developers, and "building a new CentOS." This is a curious development given that the primary factors that have made CentOS popular are that it is free and Red Hat compatible.

It would be natural for existing CentOS users to wonder what Red Hat actually has in mind for the "new CentOS" when the FAQ accompanying the announcement states that Red Hat does not recommend CentOS for production deployment, is not recommending mixed CentOS and Red Hat Enterprise Linux deployments, will not support JBoss and other products on CentOS, and is not including CentOS in Red Hat's developer offerings designed to create "applications for deployment into production environments."

If Red Hat truly wished to satisfy the key requirements of most CentOS users, they would take a much simpler step: they would make Red Hat Enterprise Linux binaries, patches, and errata available for free download just like Oracle already does.

Fortunately, no matter what future CentOS faces in Red Hat's hands, Oracle Linux offers all users a single distribution for development, testing, and deployment, for free or with a paid support subscription. Oracle does not require customers to buy a support subscription for every server running Oracle Linux, or indeed for any server running it. If a customer wants to pay for support for production systems only, that's the customer's choice. The Oracle model is simple, economical, and well suited to environments with rapidly changing needs.

Oracle is focused on providing what we have provided since day one: a fast, reliable Linux distribution that is completely compatible with Red Hat Enterprise Linux, coupled with enterprise-class support, indemnity, and flexible support policies. If you are a CentOS user, or a Red Hat user, why not download and try Oracle Linux today? You have nothing to lose; after all, it's free.

Al Gillen, program vice president, System Software, IDC
"CentOS is one of the major non-commercial distributions in the industry, and a key adjacent project for many Red Hat Enterprise Linux customers. This relationship helps strengthen the CentOS community, and will ensure that CentOS benefits directly from the community-centric development approach that Red Hat both understands and heavily supports. Given the growing opportunities for Linux in the market today in areas such as OpenStack, cloud and big data, a stronger CentOS technology backed by the CentOS community, including Red Hat, is a positive development that helps the overall industry."

Stephen O'Grady, principal analyst, RedMonk
"Though it will doubtless come as a surprise, this move by Red Hat represents the logical embrace of an adjacent ecosystem. Bringing the CentOS and Red Hat communities closer together should be a win for both parties."

Additional Resources

Connect with Red Hat

Red Hat + CentOS - Red Hat Open Source Community

Red Hat + CentOS

Red Hat and the CentOS Project are building a new CentOS, capable of driving forward development and adoption of next-generation open source projects.

Red Hat will contribute its resources and expertise in building thriving open source communities to help establish more open project governance, broaden opportunities for participation, and provide new ways for CentOS users and contributors to collaborate on next-generation technologies such as cloud, virtualization, and Software-Defined Networking (SDN).

With Red Hat's contributions and investment, the CentOS Project will be better able to serve the needs of open source community members who require different or faster-moving components to be integrated with CentOS, expanding on existing efforts to collaborate with open source projects such as OpenStack, Gluster, OpenShift Origin, and oVirt.

Red Hat has worked with the CentOS Project to establish a merit-based open governance model for the CentOS Project, allowing for greater contribution and participation through increased transparency and access.


Today, the CentOS Project produces CentOS, a popular community Linux distribution built from much of the Red Hat Enterprise Linux codebase and other sources. Over the coming year, the CentOS Project will expand its mission to establish CentOS as a leading community platform for emerging open source technologies coming from other projects such as OpenStack.

How is CentOS different from Red Hat Enterprise Linux?

CentOS is a community project that is developed, maintained, and supported by and for its users and contributors. Red Hat Enterprise Linux is a subscription product that is developed, maintained, and supported by Red Hat for its subscribers.

While CentOS is derived from the Red Hat Enterprise Linux codebase, CentOS and Red Hat Enterprise Linux are distinguished by divergent build environments, QA processes, and, in some editions, different kernels and other open source components. For this reason, the CentOS binaries are not the same as the Red Hat Enterprise Linux binaries.

The two also have very different focuses. While CentOS delivers a distribution with strong community support, Red Hat Enterprise Linux provides a stable enterprise platform with a focus on security, reliability, and performance as well as hardware, software, and government certifications for production deployments. Red Hat also delivers training, and an entire support organization ready to fix problems and deliver future flexibility by getting features worked into new versions.

Once in use, the operating systems often diverge further, as users selectively install patches to address bugs and security vulnerabilities to maintain their respective installs. In addition, the CentOS Project maintains code repositories of software that are not part of the Red Hat Enterprise Linux codebase. This includes feature changes selected by the CentOS Project. These are available as extra/additional packages and environments for CentOS users.

[Oct 26, 2013] RHEL handling of DST change

Most server hardware clocks use UTC. UTC stands for Coordinated Universal Time, which is closely related to Greenwich Mean Time (GMT). Other time zones are determined by adding an offset to, or subtracting one from, UTC. A server typically displays local time, which is subject to DST correction twice a year.

Wikipedia defines DST as follows:

Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn.

A DST patch is only required in countries that observe DST, such as the USA. Please see this Wikipedia article.

Linux will change to and from DST when the HWCLOCK setting in /etc/sysconfig/clock is set to `-u`, i.e. when the hardware clock is set to UTC (which is closely related to GMT), regardless of whether Linux was running at the moment DST begins or ends.

When the HWCLOCK setting is `--localtime`, Linux will not adjust the time, operating under the assumption that your system may be a dual-boot system and that the other OS takes care of the DST switch. If that is not the case, the DST change needs to be made manually.
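A quick way to check which convention a box actually uses is to look at the mode recorded by hwclock. This is a minimal sketch, assuming a stock RHEL-style layout; /etc/adjtime stores the mode ("UTC" or "LOCAL") on its third line, and older RHEL releases also record it in /etc/sysconfig/clock:

```shell
# Print the hardware-clock mode hwclock recorded: "UTC" or "LOCAL".
# /etc/adjtime stores it on its third line; fall back to the sysconfig file.
awk 'NR==3' /etc/adjtime 2>/dev/null
grep -h '^UTC=' /etc/sysconfig/clock 2>/dev/null
```

On a machine set to UTC, `hwclock --debug` will typically also report that it assumes the hardware clock is kept in UTC.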


EST is defined as being GMT-5 all year round. US/Eastern, on the other hand, means GMT-5 or GMT-4 depending on whether Daylight Saving Time (DST) is in effect.
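The difference is easy to demonstrate with GNU date and the TZ environment variable; a small sketch, assuming a standard tzdata install provides the US/Eastern zone:

```shell
# A fixed-offset zone never moves: EST5 is UTC-5 even in July.
TZ=EST5 date -d '2014-07-01 12:00 UTC' '+%Z %z'        # EST -0500
# A DST-aware zone flips: US/Eastern is EDT (UTC-4) in July...
TZ=US/Eastern date -d '2014-07-01 12:00 UTC' '+%Z %z'  # EDT -0400
# ...and EST (UTC-5) in January.
TZ=US/Eastern date -d '2014-01-15 12:00 UTC' '+%Z %z'  # EST -0500
```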

The tzdata package contains data files with rules for the various timezones around the world. When this package is updated, it delivers the accumulated rule changes for all zones, including all previous timezone fixes.
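To see which DST transition rules the installed tzdata actually contains for a zone, zdump -v prints every transition; a sketch, where the zone and year are only examples:

```shell
# List the 2014 DST transitions for US/Eastern as known to the installed tzdata.
zdump -v US/Eastern | grep 2014
# The installed package version itself can be checked with: rpm -q tzdata
```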

[Feb 28, 2012] Red Hat vs. Oracle Linux Support 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.

[Feb 07, 2012] Virtualization With Xen On CentOS 6.2 (x86_64)

Linux Howtos

This tutorial provides step-by-step instructions on how to install Xen (version 4.1.2) on a CentOS 6.2 (x86_64) system.

Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and, what is even more important, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.

[Jan 11, 2012] Red Hat Enterprise Linux 6.2 Announcement

They continue to push KVM, which is seldom used in enterprise environments. The most important addition is Linux containers.
Dec 06, 2011 [rhelv6-announce]

Hardware support

  • Support for PCI-e 3.0 as well as USB 3.0 enables faster and wider buses, although only a limited number of such devices is currently available on the market.
  • support for many new 10 GbE Network adapters and Host Bus Adapters (HBAs)
  • Expanded support and utilities simplify the configuration and deployment of Fibre Channel over Ethernet (FCoE) environments.
  • Added support for new Infiniband-based devices as well as SR-I/OV integration
  • Fusion-io SSD performance: 1.3 million IOPS on Red Hat Enterprise Linux 6 - better than Oracle's published results

Linux Containers

  • Linux containers provide a flexible approach to application runtime containment on bare-metal without the need to fully virtualize the workload. This release provides application level containers to separate and control the application resource usage policies via cgroup and namespaces. This release introduces basic management of container life-cycle by allowing for creation, editing and deletion of containers via the libvirt API and the virt-manager GUI.
  • Linux Containers provide a means to run applications in a container, a deployment model familiar to UNIX administrators, and provide container life-cycle management for these containerized applications through a graphical user interface (GUI) and the libvirt user-space utility.
  • Linux Containers is in Technology Preview at this time.


  • ext4 file system: faster and simplified initialization. New functionality gives customers the option to delay the time-consuming initialization of the inode tables until after the filesystem is mounted. Typically, creating an ext4 filesystem consumes a lot of time because these data structures (inodes) are initialized while the filesystem is being created.
  • pNFS is an extension of NFS that provides significantly larger data transfer rates compared to the traditional NFS architectures. This is achieved by processing the meta-data (a typical bottleneck for I/O throughput) separately from the actual data. By separating the meta-data and the data paths, pNFS allows clients to access storage devices directly and in parallel. pNFS is a part of standard NFS version 4.1 specification.
    • Support for pNFS is limited to client-side functionality for file layout only.


  • discard enhancements: enhanced discard commands allow the operating system to pass information to the storage device about block ranges that are no longer in use. This improves the efficiency of solid-state flash devices and thinly-provisioned storage devices. Red Hat Enterprise Linux 6.2 issues more discards in conjunction with LVM freeing operations.
  • Improved efficiency on flash based solid state disks.


  • Fusion-io SSD performance: 1.3 million IOPS on Red Hat Enterprise Linux 6 - better than Oracle's published results
  • Red Hat Enterprise Linux 6 is 20%-30% lower idle power than Red Hat Enterprise Linux 5

Error detection and reporting

  • Improved Automated Bug Reporting Tool (ABRT)
  • The ABRT is a framework for collecting and analyzing application errors and system exceptions. Improvements in this release include easier configuration of plug-ins and settings, a more consistent way to store error reports, a more secure environment (most of the processing is performed with a non-privileged account), and more stability for plug-ins.
  • Customers will have better tools to either self-diagnose problems or send error reports to Red Hat for expedited resolution for problem reports.


The X server has been re-based in this release. Updating the X server increases system stability through the isolation of the system display drivers and provides a better base for new features. Overall, it improves support for newer optional workstation hardware, multiple displays, and new input devices.

[Jul 31, 2011] Scientific Linux pushes RHEL clones forward by Sean Michael Kerner

July 29, 2011 | InternetNews.
From the 'Clone Wars' files:

"Scientific Linux 6.1 is now available providing users with a stable reliable Free (as in Beer) version of Red Hat Enterprise Linux 6.1.

Red Hat released RHEL 6.1 in May, providing improved driver support and hardware enablement and oh yeah security fixes too.

Scientific Linux is a joint effort by Fermilab and CERN and is targeted at the scientific community, but it's a solid RHEL version in its own right. It's also one that could now be attracting some new users, thanks to delays at the 'other' popular RHEL clone -- CentOS.

The CentOS project just released CentOS 6, many months behind Scientific Linux and even further behind RHEL. That's a problem for some and could also represent a real security risk for most.

With the more rapid release cycle of Scientific Linux I will not be surprised if some disgruntled CentOS users make the switch and/or if new users just start off with Scientific Linux first.

While Scientific Linux is faster than CentOS at replicating RHEL 6.1, they aren't the fastest clone.

Oracle Linux 6.1 came out in June, barely a month after Red Hat's release.

It's somewhat ironic that Oracle is now the fastest clone tracking RHEL, since Red Hat has made cloning harder with the way it packages releases. As it turns out, that's not slowing Oracle down at all - though it might be impacting the community releases.

[May 31, 2011] RHEL Tuning and Optimization for Oracle V11

The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4, and is suitable for a wide variety of applications, providing a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum latency per request and maintains good disk throughput, which is best for disk-intensive database applications.

Hence, the Deadline scheduler is recommended for database systems. Also, at the time of this writing there is a bug in the CFQ scheduler which affects heavy I/O, see Metalink Bug:5041764. Even though this bug report talks about OCFS2 testing, this bug can also happen during heavy IO access to raw or block devices and as a consequence could evict RAC nodes.

To switch to the Deadline scheduler, the boot parameter elevator=deadline must be passed to the kernel that is being used.

Edit the /etc/grub.conf file and add the following parameter to the kernel that is being used, in this example 2.6.18-8.el5:

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
        initrd /initrd-2.6.18-8.el5.img

This entry tells the 2.6.18-8.el5 kernel to use the Deadline scheduler. Make sure to reboot the system to activate the new scheduler.
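On 2.6 kernels that support it, the scheduler can also be inspected and switched per device at runtime through sysfs, without a reboot. A minimal sketch; the device name sda in the comment is an assumption:

```shell
# Show the active I/O scheduler for each block device.
# The file lists all schedulers, with the active one in brackets,
# e.g. "noop anticipatory [deadline] cfq".
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}; dev=${dev%%/*}
    active=$(sed -n 's/.*\[\(.*\)\].*/\1/p' "$f")
    echo "$dev: $active"
done
# To switch a device (e.g. sda) to deadline at runtime (root; not persistent):
#   echo deadline > /sys/block/sda/queue/scheduler
```

Note that the sysfs change is lost at reboot, so the elevator=deadline boot parameter remains the way to make it permanent.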

Changing Network Adapter Settings

To check the speed and settings of network adapters, use the ethtool command, which now works for most network interface cards. To check the adapter settings of eth0, run:

# ethtool eth0

To force a speed change to 1000Mbps, full duplex mode, run:

# ethtool -s eth0 speed 1000 duplex full autoneg off

To make a speed change permanent for eth0, set or add the ETHTOOL_OPTS environment variable in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

This environment variable is sourced in by the network scripts each time the network service is started.


Changing Network Kernel Settings

Oracle now uses User Datagram Protocol (UDP) as the default protocol on Linux for interprocess communication, such as cache fusion buffer transfers between the instances. However, starting with Oracle 10g, network settings should be adjusted for standalone databases as well.

Oracle recommends that the default and maximum send buffer size (SO_SNDBUF socket option) and receive buffer size (SO_RCVBUF socket option) be set to 256 KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. The receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. This means that datagrams will be discarded if they do not fit in the socket receive buffer, which could cause the sender to overwhelm the receiver.

The default and maximum window sizes can be changed in the proc file system without a reboot:

# sysctl -w net.core.rmem_default=262144     (default size in bytes of the socket receive buffer)
# sysctl -w net.core.wmem_default=262144     (default size in bytes of the socket send buffer)
# sysctl -w net.core.rmem_max=262144         (maximum receive buffer size, settable via SO_RCVBUF)
# sysctl -w net.core.wmem_max=262144         (maximum send buffer size, settable via SO_SNDBUF)

To make the change permanent, add the following lines to the /etc/sysctl.conf file, which is used during the boot process:

net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144

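After applying the buffer settings, the running values can be read back directly from /proc to confirm them; a small sketch, assuming a 2.6-era or later kernel where these tunables exist:

```shell
# Read back the current core socket-buffer limits from /proc.
for f in rmem_default wmem_default rmem_max wmem_max; do
    printf '%-12s = %s\n' "$f" "$(cat /proc/sys/net/core/$f)"
done
```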
To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:


Changing these settings may be highly dependent on your system, network, and other applications. For suggestions, see Metalink Note:249213.1 and Note:265194.1.

On Red Hat Enterprise Linux systems, the default range of IP port numbers allowed for TCP and UDP traffic on the server is too low for 9i and 10g systems. Oracle recommends the following port range:

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

net.ipv4.ip_local_port_range=1024 65000

The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
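The kernel exposes the current range in /proc in exactly this two-number form, which makes it easy to check from a script; a minimal sketch:

```shell
# Read the current ephemeral port range; the file holds "first last".
read -r low high < /proc/sys/net/ipv4/ip_local_port_range
echo "local ports: $low through $high"
```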

10.3. Flow Control for e1000 Network Interface Cards

The e1000 network interface card family does not have flow control enabled in the 2.6 kernel on Red Hat Enterprise Linux 4 and 5. If you have heavy traffic, the RAC interconnects may lose blocks; see Metalink Bug:5058952. For more information on flow control, see the Wikipedia article on flow control.

To enable receive flow control for e1000 network interface cards, add the following line to the /etc/modprobe.conf file:

options e1000 FlowControl=1

The e1000 module needs to be reloaded for the change to take effect. Once the module is loaded with flow control, you should see e1000 flow control module messages in /var/log/messages.

Verifying Asynchronous I/O Usage

To verify whether $ORACLE_HOME/bin/oracle was linked with asynchronous I/O, you can use the Linux commands ldd and nm.

In the following example, $ORACLE_HOME/bin/oracle was relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
        libaio.so.1 => /usr/lib/libaio.so.1 (0x0093d000)
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
        w io_getevents@@LIBAIO_0.1

In the following example, $ORACLE_HOME/bin/oracle has NOT been relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
        w io_getevents

If $ORACLE_HOME/bin/oracle is relinked with asynchronous I/O, it does not necessarily mean that Oracle is actually using it. You also have to ensure that Oracle is configured to use asynchronous I/O calls; see Enabling Asynchronous I/O Support.
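The ldd check above can be wrapped in a tiny helper for use across many binaries. A sketch only: the function name check_aio is hypothetical, and /bin/ls is just a stand-in target for illustration:

```shell
# Hypothetical helper: report whether a binary is linked against libaio.
check_aio() {
    if ldd "$1" 2>/dev/null | grep -q libaio; then
        echo "$1: linked with libaio"
    else
        echo "$1: not linked with libaio"
    fi
}

# Example invocation; substitute $ORACLE_HOME/bin/oracle in practice.
check_aio /bin/ls
```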

To verify whether Oracle is making asynchronous I/O calls, you can look at the /proc/slabinfo file, assuming there are no other applications performing asynchronous I/O calls on the system. This file shows kernel slab cache information in real time.

On a Red Hat Enterprise Linux 3 system where Oracle does not make asynchronous I/O calls, the output looks like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 0 0 128 0 0 1 : 1008 252
kiocb 0 0 128 0 0 1 : 1008 252

Once Oracle makes asynchronous I/O calls, the output on a Red Hat Enterprise Linux 3 system will look like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 690 690 128 23 23 1 : 1008 252
kiocb 58446 65160 128 1971 2172 1 : 1008 252

Red Hat Enterprise Linux 5.7 Released in Beta

Storage Drivers

  • The bnx2i driver for Broadcom NetXtreme II iSCSI has been updated to version
  • The mpt2sas driver that supports the SAS-2 family of adapters from LSI has been updated. Most notably, this update provides support for WarpDrive SSS-6200 devices.
  • The megaraid driver is updated to version 5.34
  • The arcmsr driver for Areca RAID controllers is updated
  • The bfa driver for Brocade Fibre Channel to PCIe Host Bus Adapters is updated to the current scsi-misc version.
  • The be2iscsi driver for ServerEngines BladeEngine 2 Open iSCSI devices is updated.
  • The qla2xxx driver for QLogic Fibre Channel HBAs is updated. Additionally, the qla24xx and 25xx firmware is updated to 5.03.16.
  • The lpfc driver for Emulex Fibre Channel Host Bus Adapters is updated.
  • The IBM Virtual Ethernet (ibmveth) driver is updated, adding support for the optional flush of the rx buffer, scatter-gather, rx_copybreak and tx_copybreak support, and enhanced Power virtual Ethernet performance.
  • The ibmvfc driver is updated to version 1.0.9
  • The mptfusion driver is updated to version 3.04.18rh
  • The cciss driver for HP Smart Array controllers has been updated, adding kdump support and performance mode support on the controller.

4.2. Network Drivers

  • The cxgb4 driver for Chelsio Terminator4 10G Unified Wire Network Controllers is updated.
  • The cxgb3 driver for the Chelsio T3 Family of network devices is updated.
  • The e1000 driver for Intel PRO/1000 network devices has been updated, adding support for the Marvell Alaska M88E1118R PHY and CE4100 reference platform.
  • The enic driver for Cisco 10G Ethernet devices is updated to version
  • The myri10ge driver for Myricom Myri-10G Ethernet devices has been updated to version 1.5.2
  • The igb driver for Intel Gigabit Ethernet Adapters is updated
  • the tg3 driver for Broadcom Tigon3 ethernet devices is updated to version 3.116, adding support for EEE.
  • The bna driver for Brocade 10Gb Ethernet devices is updated to version
  • The qlcnic driver is updated to 5.0.13, adding support for large receive offload (LRO) and generic receive offload (GRO)
  • The netxen driver for NetXen Multi port (1/10) Gigabit Network devices is updated to version 4.0.75, adding support for GbE port settings.
  • The be2net driver for ServerEngines BladeEngine2 10Gbps network devices is updated, adding support for multicast filter on the Lancer family of CNAs and enabling IPv6 TSO support.
  • The ixgbe driver for Intel 10 Gigabit PCI Express network devices is updated to version 3.2.9-k2, adding support for FCoE and kcq2 support on the 57712 device.

[May 21, 2011] 6.1 Technical Notes


  • In some circumstances, disks that contain a whole disk format (e.g. a LVM Physical Volume populating a whole disk) are not cleared correctly using the clearpart --initlabel kickstart command. Adding the --all switch - as in clearpart --initlabel --all - ensures disks are cleared correctly.
  • ... ... ...
  • The anaconda partition editing interface includes a button labeled Resize. This feature is intended for users wishing to shrink an existing filesystem and underlying volume to make room for installation of the new system. Users performing manual partitioning cannot use the Resize button to change sizes of partitions as they create them. If you determine a partition needs to be larger than you initially created it, you must delete the first one in the partitioning editor and create a new one with the larger size.

[May 21, 2011] Red Hat Delivers Red Hat Enterprise Linux 6.1

RHEL 6.0 was pretty raw; hopefully they fixed the most glaring flaws.
May 19, 2011 | Red Hat

Red Hat, Inc. (NYSE: RHT) today announced the general availability of Red Hat Enterprise Linux 6.1, the first update to the platform since the delivery of Red Hat Enterprise Linux 6 in November 2010.
... ... ... ...

Red Hat Enterprise Linux 6.1 is already established as a performance leader serving both as a virtual machine guest and hypervisor host in SPECvirt benchmarks. Red Hat and HP recently announced that the combination of Red Hat Enterprise Linux with KVM running on a HP ProLiant BL620c G7 20-core Blade server delivered a record-setting SPECvirt_sc2010 benchmark result. Red Hat and IBM also recently announced that the companies submitted a benchmark to SPEC in which a combination of Red Hat Enterprise Linux, Red Hat Enterprise Virtualization and IBM systems delivered 45% better consolidation capability than competitors in performance tests conducted by Red Hat and IBM. See for details.

"Building on our decade-long partnership to optimize Red Hat Enterprise Linux for IBM platforms, our companies have collaborated closely on the development of Red Hat Enterprise Linux 6.1," said Jean Staten Healy, director, Cross-IBM Linux and Open Virtualization. "Red Hat Enterprise Linux 6.1 combined with IBM hardware capabilities offers our customers expanded flexibility, performance and scalability across their bare metal, virtualized and cloud environments. Our collaboration continues to drive innovation and leading results in the industry."

In addition to performance improvements, Red Hat Enterprise Linux 6.1 also provides numerous technology updates, including:

  • Additional configuration options for advanced storage configurations with improvements in FCoE, Datacenter Bridging and iSCSI offload, which allow networked storage to deliver the quality of service commonly associated with directly connected storage
  • Enhancements in virtualization, file systems, scheduler, resource management and high availability
  • New technologies that enable smoother enterprise deployments and tighter integration with heterogeneous systems
  • A technology preview of Red Hat Enterprise Identity (IPA) services, based on the open source FreeIPA project
  • Support for automatic failover for virtual machines and applications using the Red Hat High Availability Add-On
  • Integrated developer tools that provide the ability to write, debug, profile and deploy applications without leaving the graphical environment
  • Improvements to network traffic processing to take advantage of multi-processor servers, which are becoming increasingly common

[May 19, 2011] CentOS 6? by David Sumsky

Oracle Linux might be an alternative...

dsumsky lines

I'm a big fan of the CentOS project. I use it in production and I recommend it to others as an enterprise-ready Linux distro. I have to admit that I was quite disappointed by the behaviour of the project developers, who weren't able to tell the community why the upcoming releases were, and are, so overdue. I was used to downloading CentOS images one or two months after the corresponding RHEL release was announced. The situation changed with RHEL 5.6, which has been available since January 2011, while the corresponding CentOS release did not appear until April 2011 - about three months instead of the usual one or two. By the way, the main news in RHEL 5.6 is:

  • full support for EXT4 filesystem (included in previous releases as technical preview)
  • new version 9.7 of the BIND nameserver, supporting NSEC3 resource records in DNSSEC and new cryptographic algorithms in DNSSEC and TSIG
  • new version 5.3 of PHP language
  • SSSD daemon centralizing identity management and authentication
More details on RHEL 5.6 are officially available here.

A similar or perhaps worse situation surrounded the release date of CentOS 6. As you know, RHEL 6 has been available since November 2010. I considered CentOS 6 almost dead after I read about transitions to Scientific Linux or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule, people around CentOS seem to be working hard again, and CentOS 6 should be available at the end of May.

I hope the project will continue, as I don't know of a better alternative to RHEL (i.e. a RHEL clone) than CentOS. The question is how the whole, IMO unnecessary, situation will influence the reputation of the project.

[Nov 14, 2010] Red Hat releases RHEL 6

"Red Hat on Wednesday released version 6 of its Red Hat Enterprise Linux (RHEL) distribution. 'RHEL 6 is the culmination of 10 years of learning and partnering,' said Paul Cormier, Red Hat's president of products and technologies, in a webcast announcing the launch. Cormier positioned the OS both as a foundation for cloud deployments and a potential replacement for Windows Server. 'We want to drive Linux deeper into every single IT organization. It is a great product to erode the Microsoft Server ecosystem,' he said. Overall, RHEL 6 has more than 2,000 packages, and an 85 percent increase in the amount of code from the previous version, said Jim Totton, vice president of Red Hat's platform business unit. The company has added 1,800 features to the OS and resolved more than 14,000 bug issues."

5.6 Release Notes

Fourth Extended Filesystem (ext4) Support

The fourth extended filesystem (ext4) is now a fully supported feature in Red Hat Enterprise Linux 5.6. ext4 is based on the third extended filesystem (ext3) and features a number of improvements, including: support for larger file size and offset, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling.

To complement the addition of ext4 as a fully supported filesystem in Red Hat Enterprise Linux 5.6, the e4fsprogs package has been updated to the latest upstream version. e4fsprogs contains utilities to create, modify, verify, and correct the ext4 filesystem.
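As a hedged illustration (not taken from the release notes), the commands below sketch creating and checking an ext4 filesystem inside a file-backed image, so no spare block device or root access is needed; the image name is invented, and on RHEL 5.6 the corresponding utilities ship in the e4fsprogs package mentioned above:

```shell
# Create a small file-backed image to experiment on (name is arbitrary)
dd if=/dev/zero of=ext4-test.img bs=1M count=16 2>/dev/null

# Make an ext4 filesystem; -F skips the "not a block device" prompt
mkfs.ext4 -F -q ext4-test.img

# Verify the filesystem and list its feature flags
e2fsck -f -y ext4-test.img
dumpe2fs -h ext4-test.img | grep -i 'features'
```

On a real system the same commands run against a block device, such as an LVM logical volume, instead of a file.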

Logical Volume Manager (LVM)

Volume management creates a layer of abstraction over physical storage by creating logical storage volumes. This provides greater flexibility than using physical storage directly. Red Hat Enterprise Linux 5.6 manages logical volumes using the Logical Volume Manager (LVM). Further reading: the Logical Volume Manager Administration document describes the LVM logical volume manager, including information on running LVM in a clustered environment.

  • Mirrored mirror logs: LVM maintains a small log (on a separate device) which it uses to keep track of which regions are in sync with the mirror or mirrors. Red Hat Enterprise Linux 5.6 introduces the ability to mirror this log device.
  • Splitting a redundant image of a mirror: Red Hat Enterprise Linux 5.6 introduces the --splitmirrors argument of the lvconvert command, which splits off a redundant image of a mirrored logical volume to form a new logical volume.
  • Configuration: LVM in Red Hat Enterprise Linux 5.6 also provides additional configuration options for default data alignment and volume group metadata.

[Apr 20, 2009] Sun goes to Oracle for $7.4B

Oracle+Sun has the power to seriously harm IBM. Solaris still has the highest market share among proprietary Unixes, with AIX only third, after HP-UX. Wonder if Solaris will become Oracle's main development platform again. Oracle is a top contributor to Linux, and that might help to bridge the gap in shell and packaging. Telecommunications and database administrators have always preferred Solaris over Linux.
Yahoo! Finance

Oracle Corp. snapped up computer server and software maker Sun Microsystems Inc. for $7.4 billion Monday, trumping rival IBM Corp.'s attempt to buy one of Silicon Valley's best known -- and most troubled -- companies.

... ... ...

Jonathan Schwartz, Sun's CEO, predicted the combination will create a "systems and software powerhouse" that "redefines the industry, redrawing the boundaries that have frustrated the industry's ability to solve." Among other things, he predicted Oracle will be able to offer its customers simpler computing solutions at less expensive prices by drawing upon Sun's technology.

... ... ...

Yet Oracle says it can run Sun more efficiently. It expects the purchase to add at least 15 cents per share to its adjusted earnings in the first year after the deal closes. The company estimated Santa Clara, Calif.-based Sun will contribute more than $1.5 billion to Oracle's adjusted profit in the first year and more than $2 billion in the second year.

If Oracle can hit those targets, Sun would yield more profit than the combined contributions of three other major acquisitions -- PeopleSoft Inc., Siebel Systems Inc. and BEA Systems -- that cost Oracle a total of more than $25 billion.

A deal with Oracle might not be plagued by the same antitrust issues that could have loomed over IBM and Sun, since there is significantly less overlap between the two companies. Still, Oracle could be able to use Sun's products to enhance its own software.

Oracle's main business is database software. Sun's Solaris operating system is a leading platform for that software. The company also makes "middleware," which allows business computing applications to work together. Oracle's middleware is built on Sun's Java language and software.

Calling Java the "single most important software asset we have ever acquired," Ellison predicted it would eventually help make Oracle's middleware products generate as much revenue as its database line does.

Sun's takeover is a reminder that a few missteps and bad timing can cause a star to come crashing down.

Sun was founded in 1982 by men who would become legendary Silicon Valley figures: Andy Bechtolsheim, a graduate student whose computer "workstation" for the Stanford University Network (SUN) led to the company's first product; Bill Joy, whose work formed the basis for Sun's computer operating system; and Stanford MBAs Vinod Khosla and Scott McNealy.

Sun was a pioneer in the concept of networked computing, the idea that computers could do more when lots of them were linked together. Sun's computers took off at universities and in the government, and became part of the backbone of the early Internet. Then the 1990s boom made Sun a star. It claimed to put "the dot in dot-com," considered buying a struggling Apple Computer Inc. and saw its market value peak around $200 billion.

[Apr 17, 2009] Adobe Reader 9 released - Linux and Solaris x86

Tabbed viewing was added
Ashutosh Sharma

Adobe Reader 9.1 for Linux and Solaris x86 was released today. Solaris x86 support was one of the features most requested by users. As per the Reader team's announcement, this release includes the following major features:

- Support for Tabbed Viewing (preview)
- Super fast launch, and better performance than previous releases
- Integration with
- IPv6 support
- Enhanced support for PDF portfolios (preview)

The complete list is available here.

Adobe Reader 9.1 is now available for download and works on OpenSolaris, Solaris 10 and most modern Linux distributions such as Ubuntu 8.04, PCLinuxOS, Mandriva 2009, SLED 10, Mint Linux 6 and Fedora 10.

See also Sneak Preview of the Tabbed Viewing interface in Adobe Reader 9.x (on Ubuntu)

[Feb 22, 2009] 10 shortcuts to master bash (Builder AU) By Guest Contributor, TechRepublic


If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously executed commands by pressing Ctrl-R and then typing the first few letters of the command; bash will scan the command history for matching commands and display them on the console. Press Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    This creates an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, bash completes the common prefix, and pressing Tab again lists the possible completions.
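What Tab completion matches against can be inspected with bash's compgen builtin; the sketch below (with invented filenames, and not part of the original article) shows the candidates bash would consider:

```shell
# Set up a scratch directory with a few example files
mkdir -p /tmp/compdemo && cd /tmp/compdemo
touch report.txt report.old readme.md

# compgen -f lists the filename completions for a given prefix,
# i.e. roughly what bash offers when you press Tab after "rep"
compgen -f rep    # lists report.txt and report.old, but not readme.md
```

Note that compgen is a bash builtin, so this must be run under bash rather than a plain POSIX sh.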

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    This causes bash to print a notification on john's console every time a new message is appended to john's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
    8

  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

[Feb 3, 2009] Using The Red Hat Rescue Environment

LG #159
There are several different rescue CDs out there, and they all provide slightly different rescue environments. The requirement here at Red Hat Academy is, perhaps unsurprisingly, an intimate knowledge of how to use the Red Hat Enterprise Linux (RHEL) 5 boot CD.

All these procedures should work exactly the same way with Fedora and CentOS. As with any rescue environment, it provides a set of useful tools; it also allows you to configure your network interfaces. This can be helpful if you have an NFS install tree to mount, or if you have an RPM that was corrupted and needs to be replaced. There are LVM tools for manipulating Logical Volumes, "fdisk" for partitioning devices, and a number of other tools making up a small but capable toolkit.

The Red Hat rescue environment provided by the first CD or DVD can really come in handy in many situations. With it you can solve boot problems, bypass forgotten GRUB bootloader passwords, replace corrupted RPMs, and more. I will go over some of the most important and common issues. I also suggest reviewing a password recovery article written by Suramya Tomar that deals with recovering lost root passwords in a variety of ways for different distributions. I will not be covering that here, since his article is a very good resource for those problems.

Start by getting familiar with using GRUB and booting into single user mode. After you learn to overcome and repair a variety of boot problems, what initially appears to be a non-bootable system may be fully recoverable. The best way to get practice recovering non-bootable systems is by using a non-production machine or a virtual machine and trying out various scenarios. I used Michael Jang's book, "Red Hat Certified Engineer Linux Study Guide", to review non-booting scenarios and rehearse how to recover from various situations. I would highly recommend getting comfortable with recovering non-booting systems because dealing with them in real life without any practice beforehand can be very stressful. Many of these problems are really easy to fix but only if you have had previous experience and know the steps to take.

When you are troubleshooting a non-booting system, there are certain things that you should be on the alert for. For example, an error in /boot/grub/grub.conf, /etc/fstab, or /etc/inittab can cause the system to not boot properly; so can an overwritten boot sector. In going through the process of troubleshooting with the RHEL rescue environment, I'll point out some things that may be of help in these situations.

[Dec 24, 2008] Alan Cox and the End of an Era

ComputerworldUK blogs

And now, it seems, after ten years at the company, Cox is leaving Red Hat:

I will be departing Red Hat mid January having handed in my notice. I'm not going to be spending more time with the family, gardening or other such wonderous things. I'm leaving on good terms and strongly supporting the work Red Hat is doing.

I've been at Red Hat for ten years as contractor and employee and now have an opportunity to get even closer to the low level stuff that interests me most. Barring last minute glitches I shall be relocating to Intel (logically at least, physically I'm not going anywhere) and still be working on Linux and free software stuff.

I know some people will wonder what it means for Red Hat engineering. Red Hat has a solid, world class, engineering team and my departure will have no effect on their ability to deliver.

[Sep 11, 2008] The LXF Guide: 10 tips for lazy sysadmins (Linux Format)

A lazy sysadmin is a good sysadmin. Time spent in finding more-efficient shortcuts is time saved later on for that ongoing project of "reading the whole of the internet", so try Juliet Kemp's 10 handy tips to make your admin life easier...

  1. Cache your password with ssh-agent
  2. Speed up logins using Kerberos
  3. screen: detach to avoid repeat logins
  4. screen: connect multiple users
  5. Expand Bash's tab completion
  6. Automate your installations
  7. Roll out changes to multiple systems
  8. Automate Debian updates
  9. Sanely reboot a locked-up box
  10. Send commands to several PCs

[Sep 9, 2008] The Fedora-Red Hat Crisis by Bruce Byfield

September 9, 2008

A few weeks ago, when I wrote that, "forced to choose, the average FOSS-based business is going to choose business interests over FOSS [free and open source software] every time," many people, including Matthew Aslett and Matt Asay, politely accused me of being too cynical. Unhappily, you only have to look at the relations between Red Hat and Fedora, the distribution Red Hat sponsors, during the recent security crisis for evidence that I might be all too accurate.

That this evidence should come from Red Hat and Fedora is particularly dismaying. Until last month, most observers would have described the Red Hat-Fedora relationship as a model of how corporate and community interests could work together for mutual benefit.

Although Fedora was initially dismissed as Red Hat's beta release when it was first founded in 2003, in the last few years, it had developed laudatory open processes and become increasingly independent of Red Hat. As Max Spevack, the former chair of the Fedora Board, said in 2006, the Red Hat-Fedora relationship seemed a "good example of how to have a project that serves the interests of a company that also is valuable and gives value to community members."

Yet it seems that, faced with a problem, Red Hat moved to protect its corporate interests at the expense of Fedora's interests and expectations as a community -- and that Fedora leaders were as surprised by the response as the general community.

Outline of a crisis

What happened last month is still unclear. My request a couple of weeks ago to discuss events with Paul W. Frields, the current Fedora Chair, was answered by a Red Hat publicist, who told me that the official statements on the crisis were all that any one at Red Hat or Fedora was prepared to say in public -- a response so stereotypically corporate in its caution that it only emphasizes the conflict of interests.

However, the Fedora announcements mailing list gave the essentials. On August 14, Frields sent out a notice that Fedora was "currently investigating an issue in the infrastructure systems." He warned that the entire Fedora site might become temporarily unavailable and warned that users should "not download or update any additional packages on your Fedora systems." As might be expected, the cryptic nature of this corporate-sounding announcement caused considerable curiosity, both within and without Fedora, with most people wanting to know more.

A day later, Frields' name was on another notice, saying that the situation was continuing, and pleading for Fedora users to be patient. A third notice followed on August 19, announcing that some Fedora services were now available, and providing the first real clue to what was happening when a new SSH fingerprint was released.

It was only on August 22 that Frields was permitted to announce that, "Last week we discovered that some Fedora servers were illegally accessed. The intrusion into the servers was quickly discovered, and the servers were taken offline... One of the compromised Fedora servers was a system used for signing Fedora packages. However, based on our efforts, we have high confidence that the intruder was not able to capture the passphrase used to secure the Fedora package signing key."

Since then, plans for changing security keys have been announced. However, as of September 8, the crisis continues, with Fedora users still unable to get security updates or bug-fixes. Three weeks without these services might seem trivial to Windows users, but for Fedora users, like those of other GNU/Linux distributions, many of whom are used to daily updates to their systems, the crisis amounts to a major disruption of service.

A conflict of cultures

From a corporate viewpoint, Red Hat's close-lipped reaction to the crisis is understandable. Like any company based on free and open source software, Red Hat derives its income from delivering services to customers, and obviously its ability to deliver services is handicapped (if not completely curtailed) when its servers are compromised. Under these circumstances, the company's wish to proceed cautiously and with as little publicity as possible is perfectly natural.

The problem is that, in moving to defend its own credibility, Red Hat has neglected Fedora's. While secrecy about the crisis may be second nature to Red Hat's legal counsel, the FOSS community expects openness.

In this respect, Red Hat's handling of the crisis could not contrast more strongly with the reaction of the community-based Debian distribution when a major security flaw was discovered in its openssl package last May. In keeping with Debian's policy of openness, the first public announcement followed hard on the discovery, and included an explanation of the scope, what users could do, and the sites where users could find tools and instructions for protecting themselves.

[Aug 23, 2008] OpenSSH blacklist script

That's sad -- some of Red Hat's servers were compromised and some trojanized OpenSSH packages got signed.
August 22, 2008

Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action. While the investigation into the intrusion is on-going, our initial focus was to review and test the distribution channel we use with our customers, Red Hat Network (RHN) and its associated security measures. Based on these efforts, we remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk. We are issuing this alert primarily for those who may obtain Red Hat binary packages via channels other than those of official Red Hat subscribers.

In connection with the incident, the intruder was able to get a small number of OpenSSH packages relating only to Red Hat Enterprise Linux 4 (i386 and x86_64 architectures only) and Red Hat Enterprise Linux 5 (x86_64 architecture only) signed. As a precautionary measure, we are releasing an updated version of these packages and have published a list of the tampered packages and how to detect them.

To reiterate, our processes and efforts to date indicate that packages obtained by Red Hat Enterprise Linux subscribers via Red Hat Network are not at risk.

We have provided a shell script which lists the affected packages and can verify that none of them are installed on a system:

The script has a detached GPG signature from the Red Hat Security Response Team so that you can verify its integrity:

This script can be executed either as a non-root user or as root. To execute the script after downloading it and saving it to your system, run the command:

                                                 bash ./

If the script output includes any lines beginning with "ALERT" then a tampered package has been installed on the system. Otherwise, if no tampered packages were found, the script should produce only a single line of output beginning with the word "PASS", as shown below:

                                                 bash ./
   PASS: no suspect packages were found on this system

The script can also check a set of packages by passing it a list of source or binary RPM filenames. In this mode, a "PASS" or "ALERT" line will be printed for each filename passed; for example:

                                                 bash ./ openssh-4.3p2-16.el5.i386.rpm
   PASS: signature of package "openssh-4.3p2-16.el5.i386.rpm" not on blacklist

Red Hat customers who discover any tampered packages, need help with running this script, or have any questions should log into the Red Hat support website and file a support ticket, call their local support center, or contact their Technical Account Manager.
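Red Hat's actual script checks RPM package signatures against a blacklist and is not reproduced here; purely as an illustration of the PASS/ALERT idea it describes, the hypothetical sketch below flags files whose SHA-256 digest appears in a known-bad list (the filenames, paths, and blacklist contents are all invented for the example):

```shell
#!/bin/bash
# Illustrative blacklist check: flag any file whose SHA-256 digest
# appears in a known-bad list. NOT the real Red Hat script.

blacklist=/tmp/bad-digests.txt

check_file() {
    local f=$1
    local digest
    digest=$(sha256sum "$f" | awk '{print $1}')
    if grep -q "$digest" "$blacklist"; then
        echo "ALERT: \"$f\" matches the blacklist"
    else
        echo "PASS: \"$f\" not on blacklist"
    fi
}

# Example run with two invented files
echo "tampered contents" > /tmp/bad.rpm
echo "clean contents"    > /tmp/good.rpm
sha256sum /tmp/bad.rpm | awk '{print $1}' > "$blacklist"

check_file /tmp/bad.rpm    # prints an ALERT line
check_file /tmp/good.rpm   # prints a PASS line
```

The real check operates on package GPG signatures rather than whole-file checksums, but the PASS/ALERT reporting pattern is the same.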

[Aug 7, 2008] rsyslog 2.0.6 (v2 Stable) by Rainer Gerhards

This is the new syslog daemon used by RHEL.

About: Rsyslog is an enhanced multi-threaded syslogd. Among other features, it offers support for on-demand disk buffering; reliable syslog over TCP, SSL, TLS, and RELP; writing to databases (MySQL, PostgreSQL, Oracle, and many more); email alerting; fully configurable output formats (including high-precision timestamps); the ability to filter on any part of the syslog message; on-the-wire message compression; and the ability to convert text files to syslog messages. It is a drop-in replacement for the stock syslogd and is able to work with the same configuration file syntax.

Changes: IPv6 addresses could not be specified in forwarding actions, because they contain colons and the colon character was already used for some other purpose. IPv6 addresses can now be specified inside of square brackets. This is a recommended update for all v2-stable branch users.
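With that change, a forwarding rule to an IPv6 destination can carry the address in square brackets. The fragment below is only a sketch (the address and port are invented), assuming the classic rsyslog.conf forwarding syntax where @@ means TCP:

```
# /etc/rsyslog.conf fragment: forward all messages over TCP (@@)
# to an IPv6 host; the brackets keep the colons in the address
# from being read as the port separator
*.* @@[2001:db8::10]:514
```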

[Mar 26, 2008] Oracle Expands Its Linux Base by Sean Michael Kerner


Oracle claims that it continues to pick up users for its Linux offering and now is set to add new clustering capabilities to the mix.

So how is Oracle doing with its Oracle Unbreakable Linux? Pretty well. According to Monica Kumar, senior director of Linux and open source product marketing at Oracle, there are now 2,000 customers for Oracle's Linux. Those customers will now be getting a bonus from Oracle: free clustering software.

Oracle's Clusterware software previously had only been available to Oracle's Real Application Clusters (RAC) customers, but now will also be part of the Unbreakable Linux support offering at no additional cost.

Clusterware is the core Oracle (NASDAQ: ORCL) software offering that enables the grouping of individual servers into a cluster system. Kumar explained that the full RAC offering provides additional components beyond Clusterware that are useful for managing and deploying Oracle databases on clusters.

The new offering for Linux users, however, does not necessarily replace the need for RAC.

"We're not saying that this [Clusterware] replaces RAC," Kumar noted. "We are taking it out of RAC for other general purpose uses as well. Clusterware is general purpose software that is part of RAC but that isn't the full solution."

The Clusterware addition to the Oracle Unbreakable Linux support offering is expected by Kumar to add further impetus for users to adopt Oracle's Linux support program.

Oracle Unbreakable Linux was first announced in October 2006 and takes Red Hat's Enterprise Linux as a base. To date, Red Hat has steadfastly denied on its quarterly investor calls that Oracle's Linux offering has had any tangible impact on its customer base.

In 2007, Oracle and Red Hat both publicly traded barbs over Yahoo, which apparently is a customer of both Oracle's Unbreakable Linux as well as Red Hat Enterprise Linux.

"We can't comment on them [Red Hat] and what they're saying," Kumar said. "I can tell you that we're seeing a large number of Oracle customers who were running on Linux before coming to Unbreakable Linux. It's difficult to say if they're moving all of their Linux servers to Oracle or not."

That said, Kumar added that Linux customers are coming to Oracle for more than just running Oracle on Linux; they're also coming with other application loads as well.

"Since there are no migration issues we do see a lot of RHEL [Red Hat Enterprise Linux] customers because it's easy for them to transition," Kumar claimed.

Ever since Oracle's Linux first appeared, Oracle has claimed that it was fully compatible with RHEL and it's a claim that Kumar reiterated.

"In the beginning, people had questions about how does compatibility work, but we have been able to address all those questions," Kumar said. "In the least 15 months, Oracle has proved that we're fully compatible and that we're not here to fork Linux but to make it stronger."

[Feb 26, 2008] Role-based access control in SELinux

Learn how to work with RBAC in SELinux, and see how the SELinux policy, kernel, and userspace work together to enforce the RBAC and tie users to a type enforcement policy.

[Jan 24, 2008] Project details for cgipaf

The package also contains a Solaris binary of a chpasswd clone, which is extremely useful for mass password changes in mixed corporate environments which, along with Linux and AIX (both have native chpasswd implementations), include Solaris or other Unixes that do not have the chpasswd utility (HP-UX is another example in this category). Version 1.3.2 includes a Solaris binary of chpasswd which works on Solaris 9 and 10.

cgipaf is a combination of three CGI programs.

  • passwd.cgi, which allows users to update their password,
  • viewmailcfg.cgi, which allows users to view their current mail configuration,
  • mailcfg.cgi, which updates the mail configuration.

All programs use PAM for user authentication. It is possible to run a script to update SAMBA passwords or NIS configuration when a password is changed. mailcfg.cgi creates a .procmailrc in the user's home directory. A user with too many invalid logins can be locked. The minimum and maximum UID can be set in the configuration file, so you can specify a range of UIDs that are allowed to use cgipaf.

[Dec 21, 2007] LXER interview with John Hull - the manager of the Dell Linux engineering team

The original sales estimate for Ubuntu computers was around 1% of total sales, or about 20,000 systems annually. Have the expectations been met so far? Will Dell ever release sales figures for Ubuntu systems?

The program so far is meeting expectations. Customers are certainly showing their interest and buying systems preloaded with Ubuntu, but it certainly won't overtake Microsoft Windows anytime soon. Dell has a policy not to release sales numbers, so I don't expect us to make Ubuntu sales figures available publicly.

[Dec 21, 2007] Red Hat to get new CEO from Delta Air Lines Underexposed

"When you take them out of the big buildings, without the imprimatur of Hewlett-Packard, IBM and Oracle, or HP around them, they just didn't hold up."

Szulik, who took over as CEO from Bob Young in 1999, just a few months after the company's initial public offering, said he's stepping down because of family health issues.

"For the last nine months, I've struggled with health issues in my family," and that priority couldn't be balanced with work, Szulik said in an interview. "This job requires a 7x24, 110 percent commitment."

Szulik, who remains chairman of the board, praised Whitehurst in a statement, saying he's a "hands-on guy who will be a strong cultural fit at Red Hat" and "a talented executive who has successfully led a global technology-focused organization at Delta."

On a conference call, Szulik said Whitehurst stood "head and shoulders" above other candidates interviewed in a recruiting process. He was a programmer earlier in his career and runs four versions of Linux at home, he said.

Moreover, Szulik said he wasn't satisfied with more traditional tech executives who were interviewed.

"What we encountered was in many cases was a lack of understanding of open-source software development and of our model," he said. During the interview, he added about the tech industry candidates, "When you take them out of the big buildings, without the imprimatur of Hewlett-Packard, IBM and Oracle, or HP around them, they just didn't hold up."

The surprise move was announced as the leading Linux seller announced results for its third quarter of fiscal 2008. Its revenue increased 28 percent to $135.4 million and net income went up 12 percent to $20.3 million, or 10 cents per share. The company also raised estimates for full-year results to revenue of $521 million to $523 million and earnings of about 70 cents per share.

[Oct 29, 2007] Oracle's Linux: Unbreakable, Or Just A Necessary Adjustment?


.. In fact, Coekaerts has to say this often because Oracle is widely viewed as an opportunistic supporter of Linux, taking Red Hat's product, stripping out its trademarks, and offering it as its own. Coekaerts says what's more important is that Oracle is a contributor to Linux. It contributed the cluster file system and hasn't really generated a competing distribution.

Yet, in some cases, there is an Oracle distribution. Most customers Coekaerts deals with get their Linux from Red Hat and then ask for Oracle's technical support in connection with the Oracle database. But Oracle has been asked often enough to supply Linux with its applications or database that it makes available a version of Red Hat Enterprise Linux, with the Red Hat logos and labels stripped out. Oracle's version of Linux has a "cute" penguin inserted and is optimized to work with Oracle database applications. It may also have a few Oracle-added "bug fixes," Coekaerts says.

The bug fixes, however, lead to confusion about Coekaerts' relatively simple formulation: Oracle offers enterprise support, not an Oracle fork. And that confusion stems from Oracle CEO Larry Ellison's attention-getting way of introducing Unbreakable Linux at the October 2006 Oracle OpenWorld.

When enterprise customers call with a problem, Oracle's technical support finds the problem and supplies a fix. If it's a change in the Linux kernel, the customer would normally have to wait for the fix to be submitted to kernel maintainers for review, get merged into the kernel, and then get included in an updated version of an enterprise edition from Red Hat or Novell. Such a process can take up to two years, observers inside and outside the kernel process say.

The pace of bug fixes "is the most serious problem facing the Linux community today," Ellison explained during an Oracle OpenWorld keynote a year ago.

When Oracle's Linux technical support team has a fix, it gives that fix to the customer without waiting for Red Hat's uptake or the kernel process itself, Ellison said.

Red Hat's Berman argues that when it comes to the size of the problem, Oracle makes too much of too little.

When Red Hat learns of bugs, it retrofits the fixes into its current and older versions of Red Hat Enterprise Linux. That's one of Red Hat's main engineering investments in Linux, Berman said in an interview.

Coekaerts responds, "There are disagreements on what is considered critical by the distribution vendors and us or our customers."

Berman acknowledges that several judgment calls are involved. Some bugs affect only a few enterprise customers. They may apply to an old RHEL version. "Three or four times a year" a proposed fix may not be deemed important enough to undergo this retrofit, he says.

But Coekaerts told InformationWeek: "Oracle customers encounter this problem more than three or four times a year. I cannot give a number, it tends to vary. But it does happen rather frequently."

Berman counters that when Oracle changes Red Hat's tested code with its own bug fixes, it breaks the certification that Red Hat offers on its distribution, so it's no longer guaranteed to work with other software. "Oracle claims they will patch things for a customer. That's a fork," he says.

What Red Hat calls a fork is what Oracle calls a "one-off fix to customers at the time of the problem. If the customer runs version 5 but Red Hat is at version 8, and the customer runs into a bug, does he want to go to [the next release with a fix,] version 9? Likely not. He wants to minimize the amount of change. Oracle will fix the customer's problem in version 5," Coekaerts says.

I think it's fair to characterize what Oracle does as technical support, not a fork. There's no attempt to sustain the aberration through a succession of Linux kernels offered to the general public as an alternative to the mainstream kernel.

But the Oracle/Red Hat debate defines a gray area in a fast-moving kernel development process. Bugs that affect many users get addressed through the kernel process or the Red Hat and Novell (NSDQ: NOVL) retrofits. That still may not always cover a problem for an individual user or a set of users sitting on a particular piece of aging hardware or caught in a specific hardware/software configuration.

If Oracle fixes some of these problems, I say more power to it.

But if they are problems that are isolated in nature or limited in scope, as I suspect they are, that makes them something less than Ellison's "most serious problem facing the Linux community today."

Ellison needed air cover to take Red Hat's product and do what he wanted with it. In the long run, he's probably increasing the use of Linux in the enterprise and keeping Red Hat on its toes as a support organization. That's less benefit than claimed, but still something.

[Oct 23, 2007] Oracle makes Yast (Yet Another Setup Tool) part of its distribution

Oracle Enterprise Linux became more compatible with Suse

Yet Another Setup Tool. Yast helps make system administration easier by providing a single utility for configuring and maintaining Linux systems. The version of Yast available here is modified to work with Enterprise Linux distributions, including Oracle Enterprise Linux and SuSE.


[Oct 23, 2007] UK Unix group newsletter

Oracle hasn't "talked about how our Linux is better than anyone else's Linux. Oracle has not forked and has no desire to fork Red Hat Enterprise Linux and maintain its own version. We don't differentiate on the distribution because we use source code provided by Red Hat to produce Oracle Enterprise Linux and errata. We don't care whether you run Red Hat Enterprise Linux or Enterprise Linux from Oracle and we'll support you in either case because the two are fully binary- and source-compatible. Instead, we focus on the nature and the quality of our support and the way we test Linux using real-world test cases and workloads."


data=writeback

While the writeback option provides lower data consistency guarantees than the journal or ordered modes, some applications show very significant speed improvements when it is used. For example, speed improvements can be seen when heavy synchronous writes are performed, or when applications create and delete large volumes of small files, such as delivering a large flow of short email messages. The results of the testing effort described in Chapter 3 illustrate this topic.

When the writeback option is used, data consistency is similar to that provided by the ext2 file system. However, file system integrity is maintained continuously during normal operation in the ext3 file system.

In the event of a power failure or system crash, the file system may not be recoverable if a significant portion of data was held only in system memory and not on permanent storage. In this case, the file system must be recreated from backups, and changes made since the last backup are lost.
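As a sketch of how the option is applied (the device name and mount point below are illustrative, not from the article): the journaling mode is chosen at mount time, in /etc/fstab or via tune2fs, since it generally cannot be changed on a live remount.

```shell
# /etc/fstab entry selecting writeback journaling for a mail spool
# (illustrative device and mount point; adjust for your system)
/dev/sdb1   /var/spool/mail   ext3   defaults,data=writeback   1 2

# Equivalent one-off mount:
#   mount -t ext3 -o data=writeback /dev/sdb1 /var/spool/mail

# Or bake the mode into the filesystem's default mount options:
#   tune2fs -o journal_data_writeback /dev/sdb1
```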

[Aug 7, 2007] Linux Replacing atime

August 7, 2007 | KernelTrap

Submitted by Jeremy on August 7, 2007 - 9:26am.

In a recent lkml thread, Linus Torvalds was involved in a discussion about mounting filesystems with the noatime option for better performance, "'noatime,data=writeback' will quite likely be *quite* noticeable (with different effects for different loads), but almost nobody actually runs that way."

He noted that he set O_NOATIME when writing git, "and it was an absolutely huge time-saver for the case of not having 'noatime' in the mount options. Certainly more than your estimated 10% under some loads."

The discussion then looked at using the relatime mount option to improve the situation, "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified."

Ingo Molnar stressed the significance of fixing this performance issue, "I cannot over-emphasize how much of a deal it is in practice. Atime updates are by far the biggest IO performance deficiency that Linux has today. Getting rid of atime updates would give us more everyday Linux performance than all the pagecache speedups of the past 10 years, _combined_." He submitted some patches to improve relatime, and noted about atime:

"It's also perhaps the most stupid Unix design idea of all times. Unix is really nice and well done, but think about this a bit: 'For every file that is read from the disk, lets do a ... write to the disk! And, for every file that is already cached and which we read from the cache ... do a write to the disk!'"
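A hedged sketch of the mount options under discussion (the device and mount point are illustrative, and relatime requires a kernel that supports it):

```shell
# /etc/fstab entries for the two approaches discussed above

# suppress atime updates entirely:
/dev/sda3   /home   ext3   defaults,noatime   1 2

# or update atime only when it is older than mtime/ctime:
# /dev/sda3   /home   ext3   defaults,relatime   1 2

# A one-off change without editing fstab:
#   mount -o remount,noatime /home
```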

[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

Jul 31, 2007

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves the hours that common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX or other operating systems:

Suppose you have logins on several UNIX or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long-probably only a minute, in most cases. And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that takes over precisely this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and you spare yourself multiple opportunities to mistype something you've already entered.

The limits of Expect

This passmass application is an excellent model-it illustrates many of Expect's general properties:

  • It's a great return on investment: The utility is already written, freely downloadable, easy to install and use, and saves time and effort.
  • Its contribution is "superficial," in some sense. If everything were "by the book"-if you had NIS or some other domain authentication or single sign-on system in place-or even if login could be scripted, there'd be no need for passmass. The world isn't polished that way, though, and Expect is very handy for grabbing on to all sorts of sharp edges that remain. Maybe Expect will help you create enough free time to rationalize your configuration so that you no longer need Expect. In the meantime, take advantage of it.
  • As distributed, passmass only logs in by way of telnet, rlogin, or slogin. I hope all current developerWorks readers have abandoned these protocols for ssh, which passmass does not fully support.
  • On the other hand, almost everything having to do with Expect is clearly written and freely available. It only takes three simple lines (at most) to enhance passmass to respect ssh and other options.

You probably know enough already to begin to write or modify your own Expect tools. As it turns out, the passmass distribution actually includes code to log in by means of ssh, but omits the command-line parsing to reach that code. Here's one way you might modify the distribution source to put ssh on the same footing as telnet and the other protocols:
Listing 1. Modified passmass fragment that accepts the -ssh argument

} "-rlogin" {
    set login "rlogin"
} "-slogin" {
    set login "slogin"
} "-ssh" {
    set login "ssh"
} "-telnet" {
    set login "telnet"

In my own code, I actually factor out more of this "boilerplate." For now, though, this cascade of tests, in the vicinity of line #100 of passmass, gives a good idea of Expect's readability. There's no deep programming here - no need for object orientation, monadic application, co-routines, or other subtleties. You just ask the computer to take over the typing you usually do for yourself. As it happens, this small step represents many minutes or hours of human effort saved.

[Jul 30, 2007] Due to problems under high load, in the Linux 2.6.23 kernel the process scheduler has been completely ripped out and replaced with a new one, the Completely Fair Scheduler (CFS), modeled after the Solaris 10 scheduler.

This will not affect the current Linux distributions (Suse 9 and 10, RHEL 4.x), as they forked the kernel and essentially develop it as a separate tree.

But it will affect any future Red Hat or Suse distribution (RHEL 6 and Suse 11, respectively).

How it will fare in comparison with the Solaris 10 scheduler remains to be seen:

The main idea of CFS's design can be summed up in a single sentence: CFS basically models an "ideal, precise multi-tasking CPU" on real hardware.

An "ideal multi-tasking CPU" is a (non-existent) CPU that has 100% physical power and can run each task at precisely equal speed, in parallel, each at 1/n running speed. For example: if there are 2 tasks running, it runs each at exactly 50% speed.

[Apr 10, 2007] Here come the RHEL 5 clones

Of course, if you go with a cloned RHEL, while you get the code goodies, you don't get Red Hat's support. Various Red Hat clone distributions, such as StartCom AS-5, CentOS, and White Box Enterprise Linux, are built from Red Hat's source code, which is freely available at the Raleigh, NC company's FTP site. The "cloned" versions alter or otherwise remove non-free packages within the RHEL distribution, or non-redistributable bits such as the Red Hat logo.

StartCom Enterprise Linux AS-5 is specifically positioned as a low-cost, server alternative to RHEL 5. This is typical of the RHEL clones.

These distributions, which usually don't offer support options, are meant for expert Linux users who want Red Hat's Linux distribution, but don't feel the need for Red Hat's support.

[Apr 10, 2007] Red Hat Enterprise Linux 5 Some Assembly Required

With RHEL 5, Red Hat has shuffled its SKUs around a bit-what had previously been the entry-level ES server version is now just called Red Hat Enterprise Linux. This version is limited to two CPU sockets, and is priced, per year, at $349 for a basic support plan, $799 for a standard support plan and $1,299 for a premium support plan.

This version comes with an allowance for running up to four guest instances of RHEL. You can run more than that, as well as other operating systems, but only four get updates from, and may be managed through, RHN (Red Hat Network). We thought it was interesting how RHN recognized the difference between guests and hosts on its own and tracked our entitlements accordingly.

What had been the higher-end, AS version of RHEL is now called Red Hat Enterprise Linux Advanced Platform. This version lacks arbitrary hardware limitations and allows for an unlimited number of RHEL guest instances per host. RHEL's Advanced Platform edition is priced, per year, at $1,499 with a standard support plan and $2,499 with a premium plan.

[Mar 23, 2007] Using YUM in RHEL5 for RPM systems

There is more to Red Hat Enterprise Linux 5 (RHEL5) than Xen. I, for one, think people will develop a real taste for YUM (Yellow dog Updater Modified), an automatic update and package installer/remover for RPM systems.

YUM has already been used in the last few Fedora Core releases, but RHEL4 uses the up2date package manager. RHEL5 will use YUM 3.0, with up2date acting as a wrapper around YUM. Third-party code repositories (prepared directories or websites that contain software packages and index files) will also work with the Anaconda-YUM combination.

... ... ...

Using YUM makes it much easier to maintain groups of machines without having to manually update each one using RPM. Some of its features include:

  • Multiple repositories
  • Simple config file
  • Correct dependency calculation
  • Fast operation
  • RPM-consistent behavior
  • comps.xml group support, including multiple repository groups
  • Simple interface

RHEL5 moves the entire stack of tools which install and update software to YUM. This includes everything from the initial install (through Anaconda) to host-based software management tools, like system-config-packages, to even the updating of your system via Red Hat Network (RHN). New functionality will include the ability to use a YUM repository to supplement the packages provided with your in-house software, as well as plugins to provide additional behavior tweaks.

YUM automatically locates and obtains the correct RPM packages from repositories. It frees you from having to manually find and install new applications or updates. You can use one single command to update all system software, or search for new software by specifying criteria.
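For instance, on an RHEL5 system the routine package chores reduce to one-line commands. These require root and configured repositories; the package and group names below are illustrative:

```shell
# apply all available updates to the system in one command
yum update

# install or remove a single package; dependencies are resolved automatically
yum install httpd
yum remove httpd

# search repository metadata by keyword
yum search mail

# install an entire comps.xml group
yum groupinstall "Web Server"
```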

[Dec 7, 2006] Survey Finds Red Hat Customers Willing To Stay With Company if it Cuts Prices

(SeekingAlpha) Eric Savitz submits: Red Hat customers are mulling their options. But they can be bought.

That's one of the takeaways from a fascinating report today from Pacific Crest's Brendan Barnicle, based on a survey he did of 118 enterprise operating system buyers, including 86 Red Hat support customers. The goal of the survey was to see how Linux users are responding to the new offerings from Oracle and from the Microsoft (MSFT)/Novell (NOVL) partnership.

Reading the results of the study, you reach several conclusions. One, most customers are seriously considering the new offerings. Two, Red Hat can hold on to most of them, if they are willing to cut prices far enough. And three, customers seem a little more interested in the Microsoft/Novell offerings than those from Oracle.

Here are a few details:

  • Asked whether they would consider switching from their current Linux support provider to Oracle, 26% said they definitely would not; 29% said they definitely would consider it. For Microsoft/Novell, 17% would definitely not consider switching, 27% definitely would consider it.
  • Asked whom they would choose as a provider if they were to switch Linux support, 29% of Red Hat customers named Microsoft/Novell; 20% named Oracle.
  • The survey asked, what price discount would your current provider have to offer to keep you as a customer? Among Red Hat customers, 31% said they would need a discount of 50%-74%; 37% said they would want a discount of 25%-49%; 27% said they would stay for a discount of 1%-24%.
  • The survey asked, how important would a discount be in order to keep you as a customer? Among Red Hat customers, 64% said "very important." Just 3% said "not at all important."

[Dec 1, 2006] Red Hat From 'Cuddly Penguin' to Public Enemy No. 1

We have suffered from that image in the past. And some of our competitors have played up the fact that the JBoss guys are behaving like a sect. When, in fact, if you look at the composition of our community, we have an order of magnitude more committers than our direct open-source competitors.

But the perception is still there. Bull even said something about that perception. And we'd been thinking about opening up the governance. So when Bull provided us with a great case study, we decided to put the pedal to the metal. But make no mistake, this is not going to be a free-for-all. We care a lot about the quality of what gets committed. We invest very heavily in all our projects. We're serious about this, so we expect the same level of seriousness from our collaborators.

There is going to be a hybrid model where there is an opening up of the governance. In terms of code contributions it's always been there. But now it's been made explicit instead of implicit and open to attacks of "closedness." JBoss has always been an open community, but we've hired most of our primary committers.

Well, you seem more willing to compromise and evolve your stance on things. Like SCA [Service Component Architecture]-initially you were against it, but it seems like you've changed your mind.

Well, yeah, the specific SCA stance today is there is no reason for us to be for or against it. If it plays out in the market, we'll support it. And I think Mark Little [a JBoss core developer] said it very well that the ESB implementations usually outlive standards.

So what you're seeing from us is mostly due to Mark Little's influence. Mark has been around in the standards arena and has seen all these standards come and go. So it's not about the standards, it's about our implementation in support of all these standards. And it's not our place to be waging a standards war. It's our place to implement and let the market decide and we'll follow the market.

So where I'll agree with you is that it's less of a dogmatic position in terms of perceived competition and more focus on what we do well, which is implementations.

Another thing is JBoss four years ago was very much Marc Fleury and the competitive stance against Sun and things like that. Today I don't do anything. In fact, I actively stay out in terms of not getting in the way of my guys.

So it's both a sign of maturity and of a more diverse organization. I'm representing more than leading the technical direction these days. And that's a very good thing.

You said you approached David Heinemeier Hansson, the creator of Ruby on Rails, to work at JBoss. What other types of developers are you interested in hiring?

Yeah, we did approach him. There is a lot of talent around the Web framework. One of the problems is it's a very fragmented community at a personal level. You have one guy and his framework. Though, this is not the case with Ruby on Rails. But there's a lot of innovation that's going on that would benefit from unification under a bigger distribution umbrella and bigger R&D umbrella. And I think JBoss/Red Hat is in a position to offer that. So we're always talking about new guys.

One of the things I like to do is talk to the core developers and say, "Where are you in terms of recruitment?" And we're talking to scripting guys. I think scripting is the next frontier as [Ruby on Rails] has showed. We have a unique opportunity of bringing under one big branded umbrella a diverse group of folks that today are doing excellent work, be it the scripting crowd, REST, Web framework, or the Faces, or the guys integrating with Seam. All of the work we're doing is going to take more people and we're always on the lookout for the right talent and the right fit.

[Sep 14, 2005] Dr. Dobb's Red Hat Releases Enterprise Linux 5 Beta September 13

... The Red Hat Enterprise Linux 5 Beta 1 release contains virtualization on the i386 and x86_64 architectures as well as a technology preview for IA64.

... ... ...

Aside from Xen, Red Hat Enterprise Linux 5 Beta 1 features AutoFS and iSCSI network storage support, smart card integration, SELinux security, clustering and a cluster file system, Infiniband and RDMA support, and Kexec and Kdump, which replace the current Diskdump and Netdump. Beta 1 also incorporates improvements to the installation process, analysis and development tools SystemTap and Frysk, a new driver model and enablers for stateless Linux.

Linux Client Migration Cookbook A Practical Planning and Implementation Guide for Migrating to Desktop Linux

IBM Redbooks

The goal of this IBM Redbook is to provide a technical planning reference for IT organizations large or small that are now considering a migration to Linux-based personal computers. For Linux, there is a tremendous amount of "how to" information available online that addresses specific and very technical operating system configuration issues, platform-specific installation methods, user interface customizations, etc. This book includes some technical "how to" as well, but the overall focus of the content in this book is to walk the reader through some of the important considerations and planning issues you could encounter during a migration project. Within the context of a pre-existing Microsoft Windows-based environment, we attempt to present a more holistic, end-to-end view of the technical challenges and methods necessary to complete a successful migration to Linux-based clients.

[Jun 24, 2004] Open Source Blog: Open Sourcery by Blane Warrene

I recently spent some time speaking with a popular Yankee Group analyst who covers the enterprise sector in the US, focusing in on open source and where the movement may go in the next few years.

Just to be clear, I differentiate, as most industry watchers do, between Linux and open source. While Linux is open source, the primary Linux distributors have caught on to how they need to position themselves for success and are starting to run their businesses just as any proprietary software company does.

Red Hat and SUSE are prime examples: realizing that the path to long-term success and revenue streams resided in proving themselves enterprise-worthy to larger businesses and institutions, they have shifted business models or been acquired by organizations with roots in the enterprise.

Her views, while not always popular in the open source community, are right on point if open source seeks widespread adoption and a permanent seat at the table for longer term financial success.

There are a few obstacles open source proponents need to accept and move forward on:

  1. It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. While I have not always agreed that the costs are higher, having migrated corporate systems to Linux in the past, their research showed it to be true in many cases -- especially when migrating beyond standard web hosting and email systems. The costs are higher when factoring in re-certifying drivers, application integrity and training.
  2. To truly become entrenched as a viable financially-rewarding option (meaning open source companies make money and create jobs), a shift toward commercial software models is necessary. This does not mean forgoing open source, however, what it does mean is developing a structure for development, distribution, patching and support that passes muster with corporate IT managers who could be investing substantial amounts of money in open source.

What it boils down to is that while open source has definitely revolutionized software, and it is found internationally in companies large and small, businesses still pick software because it provides a solution not just because it is open source.

The fact that it is cheaper or free simply means the user will save money, but this does not win the favor of those buyers who could be injecting millions into open source projects rather than proprietary software makers.

I would use Firebird as a model. In an interview forthcoming in my July column on SitePoint, Helen Borrie noted that the fact that many Fortune 500 companies are using an open source database like Firebird speaks volumes about the maturing of their project and of open source at large.

The reason, as I see it, is that Firebird is treated like an enterprise-scale proprietary software project. It has a well-managed developer community and active support lists, commercial offerings for support through partnerships with several companies, and commercial development projects for corporate clients.

If more open source projects looked at Borrie's team model and discipline in development and support, we just might see more penetration that attracts longer and more profitable contracts and work for those like us in the SitePoint community.

Selected Comments


It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. You mean relative to staying with Windows? Does this include recurring costs of Windows licensing / upgrades?

The costs are higher when factoring in re-certifying drivers, application integrity and training.

On the drivers front, that assumes (if we're saying Linux cf. Windows) that systems need upgrades as frequently. There's generally less need to keep upgrading Linux, when used as a server.

Re application integrity, I think that's very hard to research accurately - kind of a woolly claim that needs qualification.

On the training side, it's an interesting area where it's kind of like comparing Apples with Pears.

Windows generally hides administrators from much of what's really happening, so it's probably easier to train someone to the point where they're feeling confident but given serious problems, who do you turn to?

*Nix effectively exposes administrators to everything so more time is required to reach the point where sysadmins are confident. Once they reach that point though, they're typically capable of handling anything. The result is stable systems. I'd also argue that a single *Nix sysadmin is capable of maintaining a greater number of systems (scripts / automation etc.) although no figures to back that.

Firebird is an interesting example. The flip side of Firebird's way of doing things seems to be that the Open Source "community" is largely unaware of it (compared to, say, MySQL).

Posted by: HarryF from Jun 24th, 2004 @ 8:03 AM MDT


Yes - on costs - Linux was actually found to be more expensive in numerous cases compared to staying with Windows. This is unfortunate, as I am a proponent of finding migration paths from Windows to Linux for stability and administration automation. However, the research did show that the total cost of ownership eventually balances out; it is simply much more expensive at the outset than staying on a Windows upgrade path.

This survey (conducted partially on site with staff and partially via questionnaire) of 1000 companies with 5000 or more employees found that they did have to certify drivers at the initial migration, certify all new disk images, provide training or certification to adhere to corporate policy, buy indemnification insurance, perform migrations, test, establish support contracts and, finally, pay about a 15 percent premium when bringing in certified Linux staff.

The benefit if the company decided to take the financial hit: over an extended period they experienced the benefits of Linux - uptime, experienced admins and flexibility of the platform.

Application integrity was ambiguous in the study; however, managers cited it constantly when trying to retire commercial Unix and move apps to Linux, needing certification that an entire application runs exactly as before.

Perhaps it is time for the open source community to begin establishing central organizational points that act as clearinghouses - like Open Source Development labs does for Linux - to certify open source applications on a major scale.

Posted by: bwarrene from Jun 24th, 2004 @ 1:12 PM MDT


I beg to differ with Harry's view about Firebird. Firebird is not as popular as MySQL because 1) it's a newer project (project, not software) and 2) MySQL support comes built into PHP, with no need for additional software, while Firebird requires either recompilation or loading a DLL into PHP's extension space.

Posted by: andrecruz Jun 24th, 2004 @ 9:37 PM MDT


It was nice to read about your chat with L... DiD... (why are we keeping her name secret?).

Second, I don't understand your distinction between Linux and Open Source. Maybe I'm slow or something, but what it seems to boil down to is:

"Open Source = unprofessional
Proprietary = professional (unstated)
Linux = open source, but starting to become professional despite itself by acting like proprietary."

Well I'll grant you there are a lot of unprofessional Free Software projects out there; but the same is true of proprietary. Bad proprietary programs are slightly less likely to see the light of day, but there's still a bevy of them out there.

Now, on the assertion that Linux companies are succeeding by acting like proprietary companies: there's truth and non-truth to it. On the one hand, Red Hat and SuSE have no doubt learned a lot about management, marketing, and good business practices from established companies. On the other hand, an effective open source player does not act the same as an effective proprietary player: there are all kinds of issues with dealing with the developer community that are not an issue in the proprietary world: they bring plusses and minuses, but have to be dealt with rather than ignored.

And I will note that Red Hat, the most successful Linux distributor, is a pure-play Open Source vendor: they do not ship proprietary code. In fact, they devote a lot of developer time to a community distribution that they make no direct money on (but do get free testing from). Likewise, one of the first things Novell did after its so-far successful acquisition of SuSE was to GPL SuSE's proprietary installer. This suggests that while good management is indispensable in anything, Open Source ventures should not be running off and trying to ape proprietary vendors blindly.

Finally, there's a big difference between the way mass-market shrinkwrapped proprietary software is developed and the way big-iron stuff is. With big-iron stuff you often have consultants in the field, lots of direct customer feedback, maybe even code sharing under NDA with the client: in short, it works a lot like an Open Source project. And that's where Open Source has shined: *nix boxes, web servers, network infrastructure, compilers, developer tools, and increasingly RDBMSes. With mass shrinkwrap you have to do much more seeking out of customer needs on your own and also be prepared to tell customers to shove it and wait for the next release. On stuff like this (desktop GUIs and apps) Open Source has been less successful.

At least one high-profile OSS desktop project (Mozilla) was a legendary quagmire for a long time and is only beginning to claw its way back. Many of the mistakes came from not being open to community input ("dammit, we don't need a whole platform, just a good browser") as any good project of any kind should be. Thing is, no one has a clear idea of how to be usefully open to community input on a mass-market OSS project yet: the twin dangers of adding every requested feature or my-way-or-the-highway-ism have been so far hard to avoid.

Personally, I think the question of the Open Source desktop is given too much importance. Windows server shipments still account for 60% of the market, so it's not like that area is all sewn up. A company that wants to avoid vendor lock-in would do best to migrate its server infrastructure first - that's gonna be least painful and probably highest long-term benefit. Then maybe desktop apps, then maybe the desktop operating system.

On MySQL vs. Firebird: yes, MySQL is more widespread, but they're used for entirely different things.

Posted by: jmcginty Jun 25th, 2004 @ 12:34 PM MDT

Dag Wieers

I'm a bit confused as to why you want to differentiate between Linux (e.g. Red Hat) and Open Source.

Red Hat releases source packages and contributes largely to Open Source projects, both in resources and in code. Improvements by Red Hat are included in SuSE and vice versa. Everybody wins.

This ensures that Red Hat will have to be the best on its own merits. Competition will always be lurking around the corner to take over. Despite that, Red Hat is doing a good job.

You cannot compare this to proprietary vendors, where your money goes into the big company bucket to be used for the next version that you have to pay for again.

If I can choose I'd rather pay for services, if it guarantees that the money is used for Open Source development. If my Open Source vendor goes belly-up, its work is still available for anyone to use.

Paying for Open Source just guarantees you that you have freedom and are never tied to any vendor. Red Hat is just one example to show that the money is used for the good of the public.

And if you don't have deep pockets, there's still Fedora, CentOS, TaoLinux or Whitebox. Plenty of competition in the same vendor segment. Hard to beat IMO.

Posted by: Dag Wieers from Jun 26th, 2004 @ 3:57 AM MDT

Ron Johnson

One thing I notice that is never mentioned when talking about Windows vs. Linux TCO is virus & worm costs: both the cost of AV software and the clean-up after an infection sneaks into the corporate LAN. That *huge* expense will never be borne by a Linux shop.

Posted by: Ron Johnson Jun 26th, 2004 @ 7:56 AM MDT

HP Throws Weight Behind MySQL, JBoss By Clint Boulton

HP stepped up its commitment to open source software Monday by pledging to offer and support the MySQL database server and JBoss application server software in its servers.

The Palo Alto, Calif. systems vendor said it has inked agreements with those open source purveyors to certify and support MySQL and JBoss software on its servers.

Jeffrey Wade, manager of Linux Marketing Communications at HP, said the certifications factor into the company's Linux reference architecture, a software stack that covers everything from the hardware to the operating system, drivers and management agents.

Deployed on HP ProLiant servers, the open source Linux Reference Architectures are based on software from MySQL, JBoss, Apache, and OpenLDAP. The company's commercial Linux Reference Architectures are based on product from Oracle, BEA and SAP.

Both MySQL and JBoss will join the HP Partner Program and receive joint testing and engineering support on HP's hardware systems.

Wade said the added layer of MySQL and JBoss support addresses one of the largest concerns customers have today in opting to pick open source technology over mainstay proprietary products such as Microsoft Windows, Sun Microsystems' Solaris or UNIX.

"We can provide support for that entire solution stack and we're also now giving our customers flexibility in choice and the types of solutions they want to deploy whether that's a commercial or open source application," Wade said.

Bob Bickel, vice president of strategy and corporate development at JBoss, said commercial use remains somewhat constrained because a CIO doesn't know whom they can turn to for support.

"They don't know who they can turn to for indemnification," Bickel said. "Yeah, it works great and it's cheap, but what happens in the middle of their big selling season if something goes down? Who do they turn to, and who do they get it from? What HP's doing is taking an all-encompassing view of this with certification and testing."

Testing keeps customers from guessing what version of a Java virtual machine, operating system, MySQL or JBoss product can all work together in a guaranteed way, Bickel explained.

MySQL Vice President of Marketing Zack Urlocker said companies such as Sabre are using an open source stack for business applications. Partnering with HP, then, provides great validation for MySQL and JBoss software.

"A couple of years ago the big knock on open source was that it might be good on the periphery or for Web applications, but was not quite ready for business-critical applications," Urlocker said. "Now the No. 1 issue has been support. People who have had a lot of success with Linux are now looking at how to use a whole open source stack."

The deal is truly symbiotic. While MySQL and JBoss get backing from a technology driver such as HP, HP gets the added credibility of being cozy with open source, a label many enterprises and HP rivals, such as IBM and Dell, are working toward.

Linux sales are trending upward regardless, according to recent hardware server and database software studies from high-tech research outfit Gartner.

Despite legal threats from SCO Group and competition from Microsoft, Gartner said Linux continued to be the growth powerhouse in the operating systems server market, with a revenue increase of 57.3 percent in the first quarter of 2004.

Gartner also found that Linux siphoned market share from UNIX in the relational database management system (RDBMS) market, a niche that grew 158 percent from $116 million in new license revenue in 2002 to nearly $300 million in 2003.

Recommended Links

Softpanorama Recommended

Top articles

[Jun 12, 2021] Seven-year-old make-me-root bug in Linux service polkit patched Published on Jun 12, 2021 |

[Jun 12, 2021] Seven-year-old bug in Polkit gives unprivileged users root access Published on Jun 12, 2021 |

[Dec 29, 2020] Migrating from CentOS to Oracle Linux: a short report on the experience (Le blog technique de Microlinux) Published on Dec 30, 2020 |

[Dec 28, 2020] Red Hat interpretation of CentOS8 fiasco Published on Dec 28, 2020 |





Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy


War and Peace : Skeptical Finance : John Kenneth Galbraith : Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda : SE quotes : Language Design and Programming Quotes : Random IT-related quotes : Somerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose Bierce : Bernard Shaw : Mark Twain Quotes


Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law


Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds : Larry Wall : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOS : Programming Languages History : PL/1 : Simula 67 : C : History of GCC development : Scripting Languages : Perl history : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-Month : How to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Haters Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~ Archibald Putt, Ph.D.

Copyright 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: June 02, 2021