Softpanorama


The tar pit of Red Hat overcomplexity

The differences between RHEL 6 and RHEL 7 are no smaller than those between SUSE and RHEL, which essentially doubles the workload of sysadmins, as any "extra" OS leads to mental overflow and loss of productivity. That's why many sysadmins hate RHEL 7.

 
Without that discipline, too often, software teams get lost in what are known in the field as "boil-the-ocean" projects -- vast schemes to improve everything at once. That can be inspiring, but in the end we might prefer that they hunker down and make incremental improvements to rescue us from bugs and viruses and make our computers easier to use.

Idealistic software developers love to dream about world-changing innovations; meanwhile, we wait and wait for all the potholes to be fixed.

Frederick Brooks, Jr.,
The Mythical Man-Month, 1975


Introduction

Imagine a language in which both grammar and vocabulary change each decade. Add to this that the syntax is complex and the vocabulary huge, far beyond any normal human comprehension. You can learn some subset when you work closely with a particular subsystem, only to forget it after a couple of quarters or half a year. This is the reality of Red Hat enterprise editions.

Moreover, Red Hat exists in several incarnations (some of which are free for developers, and some low cost -- around $100 a year for patches).

The key problem with Red Hat is its adoption of systemd in RHEL 7, which makes this distribution an unattractive choice for many large enterprises. This daemon replaces init in a way I find problematic, including, but not limited to, paying outsized attention to the subsystem that loads the initial set of daemons, and replacing runlevels with a different and a hundred times more complex alternative. It made RHEL 7 as different from RHEL 6 as the SUSE distribution is from Red Hat.

Hopefully before 2020 we will not need to upgrade to RHEL 7, but eventually, when support of RHEL 6 ends, all RHEL users will either need to switch, or abandon RHEL for another distribution that does not use systemd. This is actually an opportunity for Oracle to bring the Solaris solution of this problem to Linux, but I doubt that they want to spend money on it. Taking into account the dominant market share of RHEL (which became the Microsoft of Linux), finding an alternative is a difficult proposition.

If Devuan survives, it can be an option for some customers who do not depend much on commercial software (for example, genomic computational clusters), but it is not an enterprise distribution per se, as there is no commercial support for it from a vendor. So most probably you will be forced to convert, in the same way you were forced to convert from Windows 7 to Windows 10 in 2020. So much for open source. With the current complexity of Linux the key question is "Open for whom?" I do not see it as radically different from, say, Solaris 11 (which, while more expensive, is also more secure and has better polished lightweight virtual machines -- zones -- and a better filesystem, ZFS, although XFS is not bad either).

Actually Lennart Poettering is an interesting Trojan horse within Red Hat. This is an example of how one talented, motivated and productive desktop-biased C programmer (Apple or Windows style) can cripple a large open source project without any organized resistance. Of course, that means his goals align with the goals of Red Hat management, which is to control the Linux environment in a way similar to Microsoft -- via complexity (Windows can be called the king of software complexity), providing lame sysadmins with GUI-based tools for "click, click, click" style administration, the inner workings of which they do not understand.

And despite wide resentment, I did not see the slogan "Boycott RHEL 7" too often, and none of the major websites joined the "resistance". While the key to stuffing systemd down other distributions' developers' throats (both SUSE and Ubuntu adopted systemd) was its close integration with Gnome, it is evident that the absence of an "architectural council" in projects like Red Hat is a serious weakness. It also suggests that developers from companies representing major Linux distributions became the uncontrolled elite of the Linux world -- "open source nomenklatura", if you wish. In other words, we see the "iron law of oligarchy" in action here. See Systemd invasion into Linux distributions.

RHEL as a very complex mess

The architectural quality of RHEL is low. RHEL is way too complex to administer and requires a regular human being to remember too many things, which can never fit into one head. This means it is by definition a brittle system, the elephant that nobody understands completely. Add to this some examples of sloppy programming and old warts (the RPM format is now old and has partially outlived its usefulness, creating rather complex issues with patching and installing software, issues that take a lot of sysadmin time to resolve) and you get the picture.

A nice example of Red Hat ineptitude is how they handle proxy settings. Even for software fully controlled by Red Hat, such as yum and the subscription manager, they put the proxy setting in each and every configuration file. Why not put it in /etc/sysconfig/network, or at least read those settings first if they exist? And any well-behaved application should read the environment variables, which should take precedence over settings in configuration files. They do not do it. God knows why.

Also, some programs pick up settings from the environment, such as http_proxy, and those settings override the configuration files; others do not, and for that category the configuration file is the truth of last instance.

Those giants of system programming even managed to embed proxy settings from /etc/rhsm/rhsm.conf into the yum file /etc/yum.repos.d/redhat.repo, so the proxy value is taken from this file, not from your /etc/yum.conf settings, as you would expect. Moreover, this is done without any elementary checks for consistency: if you make a pretty innocent mistake and specify the proxy setting in /etc/rhsm/rhsm.conf as

proxy http://yourproxy.yourdomain.com

The Red Hat registration manager will accept this and work fine. But for yum to work properly, the /etc/rhsm/rhsm.conf proxy specification requires just the DNS name without the http:// or https:// prefix -- the https:// prefix is added blindly (and that's wrong) into redhat.repo, without checking whether you already specified an http:// (or https://) prefix. This SNAFU leads to the generation in redhat.repo of a proxy statement of the form https://http://yourproxy.yourdomain.com

At this point you are in for a nasty surprise -- yum will not work with any Red Hat repository. And there are no meaningful diagnostic messages. It looks like RHEL managers are either engaged in binge drinking, or watch too much porn on the job ;-).
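The doubled-scheme failure described above is easy to detect mechanically. Here is a minimal sketch, using a throwaway file in place of the real /etc/yum.repos.d/redhat.repo (the path and the exact key format are taken from the scenario above, not from Red Hat documentation):

```shell
# Simulate what gets generated when rhsm.conf already contained
# "proxy http://yourproxy.yourdomain.com" (scheme included by mistake).
repo=$(mktemp)
printf 'proxy=https://http://yourproxy.yourdomain.com\n' > "$repo"

# A sanity check worth running after registration: flag any proxy line
# that ends up with two URL schemes glued together.
if grep -Eq '^proxy=https?://https?://' "$repo"; then
    echo "broken proxy line detected in $repo"
fi
rm -f "$repo"
```

Running a check like this after every registration or subscription refresh catches the problem before yum starts failing with opaque errors.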

Yum started as a very helpful utility. But gradually it turned into a complex monster that requires quite a bit of study and has a set of very complex bugs, some of which are almost features.

SELinux was never a transparent security subsystem and has a lot of quirks of its own. And its key idea is far from being elegant, unlike the key ideas of AppArmor, which has actually disappeared from the Linux security landscape. Many sysadmins simply disable SELinux, leaving only the firewall to protect the server. Some applications require disabling SELinux for proper functioning.

The deterioration of architectural vision within Red Hat as a company is clearly visible in the terrible (simply terrible, without any exaggeration) quality of the customer portal, which is probably the worst I have ever encountered. Sometimes I just open tickets to understand how to perform a particular operation. The old, or "Classic" as they call it, RHEL customer portal was actually OK, and even used to have some useful features. Then for some reason they tried to introduce something new and completely messed things up. As of August 2017 the quality has somewhat improved, but still leaves much to be desired. Sometimes I wonder why I am using the distribution, if the company which produces it (and charges substantial money for it) is so tremendously architecturally inept that it is unable to create a usable customer portal.

RHEL 7 is a bigger mess than RHEL 6

I hate RHEL 7. I really do. Red Hat honchos, either out of envy of Apple's (and/or Microsoft's) success in the desktop space, or for some other reason, broke way too many things by introducing systemd (which is actually useful only for laptops and similar portable devices). See Systemd invasion into Linux Server space.

Revamping anaconda in a "consumer friendly" fashion, and doing a lot of other things unnecessary for the server space, somewhat destroys, or at least diminishes, the value of Red Hat as a server OS. Most of those changes also increase complexity by hiding "basic" things under layers of "indirection". Try to remove Network Manager in RHEL 7. Now explain to me why we need it in a server room full of servers "forever" attached by cable with static addresses.

It is undeniable that in version 7 RHEL became even more complex. It negates the usefulness of previous knowledge of Red Hat and tons of books (some of which are good) published in 1998-2010 -- the dot-com boom and the inertia after the dot-com crash. After that, the genre of Red Hat books died like the computer book genre in general, and shelves with computer books almost disappeared from Barnes & Noble.

This utter disrespect for people who spent years learning Red Hat crap increased my desire to switch to CentOS or Oracle Linux. I do not want to pay Red Hat money any longer, and support has deteriorated to the level where it is almost completely useless, unless you buy premium support. And even in this case much depends on which analyst your ticket is assigned to.

This is an expensive open source, my friend

RHEL is a very expensive distribution for small and medium firms. There is no free lunch: if you are using a commercial distribution you need to pay annual maintenance, or get used to some delays in the availability of new versions and security patches. In most cases this is acceptable, so if CentOS or Scientific Linux works OK for a particular application, it should be used instead of the commercial version, just to avoid troubles with licensing.

For HPC clusters Red Hat provides a discounted version (for computational nodes only; the headnode is licensed at full price) with a limited number of packages, called Red Hat Enterprise Linux for High Performance Computing. See sp.ts.fujitsu.com for a more or less OK explanation of what you should expect.

Essentially, you pay the full price for the headnode and a discounted price for each computational node. I am not sure that Oracle Linux is not a better deal, as in that case you have the same distribution on both the headnode and the computational nodes, for the same price as the Red Hat HPC license with its two different distributions. Truth be told, Red Hat does provide an optimized networking stack with the HPC compute node license. The question is what the difference is, and whether you should pay such a price for it.

Quality of support

RHEL support has deteriorated recently, while prices almost doubled from RHEL 5 to 6 (especially if you use virtual guests a lot; see the discussion at RHEL 6 how much for your package (The Register)), and now it is not very clear what you are paying for.

The product is so complex and big that providing quality support for it is impossible. First of all, deep understanding of the OS is lacking. Now all tech support does when trying to resolve most tickets is search the database of cases, and post as a solution something that is related to your case (or maybe not ;-).

Premium support is still above average, and they can switch you to a live engineer on a different continent in critical cases in late hours, so if server downtime is important this is a kind of (weak) insurance.

In any case, Red Hat support is probably overwhelmed by problems with RHEL 7, and even for subsystems fully developed by Red Hat, such as the subscription manager and yum, it is usually dismal, unless you are lucky and get a knowledgeable guy who is willing to help (I once did; but only once).

As I already mentioned, in most cases RHEL tech support limits itself to searching the database (which is a good first step, but only the first step) and recommending something from the article they found closest to your case. To me it looks like they have no intelligent way to analyze the SOS files which are provided with each more or less complex ticket. Often their replies demonstrate a complete lack of understanding of what problem you are experiencing.

Sometimes this "quote service" from their database, which they sell instead of customer support, helps, but often it is just a "go to hell" type of response, an imitation of support, if you wish. In the past (in the time of RHEL 4) the quality of support was much better. Now it is unclear what we are paying for. So if you have substantial money (and I mean 100 or more systems to support), you probably should be thinking about a third party that suits your needs.

There are two viable options here:

Dismal quality and the large number of problems with RHEL 7 also mean that you need to spend real money on configuration management (or hire a couple of dedicated guys to create your own system). Making images before each change, and storing a set of images so that you can return to the previous one in a matter of hours, is better insurance than tech support.

Using Git or Subversion (or any other version control system that you are comfortable with; Git is not well suited for sysadmin change management, as by default it does not preserve attributes and ACLs, but that can be added) for all changes is another opportunity (at least via etckeeper, which is far from perfect but at least can serve as a starting point). That capitalizes on the fact that after installation the OS usually works. So we observe the phenomenon well known to Windows administrators: self-disintegration of the OS over time ;-)
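A minimal sketch of this idea with plain git in a scratch directory (etckeeper automates the same flow for /etc, and additionally records the ownership and permission metadata that bare git drops; the file name below is just a stand-in):

```shell
# Throwaway directory standing in for /etc
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email "root@localhost"
git config user.name "root"

echo "PEERDNS=no" > ifcfg-eth0        # stand-in for a real config file
git add -A && git commit -qm "baseline after install"

echo "PEERDNS=yes" >> ifcfg-eth0      # a later (possibly unwanted) change
git add -A && git commit -qm "change before maintenance window"

# Any change is now one command away from being reviewed or rolled back:
git log --oneline                     # two commits recorded
git diff HEAD~1 -- ifcfg-eth0         # shows the +PEERDNS=yes line
```

The payoff is exactly the insurance described above: when the OS "self-disintegrates", `git diff` against the baseline tells you what actually changed.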

Initially RHEL 7 works and is more or less stable. The problems typically come later, with additional packages, libraries, etc., which often are not critical for system functionality. "Keep it simple, stupid" is a good approach here, although for servers used in research this is impossible to implement.

For example, for many servers you do not need X11 to be installed (and Gnome is a horrible package, if you ask me; LXDE is smaller and adequate for most sysadmin and user needs). That cuts a lot of packages and a lot of complexity. Exotic protocols designed for laptops can also be eliminated on a server which uses a regular wired connection and a static IP address (although in RHEL 7 this is difficult or impossible to do; I would not recommend trying to deinstall Network Manager on RHEL 7 -- you will suffer). Avoiding a complex, taxing package in favor of something simpler is another worthwhile approach.
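As an illustration of the trimming described above, the commands might look like this (group and service names vary between releases and installs, so treat this as a sketch, not a recipe):

```shell
# See what a package group would drag in before removing anything
yum groupinfo "X Window System"

# RHEL 6 style: drop X11 and the avahi daemon from a headless server
yum groupremove "X Window System"
chkconfig avahi-daemon off

# RHEL 7 style: stop using NetworkManager for a static wired interface
# (disabling is usually safer than trying to uninstall it)
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
```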

In any case, Unix now (and Linux in particular) is an operating system which is clearly beyond human capability to comprehend. So in a way it is amazing that it still works. Also, strong architects (like Thompson) are long gone, so "entropy" is high, and initially clean architectural solutions became much less clean over time.

Despite several levels of support included in licenses (with premium supposedly being the higher level), technical support for really complex cases is uniformly weak, with mostly "monkey looking in a database" type of service. If you have a complex problem, you are usually stuck, although premium service provides you an opportunity to talk with a live person, which might help. In a way, unless you buy a premium license, the only way to use RHEL is "as is". And with RHEL 7 even this is not a very attractive proposition, as the switch to systemd creates its own set of problems and a learning curve for sysadmins.

Some of this deterioration is connected with the fact that Linux became a very complex, Byzantine OS that nobody actually knows. Even the number of utilities is such that nobody probably knows more than 30% or 50% of them, and even if you learn some utility during a particular case of troubleshooting, you will soon forget it, as you probably will not get the same case within a year or two. In this sense the title "Red Hat Engineer" became a sad joke.

Even if you learned something important today, you will soon forget it if you do not use it, as there are way too many utilities, applications, and configuration files. You name it.

Licensing model: four different types of RHEL licenses

RHEL is struggling to fence off "copycats" by complicating access to the source of patches, but the problem is that its licensing model is Byzantine. It is based on a half-dozen different types of subscriptions, some pretty expensive. In the past I resented paying Red Hat for our 4-socket servers to the extent that I stopped using this type of server and completely switched to two-socket servers, which, with Intel CPUs' rising core counts, was an easy escape from RHEL restrictions and greed. Currently Red Hat probably has the most complex, the most Byzantine system of subscriptions after IBM (which is probably the leader in licensing obscurantism and the ability to force customers to overpay for its software ;-).

And there are at least four different RHEL licenses for real (hardware-based) servers (https://access.redhat.com/support/offerings/production/sla):

  1. Self-support. If you have many identical or almost identical servers or virtual machines, it does not make sense to buy standard or premium licenses for all of them. A full license can be bought for one server, and all the others can use self-support licenses, which provide access to patches. They are sometimes used for a group of servers, for example a park of webservers with only one server getting standard or premium licensing.
  2. Standard. Actually means web-only support, although formally you can try chat and phone during business hours (if you manage to get to a support specialist). So this is mostly web-based support.
  3. Premium. Web and phone, with phone 24x7 for severity 1 and 2 problems. Here you really can get a specialist on the phone if you have a critical problem, even after hours.
  4. HPC computational nodes, with a limited number of packages in the distribution and repositories. (I wonder whether Oracle Linux is not a better deal for computational nodes than this type of RHEL license; sometimes CentOS can be used too, which eliminates the problem.)
  5. No-Cost RHEL Developer Subscription, available from March 2016 -- I do not know much about this license.

The RHEL licensing scheme is based on so-called "entitlements", which, oversimplifying, is one license for a 2-socket server. In the past they were "mergeable", so if your 4-socket license expired and you had two spare two-socket licenses, RHEL was happy to accommodate your needs. Now they are not.

But that does not ensure the right mix if you need different types of licenses for different classes of servers. All is fine until you use a mixture of licenses (for example some cluster licenses, some patch-only (aka self-support), some premium licenses -- four types of licenses altogether). In that case, understanding where a particular license landed used to be almost impossible. But if you use uniform licenses, this scheme works reasonably well. Red Hat fixed this problem with the introduction of the subscription manager, so it is in the past.

This path works well, but to cut the costs you need to buy a five-year license with the server, which is a lot of money, and you lose the ability to switch Linux flavors. This is also a problem with buying a cluster license -- Dell and HP can install basic cluster software on the enclosure for a minimal fee, but they force upon you additional software which you might not want or need. And believe me, such an HPC setup can be used outside computational tasks. It is actually an interesting paradigm for managing a heterogeneous datacenter. The only problem is that you need to learn to use it :-). For example, SGE is a very well engineered scheduler (originally from Sun, but later open sourced). While it is free software, it beats many commercial offerings, and while it lacks calendar scheduling, any calendar scheduler can be used with it to compensate (even cron -- in this case each cron task becomes an SGE submit script).

Still, using an HPC-like config might be an option to lower the fees if you use multiple similar servers (for example a blade enclosure with 16 identical blades). The idea is to organize a particular set of servers as a cluster, with SGE (or another similar scheduler) installed on the head node. Now Hadoop is a fashionable thing (while being just a simple case of distributed search), and you can already claim that this is a Hadoop type of service. In this case you pay twice the price for the headnode, but all computational nodes are $100 a year each or so. Although you can get the same self-support license from Oracle for the same price without Red Hat's restrictions, so from another point of view, why bother?
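To make the cron-plus-SGE idea concrete, here is a sketch that generates a trivial SGE submit script; the job name, queue name, paths, and schedule are invented for illustration:

```shell
# Generate a minimal SGE submit script; lines starting with "#$" are
# qsub options parsed by SGE itself. Queue and paths are illustrative.
cat > nightly_report.sh <<'EOF'
#!/bin/bash
#$ -N nightly_report
#$ -q all.q
#$ -j y
#$ -o /var/log/sge/nightly_report.log
/usr/local/bin/build_report --date "$(date +%F)"
EOF
chmod +x nightly_report.sh

# Matching crontab entry: cron provides the calendar trigger, while SGE
# decides where in the cluster the job actually runs:
#   15 2 * * * qsub /opt/jobs/nightly_report.sh
```

The design point is the division of labor: cron only answers "when", and the scheduler answers "where and under what resource limits".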

Two licensing management systems: RHN and  RHSM

There are two licensing systems used by Red Hat:

  1. Classic (RHN) -- the old system, phased out in mid-2017.
  2. "New" (RHSM) -- a newer, better system used predominantly on RHEL 6 and 7; obligatory from August 2017.

Both are complex and require study. Many hours of sysadmin time are wasted on mastering their complexities, while in reality this is just overhead that allows Red Hat to charge money for the product. So the fact that they are NOT supporting them well tells us a lot about the level of deterioration of the company.

All in all, Red Hat successfully created an almost impenetrable mess of obsolete and semi-obsolete notes, poorly written and incomplete documentation, dismal diagnostics and poor troubleshooting tools. And the frustration sometimes reaches such a level that people just abandon RHEL. I did for several non-critical systems. If CentOS or Scientific Linux works, there is no reason to suffer from Red Hat licensing issues. That also makes Oracle, surprisingly, a more attractive option :-). Oracle Linux is also cheaper. But usually you are bound by corporate policy here.

The "new" subscription system (RHSM) is slightly better than RHN for large organizations. It allows you to assign a specific license to a specific box and to list the current status of licensing. But like RHN, it requires you to put proxy settings in a configuration file; it does not take them from the environment. If the company has several proxies and you have a mismatch, you can be royally screwed. In general you need to check the consistency of your environment with the conf file settings. The level of understanding of proxy environments by RHEL tech support is basic or worse, so they use the database of articles instead of actually troubleshooting based on sosreport data. Moreover, each day a new person might be working on your ticket, so there is no continuity.
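For reference, the basic RHSM workflow on a proxy-protected network looks roughly like this; the hostnames and the pool ID are placeholders:

```shell
# Put the proxy into RHSM's own config explicitly, rather than hoping
# the environment variables are honored (they are not)
subscription-manager config --server.proxy_hostname=proxy.example.com \
                            --server.proxy_port=3128

# Register the box and attach a subscription
subscription-manager register --username admin@example.com
subscription-manager list --available        # find the pool you paid for
subscription-manager attach --pool=POOL_ID   # placeholder pool ID
subscription-manager list --consumed         # verify what this box uses
```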

The RHEL System Registration Guide (https://access.redhat.com/articles/737393) is weak and does not cover more complex cases and typical mishaps.

The RHN system of RHEL licenses can also cover various numbers of sockets (the default is 2). A 4-socket server will consume two licenses. This is not the case with RHSM.

In general, licensing by physical socket or even core is the old and dirty IBM trick that many companies now reuse (and now Red Hat simply can't claim that they are not greedy).

In RHN, at least, licenses were eventually converted into a kind of uniform licensing token, assigned to unlicensed systems more or less automatically (for example, if you had a 4-socket system, two tokens were consumed). With RHSM this is not true, which creates a set of complex problems for large enterprises.

But the major drawback of RHN for large enterprises is that there is no way (or at least I do not know how) to specify which type of license a particular system requires.

In its current state, the classic licensing system is simply not functional enough for a large enterprise that has a complex mix of systems (HPC clusters, servers that require premium support, regular support (most of the servers), self-support systems (only patching), etc.). You can slightly improve things by using your own patch distribution server, but the licensing system remains complex and sysadmin-unfriendly. Using multiple accounts with RHN (one for each type of license) might help, but I never tried that. There might be better ways to use RHN, but as far as I know most organizations use the most primitive "flat license space" model. And most companies have a single account with Red Hat.



Learn More

So those Red Hat honchos with high salaries essentially created a new job -- license administrator. Congratulations!

If you are an unlucky guy without such a person, then you need to read and understand at least the RHEL System Registration Guide (https://access.redhat.com/articles/737393), which outlines the major options available for registering a system (and carefully avoids mentioning bugs and pitfalls, of which there are many). For some reason migration from RHN to RHSM usually works well, so it might make sense to register a system first in RHN and then migrate it.
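The RHN-to-RHSM migration itself comes down to one tool from the migration packages (a sketch based on Red Hat's migration path; run it on a test box first):

```shell
# The migration tooling ships separately from subscription-manager
yum install subscription-manager-migration subscription-manager-migration-data

# Moves the system from RHN Classic to RHSM and registers it in one pass;
# prompts for both the RHN and the customer portal credentials
rhn-migrate-classic-to-rhsm

# Afterwards, confirm the box is attached under RHSM
subscription-manager identity
subscription-manager list --consumed
```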

Also possibly useful (to the extent any poorly written Red Hat documentation is useful) is How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager (https://access.redhat.com/solutions/253273). At least it tries to answer some of the most basic questions.

There is also an online tool to assist you in selecting the most appropriate registration technology for your system -- Red Hat Labs Registration Assistant (https://access.redhat.com/labs/registrationassistant/).

Pretty convoluted RPM packaging system which creates problems

The idea of RPM was to simplify the installation of complex packages. But RPMs create a set of problems of their own, especially connected with libraries (which is not exactly a Red Hat problem; it is a Linux problem called "library hell"). One example is the so-called multilib problem detected by yum:

--> Finished Dependency Resolution

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for libicu which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of libicu of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude libicu.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of libicu installed, but
            yum can only see an upgrade for one of those arcitectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of libicu installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: libicu-4.2.1-14.el6.x86_64 != libicu-4.2.1-11.el6.i686
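In practice, the three recovery paths in the message above map to a few commands (libicu is just the example from the error -- substitute whatever package yum complains about; package-cleanup comes from the yum-utils package):

```shell
# Option 1 from the message: exclude the stale architecture so yum
# reports the real dependency problem
yum update --exclude=libicu.i686

# Option 2: drop the architecture you no longer need
yum remove libicu.i686

# Option 3: find and clean duplicate versions
yum check                       # lists duplicates among other issues
package-cleanup --dupes         # show duplicate packages
package-cleanup --cleandupes    # remove the older duplicates
```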

Selecting packages for installation

You can improve the typical RHEL situation, with a lot of useless daemons installed, by carefully selecting packages and then reusing the generated kickstart file. That can be done via the advanced menu for one box, and then this kickstart file can be used for all other boxes with minor modifications. Kickstart still works, despite the trend toward overcomplexity in other parts of the distribution ;-)
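As a sketch, the part of the generated kickstart file worth reusing is the %packages section (anaconda writes the file for the just-installed box to /root/anaconda-ks.cfg; the entries below are illustrative):

```shell
# Excerpt from a kickstart file: start minimal and opt out explicitly
%packages --nobase
@core
ntp
openssh-server
-avahi            # a leading "-" excludes a package pulled in by a group
-NetworkManager
%end
```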

Problems with architectural vision of Red Hat brass

Both the architectural level of thinking of Red Hat brass (with daemons like avahi and systemd installed by default) and clear attempts along the lines of "not invented here" in virtualization create some concerns. It is clear that Red Hat by itself can't become a major virtualization player like VMware. It just does not have enough money for development.

You would think that the safest bet is to reuse the leader among open source offerings, which was Xen. But Red Hat brass thought differently and wanted to play a more dangerous poker game: it started promoting KVM. Red Hat released Enterprise Linux 5 with integrated virtualization (Xen) and then changed their mind after RHEL 5.5 or so. In RHEL 6, Xen is replaced by KVM.

What is good is that after ten years they eventually managed to re-implement Solaris 10 zones. In RHEL 7 they are usable.

Security overkill with SELinux

RHEL contains a security layer called SELinux, but in most cases of corporate deployment it is either disabled or operates in permissive mode. The reason is that it is notoriously difficult to configure correctly, and in most cases the game is not worth the candle.

A firewall is more usable in corporate deployments, especially when you have an obnoxious or incompetent security department (a pretty typical situation in a large corporation ;-). It forestalls a lot of stupid questions from utterly incompetent "security gurus" about open ports, and can stop dead the scanning attempts of tools that test for known vulnerabilities, by means of which security departments try to justify their miserable existence. Generally it is dangerous to allow the exploits used in such tools, which local script kiddies (aka the "security team") recklessly launch against your production servers (as if checking for a particular vulnerability with an internal script were an inferior solution). There have been reports of production servers crashing due to such games. Some "security script kiddies" who understand very little about Unix even try to prove their worth by downloading exploits from hacker sites and using them against production servers on the internal corporate network. Unfortunately, they are not always fired for such valiant efforts.

To get an idea of the level of complexity, try to read the Deployment Guide. The full set of documentation is available from www.redhat.com/docs/manuals/enterprise/

So it is not accidental that in many cases SELinux is disabled in enterprise installations. Some commercial software packages explicitly recommend disabling it in their installation manuals.
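As a sketch, switching SELinux to permissive mode involves setenforce for the running system and an edit of /etc/selinux/config for persistence. The commands below demonstrate the edit on a sample copy of the config file so it can be tried safely; on a real box you would edit /etc/selinux/config itself:

```shell
# On a live RHEL box (requires root):
#   getenforce      # prints Enforcing / Permissive / Disabled
#   setenforce 0    # permissive until next reboot only
# To persist the change, edit /etc/selinux/config; demonstrated on a sample copy:
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config.sample
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /tmp/selinux-config.sample
grep '^SELINUX=' /tmp/selinux-config.sample   # prints SELINUX=permissive
```

Note that SELINUX=disabled requires a reboot to take full effect, while permissive mode only logs denials without enforcing them.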

There is an alternative to SELinux with a more elegant, usable and understandable approach -- AppArmor, which is/was used in SLES by knowledgeable sysadmins, but it never achieved real popularity either (SLES now has SELinux too and suffers from overcomplexity even more than RHEL ;-). Still, IMHO if you need a really high level of security for a particular server, this is the preferable path. Or you can use Solaris, if you have a knowledgeable Solaris sysadmin on the floor (security via obscurity actually works pretty well in this case).

RHEL became a kind of Microsoft of the Linux world, and as such it is the most attacked flavor of Linux, simply due to its general popularity. That means it is a very bad idea to use RHEL if security is of vital importance, although with SELinux enabled it is definitely a more hardened variant of the OS than without. See Potemkin Villages of Computer Security for a more detailed discussion.

The road to hell is paved with good intentions

The loss of the architectural integrity of Unix is now very pronounced in RHEL, both in RHEL 6 and RHEL 7, although 7 is definitely worse. And this is not only the systemd fiasco. For example, recently I spent a day troubleshooting an interesting and unusual problem: one out of 16 identical (both hardware- and software-wise) blades in a small HPC cluster (and only one) failed to enable its bonded interface on boot and thus remained offline. Most sysadmins would think that something was wrong with the hardware -- for example, the Ethernet card on the blade, the switch port, or even the internal enclosure interconnect. I also initially thought this way. But this was not the case ;-)

Tech support from Dell was not able to locate any hardware problem, although they diligently upgraded the CMC on the enclosure and the BIOS and firmware on the blade. BTW, this blade had had similar problems in the past, and Dell tech support once even replaced the Ethernet card in it, thinking that it was the culprit. Now I know that this was a completely wrong decision on their part, and a waste of both time and money :-). They came to this conclusion by swapping the blade to a different slot and seeing that the problem migrated to the new slot. Bingo -- the card is the root cause. The problem is that it was not. What is especially funny is that replacing the card did solve the problem for a while. After reading the information provided below, you will be as puzzled as I am about why that happened.

But after yet another power outage the problem  returned.

This time I started to suspect that the card had nothing to do with the problem. After closer examination I discovered that, in its infinite wisdom, in RHEL 6 Red Hat introduced a package called biosdevname. The package was developed by Dell (a fact which seriously undermined my trust in Dell hardware ;-).

This package renames interfaces to a new set of names, supposedly consistent with their etching on the case of rack servers. It is useless (or, more correctly, harmful) for blades. The package is primitive and does not understand whether the server is a rack server or a blade.

Moreover, in the course of this supposedly useful renaming, this package introduces a stealth rule into the 70-persistent-net.rules file:

KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"

I did not look at the code, but from the observed behaviour it looks as if in some cases in RHEL 6 (and most probably in RHEL 7 too) the package adds a "stealth" rule to the END (not the beginning, but the end !!!) of the /etc/udev/rules.d/70-persistent-net.rules file, which means that if a similar rule exists earlier in 70-persistent-net.rules it is overridden. Or something similar to this effect.

If you look at the Dell knowledge base, there are dozens of advisories related to this package (just search for biosdevname), which suggests that there is something deeply wrong with its architecture.

What I observed is that on some blades (the key word is some, converting the situation into an Alice in Wonderland environment) the rules for interfaces listed in the 70-persistent-net.rules file simply do not work if this package is enabled. For example, Dell Professional Services in their infinite wisdom renamed interfaces back to eth0-eth3 for the Intel X710 4-port 10Gb Ethernet card that we have on some blades. On 15 out of 16 blades in the Dell enclosure this absolutely wrong idea works perfectly well. But on blade 16 sometimes it does not, and as a result this blade does not come up after a power outage or reboot. When this happens is unpredictable: sometimes it boots, and sometimes it does not. And you can't understand what is happening, no matter how hard you try, because of the stealth nature of the changes introduced by the biosdevname package.

Two interfaces on this blade (as you now suspect, eth0 and eth1) were bonded. After around 6 hours of poking around I discovered that, despite the presence of rules for eth0-eth3 in the 70-persistent-net.rules file, RHEL 6.7 still renames all four interfaces on boot to the em naming scheme, and naturally bonding fails, as the eth0 and eth1 interfaces do not exist.
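For reference, a pinning entry in 70-persistent-net.rules looks roughly like the line below (the MAC address is made up for this example); it is exactly this kind of rule that the stealth biosdevname rule can override:

```
# /etc/udev/rules.d/70-persistent-net.rules (sample entry)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```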

First I decided to uninstall the biosdevname package and see what would happen. That did not work (see below why) -- the de-installation script in this RPM is incorrect and contains a bug: it is not enough to remove the files, you also need to regenerate the initial ramdisk; the command update-initramfs -u mentioned in the original tip is the Debian name, on RHEL the equivalent is dracut -f (hat tip to Oler).

Searching for "Renaming em to eth" I found a post in which the author recommended disabling this "feature" by adding biosdevname=0 to the kernel parameters in /etc/grub.conf.

That worked. So two days of my life were lost finding a way to disable this RHEL "enhancement", which is completely unnecessary on blades.
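A sketch of the fix: append biosdevname=0 to the kernel line(s) in /etc/grub.conf. The sed edit below is demonstrated on a sample file (the kernel version and root device are illustrative); on a real RHEL 6 box you would run the same edit against /etc/grub.conf and reboot:

```shell
# Sample kernel line from a RHEL 6 /etc/grub.conf
printf 'kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=/dev/mapper/vg-root rhgb quiet\n' \
    > /tmp/grub.conf.sample
# Append biosdevname=0 to every kernel line
sed -i '/^kernel /s/$/ biosdevname=0/' /tmp/grub.conf.sample
grep '^kernel' /tmp/grub.conf.sample
```

Real grub.conf kernel lines may be indented with a tab, so adjust the address pattern accordingly before running it against the live file.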

Here is some information about this package:

biosdevname
Copyright (c) 2006, 2007 Dell, Inc.  
Licensed under the GNU General Public License, Version 2.

biosdevname in its simplest form takes a kernel device name as an argument, and returns the BIOS-given name it "should" be.  This is
necessary on systems where the BIOS name for a given device (e.g. the label on the chassis is "Gb1") doesn't map directly 
and obviously to the kernel name (e.g. eth0).
The distro-patches/sles10/ directory contains a patch needed to integrate biosdevname into the SLES10 udev ethernet naming rules. This also works as a straight udev rule. On RHEL4, that looks like:
KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"
This makes use of various BIOS-provided tables:
PCI Configuration Space
PCI IRQ Routing Table ($PIR)
PCMCIA Card Information Structure
SMBIOS 2.6 Type 9, Type 41, and HP OEM-specific types
therefore it's likely that this will only work well on architectures that provide such information in their BIOS.

To add insult to injury, this behaviour was demonstrated on only one of 16 absolutely identically configured Dell M630 blades with identical hardware and absolutely identical (cloned) OS instances, which makes RHEL a kind of "Alice in Wonderland" system. After this experience it is difficult not to hate Red Hat, but we can do very little to change this situation for the better.

This is just one example. I have more similar stories to tell.

I would like to stress that the fact that the utility included in this package does not understand that the target for installation is a blade (there is no etching on blade network interfaces ;-) is pretty typical RHEL behaviour, despite the fact that the package was developed by Dell. They also install audio packages on boxes that have no audio card, and do a lot of similar things ;-)

If you research this topic using your favorite search engine (which should not necessarily be Google anymore ;-) you will find dozens of posts in which people try to resolve this problem with various levels of competency and success. Such a tremendous waste of time and effort. Among the best that I have found were:

Current versions and year of end of support

Supported versions of RHEL are 6.10 and 7.3-7.5. Usually a large enterprise uses a mixture of versions, often several of them. Compatibility within a single major version is usually very good (I would say on par with Solaris), and the risk of upgrading from, say, 6.5 to 6.10 is minimal.

Not so with major versions. Formally you can upgrade from RHEL 6.10 to RHEL 7.5. You need to remove Gnome via yum groupremove gnome-desktop, if it works (how to remove gnome-desktop using yum), and several other packages.

But here your mileage may vary, and reinstallation might be a better option (RedHat 6 to RedHat 7 upgrade):

1. Prerequisites

1. Make sure that you are running the latest minor version (i.e. RHEL 6.8).

2. The upgrade process can handle only the following package groups and packages: Minimal (@minimal),
Base (@base), Web Server (@web-server), DHCP Server, File Server (@nfs-server), and Print Server
(@print-server).

3. Upgrading GNOME and KDE is not supported, so please uninstall the GUI desktop before the upgrade and install it after the upgrade.

4. Back up the entire system to avoid potential data loss.

5. If your system is registered with RHN Classic, you must first unregister from RHN Classic and register with subscription-manager.

6. Make sure /usr is not on a separate partition.

2. Assessment

Before upgrading we need to assess the machine first and check whether it is eligible for the upgrade. This can be done by a utility called "Preupgrade Assistant".

Preupgrade Assistant does the following:

2.1. Installing dependencies of Preupgrade Assistant:

Preupgrade Assistant needs some dependency packages (openscap, openscap-engine-sce, openscap-utils, pykickstart, mod_wsgi). All of these can be installed from the installation media, but the "openscap-engine-sce" package needs to be downloaded from the Red Hat portal.
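The dependency installation can be sketched as the commands below, to be run on the RHEL 6 box being assessed (exact package names and channels may differ in your environment):

```shell
# Install Preupgrade Assistant dependencies from the installation media / yum repos
yum install openscap openscap-utils pykickstart mod_wsgi
# openscap-engine-sce has to be downloaded from the Red Hat portal and installed manually:
# rpm -ivh openscap-engine-sce-<version>.rpm
```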

 

See Red Hat Enterprise Linux - Wikipedia and DistroWatch.com Red Hat Enterprise Linux.

Tip:

In Linux there is no single convention for determining which flavor of Linux you are running. For Red Hat, in order to determine which version is installed on the server, you can use the command

cat /etc/redhat-release

Oracle Linux adds its own file while preserving the RHEL file, so a more appropriate command would be

cat /etc/*release
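A small sketch of parsing the major version out of such a release file; the function name is this example's own, and it takes the file path as an argument so it can be tried against a sample file (on a real box you would pass /etc/redhat-release):

```shell
# Print the major RHEL version found in a release file
rhel_major() {
    sed -n 's/.*release \([0-9][0-9]*\).*/\1/p' "$1"
}

# Sample content of /etc/redhat-release on a RHEL 6.7 box
printf 'Red Hat Enterprise Linux Server release 6.7 (Santiago)\n' > /tmp/redhat-release.sample
rhel_major /tmp/redhat-release.sample   # prints 6
```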

End of support issues

See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal.

For more package version information see DistroWatch.com Red Hat Enterprise Linux.

Updates in RHEL 5

RHEL 5, especially versions 5.6-5.9, is probably one of the most stable versions of Red Hat I have ever encountered. It still supports more or less recent hardware (Oracle provides an updated kernel if you want it). This is a very conservative distribution. For example, it still uses such really old (or obsolete, if you wish) versions as bash 3.2.25, Perl 5.8.8, and Python 2.4.3.

Oracle produced an improved kernel for 5.x versions, based on a later version of the Linux kernel than the "stock" RHEL kernel. It might benefit stability if you are running Oracle applications. It is 64-bit only and is more capricious toward hardware than the Red Hat stock kernel, so your mileage may vary.

RHEL 5 suffers from a proliferation of useless or semi-useless daemons, and as such is not secure and probably can't be made secure in the default installation. You need to carefully minimize the system to get a usable server.

Systemtap
Systemtap is a GPL-based infrastructure which simplifies information gathering on a running Linux system. This assists in diagnosis of performance or functional problems. With systemtap, the tedious and disruptive "instrument, recompile, install, and reboot" sequence is no longer needed to collect diagnostic data. Systemtap is now fully supported. For more information refer to http://sources.redhat.com/systemtap.
iSNS-utils
The Internet storage name service for Linux (isns-utils) is now supported. This allows you to register iSCSI and iFCP storage devices on the network. isns-utils allows dynamic discovery of available storage targets through storage initiators.

isns-utils provides intelligent storage discovery and management services comparable to those found in fibre-channel networks. This allows an IP network to function in a similar capacity to a storage area network.

With its ability to emulate fibre-channel fabric services, isns-utils allows for seamless integration of IP and fibre-channel networks. In addition, isns-utils also provides utilities for managing both iSCSI and fibre-channel devices within the network.

For more information about isns-utils specifications, refer to http://tools.ietf.org/html/rfc4171. For usage instructions, refer to /usr/share/docs/isns-utils-[version]/README and /usr/share/docs/isns-utils-[version]/README.redhat.setup.

rsyslog
rsyslog is an enhanced multi-threaded syslogd daemon that supports the following (among others):

rsyslog is compatible with the stock sysklogd, and can be used as a replacement in most cases. Its advanced features make it suitable for enterprise-class, encrypted syslog relay chains; at the same time, its user-friendly interface is designed to make setup easy for novice users.

For more information about rsyslog, refer to http://www.rsyslog.com/.

Openswan
Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) for Linux. IPsec uses strong cryptography to provide authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Everything passing through the untrusted network is encrypted by the IPsec gateway machine and decrypted by the gateway at the other end of the tunnel. The resulting tunnel is a virtual private network (VPN).

This release of Openswan supports IKEv2 (RFC 4306, 4718) and contains an IKE2 daemon that conforms to IETF RFCs. For more information about Openswan, refer to http://www.openswan.org/.

Password Hashing Using SHA-256/SHA-512
Password hashing using the SHA-256 and SHA-512 hash functions is now supported.

To switch to SHA-256 or SHA-512 on an installed system, run authconfig --passalgo=sha256 --update or authconfig --passalgo=sha512 --update. To configure the hashing method through a GUI, use authconfig-gtk. Existing user accounts will not be affected until their passwords are changed.

For newly installed systems, using SHA-256 or SHA-512 can be configured only for kickstart installations. To do so, use the --passalgo=sha256 or --passalgo=sha512 options of the kickstart command auth; also, remove the --enablemd5 option if present.

If your installation does not use kickstart, use authconfig as described above. After installation, change all created passwords, including the root password.
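The hashing scheme actually in use can be read off the "$id$" prefix of the hash field in /etc/shadow ($1$ is MD5, $5$ is SHA-256, $6$ is SHA-512). A small sketch (the function name is this example's own):

```shell
# Report the crypt(3) scheme of a shadow-style password hash by its "$id$" prefix
hash_scheme() {
    case "$1" in
        '$1$'*) echo md5 ;;
        '$5$'*) echo sha256 ;;
        '$6$'*) echo sha512 ;;
        *)      echo unknown ;;
    esac
}

hash_scheme '$6$somesalt$somehash'   # prints sha512
```

This is a quick way to verify that authconfig actually switched the algorithm after a password change.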

Appropriate options were also added to libuser, pam, and shadow-utils to support these password hashing algorithms. authconfig configures the necessary options automatically, so it is usually not necessary to modify them manually.

OFED in comps.xml
The group OpenFabrics Enterprise Distribution is now included in comps.xml. This group contains components used for high-performance networking and clustering (for example, InfiniBand and Remote Direct Memory Access).

Further, the Workstation group has been removed from comps.xml in the Red Hat Enterprise Linux 5.2 Client version. This group only contained the openib package, which is now part of the OpenFabrics Enterprise Distribution group.
 

system-config-netboot
system-config-netboot is now included in this update. This is a GUI-based tool used for enabling, configuring, and disabling network booting. It is also useful in configuring PXE-booting for network installations and diskless clients.
 
openmpi
In order to accommodate the use of compilers other than gcc for specific applications that use message passing interface (MPI), the following updates have been applied to the openmpi and lam packages:

Note that when upgrading to this release's version of openmpi, you should migrate any default parameters set for lam or openmpi to /usr/lib(64)/lam/etc/ and /usr/lib(64)/openmpi/[openmpi version]-[compiler name]/etc/. All configurations for either openmpi or lam should be set in these directories.
 

lvm2 Snapshot Volume Warning
lvm2 will now warn if a snapshot volume is near its maximum capacity. However, this feature is not enabled by default. To enable this feature, uncomment the following line in /etc/lvm/lvm.conf:
snapshot_library = "libdevmapper-event-lvm2snapshot.so"

Ensure that the dmeventd section and its delimiters ({ }) are also uncommented.
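The uncommenting can be sketched with sed; it is demonstrated below on a sample fragment of lvm.conf (on a real box you would run the same edit against /etc/lvm/lvm.conf):

```shell
# Sample fragment of /etc/lvm/lvm.conf with the snapshot library commented out
cat > /tmp/lvm.conf.sample <<'EOF'
dmeventd {
    # snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}
EOF
# Strip the leading "#" from the snapshot_library line, keeping the indentation
sed -i 's/^\( *\)# *\(snapshot_library\)/\1\2/' /tmp/lvm.conf.sample
grep snapshot_library /tmp/lvm.conf.sample
```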

 
bash
bash has been updated to version 3.2. This version fixes a number of outstanding bugs, most notably:

Note that with this update, the output of ulimit -a has also changed from the Red Hat Enterprise Linux 5.1 version. This may cause problems with some automated scripts. If you have any scripts that use ulimit -a output strings, you should revise them accordingly.

Updates in RHEL 6

RHEL 6 was released in November 2010, so technical support and patching will last until 2020. See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal.

RHEL 6 cut the number of daemons installed by default in comparison with RHEL 5. NFS4 became the default, and that causes problems: if the master is shut down, the client often can't recover and enters a zombie state. Luckily they spared us from systemd in this version ;-)

RHEL 6 initially gave me the impression of a half-baked distribution rushed to customers, and may signal an internal crisis in RHEL development, as in some areas it is worse than RHEL 5.6. It stabilized around version 6.5. Some changes were arbitrary and just make the distribution look "new" without bringing anything significant to the table. For example, during installation the partitioning procedure changed, and probably not for the better. Some "mostly desktop or home network" daemons are present by default -- for example, the complex and potentially insecure avahi daemon (an implementation of Zeroconf).

The Avahi daemon discovers network resources, allocates IP addresses without a DHCP server, and makes the computer accessible by its local (.local) hostname.

As RHEL is targeted at corporate environments, which typically use static IPs for servers, it makes little or no sense there. It is better to disable it at installation. See Disabling the Avahi daemon (Kioskea.net).
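On an already installed RHEL 6 box the daemon can be switched off with the usual SysV tools (run as root; harmless if the service is already off):

```shell
# Stop avahi now and keep it from starting on boot
service avahi-daemon stop
chkconfig avahi-daemon off
chkconfig --list avahi-daemon   # all runlevels should now show "off"
```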

Also, the ability of the distribution to select the right set of daemons is compromised in RHEL 6 even more than in RHEL 5, despite the addition of the useful concept of "server roles": by default there are a lot of useless daemons. If, for example, you install the "database server" role, you then need to check for and delete/disable the redundant ones manually.

Documentation for version 6

Red Hat Enterprise Linux 6 Technical Details : What's New

... ... ...

Scalability

* Red Hat Enterprise Linux 6 supports more sockets, more cores, more threads, and more memory.

Efficient Scheduling

* The CFS schedules the next task to be run based on which task has consumed the least time, task prioritization, and other factors. Using hardware awareness and multi-core topologies, the CFS optimizes task performance and power consumption.

Reliability, Availability, and Serviceability (RAS)

* RAS hardware-based hot add of CPUs and memory is enabled.
* When supported by machine check hardware, the system can recover from some previously fatal hardware errors with minimal disruption.
* Memory pages with errors can be declared as "poisoned", and will be avoided.

Filesystems

* The new default file system, ext4, is faster, more robust, and scales to 16TB.
* The Scalable File System Add-On contains the XFS file system which scales to 100TB.
* The Resilient Storage Add-On includes the high availability, clustered GFS2 file system.
* NFSv4 is significantly improved over NFSv3, and backwards compatible.
* FUSE allows filesystems to run in user space, allowing testing and development of newer FUSE-based filesystems (such as cloud filesystems).

High Availability

* The web interface based on Conga has been re-designed for added functionality and ease of use.
* The cluster group communication system, Corosync, is mature, secure, high performance, and light-weight.
* Nodes can re-enable themselves after failure without administrative intervention using unfencing.
* Unified logging and debugging simplifies administrative work.
* Virtualized KVM guests can be run as managed services which enables fail-over, including between physical and virtual hosts.
* Centralized configuration and management is provided by Conga.
* A single cluster command can be used to manage system logs from different services, and the logs have a consistent format that is easier to parse.

Power Management

* The tickless kernel feature keeps systems in the idle state longer, resulting in net power savings.
* Active State Power Management and Aggressive Link Power Management provide enhanced system control, reducing the power consumption of I/O subsystems. Administrators can actively throttle power levels to reduce consumption.
* Realtime drive access optimization reduces filesystem metadata write overhead.

System Resource Allocation

* Cgroups organize system tasks so that they can be tracked, and so that other system services can control the resources that cgroup tasks may consume (Partitioning). Two user-space tools, cgexec and cgclassify, provide easy configuration and management of cgroups.
* Cpuset applies CPU resource limits to cgroups, allowing processing performance to be allocated across tasks.
* The memory resource controller applies memory resource limits to cgroups.
* The network resource controller applies network traffic limits to cgroups.

Storage

* A snapshot of a logical volume may be merged back into the original logical volume, reverting changes that occurred after the snapshot.
* Mirror logs of regions that need to be synchronized can be replicated, supporting high availability.
* LVM hot spare allows the behavior of a mirrored logical volume after a device failure to be explicitly defined.
* DM-Multipath allows paths to be dynamically selected based on queue size or I/O time data.
* Very large SAN-based storage is supported.
* Automated I/O alignment and self-tuning is supported.
* Filesystem usage information is provided to the storage device, allowing administrators to use thin provisioning to allocate storage on-demand.
* SCSI and ATA standards have been extended to provide alignment and I/O hints, allowing automated tuning and I/O alignment.
* DIF/DIX provides better integrity checks for application data.

Networking

* UDP Lite tolerates partially corrupted packets to provide better service for multimedia protocols, such as VOIP, where partial packets are better than none.
* Multiqueue Networking increases processing parallelism for better performance from multiple processors and CPU cores.
* Large Receive Offload (LRO) and Generic Receive Offload (GRO) aggregate packets for better performance.
* Support for Data Center Bridging includes data traffic priorities and flow control for increased Quality of Service.
* New support for software Fiber Channel over Ethernet (FCoE) is provided.
* iSCSI partitions may be used as either root or boot filesystems.
* IPv6 is supported.

Security and Access Control

* SELinux policies have been extended to more system services.
* SELinux sandboxing allows users to run untrusted applications safely and securely.
* File and process permissions have been systematically reduced whenever possible to reduce the risk of privilege escalation.
* New utilities and system libraries provide more control over process privileges for easily managing reduced capabilities.
* Walk-up kiosks (as in banks, HR departments, etc.) are protected by SELinux access control, with on-the-fly environment setup and take-down, for secure public use.
* Openswan includes a general implementation of IPsec that works with Cisco IPsec.

Enforcement and Verification of Security Policies

* OpenScap standardizes system security information, enabling automatic patch verification and system compromise evaluation.

Identity and Authentication

* The new System Security Services Daemon (SSSD) provides centralized access to identity and authentication resources, enables caching and offline support.
* OpenLDAP is a compliant LDAP client with high availability from N-way MultiMaster replication, and performance improvements.

Web Infrastructure

* This release of Apache includes many improvements, see Overview of new features in Apache 2.2
* A major revision of Squid includes manageability and IPv6 support
* Memcached 1.4.4 is a high-performance and highly scalable, distributed, memory-based object caching system which enhances the speed of dynamic web applications.

Java

* OpenJDK 6 is an open source implementation of the Java Platform Standard Edition (SE) 6 specification. It is TCK-certified based on the IcedTea project, and the implementation of a Java Web Browser plugin and Java web start removes the need for proprietary plugins.
* Tight integration of OpenJDK and Red Hat Enterprise Linux includes support for Java probes in SystemTap to enable better debugging for Java.
* Tomcat 6 is an open source and best-of-breed application server running on the Java platform. With support for Java Servlets and Java Server Pages (JSP), tomcat provides a robust environment for developing and deploying dynamic web applications.

Development

* Ruby 1.8.7 is included, and Rails 3 supports dependencies.
* Version 4.4 of gcc includes OpenMP3 conformance for portable parallel programs, Integrated Register Allocator, Tuples, additional C++0x conformance implementations, and debuginfo handling improvements.
* Improvements to the libraries include malloc optimizations, improved speed and efficiency for large blocks, NUMA considerations, lock-free C++ class libraries, NSS crypto consolidation for LSB 4.0 and FIPS level 2, and improved automatic parallel mode in the C++ library.
* Gdb 7.1.29 improvements include C++ function, class, templates, variables, constructor / destructor improvements, catch / throw and exception improvements, large program debugging optimizations, and non-blocking thread debugging (threads can be stopped and continued independently).
* TurboGears 2 is a powerful internet-enabled framework that enables rapid web application development and deployment in Python.
* Updates to the popular web scripting and programming languages PHP (5.3.2), Perl (5.10.1) include many improvements.

Application Tuning

* SystemTap uses the kernel to generate non-intrusive debugging information about running applications.
* The tuned daemon monitors system use and uses that information to automatically and dynamically adjust system settings for better performance.
* SELinux can be used to observe, then tighten application access to system resources, leading to greater security.

Databases

* PostgreSQL 8.4.4 includes many improvements, please see PostgreSQL 8.4 Feature List for details.
* MySQL 5.1.47 improvement are listed here: What Is New in MySQL 5.1.
* SQLite 3.6.20 includes significant performance improvements, and many important bug fixes. Note that this release has made incompatible changes to the internal OS interface and VFS layers (compared to earlier releases).

System API / ABI Stability

* The API / ABI Compatibility Commitment defines stable, public, system interfaces for the full ten-year life cycle of Red Hat Enterprise Linux 6. During that time, applications will not be affected by security errata or service packs, and will not require re-certification. Backward compatibility for the core ABI is maintained across major releases, allowing applications to span subsequent releases.

Integrated Virtualization, Kernel-Based Virtualization

* The KVM hypervisor is fully integrated into the kernel, so all RHEL system improvements benefit the virtualized environment.
* The application environment is consistent for physical and virtual systems.
* Deployment flexibility, provided by the ability to easily move guests between hosts, allows administrators to consolidate resources onto fewer machines during quiet times, or free up hardware for maintenance downtime.

Leverages Kernel Features

* Hardware abstraction enables applications to move from physical to virtualized environments independently of the underlying hardware.
* Increased scalability of CPUs and memory provides more guests per server.
* Block storage benefits from selectable I/O schedulers and support for asynchronous I/O.
* Cgroups and related CPU, memory, and networking resource controls provide the ability to reduce resource contention and improve overall system performance.
* Reliability, Availability, and Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime.
* Multicast bridging includes the first release of IGMP snooping (in IPv4) to build intelligent packet routing and enhance network efficiency.
* CPU affinity assigns guests to specific CPUs.

Guest Acceleration

* CPU masking allows all guests to use the same type of CPU.
* SR-IOV virtualizes physical I/O card resources, primarily networking, allowing multiple guests to share a single physical resource.
* Message signaled interrupts deliver interrupts as specific signals, increasing the number of interrupts.
* Transparent hugepages provides significant performance improvements for guest memory allocation.
* Kernel Same Page (KSM) provides reuse of identical pages across virtual machines (known as deduplication in the storage context).
* The tickless kernel defines a stable time model for guests, avoiding clock drift.
* Advanced paravirtualization interfaces include non-traditional devices such as the clock (enabled by the tickless kernel), interrupt controller, spinlock subsystem, and vmchannel.

Security

* In virtualized environments, sVirt (powered by SELinux) protects guests from one another

Microsoft Windows Support

* Windows WHQL-certified drivers enable virtualized Windows systems, and allow Microsoft customers to receive technical support for virtualized instances of Windows Server.

Installation, Updates, and Deployment

* Anaconda supports installation of a “minimal platform” as a specific server installation, or as a strategy for reducing the number of software packages to increase security.
* Red Hat Network (RHN) and Satellite continue to provide management, provisioning and monitoring for large deployments.
* Installation options have been reorganized into “workload profiles” so that each system installation will provide the right software for specific tasks.
* Dracut, a replacement for mkinitrd, minimizes the impact of underlying hardware changes, is more maintainable, and makes it easier to support third party drivers.
* The new yum history command provides information about yum transactions, and supports undo and redo of selected operations.
* Yum and RPM offer significantly improved performance.
* RPM signatures use the Secure Hash Algorithm (SHA256) for data verification and authentication, improving security.
* Storage devices can be designated for encryption at installation time, protecting user and system data. Key escrow allows recovery of lost keys.
* Standards Based Linux Instrumentation for Manageability (SBLIM) manages systems using Web-Based Enterprise Management (WBEM).
* ABRT enhanced error reporting speeds triage and resolution of software failures.

Routine Task Delegation

* PolicyKit allows administrators to provide users access to privileged operations, such as adding a printer or rebooting a desktop, without granting administrative privileges.

Printing

* Improvements include better printing, printer discovery, and printer configuration services from cups and system-config-printer.
* SNMP-based monitoring of ink and toner supply levels and printer status enables efficient inventory management of ink and toner cartridges.
* Automatic PPD configuration for PostScript printers, with PPD option values queried from the printer, is available in the CUPS web interface.

Microsoft Interoperability

* Samba improvements include support for Windows 2008R2 trust relationships: Windows cross-forest, transitive trust, and one-way domain trust.
* Applications can use OpenChange to gain access to Microsoft Exchange servers using native protocols, allowing mail clients like Evolution to have tighter integration with Exchange servers.

RHEL 7 and systemd invasion into server space

RHEL 7 was released in June 2014. With the release of RHEL 7 we see a hard push toward systemd exclusivity. Runlevels are gone. The release of RHEL 7 with systemd as the only option for system and process management has reignited the old debate over whether Red Hat is trying to establish a Microsoft-style monopoly over enterprise Linux and move Linux closer to Windows: a closed but user-friendly system.

For server sysadmins, systemd is a massive, fundamental change to core Linux administration for no perceivable gain. So while there is a high level of support for systemd among Linux users who run Linux on their laptops and maybe as a home server, there is a strong backlash against systemd from system administrators who are responsible for a significant number of Linux servers in enterprise environments.

After all, runlevels were used in production environments, if only to run a system with or without X11. Please read an interesting essay on systemd (ProSystemdAntiSystemd).

Discussions are often initiated by opponents, who lament the horrors of PulseAudio and air their scorn for Lennart Poettering. This later became a common canard for proponents to dismiss criticism as Lennart-bashing. Futile to even discuss, but it’s a staple.

Lennart’s character is actually, at times, relevant. Trying to have a large discussion about systemd without ever invoking him is like discussing glibc in detail without ever mentioning Ulrich Drepper. Most people take it overboard, however.

A lot of systemd opponents will express their opinions regarding a supposed takeover of the Linux ecosystem by systemd, as its auxiliaries (all requiring governance by the systemd init) expose APIs, which are then used by various software in the desktop stack, creating dependency chains between it and systemd that the opponents deem unwarranted. They will also point out the udev debacle and occasionally quote Lennart. Opponents see this as anti-competitive behavior and liken it to “embrace, extend, extinguish”. They often exaggerate and go all out with their vitriol, though, as they start to contemplate shadowy backroom conspiracies at Red Hat (admittedly it is pretty fun to pretend that anyone defending a given piece of software is actually a shill who secretly works for it, but I digress), leaving many of their concerns to be ignored and deemed ridiculous altogether.

... ... ...

In addition, the Linux community is known for reinventing the square wheel over and over again. Chaos is both Linux’s greatest strength and its greatest weakness. Remember HAL? Distro adoption is not an indicator of something being good, so much as something having sufficient mindshare.

... ... ...

The observation that sysvinit is dumb and heavily flawed, with its clunky inittab and runlevel abstractions, is absolutely nothing new. Richard Gooch wrote a paper back in 2002 entitled “Linux Boot Scripts”, which criticized both the SysV and BSD approaches, based on his earlier work on simpleinit(8). That said, his solution is still firmly rooted in the SysV and BSD philosophies, but he makes it more elegant by supplying primitives for modularity and expressing dependencies.

Even before that, DJB wrote the famous daemontools suite which has had many successors influenced by its approach, including s6, perp, runit and daemontools-encore. The former two are completely independent implementations, but based on similar principles, though with significant improvements. An article dated to 2007 entitled “Init Scripts Considered Harmful” encourages this approach and criticizes initscripts.

Around 2002, Richard Lightman wrote depinit(8), which introduced parallel service start, a dependency system, named service groups rather than runlevels (similar to systemd targets), its own unmount logic on shutdown, arbitrary pipelines between daemons for logging purposes, and more. It failed to gain traction and is now a historical relic.

Other systems like initng and eINIT came afterward, which were based on highly modular plugin-based architectures, implementing large parts of their logic as plugins, for a wide variety of actions that software like systemd implements as an inseparable part of its core. Initmacs, anyone?

Even Fefe, anti-bloat activist extraordinaire, wrote his own system called minit early on, which could handle dependencies and autorestart. As is typical of Fefe’s software, it is painful to read and makes you want to contemplate seppuku with a pizza cutter.

And that’s just Linux. Partial list, obviously.

At the end of the day, all comparing to sysvinit does is show that you’ve been living under a rock for years. What’s more, it is no secret to a lot of people that the way distros have been writing initscripts has been totally anathema to basic software development practices, like modularizing and reusing common functions, for years, among other concerns such as inadequate use of already leaky abstractions like start-stop-daemon(8). Though sysvinit does encourage poor work like this to an extent, it’s distro maintainers who share a good deal of the blame for the mess. See the BSDs for a sane example of writing initscripts. OpenRC was directly inspired by the BSDs’ example. Hint: it’s in the name - “RC”.

The rather huge scope and opinionated nature of systemd leads to people yearning for the days of sysvinit. A lot of this is ignorance about good design principles, but a good part may also be motivated by an inability to properly convey desires for simple and transparent systems. In this way, proponents and opponents get caught in feedback loops of incessantly going nowhere with flame wars over one init implementation (that happened to be dominant), completely ignoring all the previous research on improving init, as it all gets left to bite the dust. Even further, most people fail to differentiate init from rc scripts, and sort of hold sysvinit to be equivalent to the shoddy initscripts that distros have written, and all the hacks they bolted on top like LSB headers and startpar(8). This is a huge misunderstanding that leads to a lot of wasted energy.

Don’t talk about sysvinit. Talk about systemd on its own merits and the advantages or disadvantages of how it solves problems, potentially contrasting them to other init systems. But don’t immediately go “SysV initscripts were way better and more configurable, I don’t see what systemd helps solve beyond faster boot times.”, or from the other side “systemd is way better than sysvinit, look at how clean unit files are compared to this horribly written initscript I cherrypicked! Why wouldn’t you switch?”

... ... ...

Now that we’ve pointed out how most systemd debates play out in practice and why it’s usually a colossal waste of time to partake in them, let’s do a crude overview of the personalities that make this clusterfuck possible.

The technically competent sides tend to largely fall in these two broad categories:

a) Proponents are usually part of the modern Desktop Linux bandwagon. They run contemporary mainstream distributions with the latest software, use and contribute to large desktop environment initiatives and related standards like the *kits. They’re not necessarily purely focused on the Linux desktop. They’ll often work on features ostensibly meant for enterprise server management, cloud computing, embedded systems and other needs, but the rhetoric of needing a better desktop and following the example set by Windows and OS X is largely pervasive amongst their ranks. They will decry what they perceive as “integration failures”, “fragmentation” and are generally hostile towards research projects and anything they see as “toy projects”. They are hackers, but their mindset is largely geared towards reducing interface complexity, instead of implementation complexity, and will frequently argue against the alleged pitfalls of too much configurability, while seeing computers as appliances instead of tools.

b) Opponents are a bit more varied in their backgrounds, but they typically hail from more niche distributions like Slackware, Gentoo, CRUX and others. They are largely uninterested in many of the Desktop Linux “advancements”, value configuration, minimalism and care about malleability more than user friendliness. They’re often familiar with many other Unix-like environments besides Linux, though they retain a fondness for the latter. They have their own pet projects and are likely to use, contribute to or at least follow a lot of small projects in the low-level system plumbing area. They can likely name at least a dozen alternatives to the GNU coreutils (I can name about 7, I think), generally favor traditional Unix principles and see computers as tools. These are the people more likely to be sympathetic to things like the suckless philosophy.

It should really come as no surprise that the former group dominates. They’re the ones that largely shape the end user experience. The latter are pretty apathetic or even critical of it, in contrast. Additionally, the former group simply has far more manpower in the right places. Red Hat’s employees alone dominate much of the Linux kernel, the GNU base system, GNOME, NetworkManager, many projects affiliated with Freedesktop.org standards (including Polkit) and more. There’s no way to compete with a vast group of paid individuals like those.

Conclusion

The “Year of the Linux Desktop” has become a meme at this point, one that is used most often sarcastically. Yet there are still a lot of people who deeply hold onto it and think that if only Linux had a good abstraction engine for package manager backends, those Windows users will be running Fedora in no time.

What we’re seeing is undoubtedly a cultural clash by two polar opposites that coexist in the Linux community. We can see it in action through the vitriol against Red Hat developers, and conversely the derision against Gentoo users on part of Lennart Poettering, Greg K-H and others. Though it appears in this case “Gentoo user” is meant as a metonym for Linux users whose needs fall outside the mainstream application set. Theo de Raadt infamously quipped that Linux is “for people who hate Microsoft”, but that quote is starting to appear outdated.

Many of the more technically competent people with views critical of systemd have been rather quiet in public, for some reason. Likely it’s a realization that the Linux desktop’s direction is inevitable, and thus trying to criticize it is a futile endeavor. There are people who still think GNOME abandoning Sawfish was a mistake, so yes.

The non-desktop people still have their own turf, but they feel threatened by systemd to one degree or another. Still, I personally do not see them dwindling down. What I believe will happen is that they will become even more segregated than they already are from mainstream Linux and that using their software will feel more otherworldly as time goes on.

There are many who are predicting a huge renaissance for BSD in the aftermath of systemd, but I’m skeptical of this. No doubt there will be increased interest, but as a whole it seems most of the anti-systemd crowd is still deeply invested in sticking to Linux.

Ultimately, the cruel irony is that in systemd’s attempt to supposedly unify the distributions, it has created a huge rift unlike any other and is exacerbating the long-present hostilities between desktop Linux and minimalist Linux sides at rates that are absolutely atypical. What will become of systemd remains unknown. Given Linux’s tendency for chaos, it might end up the new HAL, though with a significantly more painful aftermath, or it might continue on its merry way and become a Linux standard set in stone, in which case the Linux community will see a sharp ideological divide. Or perhaps it won’t. Perhaps things will go on as usual, on an endless spiral of reinvention without climax. Perhaps we will be doomed to flame on systemd for all eternity. Perhaps we’ll eventually get sick of it and just part our own ways into different corners.

Either way, I’ve become less and less fond of politics for uselessd and see systemd debates as being metaphorically like car crashes. I likely won’t help but chime in at times, though I intend uselessd to branch off into its own direction with time.

SYSTEMD

A very controversial subsystem, systemd, is now implemented. systemd is a suite of system management daemons, libraries, and utilities designed for Linux and programmed exclusively for the Linux API. There are no more runlevels. For servers systemd makes little sense. Sysadmins now need to learn new systemd commands for starting and stopping various services. The ‘service’ command is still included for backwards compatibility, but it may go away in future releases. See CentOS 7 / RHEL 7 systemd commands (Linux Brigade).
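For reference, the old and new invocations map roughly like this (httpd is used here only as an illustrative service name):

```shell
# SysV style (still works in RHEL 7 via compatibility shims):
service httpd restart
chkconfig httpd on

# Native systemd equivalents:
systemctl restart httpd.service
systemctl enable httpd.service        # start at boot
systemctl status httpd.service        # replaces "service httpd status"
systemctl list-units --type=service   # rough analog of "chkconfig --list"
```

These commands require a running systemd instance, so run them on the target host rather than in a container or chroot.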

From Wikipedia (systemd)

In a 2012 interview, Slackware's founder Patrick Volkerding expressed the following reservations about the systemd architecture, which are fully applicable to the server environment:

Concerning systemd, I do like the idea of a faster boot time (obviously), but I also like controlling the startup of the system with shell scripts that are readable, and I'm guessing that's what most Slackware users prefer too. I don't spend all day rebooting my machine, and having looked at systemd config files it seems to me a very foreign way of controlling a system to me, and attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well.

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong."[42] The article also characterizes the architecture of systemd as more similar to that of Microsoft Windows software:[42]

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux – which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system – from authentication to mounting shares to network configuration to syslog to cron.
 

LINUX CONTAINERS

Some ten years after Solaris 10 got zones, Linux at last got containers.

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat.

Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
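The namespace building block mentioned above can be seen in isolation with util-linux's unshare tool (a minimal sketch; requires root):

```shell
# Create a new PID namespace with a private /proc mount;
# the shell inside the namespace sees itself as PID 1.
unshare --fork --pid --mount-proc /bin/sh -c 'ps ax'
```

Container runtimes combine this primitive with mount, network and other namespaces, plus cgroup limits and an SELinux context per container.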

NUMA AFFINITY

With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
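The same alignment can be inspected and forced manually with numactl (a sketch; ./app is a placeholder for your workload):

```shell
numactl --hardware                          # show NUMA nodes, CPUs and memory per node
numactl --cpunodebind=0 --membind=0 ./app   # pin a process and its memory to node 0
```

The kernel-based automatic balancing makes such manual pinning unnecessary for many workloads, but explicit binding remains useful for latency-sensitive applications.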

HARDWARE EVENT REPORTING MECHANISM

Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.

VIRTUALIZATION GUEST INTEGRATION WITH VMWARE

Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:

• Open VM Tools — bundled open source virtualization utilities.
• 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
• Fast communication mechanisms between VMware ESX and the virtual machine.

PARTITIONING DEFAULTS FOR ROLLBACK

The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the “Snapper” section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.

CREATING INSTALLATION MEDIA

Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.

SERVER PROFILE TEMPLATES

Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.



NEWS CONTENTS

Old News ;-)

[Oct 18, 2018] Fedora switching from NetworkManager to explicit ifcfg networking Linux

Nov 23, 2015 | lxer.com

penguinist. Nov 23, 2015

Now that I've spent uncounted hours reaching a solution on this one I wanted to document it somewhere for other LXers who might be faced with a similar problem in the future.

In a fresh installation of Fedora23 the default configuration came up with NetworkManager running the show. This workstation however has a more complex configuration than average with two network interfaces, one running with dhcp and the other serving as a static gateway to an internal lan. Since this configuration is set up once and never changes, the right way seemed to be an explicit configuration, one with custom crafted /etc/sysconfig/network-scripts/ifcfg-ethx config files. After setting up those files, the next part went fairly easily:

systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl enable network.service
systemctl start network.service

Checking the result however showed errors which indicated that dhclient had been invoked on the eth1 interface even though its ifcfg-eth1 configuration was clearly marked:

BOOTPROTO=none

After a long investigation it turned out that NetworkManager had saved an eth1 lease file under /var/lib/dhclient/ and network.service dutifully attempted to restore that lease even though the interface was explicitly marked for no dhcp service (should file a bugzilla report on this one).

Manually removing the extraneous lease file fixed the problem and we now start network cleanly with NetworkManager disabled.
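For anyone hitting the same issue, the cleanup described above amounts to something like the following (the exact lease file name is an assumption; check what is actually present on your system):

```shell
# Look for leases left behind under /var/lib/dhclient/,
# then remove the stale one for the static interface and restart networking.
ls /var/lib/dhclient/
rm -f /var/lib/dhclient/dhclient-eth1.lease*   # hypothetical file name
systemctl restart network.service
```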

JaseP

Nov 23, 2015
8:48 PM EDT Nice catch & fix!

[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins, so everyone must be aware of it and practice using it in an efficient way.

In Linux, whenever we install a package that ships services or daemons, the corresponding init or systemd scripts are added by default, but they are not enabled.

Hence, we need to enable or disable the service manually if it's required. There are three major init systems available in Linux which are very famous and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.

It holds process id (PID) 1 and runs in the background continuously until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level, then starts all other processes and applications in the background according to that run level.

The BIOS, MBR, GRUB and kernel stages are all executed before the init process as part of the Linux boot process.

Below are the available run levels for Linux (there are seven runlevels, from zero to six):

* 0: halt (shut down the system)
* 1: single-user mode
* 2: multi-user mode without networking
* 3: full multi-user mode (text console)
* 4: unused / user-definable
* 5: full multi-user mode with X11 (graphical login)
* 6: reboot

The three init systems described below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is one of the first and most traditional init systems for Unix-like operating systems. init is the first process started by the kernel during system boot, and it's the parent process of everything.

Most Linux distributions started with the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address its design limitations, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 to Ubuntu 14.10 and in RHEL 6 based systems, after which it was replaced with systemd.

What is systemd?

systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init systems.

systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.

It's the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command line utility and primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload & status).

systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the system hierarchy by exploring /sys/fs/cgroup/systemd.
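For illustration, a minimal unit file might look like this (the service name and binary path are hypothetical):

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=Example application daemon
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run `systemctl daemon-reload` and then `systemctl enable myapp.service` to register it for boot.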

How to Enable or Disable Services on Boot Using the chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current setting.

It also allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d.

How to list All Services in run-level

The --list parameter displays all the services along with their current status (in which run levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.
How to Check the Status of a Specific Service

If you would like to see a particular service's status per run level, use the following format and grep for the required service.

In this case, we are going to check the auditd service status across run levels.
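The check amounts to piping the listing through grep:

```shell
chkconfig --list | grep auditd
# auditd    0:off  1:off  2:on  3:on  4:on  5:on  6:off
```

The commented line shows the kind of output to expect, matching the full listing above.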

[Oct 16, 2018] CentOS - [CentOS] Centos 7 Systemd alternatives

Oct 16, 2018 | n5.nabble.com

Ned Slider

On 08/07/14 02:22, Always Learning wrote:
>
> On Mon, 2014-07-07 at 20:46 -0400, Robert Moskowitz wrote:
>
>> On 07/07/2014 07:47 PM, Always Learning wrote:
>>> Reading about systemd, it seems it is not well liked and reminiscent of
>>> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>>>
>>> Is there a practical alternative to omnipresent, or invasive, systemd ?
>
>> So you are following the thread on the Fedora list? I have been
>> ignoring it.
>
> No. I read some of
> http://www.phoronix.com/scan.php?page=news_topic&q=systemd
>
> The systemd proponent, advocate and chief developer? wants to
> abolish /etc and /var in favour of having the /etc and /var data
> in /usr.
>
> Seems a big revolution is being forced on Linux users when stability and
> the "same old familiar Linux" is desired by many, including me.
It's already started. Some configs have already moved from /etc to /usr
under el7.

Whilst I'm as resistant to change as the next man, I've learned you can't fight it so best start getting used to it ;-)

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. (rm, 3 months ago)


Novell acquired SUSE in 2003 for $210 million. (asoc, 4 months ago)


"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 15, 2018] I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Oct 15, 2018 | linux.slashdot.org

thegarbz ( 1787294 ) , Sunday August 30, 2015 @04:08AM ( #50419549 )

Re:Hang on a minute... ( Score: 5 , Funny)
I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes.
No one knows why. The binary log file was corrupted in the process and is unrecoverable. All anyone could remember is a bug listed in the systemd bug tracker talking about su which was classified as WON'T FIX as the developer thought it was a broken concept.

[Oct 15, 2018] Systemd as doord interface for cars ;-) by Nico Schottelius

Oct 15, 2018 | blog.ungleich.ch

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

[Oct 15, 2018] Future History of Init Systems

Oct 15, 2018 | linux.slashdot.org

AntiSol ( 1329733 ) , Saturday August 29, 2015 @03:52PM ( #50417111 )

Re:Approaching the Singularity ( Score: 4 , Funny)

Future History of Init Systems

  • 2015: systemd becomes default boot manager in debian.
  • 2017: "complete, from-scratch rewrite" [jwz.org]. In order to not have to maintain backwards compatibility, project is renamed to system-e.
  • 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as "project skynet". Software behaves paradoxically and project is terminated.
  • 2033: systeml - "system lean" - a "back to basics", from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, "system mean", a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces "Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity". Millions die. The survivors declare "thou shalt not make an init system in the likeness of the human mind" as their highest law.
  • 2147: systemv - a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of "keep it simple, stupid" and "do one thing, and do it well". People's computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.

[Oct 15, 2018] They should have just renamed machinectl to command.com.

Oct 15, 2018 | linux.slashdot.org

RabidReindeer ( 2625839 ) , Saturday August 29, 2015 @11:38AM ( #50415833 )

What's with all the awkward systemd command names? ( Score: 5 , Insightful)

I know systemd sneers at the old Unix convention of keeping it simple, keeping it separate, but that's not the only convention they spit on. God intended Unix (Linux) commands to be cryptic things 2-4 letters long (like "su", for example). Not "systemctl", "machinectl", "journalctl", etc. Might as well just give everything a 47-character long multi-word command like the old Apple commando shell did.

Seriously, though, when you're banging through system commands all day long, it gets old and their choices aren't especially friendly to tab completion. On top of which why is "machinectl" a shell and not some sort of hardware function? They should have just named the bloody thing command.com.

[Oct 15, 2018] Oh look, another Powershell

Oct 15, 2018 | linux.slashdot.org

Anonymous Coward , Saturday August 29, 2015 @11:37AM ( #50415825 )

Cryptic command names ( Score: 5 , Funny)

Great to see that systemd is finally doing something about all of those cryptic command names that plague the unix ecosystem.

Upcoming systemd re-implementations of standard utilities:

  • ls to be replaced by filectl directory contents [pathname]
  • grep to be replaced by datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous)
  • gimp to be replaced by imagectl open file filename draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ...
Anonymous Coward , Saturday August 29, 2015 @11:58AM ( #50415939 )
Re: Cryptic command names ( Score: 3 , Funny)

Oh look, another Powershell

[Oct 15, 2018] Check man chroot. The authors of chroot say it's useless for security

Notable quotes:
"... Noexec is basically a suggestion, not an enforcement mechanism . Just run ld /path/to/executable. ld is the loader/lilinker for elf binaries. Without ld ,you can't run bash, or ls. With ld, noexec is ignored. ..."
Oct 15, 2018 | linux.slashdot.org

raymorris ( 2726007 ) , Saturday August 29, 2015 @07:53PM ( #50418235 ) Journal

read the man page ( Score: 5 , Informative)

> In short: I think chroot is plenty good for security

Check man chroot. The authors of chroot say it's useless for security. Perhaps you think you know more than they do, and more than security professionals like myself do. Let's find out.

> you get a shell in one of my chroot's used for security, then.....
Your uid and gid are not going to be 0. Good luck telling the kernel to try and get you out.
There aren't going to be any /dev, /proc, or other special filesystems

Gonna be kind of tough to have a shell without a tty, aka /dev/*tty*

So yeah, you need /dev. Can't launch a process, including /bin/ls, without /proc, so you're going to need proc. Have a look in /proc/1. You'll see a very interesting symlink there.
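For context on that tease: every entry under /proc/<pid> includes a root symlink pointing at that process's root directory, which is presumably the symlink meant here. An illustrative session (reading another process's root link requires root or ptrace privileges):

```console
# Each /proc/<pid>/root is a symlink to that process's root directory.
# PID 1 normally runs outside any chroot, so its link exposes the real root:
$ sudo ls -l /proc/1/root
lrwxrwxrwx 1 root root 0 ... /proc/1/root -> /
```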

> mounted noexec

Noexec is basically a suggestion, not an enforcement mechanism. Just run ld /path/to/executable. ld is the loader/linker for ELF binaries. Without ld, you can't run bash or ls. With ld, noexec is ignored.

My company does IT security for banks. Meaning we show the banks how they can be hacked. When I say chroot is not a security control, I'm not guessing.

[Oct 15, 2018] Systemd moved Linux closer to Windows

And that's why it is supported by Red Hat management. Its role as middleware for containers is very questionable indeed.
Oct 15, 2018 | linux.slashdot.org

Opportunist ( 166417 ) , Monday December 11, 2017 @05:19AM ( #55714501 )

Systemd moved Linux closer to Windows ( Score: 5 , Interesting)

Windows is a very complex system. Not necessarily because it needs to be complex, but rather because of "wouldn't it be great if we could also..." thinking. Take the registry. Good idea in its core, a centralized repository for all configuration files. Great. But wouldn't it be nice if we could also store some states in there? And we could put the device database in there, too. And how about the security settings? And ...

And eventually you had the mess you have now, where we're again putting configuration files into the %appdata% directory. But when we have configuration in there already anyway, couldn't we... and we could sync this for roaming, ya know...

Which is the second MS disease. How many users actually need roaming? 2, maybe 3 out of 10? The rest is working on a stationary desktop, never moving, never roaming. But they have to have this feature, needed or not. And if you take a look through the services, you'll notice that a lot of services that you simply know you don't need MUST run because the OS needs them for some freakish reason. Because of "wouldn't it be great if this service did also...".

systemd now brought this to the Linux world. Yes, it can do a lot. But unfortunately it does so whether you need it or not. And it requires you to take these "features" into account when configuring it, even if you have exactly zero use for them and potentially wouldn't even know just wtf they're supposed to do.

systemd is as overengineered as many Windows components. And thus of course as error prone. And while it can make things more manageable for huge systems, everything becomes more convoluted and complicated for anyone that has no use for these "wouldn't it be great if it also..." features.

[Oct 15, 2018] I don't care about systemd. By the time I used systemd, the main problems were ironed out and now the system just works

A very naive and self-centered view. Systemd is an architectural blunder. In such cases the problems never cease to exist. That's the nature of the beast.
Oct 15, 2018 | linux.slashdot.org

Plus1Entropy ( 4481723 ) , Monday December 11, 2017 @02:34AM ( #55714139 )

I have no problem with systemd ( Score: 4 , Interesting)

Yeah, yeah I know the history of its development and how log files are binary and the whole debug kernel flag fiasco. And I don't care. By the time I used systemd, that had already long passed.

I switched from Squeeze to Jessie a couple years ago, had some growing pains as I learned how to use systemd... but that was it. No stability issues, no bugs. Can't say whether things run better, but they definitely don't run worse.

I had only really been using Linux for a few years before the onset of systemd, and honestly I think that's part of the problem. People who complain about systemd the most seem to have been using Linux for a very long time and just "don't want to change". Whether it's nostalgia or sunk-cost fallacy, I can't say, but beyond that it seems much more like a philosophical difference than a practical one. It just reminds me of people's refusal to use the metric system, for no better reason than that they are unfamiliar with it.

If systemd is so terrible, then why did a lot of the major distros switch over? If they didn't, it would just be a footnote in the history of open source: "Hey remember when they tried to replace sysV and init with that stupid thing with the binary log files? What was it called? SystemP?" The fact that Devuan has not overtaken Debian in any real way (at least from what I've seen, feel free to correct me if I'm wrong) indicates that my experience with systemd is the norm, not the exception. The market has spoken.

I read TFA, there is not one single specific bug or instability mentioned about systemd. What is the "tiny detail" that split the community? I have no idea, because TFA doesn't say what it is. I know that part of the philosophy behind Linux is "figuring it out yourself", but if you don't explain to me these low level kernel details (if that's even what they are; again, I have no idea), then don't expect people like me to be on your side. Linux is just a tool to me, I don't have any emotional attachment to it, so if things are working OK I am not going to start poking around under the hood just because someone posts an article claiming there are problems, but never specifying what those problems are and how they affect me as a user.

Honestly TFA reads like "We are having development problems, therefore systemd sucks." I get that when major changes to the platform happens there are going to be issues and annoyances, but that's the way software development has always been and will always be. Even if systemd was perfect there would still be all kinds of compatibility issues and new conventions that developers would have to adapt to. That's what I would expect to happen whenever any major change is made to a widely used and versatile platform like Linux.

Even Linus doesn't really care [zdnet.com]:

"I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues."

I'm not saying systemd is "better" or "the right answer". If you want to stick to distros that don't use it, that's up to you. But what I am saying is, get over it.

chaoskitty ( 11449 ) writes: < john AT sixgirls DOT org > on Monday December 11, 2017 @03:14AM ( #55714245 ) Homepage
Re:I have no problem with systemd ( Score: 5 , Insightful)

Perhaps you have had no problems with systemd because you aren't trying to use it to do much.

Lots of people, myself included, have had issues trying to get things which are trivial in pre-systemd or on other OSes to work properly and consistently on systemd. There are many, many, many examples of issues. If someone asked me for examples, I'd have a hard time deciding where to start because so many things have been gratuitously changed. If you really think there aren't examples, just read this thread.

On the other hand, I have yet to see real technical discussion about problems that systemd apparently is fixing. I honestly and openmindedly am curious about what makes systemd good, so I've tried on several occasions to find these discussions where good technical reasoning is used to explain the motivations behind systemd. If they exist, I haven't found any yet. I'm hoping some will appear as a result of this thread.

But you bring up the idea that the "market has spoken"? You do realize that a majority of users use Windows, right? And people in the United States are constantly electing politicians who directly hurt the people who vote for them more than anyone else. It's called marketing. Just because something has effective marketing doesn't mean it doesn't suck.

Plus1Entropy ( 4481723 ) writes:
Re: ( Score: 3 )

Don't let the perfect be the enemy of the good. If you have so many examples that you "don't know where to start", then start anywhere. You don't have to come at me with the best, most perfect example. Any example will do! I'm actually very interested. And I have, out of curiosity, looked a bit. But like you looking for why systemd is better, I came across a similar problem.

Your reply just continues the cycle I spoke of, where people who potentially know better than me, like you, claim there are problems bu

amorsen ( 7485 ) writes: < benny+slashdot@amorsen.dk > on Monday December 11, 2017 @04:35AM ( #55714405 )
Re:I have no problem with systemd ( Score: 5 , Insightful)

systemd fails silently if something is wrong in /etc/fstab. It just doesn't finish booting. Which is moderately annoying if you have access to the system console and can guess that an unchanged /etc/fstab from before systemd, one that worked for a while with systemd, is now suddenly toxic.

If you do not have easy access to the system console or you are not blessed with divine inspiration, that is quite a bit more than annoying. Thanks to the binary log files you cannot even boot something random and read the logs, but at least you aren't missing anything, because nothing pertinent to the error is logged anyway.

The problem is that one camp won't admit that old init is a pile of shit from the 80's whose only virtue is that the stench has faded over time, and the other camp won't admit that their new shiny toy needs to be understandable and debuggable.

A proper init system needs dependencies and service monitoring. init + monit does not cut it today. Systemd does that bit rather impressively well. It's just terrible at actually booting the system, all the early boot stuff that you could depend on old init to get right every time, or at least spit out aggressive messages about why it failed.
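The "dependencies and service monitoring" the comment credits to systemd look roughly like this in a unit file. A minimal sketch; the service name, binary path, and the postgresql dependency are made-up examples:

```ini
# /etc/systemd/system/myapp.service -- illustrative only
[Unit]
Description=Example daemon with explicit dependencies
# Ordering: start only after the network is up
After=network-online.target
Wants=network-online.target
# Hard dependency: if the database unit cannot start, neither do we
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp --foreground
# Service monitoring: restart automatically on crash
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Dependency resolution, ordering, and supervision all live in these declarative stanzas rather than in hand-written shell logic, which is the part even systemd's critics in this thread tend to concede.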

marcansoft ( 727665 ) writes: < (moc.tfosnacram) (ta) (rotceh) > on Monday December 11, 2017 @05:25AM ( #55714513 ) Homepage
Re:I have no problem with systemd ( Score: 4 , Interesting)

Meanwhile here I am, running Gentoo, with init scripts that have had real dependencies for over 15 years (as well as a bash-based but much nicer scaffolding to write them), with simple to use admin tools and fully based on text files, with cgroup-based process monitoring (these days), and I'm wondering why everyone else didn't get the memo and suddenly decided to switch to systemd instead and bring along all the other baggage it comes with. Debian and Ubuntu had garbage init systems, and yet it seems *nobody* ever took notice of how Gentoo has been doing things right for a decade and a half. You can also use systemd with Gentoo if you want, because user choice is a good thing.

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @05:05AM ( #55714467 ) Homepage
Re:I have no problem with systemd ( Score: 5 , Informative)
People who complain about systemd the most seem to have been using Linux for a very long time and just "don't want to change".

no, that's not it. people who have been using linux for a long time usually *know the corner-cases better*. in other words, they know *exactly* why it doesn't work and won't work, they know *exactly* the hell that it can and will create, under what circumstances, and they know *precisely* how they've been betrayed by the rail-roaded decisions made by distros without consulting them as to the complexities of the scenario to which they have been (successfully up until that point) deploying a GNU/Linux system.

also they've done the research - looked up systemd vs other init systems on the CVE mitre databases and gone "holy fuck".

also they've seen - perhaps even reported bugs themselves over the years - how well bugs are handled, and how reasonable and welcoming (or in some sad cases not, but generally it's ok) the developers are... then they've looked up the systemd bug database and how pottering abruptly CLOSES LEGITIMATE BUGREPORTS and they've gone "WHAT the fuck??"

also, they've been through the hell that was the "proprietary world", if they're REALLY old they've witnessed first-hand the "Unix Wars" and if they're not that old they experienced the domination of Windows through the 1990s. they know what a monoculture looks like and how dangerous that is for a computing eco-system.

in short, i have to apologise for pointing this out: they can read the danger signs far better than you can. sorry! :)

marcansoft ( 727665 ) writes: < (moc.tfosnacram) (ta) (rotceh) > on Monday December 11, 2017 @05:10AM ( #55714477 ) Homepage
Re:I have no problem with systemd ( Score: 4 , Informative)

Everyone* switched to systemd because everyone* was using something that was much, much worse. Traditional sysvinit is a joke for service startup, it can't even handle dependencies in a way that actually works reliably (sure, it works until a process fails to start or hangs, then all bets are off, and good luck keeping dependencies starting in the right order as the system changes). Upstart is a mess (with plenty of corner case bugs) and much harder to make sense of and use than systemd. I'm a much happier person writing systemd units than Upstart whatever-you-call-thems on the Ubuntu systems I have to maintain.

The problem with systemd is that although it does init systems *better* than everything else*, it's also trying to take over half a dozen more responsibilities that are none of its damn business. It's a monolithic repo, and it's trying as hard as it can to position itself as a hard dependency for every Linux system on the face of the planet. Distros needed* a new init system, and they got an attempt to take over the Linux ecosystem along with it.

* The exception is Gentoo, which for over 15 years has had an rc-script system (later rewritten as OpenRC) based on sysvinit as PID 1 but with real dependencies, easy to write initscripts, and all the features you might need in a server environment (works great for desktops too). It's the only distro that has had a truly server-worthy init system, with the right balance of features and understandability and ease of maintenance. Gentoo is the only major distro that hasn't switched to systemd, though it does offer systemd as an option for those who want it. OpenRC was proposed as a systemd alternative in the Debian talks, but Gentoo didn't advertise it, and nobody on the Debian side cared to give it a try. Interestingly Poettering seems to be *very* careful to *never, ever* mention OpenRC when he talks about how systemd is better than everything else. I wonder why. Gentoo developers have had to fork multiple things assimilated by systemd (like udev) in order to keep offering OpenRC as an option.
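For comparison, the Gentoo/OpenRC approach praised above expresses the same dependencies in a shell-flavored runscript. A minimal sketch; the daemon name and paths are hypothetical:

```sh
#!/sbin/openrc-run
# Illustrative OpenRC service script -- daemon and dependencies are
# made-up examples. OpenRC turns the depend() declarations into a start
# order, which is how Gentoo got "sysvinit with real dependencies".

command="/usr/local/bin/myapp"
command_args="--foreground"
command_background="yes"
pidfile="/run/myapp.pid"

depend() {
    need net        # hard dependency: must be up before we start
    use logger      # soft dependency: start after it if it is enabled
}
```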

[Oct 15, 2018] The role played by http://angband.pl/debian/ [angband.pl] should be absorbed into the main debian packaging, providing "Replaces / Provides / Conflicts" alternatives of pulseaudio, libcups, bsdutils, udev, util-linux, uuid-runtime, xserver-xorg and many more - all with a -nosystemd extension on the package name

Oct 15, 2018 | linux.slashdot.org

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @01:55AM ( #55714053 ) Homepage

Re:It's the implementation. ( Score: 5 , Interesting)
I don't think there's a problem with the idea of systemd. Having a standard way to handle process start-up, dependencies, failures, recovery, "contracts", etc... isn't a bad, or unique, thing -- Solaris has Service Manager, for example.

the difference is that solaris is - was - written and maintained by a single vendor. they have - had - the resources to keep it running, and you "bought in" to the sun microsystems (now oracle) way, and that was that. problems? pay oracle some money, get support... fixed.

free software is not *just* about a single way of doing things... because the single way doesn't fit absolutely *all* cases. take angstrom linux for example: an embedded version of GNU/Linux that doesn't even *have* an init system! you're expected to write your own initialisation system with hard-coded entries in /dev. why? because on an embedded system with only 32mb of RAM there *wasn't room* to run an init service.

then also we have freebsd and netbsd to consider, where security is much tighter and the team is smaller. in short: in the free software world unlike solaris there *is* no "single way" and any "single way" is guaranteed to be a nightmare pain-in-the-ass for at least somebody, somewhere.

this is what the "majority voting" that primarily debian - other distros less so because to some extent they have a narrower focus than debian - completely failed to appreciate. the "majority rule" decision-making, for all that it is blindly accepted to be "How Democracy Works" basically pissed in the faces of every debian sysadmin who has a setup that the "one true systemd way" does not suit - for whatever reason, where that reason ultimately DOES NOT MATTER, betraying an IMPLICIT trust placed by those extremely experienced users in the debian developers that you DO NOT fuck about with the underlying infrastructure without making it entirely optional.

now, it has to be said that the loss of several key debian developers, despite the incredible reasonable-ness of the way that they went about making their decision, made it clear to the whole debian team quite how badly they misjudged things: joey hess leaving with the declaration that debian's charter is a "toxic document" for example, and on that basis they have actually tried very hard to undo some of that damage.

the problem is that their efforts simply don't go far enough. udisk2, policykit, and several absolutely CRITICAL programs without which it is near flat-out impossible to run a desktop system - all gone. the only way to get those back is to add http://angband.pl/debian/ [angband.pl] to /etc/apt/sources.list and use the (often out-of-date) nosystemd recompiled versions of packages that SHOULD BE A PERMANENT PART OF DEBIAN.

in essence: whilst debian developers are getting absolutely fed up of hearing about systemd, they need to accept that the voices that tell them that there is a problem - even though those voices cannot often actually quite say precisely what is wrong - are never, ever, going to stop, UNTIL the day that the role played by http://angband.pl/debian/ [angband.pl] is absorbed into the main debian packaging, providing "Replaces / Provides / Conflicts" alternatives of pulseaudio, libcups, bsdutils, udev, util-linux, uuid-runtime, xserver-xorg and many more - all with a -nosystemd extension on the package name.

ONLY WHEN it is possible for debian users to run a debian system COMPLETELY free of everything associated with systemd - including libsystemd - will the utterly relentless voices and complaints stop, because only then, FINALLY, will people feel safer about running a debian system where there is absolutely NO possibility of harm, cost or inconvenience caused by the poisonous and utterly irresponsible attitude shown by pottering, with his blatant disregard for security, good design practices, and complete lack of respect for other peoples' valuable input by abruptly and irrationally closing extremely important bugreports. we may have been shocked that there were people who *literally* wanted to kill him, but those people did not react the way that they did, despite their inability to properly and rationally express themselves, without having a good underlying reason for doing so.

software libre is supposed to be founded on ethical principles. that's what the GPL license is actually about (the four freedoms are a reflection of ETHICAL standards). can you honestly declare that systemd has been developed - and then adopted - in a truly ethical fashion? because everything i see about systemd says the complete and absolute opposite. and that is why i won't allow it on any computers that i run - not just because technically it's an inferior design with no overall redeeming features (mass-adoption is NOT a redeeming feature, it's a monoculture-level microsoft-emulating disaster), but because its developers and its blatant rail-roaded adoption across so many distributions fundamentally violates the ethical principles on which the software libre community is *supposed* to be based.

[Oct 15, 2018] Systemd Absorbs su Command Functionality

Systemd might signify the change of generations of developers...
Notable quotes:
"... Lennart Poettering's long story short: "`su` is really a broken concept ..."
Oct 15, 2018 | linux.slashdot.org

mysidia ( 191772 ) , Saturday August 29, 2015 @11:34AM ( #50415809 )

Bullshit ( Score: 5 , Insightful)

Lennart Poettering's long story short: "`su` is really a broken concept

Declaring established concepts as broken so you can "fix" them.

Su is not a broken concept; it's a long well-established fundamental of BSD Unix/Linux. You need a shell with some commands to be run with additional privileges in the original user's context.

If you need a full login you invoke 'su -' or 'sudo bash -'

Deciding what a full login comprises is the shell's responsibility, not your init system's job.

RightwingNutjob ( 1302813 ) , Saturday August 29, 2015 @12:38PM ( #50416133 )
Re:Hang on a minute... ( Score: 5 , Interesting)

I've had a job now for about 10 years where a large fraction of the time I wear a software engineer's hat. Looking back now, I can point to a lot of design decisions in the software I work on that made me go "WTF?" when I first saw them as a young'un, but after having to contend with them for a good number of years, and thinking about how I would do them differently, I've come to the conclusion that the original WTF may be ugly and could use some polish, but the decisionmaking that produced it was fundamentally sound.

The more I hear about LP and systemd, the more it screams out that this guy just hasn't worked with Unix and Linux long enough to understand what it's used for and why it's built the way it is. His pronouncements just sound to me like an echo of my younger, stupider, self (and I just turned 30), and I can't take any of his output seriously. I really hope a critical mass of people are of the same mind with me and this guy can be made to redirect his energies somewhere where it doesn't fuck it up for the rest of us.

magamiako1 ( 1026318 ) , Saturday August 29, 2015 @01:42PM ( #50416503 )
Re:Hang on a minute... ( Score: 5 , Insightful)

Welcome to IT. Where the youngin's come in and rip up everything that was built for decades because "oh that's too complicated".

TheGratefulNet ( 143330 ) , Saturday August 29, 2015 @02:19PM ( #50416699 )
Re:Hang on a minute... ( Score: 5 , Insightful)

its the other way around. we used to have small, simple programs that did not take whole systems to build and gigs of mem to run in. things were easier to understand and concepts were not overdone a hundred times, just because 'reasons'.

now, we have software that can't be debugged well, people who are current software eng's have no attention span to fix bugs or do proper design, older guys who DO remember 'why' are no longer being hired and we can't seem to stand on our giants' shoulders anymore. again, because 'reasons'.

[Oct 15, 2018] Developers with the mentality of hobbyists are a problem; super productive developers with the mentality of a hobbyist can be a menace

I wonder how it happens that Red Hat has no developers able to form a countervailing force to Poettering and his "desktop linux" clique. Maybe because he has the implicit support of management, as Windowization of Linux is a strategic goal of Red Hat.
Looks like Poettering never used the environment modules package
Oct 15, 2018 | linux.slashdot.org

rubycodez ( 864176 ) , Saturday August 29, 2015 @11:53AM ( #50415919 )

Re:Bullshit ( Score: 4 , Interesting)

Poettering is so very wrong on many things, having a superficial and shallow understanding of why Unix is designed the way it is. He is just a hobbyist, not a hardened sys admin with years of experience. It's almost time to throw popular Linux distros in the garbage can and just go to BSD

Anonymous Coward writes:
Change for change's sake ( Score: 2 , Insightful)
he is the guy who delivers.

"Delivering" the wrong thing is not an asset, it's a liability.

And that's why Poettering is a liability to the Linux community.

0123456 ( 636235 ) , Saturday August 29, 2015 @12:24PM ( #50416057 )
Re:Bullshit ( Score: 4 , Insightful)

There are plenty of programmers who can spew out hundreds of lines of crap code in a day.

The problem is that others then have to spend years fixing it.

It's even worse when you let the code-spewers actually design the system, because you'll never be allowed to go back and redo things right.

Anonymous Coward , Saturday August 29, 2015 @12:53PM ( #50416235 )
Re:Bullshit ( Score: 5 , Insightful)

He brings new code, but brings nothing new. That's called re-inventing the wheel, and in Poettering's case, the old wheels worked better, didn't go flat as often, and were easier for average people to fix.

lucm ( 889690 ) writes:
Re: ( Score: 3 )
What we're dealing with now is something that neither "average person" nor "master geek" find easy to fix.

This is the best summary I've seen of the whole systemd thing. They try to Apple-ize linux but it's half-baked and neither more user-friendly nor more reliable than the stuff they replace.

rnturn ( 11092 ) , Saturday August 29, 2015 @04:37PM ( #50417391 )
Re:Bullshit ( Score: 4 , Insightful)
``They try to Apple-ize linux but it's half-baked and neither more user-friendly nor more reliable than the stuff they replace.

I've had the same complaint about CUPS -- Apple's screwball replacement for simple lpd -- for years. (And it's not just the Linux version that, IMHO, sucks. I recently had to live through using CUPS in an Apple shop and getting hard copy of anything was a real time sink.) I have a hard time figuring out what problem CUPS was intended to solve. All I can come up with was that it was shiny and new whereas lpd was old (but reliable). For my trusty, rock-solid HP LaserJet, I keep an old Linux distribution running so I can set it up using LPRng. A couple of lines in a text file and -- Voila! -- I have a print queue. Time spent^Wwasted in CUPS' GUI never seemed to make anything work.
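The "couple of lines in a text file" is an /etc/printcap entry. A sketch of an LPRng queue for a parallel-port LaserJet; the device node and spool directory are examples:

```
lp|laserjet|HP LaserJet:\
        :lp=/dev/lp0:\
        :sd=/var/spool/lpd/lp:\
        :mx#0:\
        :sh:
```

Here mx#0 removes the job size limit and sh suppresses banner pages; after creating the spool directory, lpr just works.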

Systemd and well, just about anything Poettering touches is more obtuse than what it replaces, has commands that are difficult to remember, require more typing (making them prone to typos), and don't make much sense. Am I looking for the status of "servicename" or am I looking for the status of "servicename.target"? What's the difference? The guy's pushing me back to Slackware. Or, as someone above mentioned, BSD.

techno-vampire ( 666512 ) , Saturday August 29, 2015 @04:02PM ( #50417183 ) Homepage
The way this should end ( Score: 4 , Insightful)

PoetteringOS

In the long run, he's not going to be satisfied until he's created his own OS, kernel and all because he calls anything he didn't write a "broken concept," whatever that is, and does his best to shove his version down everybody's throat. And, since his version is far more complex, far more pervasive and much, much harder to use or maintain, the community suffers. I do wish he would get off the pot and start developing the One True (Pottering) kernel so that the rest of the world can go back to ignoring him.

Kavonte ( 4239129 ) writes:
Re: ( Score: 3 , Interesting)

I tried a bunch of them a few years ago. I found that FreeBSD was the best one, even though it doesn't come with a GUI by default, so you have to install one afterwards. (Seems kind of ridiculous to me, but that's how they package it for some reason.) I don't know if they've changed the documentation since then, but note that you don't have to compile X11 and your window manager: there is a system that can install pre-compiled packages, which they don't bother to mention until after they tell you how

rubycodez ( 864176 ) writes:
Re: ( Score: 3 )

But there are distros based on FreeBSD, such as PC-BSD, that have the UI and other desktop features and apps canned and ready to go

Anonymous Coward , Saturday August 29, 2015 @11:55AM ( #50415925 )
Re:Bullshit ( Score: 5 , Informative)

Just like he considers exit statuses, stderr, and syslog "broken concepts." That is why systemd supports them so poorly. He just doesn't understand why those things are critical. An su system that doesn't properly log to syslog is a serious security problem.

phantomfive ( 622387 ) , Saturday August 29, 2015 @01:55PM ( #50416559 ) Journal
Re:Bullshit ( Score: 5 , Interesting)

ok, I just spent my morning researching the problem, and why the feature got built, starting from here [github.com] (linked to in the article). Essentially, the timeline goes like this:

1) On Linux, the su command uses PAM to manage logins (that's probably ok).
2) systemd wrote their own version of PAM (because containers)
3) Unlike normal su, the systemd-pam su doesn't transfer over all environment variables, which led to:
4) A bug filed by a user, that the XDG_RUNTIME_DIR variable wasn't being maintained when su was run.
5) Lennart said that's because su is confusing, and he wouldn't fix it.
6) The user asked for a feature request to be added to machinectl, that would retain that environment variable
7) Lennart said, "sure, no problem." (Which shows why systemd is gaining usage, when people want a feature, he adds it)

It's important to note that there isn't a conspiracy here to destroy su. The process would more accurately be called "design by feature accretion," which doesn't really make you feel better, but it's not malice.
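The symptom in step 4 is easy to reproduce in miniature without any systemd involvement. This is a hypothetical sketch (the uid and path are made up for illustration): XDG_RUNTIME_DIR is set per user at login, typically to /run/user/&lt;uid&gt;, and any environment-preserving user switch simply carries the caller's value along:

```shell
# Hypothetical sketch of the bug in step 4: XDG_RUNTIME_DIR is per-user
# (normally /run/user/<uid>, set at login), but an environment-preserving
# switch like plain `su` carries the caller's value into the new
# user's session unchanged.
export XDG_RUNTIME_DIR="/run/user/1000"     # as set at login for uid 1000

# A child shell that inherits the environment (as plain `su` does)
# still sees the old user's runtime dir:
sh -c 'echo "child sees: $XDG_RUNTIME_DIR"'   # prints: child sees: /run/user/1000
```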

gweihir ( 88907 ) , Saturday August 29, 2015 @12:35PM ( #50416121 )
Re:Bullshit ( Score: 5 , Insightful)
Deciding what a full login comprises is the shell's responsibility, not your init system's job.

And certainly not the job of one Poettering, who still has not produced one piece of good software in his life.

Anonymous Coward , Saturday August 29, 2015 @12:27PM ( #50416069 )
Re:Bullshit ( Score: 5 , Interesting)
If you want a FULL shell
Oh I dont know 'su bash' usually works pretty fng good...

It does if you are fine with getting only root privilege, without the FULL environment of root. But if you need to make sure you have the FULL root environment -- first discarding anything from the calling user and then executing the root user's environment (/etc/profile etc.) -- you had better use "su - bash" or "sudo -i". Compare what you get both ways, "su bash" vs "su - bash", by running the "set" and "env" commands.

Failing to have the FULL root environment can have security implications (umask, wrong path, wrong path order, ...) which may or may not be critical depending on what system you are operating and for whom. Also, some commands may fail or misbehave just because of path differences etc.

The above is trivial information and should be clear without further explanation to anyone running *nix systems for someone else as part of a job, i.e. anyone working professionally in the field. In case you don't, it's still useful information to learn about administering the platform you happen to use.
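The difference the poster describes can be demonstrated without root, using `env -i` to mimic the environment-scrubbing that `su -` performs (the variable name here is hypothetical, chosen for illustration):

```shell
# Sketch of the `su bash` vs `su - bash` difference, no root needed.
# Plain `su` keeps the caller's environment; `su -` discards it and
# rebuilds a login environment from /etc/profile etc. `env -i` gives
# the same scrub-then-start-fresh effect for demonstration purposes.
export CALLER_VAR="leaks-into-child"

# Non-login style (like `su bash`): the caller's variable leaks through.
sh -c 'echo "non-login: ${CALLER_VAR:-<unset>}"'        # prints: non-login: leaks-into-child

# Login style (like `su - bash`): the environment is wiped first.
env -i sh -c 'echo "login: ${CALLER_VAR:-<unset>}"'     # prints: login: <unset>
```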

[Oct 15, 2018] The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components.

Notable quotes:
"... The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components. ..."
"... My problem with this is that once a distribution has adopted systemd, they have to basically just accept whatever crap is shovelled out in the subsequent systemd releases--it's all or nothing and once you're on the train you can't get off it. This was absolutely obvious years ago. Quality software engineering and a solid base system walked out of the door when systemd arrived; I certainly did. ..."
Oct 15, 2018 | linux.slashdot.org

wnfJv8eC ( 1579097 ) , Saturday August 29, 2015 @12:43PM ( #50416173 )

Thinking about leaving any systemd linux behind ( Score: 5 , Insightful)

I am really tired of systemd. So really tired of the developers shoving that shit down the linux throat. It's not pretty, it seems to grow out of control, taking on more and more responsibility... I don't even have an idea how to look at my logs anymore. Nor how to clear the damn things out! Adding toolkits should leave the system as clear to understand as it was, not make it more complex. If it gets any worse it might as well be Windows 10! init was easy to understand, easy to use. syslog was easy to read, easy to understand, and easy to clear. All this bull about "it's a faster startup" is just... well, bull. I'm using a computer 20 times faster than I was a decade ago. You think 20 seconds off a minute startup is an achievement? It's seconds on a couple of days' uptime; big f*cking deal. Redhat, Fedora, turn away from the light and return to your roots!
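For readers in the same boat, the journald equivalents do exist; these invocations are taken from the journalctl(1) options (the unit name is only an example):

```shell
journalctl -u sshd.service        # view logs for one service
journalctl -b                     # view logs since the current boot
journalctl --vacuum-time=2weeks   # clear journal entries older than two weeks
journalctl --vacuum-size=500M     # or cap the journal at a total size
```

These commands obviously require a system running journald, so treat this as a reference sketch rather than something to paste blindly.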

rl117 ( 110595 ) writes: < rleigh@[ ]elibre.net ['cod' in gap] > on Saturday August 29, 2015 @03:57PM ( #50417157 ) Homepage
Re:What path have we chosen? ( Score: 5 , Interesting)

I can't speak for any distribution, after quitting as a Debian developer some months back, for several reasons one of which was systemd. But speaking for myself, it was quite clear during the several years of "debate" (i.e. flamewars) over systemd that this was the inevitable outcome. The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components.

My problem with this is that once a distribution has adopted systemd, they have to basically just accept whatever crap is shovelled out in the subsequent systemd releases--it's all or nothing and once you're on the train you can't get off it. This was absolutely obvious years ago. Quality software engineering and a solid base system walked out of the door when systemd arrived; I certainly did.

When I commit to a system such as a Linux distribution like Debian, I'm making an investment of my time and effort to use it. I do want to be able to rely on future releases being sane and not too radical a departure from previous releases--I am after all basing my work and livelihood upon it. With systemd, I don't know what I'm going to get with future versions and being able to rely on the distribution being usable and reliable in the future is now an unknown. That's why I got off this particular train before the jessie release. After 18 years, that wasn't an easy decision to make, but I still think it was the right one. And yes, I'm one of the people who moved to FreeBSD. Not because I wanted to move from Debian after having invested so much into it personally, but because I was forced to by this stupidity. And FreeBSD is a good solid dose of sanity.

[Oct 15, 2018] Ever stop and ask why Red Hat executives support systemd?

That does not prevent Oracle from copying it, does it?
Oct 15, 2018 | linux.slashdot.org

walterbyrd ( 182728 ) , Saturday August 29, 2015 @11:09PM ( #50418815 )

Ever stop and ask why? ( Score: 5 , Insightful)

This has been going on for years, and has years more to go. This is a long term strategy.

But why?

Why has Red Hat been replacing standard Linux components with Red Hat components, when the Red Hat stuff is worse?

Why isn't systemd optional? It is just an init replacement, right? Why does Red Hat care which init you use?

Why is systemd being tied to so many other components?

Why binary logging? Who asked for that?

Why throw away POSIX, and the entire UNIX philosophy? Clearly you do not have to do that just to replace init.

Why does Red Hat instantly berate anybody who does not like systemd? Why the barrage of ad hominem attacks on systemd critics?

I think there is only one logical answer to all of those questions, and it's glaringly obvious.

[Oct 15, 2018] Actually, the 'magic' in su is in the kernel. Basically, since it's marked suid root, the kernel sets the uid on the new process to root before it even starts running. The program itself just then decides if it is willing to do anything for you.

Oct 15, 2018 | linux.slashdot.org

sjames ( 1099 ) , Saturday August 29, 2015 @12:55PM ( #50416253 ) Homepage Journal

Re:Is it April 1st already? ( Score: 2 )

Actually, the 'magic' in su is in the kernel. Basically, since it's marked suid root, the kernel sets the uid on the new process to root before it even starts running. The program itself just then decides if it is willing to do anything for you.
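The mechanism sjames describes is visible in the file mode: the setuid bit (octal 04000). A small sketch that sets and inspects the bit on a scratch file -- no root needed, since setting the bit on a file you own is allowed; it only becomes privileged when the owner is root, as with /bin/su:

```shell
# Demonstrate the setuid bit sjames is describing. When the kernel
# exec()s a file whose mode has the 04000 bit set, it makes the new
# process's effective uid the file's *owner* before a single
# instruction runs; the program itself then decides what to permit.
tmpf=$(mktemp)
chmod 4755 "$tmpf"             # 4xxx = setuid bit, as on /bin/su
ls -l "$tmpf" | cut -c1-10     # prints: -rwsr-xr-x  ('s' marks setuid)
rm -f "$tmpf"
```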

rubycodez ( 864176 ) , Saturday August 29, 2015 @11:55AM ( #50415923 )
Re:BSD is looking better all the time ( Score: 2 , Insightful)

That's what Poettering has been doing his whole life, getting into good open source projects, squatting and then shitting all over them. The infection, stink and filth then linger for decades. He's a cancer on open source.

0123456 ( 636235 ) , Saturday August 29, 2015 @12:25PM ( #50416063 )
Re:BSD is looking better all the time ( Score: 5 , Insightful)
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

Where 'modernize' is a codeword for 'shit all over'.

el_chicano ( 36361 ) writes:
Re: ( Score: 2 )
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

I can see that as being one of his goals, but if you want to improve Linux, why a new init system plus everything else? I did not hear any system admins asking for this.

He would be considered a saint if he would do something useful like fix the desktop environments so the "Year of the Linux Desktop" finally gets here.

phantomfive ( 622387 ) , Saturday August 29, 2015 @12:35PM ( #50416117 ) Journal
Re:BSD is looking better all the time ( Score: 5 , Insightful)
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

Yeah, that's true. He sees features people want, and he builds them. For example, Debian distro builders were frustrated writing init scripts, so Poettering made something that filled the need of those distro builders [slashdot.org]. That's why it got adopted, because it contained features they wanted.

The problem of course is that he doesn't understand the Unix way [catb.org], especially when it comes to good interfaces between code [slashdot.org] (IMNSHO).

The people who like systemd tend to like the features.......the people who dislike it, the architecture.
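For comparison, the kind of declarative unit that replaced those hand-written init scripts looks roughly like this -- a minimal sketch based on the systemd.service(5) directives, where the daemon name and path are hypothetical:

```ini
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

This declarative form is the "feature distro builders wanted": no boilerplate start/stop/status shell code, with dependency ordering handled by the manager.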

RabidReindeer ( 2625839 ) writes:
Re: ( Score: 3 )

I had trouble with init scripts. The systemd init subsystem was a better approach. The problem was, systemd also brought in a lot of stuff that wasn't directly part of the init subsystem that I didn't want, don't want, and don't see any probability of ever wanting.

Because Poettering doesn't understand "modular", I don't get just the good stuff - it's all or nothing. And because systemd isn't even modular, just an overgrown bloated monstrosity, the only way to avoid it is to either run old distros or some other

phantomfive ( 622387 ) , Saturday August 29, 2015 @02:32PM ( #50416759 ) Journal
Re:BSD is looking better all the time ( Score: 4 , Insightful)
I had trouble with init scripts. The systemd init subsystem was a better approach. The problem was, systemd also brought in a lot of stuff that wasn't directly part of the init subsystem that I didn't want, don't want, and don't see any probability of ever wanting.

Yeah, that's basically the problem. Systemd is really three different things:

1) init system
2) cgroups manager (cgroups architecture is still crap, btw)
3) session manager

It probably does more stuff, but it's hard to keep track of it all

ezakimak ( 160186 ) , Saturday August 29, 2015 @02:16PM ( #50416681 )
Re:BSD is looking better all the time ( Score: 5 , Informative)

OpenRC++

openrc init scripts are fairly straightforward.
Coupled with Gentoo's baselayout, the config file layout is fairly normalized as well.
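For readers who haven't seen one, a minimal OpenRC service script has roughly this shape (declarations per openrc-run(8); the daemon name and paths are hypothetical):

```shell
#!/sbin/openrc-run
# Minimal OpenRC service sketch; openrc-run supplies the start/stop
# logic from these declarations, so the script stays tiny.
command="/usr/local/bin/mydaemon"
command_args="--foreground"
pidfile="/run/mydaemon.pid"

depend() {
    need net          # start only after networking is up
    use logger        # prefer, but don't require, a logger
}
```

Note this is declarative like a systemd unit, but stays plain shell, which is part of its appeal to the commenters above.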

Electricity Likes Me ( 1098643 ) writes:
Re: ( Score: 3 )

Yes and init scripts are just a bastion of race-free stateful design, and service monitoring. Except not at all those things.

menkhaura ( 103150 ) writes: < espinafre@gmail.com > on Saturday August 29, 2015 @12:39PM ( #50416147 ) Homepage Journal
Re:BSD is looking better all the time ( Score: 5 , Insightful)

Please remember devuan (http://www.devuan.org), a Debian fork which aims to do away with systemd and all that bullcrap. It's picking up steam, and I believe things like these make it more and more worth it to help the new fork.

[Oct 15, 2018] There are two types of German engineering. Good engineering and over engineering. And there is a fine line between them. And it looks like Mr. Poettering crossed it

The RHEL 7 story looks more and more like the Windows 10 story.
Oct 15, 2018 | linux.slashdot.org

prefec2 ( 875483 ) , Saturday August 29, 2015 @12:39PM ( #50416149 )

Strange path he is taking ( Score: 2 )

First of all, there are two types of German engineering: good engineering and over-engineering. And there is a fine line between them. And it looks like Mr. Poettering crossed it. However, it could also be German advertising, and that is either bad or worse. In general, you do not build bloated components. In old Unix days these were called programs and could be combined in various ways, including pipes and files. In GNU days many of these programs were bundled together in one archive, but stayed separate.

Now with systemd I am puzzled: is he really integrating that thing into the init system?

Integrating something which does not belong in an init system? In that case he is nuts and definitely over-engineering. Or he has just created a new program and merely bundles it in the same package as systemd.

Then this is acceptable, however, a little weird. It would be like bundling systemd with a sound service.

Session separation or VM separation is a task of the operating system. And you may write any number of tools to call the necessary OS functions, but PLEASE keep them out of components which have nothing to do with that.

[Oct 15, 2018] The rumours that vi will become part of systemd are groundless, comrade. Anyone who suggests such a thing is guilty of agitation and propaganda, and will be sent to the re-education camps.

Oct 15, 2018 | linux.slashdot.org

0123456 ( 636235 ) , Saturday August 29, 2015 @12:31PM ( #50416085 )

Re:Embrace, Extend, Extinguish ( Score: 2 )
The feature creep will be fast and merciless, but I'm just a systemd "hater", right?

The rumours that vi will become part of systemd are groundless, comrade. Anyone who suggests such a thing is guilty of agitation and propaganda, and will be sent to the re-education camps.

[Oct 15, 2018] Doing everything as systemd does, and adding 'su', is likely a new security threat

Oct 15, 2018 | linux.slashdot.org

slashways ( 4172247 ) , Saturday August 29, 2015 @11:42AM ( #50415845 )

Security ( Score: 5 , Insightful)

Doing everything as systemd does, and adding 'su', is likely a new security threat.

ThorGod ( 456163 ) writes:
Re: ( Score: 2 )

That's a pretty good point I think

Microlith ( 54737 ) writes:
Re: ( Score: 3 , Interesting)

No offense, but I see lots of attacks like this on systemd. Can you explain how it is "likely a new security threat" or is it simply FUD?

phantomfive ( 622387 ) , Saturday August 29, 2015 @12:42PM ( #50416169 ) Journal
Re:Security ( Score: 5 , Insightful)
Can you explain how it is "likely a new security threat" or is it simply FUD?

Bruce Schneier (in Cryptography Engineering ) pointed out that to keep something secure, you need to keep it simple (because exploits hide in complexity). When you have a large, complex, system that does a lot of different things, there's a high chance that there are security flaws. If you go to DefCon, speakers will actually say that one of the things they look for when doing 'security research' is a large, complex interface.

So that's the reason. When you see a large complex system running as root, it means hackers will be root.

phantomfive ( 622387 ) , Saturday August 29, 2015 @11:42AM ( #50415853 ) Journal
quality engineering ( Score: 4 , Insightful)

There is no reason the creation of privileged sessions should depend on a particular init system. It's fairly obvious that is a bad idea from a software design perspective. The only architectural reason to build it like that is because so many distros already include systemd, so they don't have to worry about getting people to adopt this (incidentally, that's the same reason Microsoft tried to deeply embed the browser in their OS.....remember active desktop?)

If there are any systemd fans out there, I would love to hear them justify this from an architectural perspective.

QuietLagoon ( 813062 ) , Saturday August 29, 2015 @12:10PM ( #50415985 )
Re:quality engineering ( Score: 5 , Insightful)

Poettering is following the philosophy that has created nearly every piece of bloated software in existence today: the design is not complete unless there is nothing more that can be added. Bloated software feeds upon a constant influx of new features, regardless of whether those new features are appropriate or not. They are new, therefore they are justified.

You know you have achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away.
-- Antoine de Saint-Exupery

penguinoid ( 724646 ) , Saturday August 29, 2015 @11:51AM ( #50415899 ) Homepage Journal
Upgrade ( Score: 5 , Funny)

You should replace it with the fu command.

QuietLagoon ( 813062 ) , Saturday August 29, 2015 @12:02PM ( #50415953 )
systemd is a broken concept ( Score: 5 , Insightful)

... Lennart Poettering's long story short: "`su` is really a broken concept. ...

So every command that Poettering thinks may be broken is added to the already bloated systemd?

How long before there is nothing left to GNU/Linux besides the Linux kernel and systemd?

Anonymous Coward writes:
Re: ( Score: 3 , Insightful)

I'd just like to interject for a moment. What you're referring to as GNU/Linux is, in fact, Systemd/Linux, or as I've recently taken to calling it, Systemd plus Linux. GNU is not a modern userland unto itself, but rather another free component of a fully functioning Linux system that needs to be replaced by a shitty nonfunctional init system, broken logging system, and half-assed vital system components comprising a fully broken OS as defined by Lennart Poettering.

Many computer users run a version of the Syste

tlambert ( 566799 ) , Saturday August 29, 2015 @12:23PM ( #50416047 )
I, for one, welcome this addition... ( Score: 5 , Insightful)

I, for one, welcome this addition... every privilege escalation path you add is good for literally years of paid contract work.

Anonymous Coward , Saturday August 29, 2015 @12:11PM ( #50415997 )
Seems like a 'while they were at it' sort of thing ( Score: 2 , Interesting)

So systemd has the ambition of being a container and VM management infrastructure (I have no idea how this should make sense for VMs, though.)

machinectl shell looks to be designed to be some way to attach to a container environment with an interactive shell, without said container needing to do anything to provide such a way in. While they were at the task of doing that not too terribly unreasonable thing, they did the same function for what they call '.host', essentially meaning they can use the same syntax for current container context as guest contexts. A bit superfluous, but so trivial as not to raise any additional eyebrows (at least until Lennart did his usual thing and stated one of the most straightforward, least troublesome parts of UNIX is hopelessly broken and the world desperately needed his precious answer). In short, systemd can have their little 'su' so long as no one proposes removal of su or sudo or making them wrappers over the new and 'improved' systemd behavior.

Funnily enough, they used sudo in the article talking about how awesome an idea this is... I am amused.


butlerm ( 3112 ) , Saturday August 29, 2015 @12:25PM ( #50416059 )
Only incidentally similar to su ( Score: 5 , Informative)

machinectl shell is only incidentally similar to su. Its primary purpose is to establish an su-like session on a different container or VM. Systemd refers to these as 'machines', hence the name machinectl.

http://www.freedesktop.org/sof... [freedesktop.org]

su cannot and does not do that sort of thing. machinectl shell is more like a variant of rsh than a replacement for su.
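The invocation syntax butlerm is describing looks like this, per the machinectl(1) `shell` verb as documented around systemd v225 (the user and container names are hypothetical; verify against your systemd version):

```shell
machinectl shell                              # shell on the local host ('.host')
machinectl shell alice@.host                  # shell as user 'alice' on the host
machinectl shell root@webcontainer            # shell inside container 'webcontainer'
machinectl shell root@webcontainer /bin/ls -l # run a single command there
```

The `.host` form is the part that overlaps with su; the container forms are what su cannot do.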

LVSlushdat ( 854194 ) , Saturday August 29, 2015 @04:09PM ( #50417233 )
Re:What path have we chosen? ( Score: 4 , Informative)

I currently run Ubuntu 14.04, and see where part of systemd has already begun its encroachment on what *had* been a great Linux distro. My only actual full-on experience so far with systemd is trying to get VirtualBox guest additions installed on a CentOS 7 VM... I've installed those additions countless times since I started using VBox, and I think I could almost do the install in my sleep... Not so with CentOS 7: systemd bitches loudly with strange "errors", and when it tells me to use journalctl to see what the error was, there *is* no error. But still the additions don't install... I'm soooooo NOT looking forward to the next LTS out of Ubuntu, which I'm told will be infested with this systemd crap... Guess it's time to dust off the old Slackware DVD and get acquainted with Pat again... GO FUCK YOURSELF, POETTERING.....

rl117 ( 110595 ) writes: < rleigh@[ ]elibre.net ['cod' in gap] > on Saturday August 29, 2015 @04:47PM ( #50417429 ) Homepage
Re:What path have we chosen? ( Score: 4 , Informative)

The main thing I noticed with Ubuntu 15.04 at work is that rather than startup becoming faster and more deterministic as claimed, it's actually slower and randomly fails due to what looks like some race condition, requiring me to reset the machine. So the general experience is "meh", plus annoyance that it's actually degraded the reliability of booting.

I also suffered from the "we won't allow you to boot if your fstab contains an unmountable filesystem" behaviour. I reformatted an ext4 filesystem as NTFS to accomplish some work task on Windows; this really shouldn't be a reason to refuse to start up. I know the justification for doing this, and I think it's as bogus as the first time I saw it. I want my systems to boot, not hang on a technicality because the configuration or system wasn't "perfect" -- i.e. a bit of realism and pragmatism rather than absolutist perfectionism, like we used to have when people like me wrote the init scripts.
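For what it's worth, the stock answer to exactly this complaint is the `nofail` mount option, which tells systemd not to block booting on that entry. A hedged fstab sketch (fields per fstab(5); the device and mountpoint are hypothetical):

```
# <device>   <mountpoint>  <type>  <options>                                      <dump> <pass>
/dev/sdb1    /mnt/scratch  ntfs    defaults,nofail,x-systemd.device-timeout=10s   0      0
```

`x-systemd.device-timeout=` caps how long boot waits for the device before giving up; without `nofail`, the default behaviour is the boot-to-emergency-shell hang described above.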

PPH ( 736903 ) , Saturday August 29, 2015 @01:30PM ( #50416429 )
Fully isolated? ( Score: 5 , Interesting)

I just skimmed TFA (Poettering's ramblings really don't make much sense anyway). By "fully isolated", it sounds like machinectl breaks the audit trail that su has always supported (not being 'fully isolated' by design). Many *NIX systems are configured to prohibit root logins from anything other than the system console. And the reason that su doesn't do a 'full login' either as root or as another user is to maintain the audit trail of who (which system user) is actually running what.

Lennart, this UNIX/Linux stuff appears to be way over your head. Sure, it seems neat for lots of gamers who can't be bothered with security and just want all the machine cycles for rendering FPS games. Perhaps you'd be better off playing with an Xbox.

alvieboy ( 61292 ) , Saturday August 29, 2015 @02:49PM ( #50416849 ) Homepage
What about sandwiches ? ( Score: 3 )

So, now we have to say "machinectl shell systemd-run do make me a sandwich" ?

Looks way more complicated.

https://xkcd.com/149/ [xkcd.com]

lucm ( 889690 ) , Saturday August 29, 2015 @04:03PM ( #50417185 )
Fountainhead anyone? ( Score: 3 )

This systemd guy is just like Ellsworth Toohey. As long as the sheep follow he'll keep pushing things further and further into idiotland and have a good laugh in the process.

"Kill man's sense of values. Kill his capacity to recognise greatness or to achieve it. Great men can't be ruled. We don't want any great men. Don't deny conception of greatness. Destroy it from within. The great is the rare, the difficult, the exceptional. Set up standards of achievement open to all, to the least, to the most inept – and you stop the impetus to effort in men, great or small. You stop all incentive to improvement, to excellence, to perfection. Laugh at Roark and hold Peter Keating as a great architect. You've destroyed architecture. Build Lois Cook and you've destroyed literature. Hail Ike and you've destroyed the theatre. Glorify Lancelot Clankey and you've destroyed the press. Don't set out to raze all shrines – you'll frighten men, Enshrine mediocrity - and the shrines are razed."

-- Ellsworth Toohey

[Oct 15, 2018] The importance of Devuan by Nico Schottelius

Without strong corporate support it is difficult to develop and maintain a distribution. Red Hat, Oracle, SUSE, and Ubuntu are vendors that have some money, which means that now Ubuntu indirectly supports Debian with systemd. So what will happen with Devuan in 10 years is unclear. Hopefully it will survive.
Notable quotes:
"... Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! ..."
"... Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same now everywhere; it is no longer necessary to look for the keyhole. ..."
"... Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car. ..."
Dec 10, 2017 | blog.ungleich.ch
Good morning,

my name is Nico Schottelius, I am the CEO of ungleich glarus ltd . It is a beautiful Sunday morning with the house surrounded by a meter of snow. A perfect time to write about the importance of Devuan to us, but also for the future of Linux.

But first, let me put some warning out here: Dear Devuan friends, while I honor your work, I also have to be very honest with you: in theory, you should not have done this. Looking at creating Devuan, which means splitting off Debian, economically you caused approximately infinite cost. Additional maintenance on top of what is being done in Debian already, plus the work spent to clean packages of their systemd dependencies, PLUS causing headaches for everyone else: Should I use Debian? Is it required to use Devuan? What are the advantages of either? Looking at it from a user point of view, you added "a second, almost equal option". That's horrible!

Think of it in real world terms: You are in a supermarket and there is a new variant of a product that you used to buy (let it be razor blades, toilet paper rolls, whiskey, you name it). Instead of instantly buying what you used to buy, you might spend minutes staring at both options, comparing and in the end still being unable to properly choose, because both options are TOO SIMILAR. Yes, dear Devuan community, you have to admit it, you caused this cost for every potential Linux user.

For those who have read until here and actually wonder, why systemd is considered to be a problem, let me give you a real world analogy:

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same now everywhere; it is no longer necessary to look for the keyhole.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

Some of you might ask yourselves now "Is systemd THAT bad?". And my answer to it is: No. It is even worse. Systemd developers split the community over a tiny detail that decreases stability significantly and increases complexity for not much real value. And this is not theoretical: We tried to build Data Center Light on Debian and Ubuntu, but servers that don't boot, that don't reboot or systemd-resolved that constantly interferes with our core network configuration made it too expensive to run Debian or Ubuntu.

Yes, you read right: too expensive. While I am writing here in flowery words, the reason to use Devuan is hard calculated costs. We are a small team at ungleich and we simply don't have the time to fix problems caused by systemd on a daily basis. This is even without calculating the security risks that come with systemd. Our objective is to create a great, easy-to-use platform for VM hosting, not to walk a tightrope.

So, coming back to the original title of this essay: the importance of Devuan. Yes, the Devuan community creates infinite economic costs, but it is not their fault. Creating Devuan is simply a counteraction to ensure Linux stays stable, which is of high importance for a lot of people.

Yes, you read right: what the Devuan developers are doing is creating stability. Think about it not in terms of a few recurring systemd bugs, or the insecurity caused by a huge, monolithic piece of software running with root privileges. Why do people favor Linux on servers over Windows? It is very simple: people avoid Windows on servers because it is too complex, too error-prone and not suitable as a stable basis. Read that again. This is exactly what systemd introduces into Linux: error-prone complexity and instability.

With systemd, the main advantage of using Linux is gone.

So what is the importance of Devuan? It is not only crucial to Linux users, but to everyone who is running servers. Or rockets. Or watches. Or anything that actually depends on a stable operating system.

Thus I would like to urge every reader who has made it this far: do what we do:

Support Devuan.

Support the future with stability.

[Oct 15, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Oct 14, 2018 | linux.slashdot.org

nightfire-unique ( 253895 ) , Monday December 11, 2017 @12:45AM ( #55713823 )

Problems with Linux that should have been solved ( Score: 5 , Insightful)

Here's a list of actual problems that should have been solved instead of introducing the nightmare of systemd upon the Linux (Debian specifically) world:

My $0.02, as a 25-year Linux admin.

gweihir ( 88907 ) , Monday December 11, 2017 @01:56AM ( #55714055 )
Re:Problems with Linux that should have been solve ( Score: 5 , Insightful)

I disagree on SELinux, not because its interface is well-designed (it is not), but because it is needed for some things.

On the rest, I fully agree. And instead, systemd solves things that were already solved and does it badly. The amount of stupidity in that decision is staggering.

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Monday December 11, 2017 @10:03AM ( #55715515 ) Homepage Journal
Re:Problems with Linux that should have been solve ( Score: 5 , Informative)
I really struggle to reconcile the Slashdot view that systemd is total crap and the fact that every major Linux distro has switched to it.

The Linux ecosystem is not sane . Redhat wanted more control of Linux so they pushed systemd. GNOME developers are easily distracted by shiny things (as proof I submit GNOME 3) so they went ahead and made GNOME dependent on it. And then Debian (which most Linux distributions are based upon) adopted systemd because GNOME depended on it.

There were some other excuses, but that's the biggest reason. You can blame Redhat and Debian for this clusterfuck, and really, only a small handful of people in the Debian community are actually responsible for Debian's involvement.

Debian's leaders were split almost down the middle on whether they should go to systemd. This is why major changes should require a 2/3 vote (or more!)

phantomfive ( 622387 ) , Monday December 11, 2017 @05:43AM ( #55714555 ) Journal
Re:Problems with Linux that should have been solve ( Score: 5 , Informative)
Can I ask, why don't you and other admins/devs like you start to contribute to systemd?

Lennart Poettering has specifically said that he will not accept many important kinds of patches, for example he refuses to merge any patch that improves cross-platform compatibility.

And what's the reason, because people on forums are complaining? Because binary log files break the UNIX philosophy?

Here is my analysis of systemd, spread across multiple posts (links towards the bottom) [slashdot.org]. It's poorly written software (the interfaces are bad; you can read through my links for more explanation), and that will only get worse if an effort isn't made to isolate it. This is basic system architecture.

silentcoder ( 1241496 ) , Monday December 11, 2017 @06:02AM ( #55714599 )
Re:Problems with Linux that should have been solve ( Score: 5 , Insightful)

>To me, the fact that the major distros have adopted systemd is strong evidence that it is probably better

"Better" is a subjective term. Software (and any product, really) does not have some absolute, measurable utility. Its utility is specific to an audience. The fact that the major distros switched is probably strong evidence that systemd is "better" for distro developers. But the utility it brings them may not apply to all users, or even to any particular user.
A big part of the reason people were upset was exactly that: the key reasons distros had for switching were benefits to the people building distros, benefits which subsequent users would never experience. These should not have trumped the user experience.

All that would still have been fine - we could easily have ended up with a world that had systemd for those who wanted it, and didn't have it for those who didn't want it. Linux systems are supposed to be flexible enough that you can set them up to whatever purpose you desire.

So where the real anger came in was systemd's obsessive feature creep, which took it into lots of areas that have nothing to do with its supposed purpose (boot process management). In that area its biggest advantages are only useful to people building distributions, who have to maintain thousands of packages and ensure they reliably handle their bootup requirements regardless of what combination of them is actually installed; systemd genuinely did make that easier for them, but no user or admin ever experiences that scenario. Even that feature creep wasn't the real issue, though. The issue was that, as it entered all these unrelated areas (login was the first of many), it broke compatibility with the existing software for those jobs. This meant that if you built a system to support systemd, that same system could not use any alternatives. So now you had to create hard dependencies on systemd to support it at all: for distros to gain those benefits, they had to remove the capacity for anybody to forgo them, or alternatively provide two versions of every package, even ones that never touch the boot process and get no benefit from systemd's changes there.

And the trouble is that in none of those other areas has it offered anything of significant value to anybody. Logind doesn't actually do anything that good old login didn't do anyway, but it's incompatible, so a distro that compiles its packages around logind can't work with anything else. Replacing the process handler not only added no new functionality, it broke some existing functionality (granted, in rarer edge cases, but there was no reason for any breakage at all, because these were long-solved problems).

Many years ago, I worked as a unix admin for a company that developed for lots of different target unix systems. As such, I had to maintain test environments running all the targets. I had several Linux systems running about 5 different distros, I had Solaris boxes with every version from 8 onwards (yep, actual Sparcs), I had IBMs running AIX, I even had two original (by then 30 year old) DEC Alphas running Tru64... and I had several HPUX boxes.

At the time, while adminning all these disparate unix environments on a day-to-day basis and learning all their various issues and problems, I came to announce routinely that pre-Version-10 Solaris had the worst init system in the world to admin, but the worst Unix in the world was definitely HPUX, because HPUX was the only Unix where I could not, with absolute certainty, know that if I kill -9 a process, that process would definitely be gone. Wiped out of memory and the process table with absolutely no regard for anything else; it's a nuclear option, and it's supposed to work that way, because sometimes that's what you need to keep things running.
SystemD brought to Linux an init system that replicated everything I used to hate about the Solaris 8/9 init system. But worse than that, it brought the one breakage that got me to declare HPUX the absolute worst unix system in history: it made kill -9 less than one hundred percent absolutely, infallibly reliable (nothing less than perfect is good enough, because perfect HAS been achieved here; in fact, outside of HPUX and SystemD, no other Unix system has ever had anything LESS than absolute perfection on this one).

I absolutely despise it. And yet I'm running systemd systems, both professionally and at home, because I'm a grown man now, I have other responsibilities, I don't want to spend all my time working, and even my home playing-with-the-computer time is limited, so I want to focus on interesting stuff. There is simply not enough time for the amount of effort required to use a non-niche distro. I don't have the time to custom-build the many pieces of software the small distros simply don't have packages for, and to deal with the integration issues of not using proper distro-built-and-tested packages.
I live with systemd. I tolerate it. It's not an unsurvivable train smash, but I still hate it. It still makes my life harder than it used to be.
It makes my job more difficult and time-consuming. It makes my personal ventures more complicated and annoying. It adds no value whatsoever to my life (seriously: who reboots a Linux system often enough to CARE about boot time? You only DO that when you have a security patch for the kernel or glibc; anything else is a soft restart). It just adds hassle and extra effort. The best thing I can say about it is that it adds LESS extra effort than avoiding it does, but that's not because it's superior in any way; it's because it's taken over every distro with a decent-sized package repository that isn't "built by hand" like Arch or Gentoo.

silentcoder ( 1241496 ) , Monday December 11, 2017 @08:49AM ( #55715147 )
Re: Problems with Linux that should have been solv ( Score: 4 , Insightful)

Tough question. It depends what that functionality is. Compatibility is valuable, but sometimes it must be sacrificed to deal with technical debt or make genuine progress. Even Microsoft had a huge compatibility break with Vista, which was needed at the time (even if Vista itself was atrocious).
It would depend on what those features were and what benefits they gave me. It would be a trade-off and should be evaluated as such. A major sacrifice requires an even more major advantage to be worthwhile. I've yet to see any such advantage from anything systemd has added. I'm not saying advantages don't exist; I'm saying whatever they may be, they do not benefit me, personally, in any measurable way. The disadvantages, however, do, and compatibility is the least of them.
Config outside /etc is a major deal: it utterly breaks a standard around which disk space allocation is done professionally. /usr ought not even need backups, because everything there is supposed to be installed and never hand-edited. It means modifying backup strategy, which is a big, very risky change. Logs aren't where I expect them. Boot errors flash on screen and disappear before you can read them, so you have to remember to go look in the binary log to figure out whether anything serious happened.
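For the binary-log complaint above, the usual way to read boot errors back out is journalctl. A minimal sketch, guarded so it degrades gracefully on systems without systemd:

```shell
# Print error-and-worse messages from the current boot.
# Falls back to a plain message on systems without journalctl.
boot_errors() {
    if command -v journalctl >/dev/null 2>&1; then
        # -b: current boot only; -p err: priority "err" and above
        journalctl -b -p err --no-pager | tail -n 20
    else
        echo "journalctl not available on this system"
    fi
}

boot_errors
```

This does not answer the objection (the log is still binary, and you still have to remember to go look), but it is the supported way to recover what flashed past on screen.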

I was never a fan of System V. It was a complicated, slow mess of code duplication. It needed a replacement. I was championing Richard Gooch's make-init circa 2001 (and his devfs, the forerunner to udev, was in my kernels; I built a powerful hardware autoconfig system on it in 2005 when I built the first installable live CD distribution, the way they all work now: I invented it. [I later discovered that PCLinuxOS had invented the same thing independently at the same time, but Ubuntu, for example, still came on two disks, a live CD and a separate text-based installation disk, and more than once I had machines where the live CD ran great but the installed system broke due to the disparate hardware setup systems.]) Later I praised upstart: it was a fantastic init system that solved the issues with System V, retained compatibility, yet was easy to admin, standards- and philosophy-compliant, and fast. It was even parallel.

That is the system that should have won the init wars. I'm not a huge fan of Ubuntu's eclectic side (Unity has always been a fugly, unusable mess of a desktop to me), but upstart was great; it and PPAs are Ubuntu's two most amazing accomplishments. Sadly, one got lost instead of being the world-changing tech it deserved to be: it lost to a wholly inferior technology for no sane reason.

It's the Amiga of the Linux world.

fisted ( 2295862 ) , Monday December 11, 2017 @06:11AM ( #55714613 )
Re:Problems with Linux that should have been solve ( Score: 3 )
To me, the fact that the major distros have adopted systemd is strong evidence that it is probably better.

Raises the question: better for whom? Systemd seems to make some things easier for distro maintainers, at the cost of fucking shit up for users and admins.

That said, Debian's vote on the matter was essentially 50:50, and they're going to keep supporting SysV init. Most distros are descendants of Debian, so there's that. Redhat switched for obvious reasons (having the main systemd developer on their payroll and massively profiting from increased support demands).

With Debian and Redhat removed, what remains on the list of major distros [futurist.se]?

Yeah.. strong evidence...

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @01:01AM ( #55713891 ) Homepage
faster boot time as well ( Score: 5 , Interesting)

it turns out that, on ARM embedded systems at the very least, where context switching is a little slower and booting off of microSD cards amplifies any performance issues associated with drive reads/writes compared to an SSD or HDD, sysvinit easily outperforms systemd on boot time.

Anonymous Coward , Monday December 11, 2017 @01:04AM ( #55713901 )
It violates fundamental Unix principles ( Score: 5 , Insightful)

Do one thing, and do it well. Systemd has eaten init, udev, inetd, syslog and soon dhcpd. Yes, that is getting ridiculous.

fahrbot-bot ( 874524 ) , Monday December 11, 2017 @01:10AM ( #55713923 )
It's the implementation. ( Score: 5 , Insightful)

I don't think there's a problem with the idea of systemd. Having a standard way to handle process start-up, dependencies, failures, recovery, "contracts", etc... isn't a bad, or unique, thing; Solaris has the Service Management Facility (SMF), for example. I think there are just too many things unnecessarily built into systemd rather than it utilizing external, usually already existing, utilities. Does systemd really need, for example, NFS, DNS and NTP services built in? Why can't it run as PID 2 and leave PID 1 to an init that simply reaps orphaned processes? That would make it easier to restart or handle a failed systemd without rebooting the entire system (or so I've read).

In short, systemd has too many things stuffed into its kitchen sink -- if you want that, use Emacs :-)
[ Note, I'm a fan and long-time user of Emacs, so the joke's in good fun. ]
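The "PID 1 just reaps orphaned processes" duty mentioned above is genuinely small: orphans get reparented to PID 1, whose essential job is a loop around wait(), collecting exit statuses so zombies don't accumulate. A toy shell sketch of the reap step, shown here for a direct child rather than a reparented orphan:

```shell
# Spawn a child that exits immediately with status 3, then
# reap it: wait() collects the status and removes the entry
# from the process table. PID 1 does essentially this, forever.
( exit 3 ) &
child=$!
wait "$child"
echo "reaped child with status $?"
```

Everything else systemd does (service management, logging, DNS, NTP...) could in principle live outside this tiny reaping loop, which is the parent poster's point.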

[Oct 14, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Oct 14, 2018] The problem isn't so much new tools as new tools that suck

Systemd looks OK until you get into major troubles and start troubleshooting. After that you are ready to kill systemd developers and blow up Red Hat headquarters ;-)
Notable quotes:
"... Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again, and the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody; they can compete on merit. As soon as there is pressure to use something new, though, you can be sure it is inferior. ..."
Oct 14, 2018 | linux.slashdot.org

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Sunday May 27, 2018 @11:14AM ( #56683018 ) Homepage Journal

Re:That would break scripts which use the UI ( Score: 5 , Informative)
In general, it's better for application programs, including scripts to use an application programming interface (API) such as /proc, rather than a user interface such as ifconfig, but in reality tons of scripts do use ifconfig and such.

...and they have no other choice, and shell scripting is a central feature of UNIX.

The problem isn't so much new tools as new tools that suck. If I just type ifconfig it will show me the state of all the active interfaces on the system. If I type ifconfig interface I get back pretty much everything I want to know about it. If I want to get the same data back with the ip tool, not only can't I, but I have to type multiple commands, with far more complex arguments.

The problem isn't new tools. It's crap tools.
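For readers caught between the two camps, the common read-only invocations do map across. A rough correspondence table (the pairs below are the well-known equivalents; output formats differ substantially), with a guarded demo so the snippet runs whether or not both tools are installed:

```shell
# Rough mapping from the classic net-tools commands to their
# iproute2 equivalents (read-only forms; output formats differ):
#
#   ifconfig            ->  ip addr show
#   ifconfig eth0       ->  ip addr show dev eth0
#   route -n            ->  ip route show
#   arp -n              ->  ip neigh show
#   netstat -i          ->  ip -s link
#
# Demonstrate whichever tools are present on this system:
demo_pair() {
    for cmd in "ip addr show" "ifconfig"; do
        # ${cmd%% *} strips the arguments, leaving the tool name
        if command -v "${cmd%% *}" >/dev/null 2>&1; then
            echo "== $cmd =="
            $cmd 2>/dev/null | head -n 5
        fi
    done
}
demo_pair
```

The mapping doesn't rescue scripts that parse ifconfig's output, which is the poster's real complaint: the commands translate, the output format does not.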

gweihir ( 88907 ) , Sunday May 27, 2018 @12:22PM ( #56683440 )
Re:That would break scripts which use the UI ( Score: 5 , Insightful)
The problem isn't new tools. It's crap tools.

Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again, and the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody; they can compete on merit. As soon as there is pressure to use something new, though, you can be sure it is inferior.

Anonymous Coward , Sunday May 27, 2018 @02:00PM ( #56684068 )
Re:That would break scripts which use the UI ( Score: 5 , Interesting)
The problem isn't new tools. It's crap tools.

The problem isn't new tools. It's not even crap tools. It's the mindset that we need to get rid of a ~70KB netstat, a ~120KB ifconfig, etc. Like others have posted, this has more to do with the ego of the new tools' creators and/or their supporters, who see the old tools as some sort of competition. Well, that's the real problem, then, isn't it? They don't want to have to face competition, and their tools aren't so vastly superior from the user's point of view as to justify switching completely, so they must force the issue.

Now, it'd be different if this was 5 years down the road, netstat wasn't being maintained*, and most scripts/dependents had already been converted over. At that point there'd be a good, serious reason to consider removing an outdated package. That's obviously not the debate, though.

* Vs developed. If seven year old stable tools are sufficiently bug free that no further work is necessary, that's a good thing.

locofungus ( 179280 ) , Sunday May 27, 2018 @02:46PM ( #56684296 )
Re:That would break scripts which use the UI ( Score: 4 , Informative)
If I type ifconfig interface I get back pretty much everything I want to know about it

How do you tell from ifconfig output which addresses are deprecated? When I run ifconfig eth0.100 it lists 8 global addresses. I can deduce that the one with fffe in the middle is the permanent address, but I have no idea which address it will use for outgoing connections.

ip addr show dev eth0.100 tells me what I need to know. And it's only a few more keystrokes to type.
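The "deprecated" flag the poster relies on appears inline in `ip -6 addr show` output, so it is easy to filter for. A sketch parsing an embedded sample (the addresses and flag combinations below are invented to resemble typical output; on a real box you would pipe `ip -6 addr show` in directly):

```shell
# Illustrative sample of `ip -6 addr show` address lines; a
# deprecated temporary address, an active one, and a link-local.
sample='    inet6 2001:db8::1111:2222:3333:4444/64 scope global temporary deprecated dynamic
    inet6 2001:db8::20c:29ff:fefe:1234/64 scope global dynamic mngtmpaddr
    inet6 fe80::20c:29ff:fefe:1234/64 scope link'

# List only the deprecated addresses (field 2 is the address):
echo "$sample" | awk '/deprecated/ { print $2 }'
```

ifconfig simply has no column for this state, which is the point of the parent post: the information exists in the kernel but only the newer tool exposes it.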

Anonymous Coward , Sunday May 27, 2018 @11:13AM ( #56683016 )
Re:So ( Score: 5 , Insightful)

Following the systemd model, "if it aint broken, you're not trying hard enough"...

Anonymous Coward , Sunday May 27, 2018 @11:35AM ( #56683144 )
That's the reason ( Score: 5 , Interesting)

It did one thing: maintain the routing table.

"ip" (and "ip2" and whatever that other candidate not-so-better not-so-replacement of ifconfig was) all have the same problem: they try to be the one tool that does everything "ip". That's "assign ip address somewhere", "route the table", and all that. But that means you still need a complete zoo of other tools, like brconfig, iwconfig/iw/whatever-this-week.

In other words, it's a modeling difference. On sane systems, ifconfig _configures the interface_, for all protocols and hardware features, bridges, vlans, what-have-you. And then route _configures the routing table_. On Linux... the poor kids didn't understand what they were doing, couldn't fix their broken ifconfig to save their lives, and so went off to reinvent the wheel, badly, a couple of times over.

And I say the blogposter is just as much an idiot.

Per various people, netstat et al operate by reading various files in /proc, and doing this is not the most efficient thing in the world

So don't use it. That does not mean you gotta change the user interface too. Sheesh.

However, the deeper issue is the interface that netstat, ifconfig, and company present to users.

No, that interface is a close match to the hardware. Here is an interface, IOW something that connects to a radio or a wire, and you can make it ready to talk IP (or, back when, IPX, AppleTalk, and whatever other networks your system supported). That makes those tools hardware-centric, at least on sane systems. It's when you want to pretend things that it all goes awry. And boy, does Linux like to pretend. The Linux ifconfig replacements are IP-stack-centric only. Which causes problems.

For example, because that only does half the job, you still need the aforementioned zoo of helper utilities to do things you could have ifconfig do if your system were halfway sane. Which Linux isn't; it's just completely confused. As is this blogposter.

On the other hand, the users expect netstat, ifconfig and so on to have their traditional interface (in terms of output, command line arguments, and so on); any number of scripts and tools fish things out of ifconfig output, for example.

linux' ifconfig always was enormously shitty here. It outputs lots of stuff I expect to find through netstat and it doesn't output stuff I expect to find out through ifconfig. That's linux, and that is NOT "traditional" compared to, say, the *BSDs.

As the Linux kernel has changed how it does networking, this has presented things like ifconfig with a deep conflict; their traditional output is no longer necessarily an accurate representation of reality.

Was it ever? linux is the great pretender here.

But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of there so far has caused me problems that were best resolved by getting rid of that crap code. Point in case: "Network-Manager". Another attempt at "replacing ifconfig" with something that causes problems and solves very few.

locofungus ( 179280 ) , Sunday May 27, 2018 @03:27PM ( #56684516 )
Re:That's the reason ( Score: 4 , Insightful)
It done one thing: Maintain the routing table.

Should the ip rule stuff be part of route or a separate command?

There are things that could be better with ip. IIRC it's very fussy about where the table selector goes in the argument list but route doesn't support this at all.

I also don't think route has anything like 'nexthop dev $if' which is a godsend for ipv6 configuration.

I stayed with route for years. But ipv6 exposed how incomplete the tool is - and clearly nobody cares enough to add all the missing functionality.

Perhaps ip addr, ip route, ip rule, ip mroute, ip link should be separate commands. I've never looked at the sourcecode to see whether it's mostly common or mostly separate.

Anonymous Coward writes:
Re: That's the reason ( Score: 3 , Informative)

^this^

The people who think the old tools work fine don't understand all the advanced networking concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP can be assigned to multiple interfaces, there's more than one routing table, firewall rules can add metadata to packets that affects routing, etc. These features can't be accommodated by the old tools without breaking compatibility.
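The features this post lists map onto `ip rule` and per-table `ip route` commands. Since actually modifying routing needs root, the sketch below only prints the commands it would run; the addresses, interface names, table number, and firewall mark are all invented for illustration:

```shell
# Dry-run helper: echo each command instead of executing it,
# so the sketch is safe to run without root.
run() { echo "would run: $*"; }

# Policy routing: send traffic the firewall marked 0x1 through
# a second routing table (table 100):
run ip route add default via 192.0.2.1 dev eth1 table 100
run ip rule add fwmark 0x1 lookup 100

# A second address on the same interface, a first-class concept
# in iproute2 (classic ifconfig needed eth0:1-style alias hacks):
run ip addr add 192.0.2.10/24 dev eth0
```

Replacing `run` with direct execution (as root) turns the sketch into the real configuration; none of these operations have a net-tools equivalent, which is the poster's point.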

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @09:11PM ( #56686032 )
Re:That's the reason ( Score: 3 )
Someone cared enough to implement an entirely different tool to do the same old jobs plus some new stuff, it's too bad they didn't do the sane thing and add that functionality to the old tool where it would have made sense.

It's not that simple. The iproute2 suite wasn't written to *replace* anything.
It was written to provide a user interface to the rapidly expanding RTNL API.
The net-tools maintainers (or anyone who cared) could have started porting it if they liked. They didn't. iproute2 kept growing to provide access to all the new RTNL interfaces, while net-tools got farther and farther behind.
What happened was organic. If someone brought net-tools up to date tomorrow and everyone liked the interface, iproute2 would be dead in its tracks. As it sits, I myself, and most of the more advanced system and network engineers I know, have been using iproute2 for just over a decade now (really, since the point where ifconfig became an incomplete and poorly simplified way to manage the networking stack).

DamnOregonian ( 963763 ) , Monday May 28, 2018 @02:26AM ( #56686960 )
Re:That's the reason ( Score: 4 , Informative)

Nope. Kernel authors come up with a fancy new netlink interface for better interaction with the kernel's network stack. They don't give two squirts of piss whether or not a user-space interface exists for it yet. Some guy decides to write an interface to it. Initially, it only supports things like modifying the routing rule database (something that can't be done with route), and he is trying to make an implementation of this protocol, not to hack it into software that already has its own framework using different APIs.
This source was always freely available for the net-tools guys to take and add to their own software.
Instead, we get this. [sourceforge.net]
Nobody is giving a positive spin. This is simply how it happened. This is what happens when software isn't maintained, and you don't get to tell other people to maintain it. You're free, right now, today, to port the iproute2 functionality into net-tools. They're unwilling to, however. That's their right. It's also the right of other people to either fork it, or move to more functional software. It's your right to help influence that. Or bitch on slashdot. That probably helps, too.

TeknoHog ( 164938 ) writes:
Re: ( Score: 2 )
keep the command names the same but rewrite how they function?

Well, keep the syntax too, so old scripts would still work. The old command name could just be a script that calls the new commands under the hood. (Perhaps this is just what you meant, but I thought I'd elaborate.)

gweihir ( 88907 ) , Sunday May 27, 2018 @12:18PM ( #56683412 )
Re:So ( Score: 4 , Insightful)
What was the reason for replacing "route" anyhow? It's worked for decades and done one thing.

Idiots that confuse "new" with better and want to put their mark on things. Because they are so much greater than the people that got the things to work originally, right? Same as the systemd crowd. Sometimes, they realize decades later they were stupid, but only after having done a lot of damage for a long time.

TheRaven64 ( 641858 ) writes:
Re: ( Score: 2 )

I didn't RTFA (this is Slashdot, after all) but from TFS it sounds like exactly the reason I moved to FreeBSD in the first place: the Linux attitude of 'our implementation is broken, let's completely change the interface'. ALSA replacing OSS was the instance of this that pushed me away. On Linux, back around 2002, I had some KDE and some GNOME apps that talked to their respective sound daemon, and some things like XMMS and BZFlag that used /dev/dsp directly. Unfortunately, Linux decided to only support s

zippthorne ( 748122 ) writes:
Re: ( Score: 3 )

On the other hand, on most systems, vi is basically an alias to vim....

goombah99 ( 560566 ) , Sunday May 27, 2018 @11:08AM ( #56682986 )
Bad idea ( Score: 5 , Insightful)

Unix was founded on the idea of lots of simple command line tools that do one job well and don't depend on system idiosyncrasies. If you make the tool have to know the lower layers of the system to exploit them, then you break the encapsulation. Polling /proc has worked across eons of Linux flavors without breaking. When you make everything integrated, it creates paralysis to change down the road, for backward compatibility's sake. A small speed gain now for massive fragility and no portability later.
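The "polling /proc" this post defends is concretely things like /proc/net/dev, whose two-header-line layout has been stable for decades. A sketch parsing an embedded, abbreviated sample (field counts trimmed for illustration; on a real Linux box you would read /proc/net/dev directly):

```shell
# Abbreviated sample in the /proc/net/dev layout: two header
# lines, then one line per interface, name before the colon.
sample='Inter-|   Receive
 face |bytes    packets errs drop
    lo: 1920      24      0    0
  eth0: 98765    432      1    0'

# Extract interface names: skip the two headers, strip the colon.
echo "$sample" | awk 'NR > 2 { sub(":", "", $1); print $1 }'
```

This is exactly the kind of plain-text contract the post means: any shell one-liner can consume it, on any Linux of the last few decades.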

goombah99 ( 560566 ) writes:
Re: ( Score: 3 )

GNU may not be Unix, but its foundational idea lies in the simple command tool paradigm. It's why GNU was so popular, and it's why people even think that Linux is Unix. That idea is the character of Linux. If you want a marvelously smooth, efficient, consistent, integrated system that then, after a decade of revisions, feels like a knotted tangle of twine in your junk drawer, try Windows.

llamalad ( 12917 ) , Sunday May 27, 2018 @11:46AM ( #56683198 )
Re:Bad idea ( Score: 5 , Insightful)

The error you're making is thinking that Linux is UNIX.

It's not. It's merely UNIX-like. And with first SystemD and now this nonsense, it's rapidly becoming less UNIX-like. The Windows of the UNIX(ish) world.

Happily, the BSDs seem to be staying true to their UNIX roots.

petes_PoV ( 912422 ) , Sunday May 27, 2018 @12:01PM ( #56683282 )
The dislike of support work ( Score: 5 , Interesting)
In theory netstat, ifconfig, and company could be rewritten to use netlink too; in practice this doesn't seem to have happened and there may be political issues involving different groups of developers with different opinions on which way to go.

No, it is far simpler than looking for some mythical "political" issues. It is simply that hackers - especially amateur ones, who write code as a hobby - dislike trying to work out how old stuff works. They like writing new stuff, instead.

Partly this is because of the poor documentation: explanations of why things work, what other code was tried but didn't work out, the reasons for weird-looking constructs, techniques and the history behind patches. It could even be that many programmers are wedded to a particular development environment and lack the skill and experience (or find it beyond their capacity) to do things in ways that are alien to it. I feel that another big part is that merely rewriting old code does not allow for the " look how clever I am " element that is present in fresh, new, software. That seems to be a big part of the amateur hacker's effort-reward equation.

One thing that is imperative, however, is to keep backwards compatibility, so that the same options continue to work and provide the same content and format. Possibly Unix/Linux's only remaining advantage over Windows for sysadmins is its scripting. If that were lost, there would be little point keeping it around.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:13PM ( #56685074 )
Re:The dislike of support work ( Score: 5 , Insightful)

iproute2 exists because ifconfig, netstat, and route do not support the full capabilities of the linux network stack.
This is because today's network stack is far more complicated than it was in the past. For very simple networks, the old tools work fine. For complicated ones, you must use the new ones.

Your post could not be any more wrong. Your moderation amazes me. It seems that slashdot is full of people who are mostly amateurs.
iproute2 has been the main network management suite for linux amongst higher end sysadmins for a decade. It wasn't written to sate someone's desire to change for the sake of change, to make more complicated, to NIH. It was written because the old tools can't encompass new functionality without being rewritten themselves.

Craig Cruden ( 3592465 ) , Sunday May 27, 2018 @12:11PM ( #56683352 )
So windowification (making it incompatible) ( Score: 5 , Interesting)

So basically there is a proposal to dump existing terminal utilities that are cross-platform and create custom Linux utilities - then get rid of the existing functionality? That would be moronic! I already go nuts remoting into a windows platform and then an AIX and Linux platform and having different command line utilities / directory separators / etc. Adding yet another difference between my Linux and macOS/AIX terminals would absolutely drive me bonkers!

I have no problem with updating or rewriting or adding functionalities to existing utilities (for all 'nix platforms), but creating a yet another incompatible platform would be crazily annoying.

(not a sys admin, just a dev who has to deal with multiple different server platforms)

Anonymous Coward , Sunday May 27, 2018 @12:16PM ( #56683388 )
Output for 'ip' is machine readable, not human ( Score: 5 , Interesting)

All output for 'ip' is machine readable, not human.
Compare
$ ip route
to
$ route -n

Which is more readable? Fuckers.

Same for
$ ip a
and
$ ifconfig
Which is more readable? Fuckers.

The new commands should generally make the same output as the old, using the same options, by default. Using additional options to get new behavior. -m is commonly used to get "machine readable" output. Fuckers.

It is like the systemd interface fuckers took hold of everything. Fuckers.

BTW, I'm a happy person almost always, but change for the sake of change is fucking stupid.

Want to talk about resolv.conf, anyone? Fuckers! Easier just to purge that shit.
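[For what it's worth, the readability complaint has been at least partly addressed: recent iproute2 versions have a brief, columnar output mode via the -br flag. A sketch, with purely illustrative output (addresses from the documentation range):

```
$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.0.2.10/24
$ ip -br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0             UP             52:54:00:12:34:56 <BROADCAST,MULTICAST,UP,LOWER_UP>
```

It is opt-in rather than the default the commenter asks for, but it exists. -- Ed.]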

SigmundFloyd ( 994648 ) , Sunday May 27, 2018 @12:39PM ( #56683558 )
Linux' userland is UNSTABLE ! ( Score: 3 )

I'm growing increasingly annoyed with Linux' userland instability. Seriously considering a switch to NetBSD because I'm SICK of having to learn new ways of doing old things.

For those who are advocating the new tools as additions rather than replacements: Remember that this will lead to some scripts expecting the new tools and some other scripts expecting the old tools. You'll need to keep both flavors installed to do ONE thing. I don't know about you, but I HATE to waste disk space on redundant crap.

fluffernutter ( 1411889 ) , Sunday May 27, 2018 @12:46PM ( #56683592 )
Piss and vinegar ( Score: 5 , Interesting)

What pisses me off is when I go to run ifconfig and it isn't there, and then I Google it and there doesn't seem to be *any* direct substitute that gives me the same information. If you want to change the command then fine, but allow the same output from the new commands. Furthermore, another bitch I have is that most systemd installations don't have an easy substitute for /etc/rc.local.
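[On the rc.local point: most systemd distributions can still run /etc/rc.local through a small compatibility unit. A minimal sketch of such a unit, assuming the conventional name rc-local.service; several distributions ship something equivalent, so check yours before adding one. -- Ed.]

```ini
# /etc/systemd/system/rc-local.service -- sketch of a compatibility
# wrapper; only runs if /etc/rc.local exists and is executable.
[Unit]
Description=Compatibility wrapper for /etc/rc.local
ConditionFileIsExecutable=/etc/rc.local
After=network.target

[Service]
Type=oneshot
ExecStart=/etc/rc.local
# Treat the unit as "active" after the script exits, like old rc.local.
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

With the file in place, `chmod +x /etc/rc.local` and `systemctl enable rc-local.service` would wire it in.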

what about ( 730877 ) , Sunday May 27, 2018 @01:35PM ( #56683874 ) Homepage
Let's try hard to break Linux ( Score: 3 , Insightful)

It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap.

Therefore, the only logical alternative is that they are paid (in some way) to break what is working.

Also, if you rewrite tons of systems tools you have plenty of opportunities to insert useful bugs that can be used by the various spying agencies.

You do not think that the current CPU flaws are just by chance, right?
Imagine the wonder of being able to spy on any machine, regardless of the level of SW protection.

There is no need to point out that I cannot prove it, I know; it just makes sense to me.

Kjella ( 173770 ) writes:
Re: ( Score: 3 )
It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap. (...) There is no need to point out that I cannot prove it, I know; it just makes sense to me.

Many developers fix problems like a guy about to lose a two week vacation because he can't find his passport. Rip open every drawer, empty every shelf, spread it all across the tables and floors until you find it, then rush out the door leaving everything in a mess. It solved HIS problem.

WaffleMonster ( 969671 ) , Sunday May 27, 2018 @01:52PM ( #56684010 )
Changes for changes sake ( Score: 4 , Informative)

TFA is full of shit.

IP aliases have always and still do appear in ifconfig as separate logical interfaces.

The assertion that ifconfig only displays one IP address per interface is also demonstrably false.

Using these false bits of information to advocate for change seems rather ridiculous.

One change I would love to see: "ping" bundled with most Linux distros doesn't support IPv6. You have to call an IPv6-specific analogue, which is unworkable. Knowing the address family in advance is not a reasonable expectation and runs contrary to how all other IPv6-capable software a user would actually run works.

Heck, for a while traceroute supported both address families. The one by Olaf Kirch eons ago did; then someone decided "not invented here" and replaced it with one that works like ping6, where you have to call traceroute6 if you want v6.

It seems nobody spends time fixing broken shit anymore... they just spend their time finding new ways to piss me off. Now I have to type journalctl and wait for hell to freeze over just to liberate log data I previously could access nearly instantaneously. It almost feels like Microsoft's Event Viewer now.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:30PM ( #56685130 )
Re:Changes for changes sake ( Score: 4 , Insightful)
TFA is full of shit. IP aliases have always and still do appear in ifconfig as separate logical interfaces.

No, you're just ignorant.
Aliases do not appear in ifconfig as separate logical interfaces.
Logical interfaces appear in ifconfig as logical interfaces.
Logical interfaces are one way to add an alias to an interface. A crude way, but a way.

The assertion ifconfig only displays one IP address per interface also demonstrably false.

Nope. Again, you're just ignorant.

root@swalker-samtop:~# tunctl
Set 'tap0' persistent and owned by uid 0
root@swalker-samtop:~# ifconfig tap0 10.10.10.1 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.10.2/24 dev tap0
root@swalker-samtop:~# ifconfig tap0:0 10.10.10.3 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.1.1/24 scope link dev tap0:0
root@swalker-samtop:~# ifconfig tap0 | grep inet
inet 10.10.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
root@swalker-samtop:~# ifconfig tap0:0 | grep inet
inet 10.10.10.3 netmask 255.255.255.0 broadcast 10.10.10.255
root@swalker-samtop:~# ip addr show dev tap0 | grep inet
inet 10.10.1.1/24 scope link tap0
inet 10.10.10.1/24 brd 10.10.10.255 scope global tap0
inet 10.10.10.2/24 scope global secondary tap0
inet 10.10.10.3/24 brd 10.10.10.255 scope global secondary tap0:0

If you don't understand what the differences are, you really aren't qualified to opine on the matter.
Ifconfig is fundamentally incapable of displaying the amount of information that can go with layer-3 addresses, interfaces, and the architecture of the stack in general. This is why iproute2 exists.

JustNiz ( 692889 ) , Sunday May 27, 2018 @01:55PM ( #56684030 )
I propose a new word: ( Score: 5 , Funny)

SysD: (v). To force an unnecessary replacement of something that already works well with an alternative that the majority perceive as fundamentally worse.
Example usage: Wow you really SysD'd that up.

[Oct 14, 2018] Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited.

Notable quotes:
"... A big ol ball? My init.d was about 13 scripts big which were readable and editable. Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited. ..."
"... Remove/fail a hard drive and your system will boot into single user mode ..."
"... because it was in fstab and apparently everything in fstab is a hard dependency on systemd. ..."
"... So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit. ..."
"... Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, often systemctl reports a daemon as failed while it's not, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run maintenance tasks at startup time. And getting anything of value out of the log is a pain in the ass. ..."
"... Granted, I have never needed any kind of tampering or corruption mitigation in my log files over the last 20 years of Linux administration. So the value for at least my usage of journalctl has been sum negative because I don't see the value in a command that by default truncates log output. ..."
"... So the answer for systemd is to workaround it by using a "legacy" service to restore decades of functionality. ..."
Oct 14, 2018 | linux.slashdot.org

guruevi ( 827432 ) writes: < evi&evcircuits,com > on Monday December 11, 2017 @12:46AM ( #55713829 ) Homepage

Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

A big ol ball? My init.d was about 13 scripts big which were readable and editable. Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited.

Systemd makes every system into a dependency mess.

Remove/fail a hard drive and your system will boot into single user mode, not even remote access will be available so you better be near the machine just because it was in fstab and apparently everything in fstab is a hard dependency on systemd.
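[For reference on the editing complaint: the mechanism that current systemd versions document for local changes is the drop-in directory, rather than editing the shipped unit. A minimal sketch of how the merge works, written against a scratch directory so it is safe to run anywhere; on a real system the directory would be /etc/systemd/system/<unit>.service.d/ followed by `systemctl daemon-reload`. -- Ed.]

```shell
# Sketch of systemd's drop-in override mechanism, using a scratch
# directory instead of /etc/systemd/system so it can run anywhere.
unit_dir=./scratch/foo.service.d
mkdir -p "$unit_dir"

# systemd merges every *.conf in foo.service.d/ over the shipped unit.
cat > "$unit_dir/override.conf" <<'EOF'
[Service]
# ExecStart must be cleared first, otherwise systemd appends a second
# ExecStart and refuses to start a non-template service.
ExecStart=
ExecStart=/usr/local/bin/foo --flag
EOF

cat "$unit_dir/override.conf"
```

On a real system, `systemctl edit foo.service` creates exactly this kind of file for you and reloads the daemon afterwards.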

Z00L00K ( 682162 ) , Monday December 11, 2017 @12:52AM ( #55713851 ) Homepage
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit.

That matches my experience - losing a lot of time trying to figure out why things don't work. The improved boot time is lost several times over.

lucm ( 889690 ) , Monday December 11, 2017 @01:33AM ( #55713995 )
Re: Ah yes the secret to simplicity ( Score: 5 , Informative)
So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit.

That matches my experience - losing a lot of time trying to figure out why things don't work. The improved boot time is lost several times over.

I completely agree. Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, systemctl often reports a daemon as failed when it isn't, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run maintenance tasks at startup. And getting anything of value out of the log is a pain in the ass.

Quite often I end up writing control shell scripts specifically to be called by systemd, because this junkware is too fragile and capricious to work with actual daemons. That says a lot about the overall usefulness of systemd.

Nothing has been gained with systemd, at least not on servers.

93 Escort Wagon ( 326346 ) , Monday December 11, 2017 @01:39AM ( #55714017 )
Re: Ah yes the secret to simplicity ( Score: 5 , Informative)
Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, often systemctl reports a daemon as failed while it's not, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run maintenance tasks at startup time.

Not to mention that the damn logs are not plain text, which in itself complicates things before you even have the chance to start troubleshooting.

merky1 ( 83978 ) , Monday December 11, 2017 @07:41AM ( #55714833 ) Journal
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

Granted, I have never needed any kind of tampering or corruption mitigation in my log files over the last 20 years of Linux administration. So the value for at least my usage of journalctl has been sum negative because I don't see the value in a command that by default truncates log output.

So the answer for systemd is to workaround it by using a "legacy" service to restore decades of functionality.

SMF was the death knell for Solaris (along with the Oracle purchase), and it feels like systemd is going to be the anchor which drags Linux into the abyss.

[Oct 14, 2018] I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with

If Redhat7 is the future, I long for the past...
Notable quotes:
"... OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser. ..."
"... I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about is as end-user administrators of those tools we've grown to be complacent about. I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with. ..."
"... My point being that I don't think it's unreasonable to expect to know and understand every aspect of my system's behavior, and to expect it to continue to behave in the way that I know it does ..."
"... We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway. ..."
"... Given that, ip and route were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)? ..."
"... Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch. ..."
Oct 14, 2018 | linux.slashdot.org

adosch ( 1397357 ) , Sunday May 27, 2018 @04:04PM ( #56684724 )

Thats... the argument? FML ( Score: 4 , Interesting)

The OP's argument is that netlink sockets are more efficient in theory, so we should abandon anything that uses pseudo-proc, re-invent the wheel, and move even farther from the UNIX tradition and POSIX compliance? And it may be slower on larger systems? Define that for me, because I've never experienced that.

I've worked on everything from single stove-pipe x86 systems, to the 'SPARC architecture' generation where everyone thought Sun/Solaris was the way to go with entire systems in a single 42U rack, to IRIX systems, all the way to hundreds of RPM-based Linux nodes - physical, hypervised and containerized - in an HPC, which are LARGE compute systems (fat and compute nodes).

That's a total shit comment with zero facts to back it up. This is like Good Will Hunting 'the bar scene' revisited...

OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser.

Any true sys-admin is going to flip-their-shit just like we ALL did with systemd, and that shit still needs to die. There, I got that off my chest.

I'd say you got two things right, but are completely off on one of them:

Anymore, I'm just a disgruntled, and I'm sure soon-to-be-modded-down, voice on /. that should be taken with a grain of salt. I'm not happy with the way the movements of Linux have gone, and if this doesn't sound old hat I don't know what does: At the end of the day, you have to embrace change.

I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about it as end-user administrators of those tools we've grown complacent about. I've got about 15y left, and this thing called Linux that I've made a good living on will be the next guy's steaming pile to deal with.

Greyfox ( 87712 ) , Sunday May 27, 2018 @05:45PM ( #56685218 ) Homepage Journal
Re:Thats... the argument? FML ( Score: 3 )

Yeah. The other day I set up some demo video streaming on a Linux box. Fire up screen, start my streaming program. Disconnect screen and exit my ssh session, and my streaming freezes. There're a metric fuckton of reports of systemd killing detached/nohup'd processes, but I check my config file and it's not that. Although them being that willing to walk away from expected system behavior is already cause to blow a gasket. But no, something else is going on here. I tweak the streaming code to catch all catchable signals, still nothing. So probably not systemd, but I can't be 100% certain. I'm still testing all the possibilities -- if I start the servers from the console in screen and then detach and exit, I don't have the problem; it's only if I start them from ssh. And if I ssh in later, attach and detach, I still don't have the problem. So I'm looking forward to a couple of days of digging around in the ssh code to see if I can figure out what's going on with it. In the meantime, I have a reasonable workaround.

My point being that I don't think it's unreasonable to expect to know and understand every aspect of my system's behavior, and to expect it to continue to behave in the way that I know it does. I've worked on systems where you had to type the sequence of numbers that was the machine code for the bootstrap sequence in order to boot the system.

I know how the boot sequence works. I know how fork and exec and the default file handles work. I know how my system is supposed to start from the very first process. And I don't mind changing those things, as long as I trust the judgment of the people changing them. And I very much don't, for a lot of this new shit.

Not systemd, not wayland, not the new networking utilities. Still not enough to take matters into my own hands, though.

jgotts ( 2785 ) writes: < jgottsNO@SPAMgmail.com > on Sunday May 27, 2018 @06:17PM ( #56685380 )
Some historical color ( Score: 4 , Interesting)

Just to give you guys some color commentary, I was participating quite heavily in Linux development from 1994-1999, and Linus even added me to the CREDITS file while I was at the University of Michigan for my fairly modest contributions to the kernel. [I prefer application development, and I'm still a Linux developer after 24 years. I currently work for the company Internet Brands.]

What I remember about ip and net is that they came about seemingly out of nowhere two decades ago and the person who wrote the tools could barely communicate in English. There was no documentation. net-tools by that time was a well-understood and well-documented package, and many Linux devs at the time had UNIX experience pre-dating Linux (which was announced in 1991 but not very usable until 1994).

We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway.

Given that, ip and route were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)?

Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch.

Although, to be fair, I still use ifconfig, even if it is not installed by default.

Hognoxious ( 631665 ) , Sunday May 27, 2018 @04:28PM ( #56684830 ) Homepage Journal
Re:Poor reasoning ( Score: 3 )
If the output has to change, there can either be a new tool or ifconfig itself changes. Either way, my script has to be changed.

Yes. I mean it would be mathematically impossible to introduce a new tool and leave the old one there too .

[Oct 14, 2018] The fix for Debian removing systemd using angband.pl

Dec 11, 2017 | linux.slashdot.org

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @05:07AM ( #55714471 ) Homepage

Re:Fix ( Score: 3 )
apt purge systemd

add http://angband.pl/debian/ [angband.pl] to /etc/apt/sources.list before doing that and it will actually succeed. okok it's a bit more complex than that, but you can read the instructions online which are neeearly as simple :)

[Oct 14, 2018] Yes, systemd makes things unnecessarily complex with little benefit. Nothing has been gained with systemd, at least not on servers.

Oct 14, 2018 | linux.slashdot.org


coofercat ( 719737 ) writes:
Re: ( Score: 3 )

I'd say it is worse having to type something different for one log on the system, when the other 100+ are plain text and accessible with the old tools we've all learned backwards. It means you don't have the necessary switches or key presses to hand because you don't use them often enough.

"journalctl" might be the best thing since sliced bread, but making it a hard requirement of systemd makes adoption of systemd that much harder. IMHO, systemd should "pick its battles" and concentrate on managing system processes.

Junta ( 36770 ) writes:
Re: ( Score: 3 )

The point being that text format is more universally readable, and also should it get corrupted, it has a better shot of still being readable.

On the other hand, pure binary logging was not necessary to achieve what they wanted. In fact, strictly speaking a split format of fixed-size, well aligned binary metadata alongside a text record of the variable length data would have been even *better* performance and still be readable.

Junta ( 36770 ) , Monday December 11, 2017 @09:49AM ( #55715423 )
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

Using text processing skills to process a generic text file isn't any harder than using journalctl. The difference is that the former is generically applicable to just about any other software on the planet, and the latter is for journald. It's not that complex to confer the journalctl benefits without ditching *native* text log capability, but they refuse to do so.

Using ForwardToSyslog just means there's an unnecessary middleman, meaning both services must be functional to complete logging. The problem is that the time when you actually want logs is the time when something is going wrong. A few weeks ago I was trying to support someone who did something pretty catastrophic to his system. One of the side effects was that it broke the syslog forwarding (syslog would still work, but nothing from journald would get to it). The other side effect was that the system would lock out all access. I thought 'ok, I'll reboot and use journalctl', but wait, on CentOS 7 journald defaults to not persisting the journal across boots, because you have syslog to do that.

Of course the other problem (not entirely systemd project fault) was the quest to 'simplify' console output so he just saw 'fail' instead of the much more useful error messages that would formerly spam the console on experiencing the sort of problem he hit (because it would be terrible to have an 'ugly' console...). This hints about another source of the systemd controversy, that it's also symbolic of a lot of other design choices that have come out of the distros.
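[For anyone bitten by the same defaults: both behaviors mentioned above are plain settings in journald's configuration file. A sketch of the relevant excerpt; the keys are standard journald.conf options. -- Ed.]

```ini
# /etc/systemd/journald.conf (excerpt)
[Journal]
# "persistent" keeps logs across reboots in /var/log/journal;
# CentOS 7 effectively defaults to volatile storage unless that
# directory already exists.
Storage=persistent
# Hand every record to the classic syslog daemon as well.
ForwardToSyslog=yes
```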

[Oct 14, 2018] In essence, Red Hat is attempting to out-MS MS by polluting and warping Linux needlessly but surely

Highly recommended!
So there is strategy behind systemd introduction, and pretty nefarious one...
Oct 14, 2018 | linux.slashdot.org

OneHundredAndTen ( 1523865 ) , Monday December 11, 2017 @09:36AM ( #55715351 )

Re:Oh stop complaining FFS ( Score: 5 , Interesting)

I don't agree. Systemd is the most visible part of a clear trend within Red Hat, consisting in an attempt to make their particular version of Linux THE canonical Linux, to the point that, if you are not using Red Hat, or some derived distribution, things will not work. In essence, Red Hat is attempting to out-MS MS by polluting and warping Linux needlessly but surely. The latest: they have come up with the 'timedatectl' command, which does exactly the same as 'date'. The latter is to be deprecated. Red Hat, the MS wannabee. They will not pull it off, but they are inflicting a lot of damage on Linux in the process.

[Oct 14, 2018] I m curious if Redhat are regretting their decision on the early hard switch to systemd in RHEL7

RHEL7 looks like a fiasco to me...
Notable quotes:
"... I have friends in the industry still on RHEL6 as the upgrade to version 7 is a logistical nightmare and one who works at a reasonably large enterprise considering ditching Redhat entirely to go to a systemd-less alternative. ..."
"... That faster boot time sure helps with servers that are only restarted once a year ... ..."
"... The best part is how they want to use systemd to save 12 seconds of boot time on my servers that take 5-10 minutes to boot, once every few years. Anyone who thinks that complexity is worth that negligible difference is insane. Of course there are other arguments for systemd, but don't tell me I need it on my servers so they "boot faster". ..."
Oct 14, 2018 | linux.slashdot.org

Zarjazz ( 36278 ) , Monday December 11, 2017 @08:42AM ( #55715117 )

Enterprise Headache ( Score: 4 , Informative)

I'm curious if Redhat are regretting their decision on the early hard switch to systemd in RHEL7. I know for a fact their support system was flooded with issues from early adopters.

I have friends in the industry still on RHEL6 as the upgrade to version 7 is a logistical nightmare and one who works at a reasonably large enterprise considering ditching Redhat entirely to go to a systemd-less alternative.

That faster boot time sure helps with servers that are only restarted once a year ...

jon3k ( 691256 ) , Monday December 11, 2017 @10:44AM ( #55715759 )
Re:Enterprise Headache ( Score: 4 , Informative)

The best part is how they want to use systemd to save 12 seconds of boot time on my servers that take 5-10 minutes to boot, once every few years. Anyone who thinks that complexity is worth that negligible difference is insane. Of course there are other arguments for systemd, but don't tell me I need it on my servers so they "boot faster".

[Oct 13, 2018] How to Change the Log Level in Systemd

Apr 01, 2016 | 410gone.click

SyslogLevel=
    See syslog(3) for details. This option is only useful when StandardOutput= or StandardError= are set to syslog or kmsg. Note that individual lines output by the daemon might be prefixed with a different log level, which can be used to override the default log level specified here. The interpretation of these prefixes may be disabled with SyslogLevelPrefix=; see below. For details, see sd-daemon(3). Defaults to info.

The Current Log Level

To check the log level currently set for systemd, use the show command of the systemctl command.

* Use the -p option to query the 'LogLevel' property.

[code]systemctl -pLogLevel show
LogLevel=info[/code]

How to Temporarily Change the Log Level

To temporarily change the log level of systemd , use the set-log-level option of the systemd-analyze command.

For example, to change the log level to notice:

[code]systemd-analyze set-log-level notice[/code]

How to Permanently Change the Log Level

To make the changed log level persist across a restart of the system, change LogLevel in /etc/systemd/system.conf .

For example, to change the log level to notice:

[code]vi /etc/systemd/system.conf
#LogLevel=info
LogLevel=notice
[/code]
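The manual edit above can also be scripted with sed. A minimal sketch, run on a throwaway copy so nothing on the system is touched (/tmp/system.conf.demo is a stand-in for /etc/systemd/system.conf):

```shell
# Create a stand-in for /etc/systemd/system.conf with the default commented entry.
printf '#LogLevel=info\n' > /tmp/system.conf.demo

# Uncomment the line (if commented) and set the desired level in one pass.
sed -i 's/^#\{0,1\}LogLevel=.*/LogLevel=notice/' /tmp/system.conf.demo

# Verify the result.
grep '^LogLevel=' /tmp/system.conf.demo
```

On a real system, point the same sed command at /etc/systemd/system.conf (as root), and use systemd-analyze set-log-level to apply the change to the current boot without restarting.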

[Oct 12, 2018] RHEL 7 Root Password Recovery - Red Hat Customer Portal

Notable quotes:
"... That should be "touch /.autorelabel" (forward-slash dot instead of dot forward-slash). ..."
Oct 12, 2018 | access.redhat.com

Robert Krátký

Adding rd.break to the end of the line with kernel parameters in Grub stops the start up process before the regular root filesystem is mounted (hence the necessity to chroot into sysroot ). Emergency mode, on the other hand, does mount the regular root filesystem, but it only mounts it in a read-only mode. Note that you can even change to emergency mode using the systemctl command on a running system ( systemctl emergency ).

I updated the linked solution ( Resetting the Root Password of RHEL-7 / systemd ) to make the instructions easier to follow.

Guru 6827 points 11 November 2014 1:44 AM R. Hinton Community Leader

I've found recently that on a virtual system you sometimes have to add "console=tty0" at the end, or else the output goes to an unavailable serial console.

IO Loreal.Linux 9 November 2017 2:40 PM

RHEL 7 root Password Reset.

Step 1: Break into the console while Linux boots.

Step 2: Press "e" to edit the kernel command line.

Step 3: Append the following at the end of the linux line: rd.break console=tty1

Step 4: Press CTRL+X

Step 5: mount -o remount,rw /sysroot

Step 6: chroot /sysroot; change root password

Step 7: touch /.autorelabel

Step 8: Type exit two times

Thanks & Regards Namasivayam

Siggy Sigwald 22 February 2018 3:55 PM

1 - On the grub menu, select the kernel to boot from and press "e".
2 - Append the following to the end of the line that starts with "linux16": rd.break console=tty1
3 - Use ctrl+x to boot up.
4 - mount -o remount,rw /sysroot
5 - chroot /sysroot
6 - passwd (enter the new password).
7 - touch /.autorelabel
8 - ctrl+d twice to resume the normal boot process.

William Schmidt, 18 April 2018 11:14 AM

If you use the syntax above (touch ./autorelabel - dot forward-slash), the result is a file called autorelabel in the current directory. That will NOT trigger a relabel, and you may end up with a system you cannot log in to.

That should be "touch /.autorelabel" (forward-slash dot instead of dot forward-slash). Notice the position of the period in front of the autorelabel filename. The result is to create a file called .autorelabel in the root directory. The result will be a relabel function and a system you can log in to.

I have made that mistake!

// Bill //
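Bill's distinction can be demonstrated safely in a scratch directory (the paths under /tmp are illustrative; on a real rescue shell the filesystem root is what matters after the chroot):

```shell
# Set up a fake filesystem root and a working directory inside it.
ROOT=/tmp/fakeroot
mkdir -p "$ROOT/somedir"
cd "$ROOT/somedir"

# The mistaken command: creates "autorelabel" in the *current* directory.
touch ./autorelabel

# The correct form, with the fake root standing in for /:
touch "$ROOT/.autorelabel"   # i.e. touch /.autorelabel on a real system

# Both files now exist, but only the hidden one at the (fake) root
# is where SELinux looks for the relabel trigger.
ls -A "$ROOT"
```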

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than once - even with some part of it changing between runs - a sysadmin has countless tools to automate it, from simple shell scripts run by hand on demand (eliminating typos, or just saving some keystrokes) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the results of other scripts, perhaps controlled by a central management system, and so on.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on one system, it proves useful on another, so you copy it over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature while still completing the task it was originally written for. Now you have two versions of the script: the first on the first two systems, the second on the third.

You have 1024 computers running in the datacenter, and 256 of them need some of the functionality provided by that script. In time you end up with 64 versions of the script scattered around, every version doing its job. On the next system deployment you need a feature you recall coding into some version - but which one? And on which systems is it?

On RPM-based systems, such as the Red Hat flavors, a sysadmin can take advantage of the package manager to bring order to this custom content, including simple shell scripts that provide nothing more than conveniences the admin wrote.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh , so that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the major and minor version of the build machine should match those of the systems the package is to be deployed on, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can set up a build environment for each distribution and each major version, on the lowest minor version existing in your environment for that major version. Of course they don't need to be physical machines; they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash ), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm , at any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the build environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi . The previously installed rpm-build package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case, the two shell scripts. Let's create the directory for the sources (named as the package name with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as follows:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to set the appropriate permissions on the files in the source - in our case, execute permission:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
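The whole source-tree preparation can be sketched end-to-end. This self-contained version works under /tmp so it is safe to re-run (the tutorial itself works under ~/rpmbuild, and the parselogs.sh body below is made up, since the tutorial notes the script contents are irrelevant):

```shell
set -e
SRC=/tmp/rpmbuild-demo/SOURCES   # stand-in for ~/rpmbuild/SOURCES
mkdir -p "$SRC/admin-scripts-1/scripts"

# Dummy script bodies; pullnews.sh matches the tutorial's first version.
printf '#!/bin/bash\necho "news pulled"\nexit 0\n' > "$SRC/admin-scripts-1/scripts/pullnews.sh"
printf '#!/bin/bash\necho "logs parsed"\nexit 0\n' > "$SRC/admin-scripts-1/scripts/parselogs.sh"
chmod +x "$SRC"/admin-scripts-1/scripts/*.sh

# Archive named <name>-<major version>.tar.gz, as Source0 expects.
tar -C "$SRC" -czf "$SRC/admin-scripts-1.tar.gz" admin-scripts-1
tar -tzf "$SRC/admin-scripts-1.tar.gz"
```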
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - building another version of the package, however, is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package: we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as before, since we don't change the version, only the release (so the Source0 reference remains valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
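The copy-and-edit can be done non-interactively as well; a sketch of bumping only the Release tag with sed, run on a throwaway stand-in file (the path /tmp/admin-scripts-1.1.spec.demo is hypothetical):

```shell
# Minimal stand-in for the relevant specfile lines.
SPEC=/tmp/admin-scripts-1.1.spec.demo
printf 'Name:           admin-scripts\nVersion:        1\nRelease:        0\n' > "$SPEC"

# Bump only the Release tag, leaving Version unchanged so Source0 still resolves.
sed -i 's/^Release:.*/Release:        1/' "$SPEC"
grep '^Release' "$SPEC"
```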
We don't change much on the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done - we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no old versions left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old files needed only in previous versions, to add custom dependencies, or to provide tools or services our other packages rely on. With some effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease, but with consistency.

[Jun 12, 2018] How to convert RHEL 6.x to CentOS 6.x The Picky SysAdmin

Jun 12, 2018 | www.pickysysadmin.ca

How to convert RHEL 6.x to CentOS 6.x 2014-07-15 by Eric Schewe



This post relates to my older post about converting RHEL 5.x to CentOS 5.x . All the reasons for doing so and other background information can be found in that post.

This post will cover how to convert RHEL 6.x to CentOS 6.x.

Updated 2016-03-29 – Thanks to feedback from here I've updated the guide.

Updates and Backups!
  1. Fully patch your system and reboot your system before starting this process
  2. Take a full backup of your system or a Snapshot if it's a VM
Conversion
  1. Login to the server and become root
  2. Clean up yum's cache
    localhost:~ root # yum clean all
  3. Create a temporary working area
    localhost:~ root # mkdir -p /temp/centos
    localhost:~ root # cd /temp/centos
  4. Determine your version of RHEL
    localhost:~ root # cat /etc/redhat-release
  5. Determine your architecture (32-bit = i386, 64-bit = x86_64)
    localhost:~ root # uname -i
  6. Download the applicable files for your release and architecture. The version numbers on these packages can change. To find the current versions, browse http://mirror.centos.org/centos/6/os/i386/Packages/ (32-bit) or http://mirror.centos.org/centos/6/os/x86_64/Packages/ (64-bit) and replace the 'x' values below with the current version numbers
    CentOS 6.5 / 32-bit
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/i386/RPM-GPG-KEY-CentOS-6
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/centos-release-6-x.el6.centos.x.x.i686.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/centos-indexhtml-6-x.el6.centos.noarch.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/yum-x.x.x-x.el6.centos.noarch.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/yum-plugin-fastestmirror-x.x.x-x.el6.noarch.rpm

    CentOS 6.5 / 64-bit
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/x86_64/RPM-GPG-KEY-CentOS-6
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-release-6-x.el6.centos.xx.x.x86_64.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-indexhtml-6-x.el6.centos.noarch.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-x.x.xx-xx.el6.centos.noarch.rpm
    localhost:~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-plugin-fastestmirror-x.x.xx-xx.el6.noarch.rpm
  7. Import the GPG key for the appropriate version of CentOS
    localhost:~ root # rpm --import RPM-GPG-KEY-CentOS-6
  8. Remove RHEL packages

    Note: If the 'rpm -e' command fails saying one of the packages is not installed remove the package from the command and run it again.

    localhost:~ root # yum remove rhnlib abrt-plugin-bugzilla redhat-release-notes*
    localhost:~ root # rpm -e --nodeps redhat-release-server-6Server redhat-indexhtml
  9. Remove any left over RHEL subscription information and the subscription-manager

    Note: If you do not do this every time you run 'yum' you will receive the following message: "This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register."

    localhost:~ root # subscription-manager clean
    localhost:~ root # yum remove subscription-manager
  10. Force install the CentOS RPMs we downloaded
    localhost:~ root # rpm -Uvh --force *.rpm
  11. Clean up yum one more time and then upgrade
    localhost:~ root # yum clean all
    localhost:~ root # yum upgrade
  12. Reboot your server
  13. Verify functionality
  14. Delete VM Snapshot if you took one as part of the backup
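Steps 5 and 6 above boil down to a mapping from the machine architecture to the mirror directory. A sketch (using the more portable uname -m; the file /tmp/centos-pkg-url is just a scratch location):

```shell
# Map `uname -m` output to the directory names used on mirror.centos.org.
ARCH=$(uname -m)
case "$ARCH" in
  i?86)   DIR=i386 ;;
  x86_64) DIR=x86_64 ;;
  *)      DIR=$ARCH ;;   # other architectures are out of scope for this guide
esac

echo "http://mirror.centos.org/centos/6/os/$DIR/Packages/" > /tmp/centos-pkg-url
cat /tmp/centos-pkg-url
```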
References

[Apr 29, 2018] RHEL 6.10 Beta is available

Apr 29, 2018 | www.serverwatch.com

...On April 26, Red Hat announced the RHEL 6.10 beta, providing stability and security updates.

RHEL 6 was first released in November 2010 and was superseded as the leading edge of RHEL development when RHEL 7 was released in June 2014.

"Red Hat Enterprise Linux offers a ten year lifecycle, one of the longest in the industry, and version 6 is within its seventh year of support, putting it in the Maintenance Support 2 phase," Marcel Kolaja, product manager, Red Hat Enterprise Linux at Red Hat, told ServerWatch . "This means that Red Hat Enterprise Linux 6 receives Critical-rated Red Hat Security Advisories (RHSAs), and selected Urgent-rated Red Hat Bug Fix Advisories (RHBAs) may be released as they become available."

Kolaja noted that the Red Hat Enterprise Linux 6.10 Beta delivers updates only to maintain and enhance the security, stability, and reliability of the platform in production roles.

[Apr 26, 2018] chkservice - A Tool For Managing Systemd Units From Linux Terminal

Apr 26, 2018 | www.2daygeek.com

For RPM-based systems, install chkservice with yum (or dnf ) directly from the release RPM.

$ sudo yum install https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.rpm
How To Use chkservice

Just fire the following command to launch the chkservice tool. The output is split into four parts.

$ sudo chkservice

[Jan 29, 2018] How good is Red Hat Enterprise Linux or CentOS as a desktop OS - Quora

Jan 29, 2018 | www.quora.com

It's harder to use CentOS as a desktop than Debian, the latter of which has many, many more packages. You can use the Nux repo at My RPM repositories (or use his CentOS 6 remix called Stella: Stella, stellae) to beef up your CentOS installation.

But I think there's safety in numbers, and a lot more developers package things for Debian.

That said, I use Fedora because it's pretty much the CentOS of the future, and there are a lot more packages. Sure, you need to add RPM Fusion for some things, but that's not so difficult.

Stability is a funny thing. Often in the Linux world, "stable" just means old. I find Fedora -- with much newer packages -- very stable in terms of its ability to keep running. Using what works with your hardware is more important than what is labeled "stable." Older hardware is often better with older software. And newer hardware may be more "stable" with newer software.

Your mileage will definitely vary, and it doesn't hurt to do a few installations (Debian, CentOS, Fedora, Ubuntu) to see what works best for you. Try before you buy (even if what you're "buying" is free).

[Oct 25, 2017] So after 4 hours of debugging systemd and NetworkManager; nothing but pain linux

Oct 25, 2017 | www.reddit.com

This is an old but pretty instructive post. The comments provide important tips on how to deal with systemd problems.

it drops the default route every time I reboot the machine

Can't work out how to resolve this at all. Nothing in logs or anything. Two hours down the pan.

ALL of this is config management stuff related to systemd and NetworkManager. Not at all impressed. Excuse the rant but this is motivating me to move our CentOS 5.x machines to FreeBSD which has some semblance of debuggability, configuration management that doesn't make you want to gouge your eyes out and stability.

This feels like Windows, WMI and the registry which is an abhorrent place for Linux to be. I know because I deal with Windows Server as well.

I know there is a fan following of systemd on here but this is what it's like in the real world for those of us who need to argue with it to get shit done, so Dear Lennart (and RedHat): Fuck you for taking away 4 hours of my life and fuck you for forcing these bits of crap on the distributions.

Edit: please read all my replies before you kick me down.

Edit 2: I've spoken to our CTO. 95% of our kit can be moved to windows as it's hosting JVMs so that's going in the mix as well.

I have a few CentOS 7 servers online and have had no trouble with routes being dropped, during reboot or at any other time.

[–] godlesspeon S ] 2 points 3 points 4 points 3 years ago (3 children)

How did you do the install on these? I'm genuinely interested in where this has gone wrong.

[–] e_t_ 8 points 9 points 10 points 3 years ago (2 children)

I went through Anaconda. It wrote a config file to /etc/sysconfig/network-scripts, which NM picks up. I think the same file could be created with nmtui .

[–] godlesspeon S ] 2 points 3 points 4 points 3 years ago (1 child)

I used nmtui and the sysconfig folder is expected (all contents as expected, correctly configured) and this is good hardware (Intel gbit cards).

I will say that the damn thing has just spontaneously started working which is even more worrying as the whole thing is completely non-deterministic. Ugh.

[–] riking27 9 points 10 points 11 points 3 years ago (0 children)

Nondeterministic? Sounds like a missing ordering dependency to me ( After= , Before= ).

Example: NFS mounts need an After=network-online.target to work; anything using the network on stop needs an After=network.target .
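riking27's fix looks like this as a drop-in; the unit name and path below are hypothetical, for illustration only:

```
# /etc/systemd/system/mnt-data.mount.d/order.conf  (assumed path)
[Unit]
After=network-online.target
Wants=network-online.target
```

Wants= pulls network-online.target into the transaction, and After= delays the unit until the network is actually up, which removes the nondeterminism.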

[–] azalynx 67 points 68 points 69 points 3 years ago (124 children)

[...] this is motivating me to move our CentOS 5.x machines to FreeBSD which has [...]

As a fan of systemd, I was going to take your side and agree that these issues are unacceptable, until you said this. You're not helping, seriously. Your problem is related to bugs in a piece of software, not to systemd's design philosophy, and especially not to Linux as a whole.

The fact that your solution to some bugs in the software is to throw the baby out with the bathwater, speaks a lot about your attitude. Chances are the issues you mentioned will be resolved as RHEL/CentOS 7 become more deployed, and others run across similar problems and actually report and/or fix the actual bugs instead of embarking on an anti-Lennart crusade and spewing vitriol all over the place.

I would also be willing to fucking bet that other people have had similar problems with every new version of RHEL or CentOS ever, in history, but because this time there's a convenient bullseye in the form of systemd to assign blame to, everyone loses their shit.

[...] and fuck you for forcing these bits of crap on the distributions.

Woah, what? This fallacy again? Red Hat isn't forcing jack shit on anyone . The reality is that many users/businesses need this functionality; just because you don't need it, that doesn't mean that there aren't a thousand other use cases out there that require systemd. Please stop assuming that you are at the center of the universe and that everyone else's use cases are like yours . Distributions have always been designed to be "one-size-fits-all" in order to fit everyone's uses; obviously many distributions want systemd's functionality because they have users who demand it.

Once again, as I said in my first sentence; it's not ok for an enterprise product to have the problems you experienced, but your attitude isn't helping, really. It's also worth noting that if everyone just flipped tables when something broke, we'd never have any progress.

[Oct 25, 2017] Top 5 Linux pain points in 2017

Looks like the author is a dilettante if he thinks that the library compatibility problem is not an issue. He also failed to mention systemd.
Oct 25, 2017 | opensource.com
As noted in the 2016 Open Source Yearbook article on troubleshooting tips for the 5 most common Linux issues , Linux installs and operates as expected for most users, but some inevitably run into problems. How have things changed over the past year in this regard? Once again, I posted the question to LinuxQuestions.org and on social media, and analyzed LQ posting patterns. Here are the updated results.

1. Documentation

Documentation, or lack thereof, was one of the largest pain points this year. Although open source methodology produces superior code, the importance of producing quality documentation has only recently come to the forefront. As more non-technical users adopt Linux and open source software, the quality and quantity of documentation will become paramount. If you've wanted to contribute to an open source project but don't feel you are technical enough to offer code, improving documentation is a great way to participate. Many projects even keep the documentation in their repository, so you can use your contribution to get acclimated to the version control workflow.

If you've wanted to contribute to an open source project but don't feel you are technical enough to offer code, improving documentation is a great way to participate.

2. Software/library version incompatibility

I was surprised by this one, but software/library version incompatibility was mentioned frequently...

3. UEFI and secure boot

Although this issue continues to improve as more supported hardware is deployed, many users indicate that they still have issues with UEFI and/or secure boot. Using a distribution that fully supports UEFI/secure boot out of the box is the best solution here.

4. Deprecation of 32-bit

Many users are lamenting the death of 32-bit support in their favorite distributions and software projects. Although you still have many options if 32-bit support is a must, fewer and fewer projects are likely to continue supporting a platform with decreasing market share and mind share. Luckily, we're talking about open source, so you'll likely have at least a couple options as long as someone cares about the platform.

5. Deteriorating support and testing for X-forwarding

Although many longtime and power users of Linux regularly use X-forwarding and consider it critical functionality, as Linux becomes more mainstream it appears to be seeing less testing and support, especially from newer apps. With Wayland network transparency still evolving, the situation may get worse before it improves.

[Sep 27, 2017] Chkservice - An Easy Way to Manage Systemd Units in Terminal

Sep 27, 2017 | linoxide.com

Systemd is a system and service manager for Linux operating systems which introduces the concept of systemd units and provides a number of features, such as parallel startup of system services at boot time and on-demand activation of daemons. It helps to manage services on your Linux OS - starting, stopping and reloading them. But to operate on a service with systemd, you need to know the exact name of its unit. There is a tool that helps Linux users navigate the different services available on the system, much as the top command does for running processes.

What is chkservice?

chkservice is a new and handy tool for systemd unit management in a terminal. It is a GitHub project developed by Svetlana Linuxenko. It lists the different services present on your system, giving you a view of each available service and letting you manage it as you wish.

Ubuntu (via PPA):

sudo add-apt-repository ppa:linuxenko/chkservice
sudo apt-get update
sudo apt-get install chkservice

Arch

git clone https://aur.archlinux.org/chkservice.git
cd chkservice
makepkg -si

Fedora

dnf copr enable srakitnican/default
dnf install chkservice

chkservice requires super-user privileges to make changes to unit states or sysv scripts. For regular users it works read-only.

Package dependencies:

Build dependencies:

Build and install the Debian package:

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ../
cpack

dpkg -i chkservice-x.x.x.deb

Build a release version:

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake ../
make

[Sep 23, 2017] CentOS 7 Server Hardening Guide Lisenet.com Linux Security Networking

Highly recommended!
Notable quotes:
"... As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled. ..."
Sep 23, 2017 | www.lisenet.com

Remove packages which you don't require on a server, e.g. sound card firmware, WinTV firmware, wireless drivers etc.

# yum remove alsa-* ivtv-* iwl*firmware aic94xx-firmware
2. System Settings – File Permissions and Masks

2.1 Restrict Partition Mount Options

Partitions should have hardened mount options:

  1. /boot – rw,nodev,noexec,nosuid
  2. /home – rw,nodev,nosuid
  3. /tmp – rw,nodev,noexec,nosuid
  4. /var – rw,nosuid
  5. /var/log – rw,nodev,noexec,nosuid
  6. /var/log/audit – rw,nodev,noexec,nosuid
  7. /var/www – rw,nodev,nosuid

As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled.

This will deny binary execution from /tmp , stop setuid bits from being honoured there, and prevent block devices from being created.

The storage location /var/tmp should be bind mounted to /tmp , as having multiple locations for temporary storage is not required:

/tmp /var/tmp none rw,nodev,noexec,nosuid,bind 0 0

The same applies to shared memory /dev/shm :

tmpfs /dev/shm tmpfs rw,nodev,noexec,nosuid 0 0

The proc pseudo-filesystem /proc should be mounted with hidepid . When hidepid is set to 2, process directories in /proc are hidden from all users other than their owners.

proc /proc proc rw,hidepid=2 0 0

Harden removable media mounts by adding nodev , noexec and nosuid , e.g.:

/dev/cdrom /mnt/cdrom iso9660 ro,noexec,nosuid,nodev,noauto 0 0
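A quick way to audit the mount options above is to compare a filesystem's actual options against the required set. The helper below is a hypothetical sketch; in real use you would feed it the output of findmnt -no OPTIONS /tmp (or any other mount point).

```shell
# Hypothetical helper: report which required hardening options are
# missing from a comma-separated mount-option string.
check_mount_opts() {
  opts=",$1,"
  missing=""
  for req in nodev noexec nosuid; do
    case "$opts" in
      *,"$req",*) ;;                  # option present
      *) missing="$missing $req" ;;   # option absent
    esac
  done
  if [ -z "$missing" ]; then
    echo "OK"
  else
    echo "MISSING:$missing"
  fi
}

check_mount_opts "rw,nodev,noexec,nosuid,relatime"   # prints OK
check_mount_opts "rw,relatime"                       # prints MISSING: nodev noexec nosuid
```

In practice you would call it as: check_mount_opts "$(findmnt -no OPTIONS /tmp)".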
2.2 Restrict Dynamic Mounting and Unmounting of Filesystems

Add the following to /etc/modprobe.d/hardening.conf to disable uncommon filesystems:

install cramfs /bin/true

install freevxfs /bin/true

install jffs2 /bin/true

install hfs /bin/true

install hfsplus /bin/true

install squashfs /bin/true

install udf /bin/true

Depending on your setup (if you don't run clusters, NFS, CIFS etc.), you may consider disabling the following too:

install fat /bin/true

install vfat /bin/true

install cifs /bin/true

install nfs /bin/true

install nfsv3 /bin/true

install nfsv4 /bin/true

install gfs2 /bin/true

It is wise to leave ext4, xfs and btrfs enabled at all times.
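The install lines above all follow one pattern, so they can be generated rather than typed; a small sketch that prints lines suitable for appending to /etc/modprobe.d/hardening.conf:

```shell
# Generate "install <fs> /bin/true" lines for a list of filesystems.
for fs in cramfs freevxfs jffs2 hfs hfsplus squashfs udf; do
  printf 'install %s /bin/true\n' "$fs"
done
```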

2.3 Prevent Users Mounting USB Storage

Add the following to /etc/modprobe.d/hardening.conf to disable modprobe loading of USB and FireWire storage drivers:

blacklist usb-storage

blacklist firewire-core

install usb-storage /bin/true

Disable USB authorisation. Create a file /opt/usb-auth.sh with the following content:

#!/bin/bash

echo 0 > /sys/bus/usb/devices/usb1/authorized

echo 0 > /sys/bus/usb/devices/usb1/authorized_default

If more than one USB device is available, then add them all. Create a service file /etc/systemd/system/usb-auth.service with the following content:

[Unit]

Description=Disable USB auth

DefaultDependencies=no



[Service]

Type=oneshot

ExecStart=/bin/bash /opt/usb-auth.sh



[Install]

WantedBy=multi-user.target

Set permissions, enable and start the service:

# chmod 0700 /opt/usb-auth.sh

# systemctl enable usb-auth.service

# systemctl start usb-auth.service

If required, disable kernel support for USB via bootloader configuration. To do so, append nousb to the kernel line GRUB_CMDLINE_LINUX in /etc/default/grub and generate the Grub2 configuration file:

# grub2-mkconfig -o /boot/grub2/grub.cfg
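Appending a parameter to GRUB_CMDLINE_LINUX can be scripted; the sed sketch below operates on a sample line rather than the real /etc/default/grub, so it is safe to try anywhere:

```shell
# Append "nousb" inside the quoted GRUB_CMDLINE_LINUX value.
# A sample line stands in for the real /etc/default/grub entry.
line='GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"'
newline=$(printf '%s\n' "$line" | sed 's/"$/ nousb"/')
printf '%s\n' "$newline"   # prints GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet nousb"
```

Against the real file this would be a sed -i on the GRUB_CMDLINE_LINUX line, followed by the grub2-mkconfig step above.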

Note that disabling all kernel support for USB will likely cause problems for systems with USB-based keyboards etc.

2.4 Restrict Programs from Dangerous Execution Patterns

Configure /etc/sysctl.conf with the following:

# Disable core dumps

fs.suid_dumpable = 0



# Disable System Request debugging functionality

kernel.sysrq = 0



# Restrict access to kernel logs

kernel.dmesg_restrict = 1



# Enable ExecShield protection (RHEL 6 only; the kernel.exec-shield key
# does not exist on RHEL 7 kernels, where NX protection is always on)

kernel.exec-shield = 1



# Randomise memory space

kernel.randomize_va_space = 2



# Hide kernel pointers

kernel.kptr_restrict = 2

Load sysctl settings:

# sysctl -p
2.5 Set UMASK 027

The following files require umask hardening: /etc/bashrc , /etc/csh.cshrc , /etc/init.d/functions and /etc/profile .

Sed one-liner:

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/bashrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/csh.cshrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/profile

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/init.d/functions
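A umask of 027 means newly created files get mode 640 and directories 750; this can be demonstrated in a throwaway directory:

```shell
# Demonstrate the effect of umask 027 on newly created files/directories.
old_umask=$(umask)
umask 027
tmpd=$(mktemp -d)
touch "$tmpd/file"
mkdir "$tmpd/dir"
fperm=$(stat -c '%a' "$tmpd/file")   # 666 & ~027 = 640
dperm=$(stat -c '%a' "$tmpd/dir")    # 777 & ~027 = 750
echo "file=$fperm dir=$dperm"        # prints file=640 dir=750
rm -rf "$tmpd"
umask "$old_umask"
```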
2.6 Disable Core Dumps

Open /etc/security/limits.conf and set the following:

*  hard  core  0
2.7 Set Security Limits to Prevent DoS

Add the following to /etc/security/limits.conf to enforce sensible security limits:

# 4096 is a good starting point

*      soft   nofile    4096

*      hard   nofile    65536

*      soft   nproc     4096

*      hard   nproc     4096

*      soft   locks     4096

*      hard   locks     4096

*      soft   stack     10240

*      hard   stack     32768

*      soft   memlock   64

*      hard   memlock   64

*      hard   maxlogins 10



# Soft limit 32GB, hard 64GB

*      soft   fsize     33554432

*      hard   fsize     67108864



# Limits for root

root   soft   nofile    4096

root   hard   nofile    65536

root   soft   nproc     4096

root   hard   nproc     4096

root   soft   stack     10240

root   hard   stack     32768

root   soft   fsize     33554432
2.8 Verify Permissions of Files

Ensure that all files are owned by a user:

# find / -ignore_readdir_race -nouser -print -exec chown root {} \;

Ensure that all files are owned by a group:

# find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Automate the process by creating a cron file /etc/cron.daily/unowned_files with the following content:

#!/bin/bash

find / -ignore_readdir_race -nouser -print -exec chown root {} \;

find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Set ownership and permissions:

# chown root:root /etc/cron.daily/unowned_files

# chmod 0700 /etc/cron.daily/unowned_files
2.9 Monitor SUID/GUID Files

Search for setuid/setgid files and identify if all are required:

# find / -xdev -type f \( -perm -4000 -o -perm -2000 \)
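Beyond a one-off audit, the list can be compared against a stored baseline so that newly appearing SUID/SGID files stand out. A sketch with illustrative file lists standing in for real find output:

```shell
# Compare a current SUID/SGID file list against a saved baseline and
# report anything new. The paths below are illustrative stand-ins.
baseline=$(mktemp)
current=$(mktemp)
printf '%s\n' /usr/bin/passwd /usr/bin/sudo | sort > "$baseline"
printf '%s\n' /usr/bin/passwd /usr/bin/sudo /tmp/evil | sort > "$current"
drift=$(comm -13 "$baseline" "$current")   # entries only in the current list
printf '%s\n' "$drift"                     # prints /tmp/evil
rm -f "$baseline" "$current"
```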
3. System Settings – Firewall and Network Configuration 3.1 Firewall

Setting the default firewalld zone to drop causes any packet that is not explicitly permitted to be dropped.

# sed -i "s/DefaultZone=.*/DefaultZone=drop/g" /etc/firewalld/firewalld.conf

Unless firewalld is required, mask it and replace with iptables:

# systemctl stop firewalld.service

# systemctl mask firewalld.service

# systemctl daemon-reload

# yum install iptables-services

# systemctl enable iptables.service ip6tables.service

Add the following to /etc/sysconfig/iptables to allow only minimal outgoing traffic (DNS, NTP, HTTP/S and SMTPS):

*filter

-F INPUT

-F OUTPUT

-F FORWARD

-P INPUT ACCEPT

-P FORWARD DROP

-P OUTPUT ACCEPT

-A INPUT -i lo -m comment --comment local -j ACCEPT

-A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable

-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 10.0.0.0/8 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 172.16.0.0/12 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 192.168.0.0/16 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT

-A INPUT -j DROP

-A OUTPUT -d 127.0.0.0/8 -o lo -m comment --comment local -j ACCEPT

-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A OUTPUT -p icmp -m icmp --icmp-type any -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 123 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 80 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 443 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 587 -j ACCEPT

-A OUTPUT -j LOG --log-prefix "iptables_output "

-A OUTPUT -j REJECT --reject-with icmp-port-unreachable

COMMIT

Note that the rule allowing all incoming SSH traffic should be removed, restricting access to an IP whitelist only, or SSH should be hidden behind a VPN.

Add the following to /etc/sysconfig/ip6tables to deny all IPv6:

*filter

-F INPUT

-F OUTPUT

-F FORWARD

-P INPUT DROP

-P FORWARD DROP

-P OUTPUT DROP

COMMIT

Apply configurations:

# iptables-restore < /etc/sysconfig/iptables

# ip6tables-restore < /etc/sysconfig/ip6tables
3.2 TCP Wrappers

Open /etc/hosts.allow and allow localhost traffic and SSH:

ALL: 127.0.0.1

sshd: ALL

The file /etc/hosts.deny should be configured to deny all by default:

ALL: ALL
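Instead of sshd: ALL, the sshd line in /etc/hosts.allow can be limited to internal networks; the network/netmask pairs below are illustrative:

```
sshd: 10.0.0.0/255.0.0.0, 192.168.0.0/255.255.0.0
```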
3.3 Kernel Parameters Which Affect Networking

Open /etc/sysctl.conf and add the following:

# Disable packet forwarding

net.ipv4.ip_forward = 0



# Disable redirects, not a router

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

net.ipv4.conf.all.secure_redirects = 0

net.ipv4.conf.default.secure_redirects = 0

net.ipv6.conf.all.accept_redirects = 0

net.ipv6.conf.default.accept_redirects = 0



# Disable source routing

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0

net.ipv6.conf.all.accept_source_route = 0



# Enable source validation by reversed path

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1



# Log packets with impossible addresses to kernel log

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.log_martians = 1



# Disable ICMP broadcasts

net.ipv4.icmp_echo_ignore_broadcasts = 1



# Ignore bogus ICMP errors

net.ipv4.icmp_ignore_bogus_error_responses = 1



# Against SYN flood attacks

net.ipv4.tcp_syncookies = 1



# Turning off timestamps could improve security but degrade performance.

# TCP timestamps are used to improve performance as well as protect against

# late packets messing up your data flow. A side effect of this feature is 

# that the uptime of the host can sometimes be computed.

# If you disable TCP timestamps, you should expect worse performance 

# and less reliable connections.

net.ipv4.tcp_timestamps = 1



# Disable IPv6 unless required

net.ipv6.conf.lo.disable_ipv6 = 1

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1



# Do not accept router advertisements

net.ipv6.conf.all.accept_ra = 0

net.ipv6.conf.default.accept_ra = 0
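As a sanity check before applying the file, a key can be looked up in a sysctl.conf-style fragment the same way sysctl maps keys to values; a small sketch over an inline sample:

```shell
# Look up one key in a sysctl.conf-style fragment (inline sample here),
# i.e. the same "key = value" mapping that "sysctl -p" applies.
conf='net.ipv4.ip_forward = 0
net.ipv4.tcp_syncookies = 1'

get_key() {
  printf '%s\n' "$conf" | awk -F' *= *' -v k="$1" '$1 == k { print $2 }'
}

get_key net.ipv4.tcp_syncookies   # prints 1
```

After loading the settings, the live values can be confirmed with sysctl -n <key>.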
3.4 Kernel Modules Which Affect Networking

Open /etc/modprobe.d/hardening.conf and disable Bluetooth kernel modules:

install bnep /bin/true

install bluetooth /bin/true

install btusb /bin/true

install net-pf-31 /bin/true

Also disable AppleTalk:

install appletalk /bin/true

Unless required, disable support for IPv6:

options ipv6 disable=1

Disable (uncommon) protocols:

install dccp /bin/true

install sctp /bin/true

install rds /bin/true

install tipc /bin/true

Since we're looking at server security, wireless shouldn't be an issue, therefore we can disable all the wireless drivers.

# for i in $(find /lib/modules/$(uname -r)/kernel/drivers/net/wireless -name "*.ko" -type f);do \

  echo blacklist "$(basename $i .ko)" >>/etc/modprobe.d/hardening-wireless.conf;done

Note that a blacklist directive expects a module name, not a file path, hence the basename.
3.5 Disable Radios

Disable radios (wifi and wwan):

# nmcli radio all off
3.6 Disable Zeroconf Networking

Open /etc/sysconfig/network and add the following:

NOZEROCONF=yes
3.7 Disable Interface Usage of IPv6

Open /etc/sysconfig/network and add the following:

NETWORKING_IPV6=no

IPV6INIT=no
3.8 Network Sniffer

The server should not be acting as a network sniffer capturing packets. Run the following to determine whether any interface is running in promiscuous mode:

# ip link | grep PROMISC
3.9 Secure VPN Connection

Install the libreswan package if implementation of IPsec and IKE is required.

# yum install libreswan
3.10 Disable DHCP Client

Manual assignment of IP addresses provides a greater degree of management.

For each network interface that is available on the server, open the corresponding file /etc/sysconfig/network-scripts/ifcfg-<interface> and configure the following parameters:

BOOTPROTO=none

IPADDR=

NETMASK=

GATEWAY=
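A filled-in example for a hypothetical eth0 (all addresses are placeholders):

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.15
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
```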
4. System Settings – SELinux

Ensure that SELinux is not disabled in /etc/default/grub , and verify that the state is enforcing:

# sestatus
5. System Settings – Account and Access Control 5.1 Delete Unused Accounts and Groups

Remove any account which is not required, e.g.:

# userdel -r adm

# userdel -r ftp

# userdel -r games

# userdel -r lp

Remove any group which is not required, e.g.:

# groupdel games
5.2 Disable Direct root Login
Emptying /etc/securetty prevents root from logging in directly on any console device; root access then requires su or sudo from an unprivileged account:

# echo > /etc/securetty
5.3 Enable Secure (high quality) Password Policy

Note that running authconfig will overwrite the PAM configuration files, destroying any manually made changes. Make sure that you have a backup.

Secure password policy rules are outlined below.

  1. Minimum length of a password – 16.
  2. Minimum number of character classes in a password – 4.
  3. Maximum number of same consecutive characters in a password – 2.
  4. Maximum number of consecutive characters of same class in a password – 2.
  5. Require at least one lowercase and one uppercase character in a password.
  6. Require at least one digit in a password.
  7. Require at least one other character in a password.

The following command will enable SHA512 as well as set the above password requirements:

# authconfig --passalgo=sha512 \

 --passminlen=16 \

 --passminclass=4 \

 --passmaxrepeat=2 \

 --passmaxclassrepeat=2 \

 --enablereqlower \

 --enablerequpper \

 --enablereqdigit \

 --enablereqother \

 --update

Open /etc/security/pwquality.conf and add the following:

difok = 8

gecoscheck = 1

These ensure that at least 8 characters of the new password are not present in the old password, and enable checking the new password against words from the user's GECOS (passwd comment) field.

5.4 Prevent Log In to Accounts With Empty Password

Remove any instances of nullok from /etc/pam.d/system-auth and /etc/pam.d/password-auth to prevent logins with empty passwords.

Sed one-liner:

# sed -i 's/\<nullok\>//g' /etc/pam.d/system-auth /etc/pam.d/password-auth
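The substitution can be previewed safely on a sample line before touching the real PAM files:

```shell
# Preview the nullok removal on a sample PAM line; the real one-liner
# targets /etc/pam.d/system-auth and /etc/pam.d/password-auth.
pamline='auth sufficient pam_unix.so nullok try_first_pass'
cleaned=$(printf '%s\n' "$pamline" | sed 's/\<nullok\>//g')
printf '%s\n' "$cleaned"
```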
5.5 Set Account Expiration Following Inactivity

Disable accounts as soon as the password has expired.

Open /etc/default/useradd and set the following:

INACTIVE=0

Sed one-liner:

# sed -i 's/^INACTIVE.*/INACTIVE=0/' /etc/default/useradd
5.6 Secure Password Policy

Open /etc/login.defs and set the following:

PASS_MAX_DAYS 60

PASS_MIN_DAYS 1

PASS_MIN_LEN 14

PASS_WARN_AGE 14

Sed one-liner:

# sed -i -e 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS 60/' \

  -e 's/^PASS_MIN_DAYS.*/PASS_MIN_DAYS 1/' \

  -e 's/^PASS_MIN_LEN.*/PASS_MIN_LEN 14/' \

  -e 's/^PASS_WARN_AGE.*/PASS_WARN_AGE 14/' /etc/login.defs
5.7 Log Failed Login Attempts

Open /etc/login.defs and enable logging:

FAILLOG_ENAB yes

Also add a delay in seconds before being allowed another attempt after a login failure:

FAIL_DELAY 4
5.8 Ensure Home Directories are Created for New Users

Open /etc/login.defs and configure:

CREATE_HOME yes
5.9 Verify All Account Password Hashes are Shadowed

The command below should return only "x":

# cut -d: -f2 /etc/passwd | sort | uniq
5.10 Set Deny and Lockout Time for Failed Password Attempts

Add the following line immediately before the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth required pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

Add the following line immediately after the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

Add the following line immediately before the pam_unix.so statement in the ACCOUNT section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

account required pam_faillock.so

The content of the file /etc/pam.d/system-auth can be seen below.

#%PAM-1.0

auth        required      pam_env.so

auth        required      pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

auth        sufficient    pam_unix.so  try_first_pass

auth        [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success

auth        required      pam_deny.so



account     required      pam_unix.so

account     required      pam_faillock.so

account     sufficient    pam_localuser.so

account     sufficient    pam_succeed_if.so uid < 1000 quiet

account     required      pam_permit.so



password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=

password    sufficient    pam_unix.so sha512 shadow  try_first_pass use_authtok remember=5

password    required      pam_deny.so



session     optional      pam_keyinit.so revoke

session     required      pam_limits.so

-session    optional      pam_systemd.so

session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid

session     required      pam_unix.so

Also, prevent users from reusing recent passwords by adding the remember option to the pam_unix password line ( remember=5 above keeps the last five password hashes).

Make /etc/pam.d/system-auth and /etc/pam.d/password-auth configurations immutable so that they don't get overwritten when authconfig is run:

# chattr +i /etc/pam.d/system-auth /etc/pam.d/password-auth

Accounts will get locked after 3 failed login attempts:

login[]: pam_faillock(login:auth): Consecutive login failures for user tomas account temporarily locked

Use the following to clear user's fail count:

# faillock --user tomas --reset
5.11 Set Boot Loader Password

Prevent users from entering the grub command line and edit menu entries:

# grub2-setpassword

# grub2-mkconfig -o /boot/grub2/grub.cfg

This will create the file /boot/grub2/user.cfg if one is not already present, which will contain the hashed Grub2 bootloader password.

Verify permissions of /boot/grub2/grub.cfg :

# chmod 0600 /boot/grub2/grub.cfg
5.12 Password-protect Single User Mode

CentOS 7 single user mode is password protected by the root password by default as part of the design of Grub2 and systemd.

5.13 Ensure Users Re-Authenticate for Privilege Escalation

The NOPASSWD tag allows a user to execute commands using sudo without having to provide a password. While this may sometimes be useful, it is also dangerous.

Ensure that the NOPASSWD tag does not exist in /etc/sudoers configuration file or /etc/sudoers.d/ .

5.14 Multiple Console Screens and Console Locking

Install the screen package to be able to emulate multiple console windows:

# yum install screen

Install the vlock package to enable console screen locking:

# yum install vlock
5.15 Disable Ctrl-Alt-Del Reboot Activation

Prevent a locally logged-in console user from rebooting the system when Ctrl-Alt-Del is pressed:

# systemctl mask ctrl-alt-del.target
5.16 Warning Banners for System Access

Add the following line to the files /etc/issue and /etc/issue.net :

Unauthorised access prohibited. Logs are recorded and monitored.
5.17 Set Interactive Session Timeout

Open /etc/profile and set:

readonly TMOUT=900
5.18 Two Factor Authentication

Recent versions of the OpenSSH server allow chaining several authentication methods, meaning that all of them have to be satisfied in order for a user to log in successfully.

Adding the following line to /etc/ssh/sshd_config would require a user to authenticate with a key first, and then also provide a password.

AuthenticationMethods publickey,password

This is by definition two-factor authentication: the key file is something that the user has, and the account password is something that the user knows.

Alternatively, two factor authentication for SSH can be set up by using Google Authenticator.

5.19 Configure History File Size

Open /etc/profile and set the number of commands to remember in the command history to 5000:

HISTSIZE=5000

Sed one-liner:

# sed -i 's/HISTSIZE=.*/HISTSIZE=5000/g' /etc/profile
6. System Settings – System Accounting with auditd 6.1 Auditd Configuration

Open /etc/audit/auditd.conf and configure the following:

local_events = yes

write_logs = yes

log_file = /var/log/audit/audit.log

max_log_file = 25

num_logs = 10

max_log_file_action = rotate

space_left = 30

space_left_action = email

admin_space_left = 10

admin_space_left_action = email

disk_full_action = suspend

disk_error_action = suspend

action_mail_acct = root@example.com

flush = data

The above auditd configuration should never use more than 250MB of disk space (10x25MB=250MB) on /var/log/audit .

Set admin_space_left_action=single if you want to cause the system to switch to single user mode for corrective action rather than send an email.

Automatically rotating logs ( max_log_file_action=rotate ) minimises the chances of the system unexpectedly running out of disk space by being filled up with log data.

We need to ensure that audit event data is fully synchronised ( flush=data ) with the log files on the disk .

6.2 Auditd Rules

System audit rules must have mode 0640 or less permissive and be owned by the root user:

# chown root:root /etc/audit/rules.d/audit.rules

# chmod 0600 /etc/audit/rules.d/audit.rules

Open /etc/audit/rules.d/audit.rules and add the following:

# Delete all currently loaded rules

-D



# Set kernel buffer size

-b 8192



# Set the action that is performed when a critical error is detected.

# Failure modes: 0=silent 1=printk 2=panic

-f 1



# Record attempts to alter the localtime file

-w /etc/localtime -p wa -k audit_time_rules



# Record events that modify user/group information

-w /etc/group -p wa -k audit_rules_usergroup_modification

-w /etc/passwd -p wa -k audit_rules_usergroup_modification

-w /etc/gshadow -p wa -k audit_rules_usergroup_modification

-w /etc/shadow -p wa -k audit_rules_usergroup_modification

-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification



# Record events that modify the system's network environment

-w /etc/issue.net -p wa -k audit_rules_networkconfig_modification

-w /etc/issue -p wa -k audit_rules_networkconfig_modification

-w /etc/hosts -p wa -k audit_rules_networkconfig_modification

-w /etc/sysconfig/network -p wa -k audit_rules_networkconfig_modification

-a always,exit -F arch=b32 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification

-a always,exit -F arch=b64 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification



# Record events that modify the system's mandatory access controls

-w /etc/selinux/ -p wa -k MAC-policy



# Record attempts to alter logon and logout events

-w /var/log/tallylog -p wa -k logins

-w /var/log/lastlog -p wa -k logins

-w /var/run/faillock/ -p wa -k logins



# Record attempts to alter process and session initiation information

-w /var/log/btmp -p wa -k session

-w /var/log/wtmp -p wa -k session

-w /var/run/utmp -p wa -k session



# Ensure auditd collects information on kernel module loading and unloading

-w /usr/sbin/insmod -p x -k modules

-w /usr/sbin/modprobe -p x -k modules

-w /usr/sbin/rmmod -p x -k modules

-a always,exit -F arch=b64 -S init_module -S delete_module -k modules



# Ensure auditd collects system administrator actions

-w /etc/sudoers -p wa -k actions



# Record attempts to alter time through adjtimex

-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k audit_time_rules



# Record attempts to alter time through settimeofday

-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k audit_time_rules



# Record attempts to alter time through clock_settime

-a always,exit -F arch=b32 -S clock_settime -F a0=0x0 -k time-change



# Record attempts to alter time through clock_settime

-a always,exit -F arch=b64 -S clock_settime -F a0=0x0 -k time-change



# Record events that modify the system's discretionary access controls

-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod



# Ensure auditd collects unauthorised access attempts to files (unsuccessful)

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access



# Ensure auditd collects information on exporting to media (successful)

-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=4294967295 -k export

-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k export



# Ensure auditd collects file deletion events by user

-a always,exit -F arch=b32 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete

-a always,exit -F arch=b64 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete



# Ensure auditd collects information on the use of privileged commands

-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/bin/chfn -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/pkexec -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/screen -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged

-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/wall -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/write -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib64/dbus-1/dbus-daemon-launch-helper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/utempter/utempter -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib/polkit-1/polkit-agent-helper-1 -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/netreport -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/restorecon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/usernetctl -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged



# Make the auditd configuration immutable.

# The configuration can only be changed by rebooting the machine.

-e 2
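The privileged-command rules above all follow one template, so they can be generated from a list of paths rather than typed; a hypothetical sketch (in real use the paths would come from a find for setuid/setgid binaries):

```shell
# Emit one audit rule per privileged binary path read from stdin.
gen_rules() {
  while read -r path; do
    printf -- '-a always,exit -F path=%s -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged\n' "$path"
  done
}

# Sample paths standing in for real find output:
printf '%s\n' /usr/bin/passwd /usr/bin/sudo | gen_rules
```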

The auditd service does not include the ability to send audit records to a centralised server for management directly.

It does, however, include a plug-in for audit event multiplexor to pass audit records to the local syslog server.

To do so, open the file /etc/audisp/plugins.d/syslog.conf and set:

active = yes

Enable and start the service:

# systemctl enable auditd.service

# systemctl start auditd.service
6.3. Enable Kernel Auditing

Open /etc/default/grub and append the following parameter to the kernel boot line GRUB_CMDLINE_LINUX:

audit=1

Update Grub2 configuration to reflect changes:

# grub2-mkconfig -o /boot/grub2/grub.cfg
7. System Settings – Software Integrity Checking 7.1 Advanced Intrusion Detection Environment (AIDE)

Install AIDE:

# yum install aide

Build AIDE database:

# /usr/sbin/aide --init

By default, the database will be written to the file /var/lib/aide/aide.db.new.gz .

# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

Storing the database and the configuration file /etc/aide.conf (or SHA2 hashes of the files) in a secure location provides additional assurance about their integrity.

Check AIDE database:

# /usr/sbin/aide --check

By default, AIDE does not install itself for periodic execution. Configure periodic execution of AIDE by adding to cron:

# echo "30 4 * * * root /usr/sbin/aide --check|mail -s 'AIDE' root@example.com" >> /etc/crontab

Periodically running AIDE is necessary in order to reveal system changes.

7.2 Tripwire

Open Source Tripwire is an alternative to AIDE. It is recommended to use one or another, but not both.

Install Tripwire from the EPEL repository:

# yum install epel-release

# yum install tripwire

# /usr/sbin/tripwire-setup-keyfiles

The Tripwire configuration file is /etc/tripwire/twcfg.txt and the policy file is /etc/tripwire/twpol.txt . These can be edited and configured to match the system Tripwire is installed on, see this blog post for more details.

Initialise the database to implement the policy:

# tripwire --init

Check for policy violations:

# tripwire --check

Tripwire adds itself to /etc/cron.daily/ for daily execution therefore no extra configuration is required.

7.3 Prelink

Prelinking is done by the prelink package, which is not installed by default.

# yum install prelink

To disable prelinking, open the file /etc/sysconfig/prelink and set the following:

PRELINKING=no

Sed one-liner:

# sed -i 's/PRELINKING.*/PRELINKING=no/g' /etc/sysconfig/prelink

Disable existing prelinking on all system files:

# prelink -ua
8. System Settings – Logging and Message Forwarding 8.1 Configure Persistent Journald Storage

By default, the journal stores log files only in memory or in a small ring buffer in the directory /run/log/journal . This is sufficient to show recent log history with journalctl, but logs aren't saved permanently. Enabling persistent journal storage ensures that comprehensive data is available after a system reboot.

Open the file /etc/systemd/journald.conf and put the following:

[Journal]

Storage=persistent



# How much disk space the journal may use up at most

SystemMaxUse=256M



# How much disk space systemd-journald shall leave free for other uses

SystemKeepFree=512M



# How large individual journal files may grow at most

SystemMaxFileSize=32M

Restart the service:

# systemctl restart systemd-journald
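On systemd the same settings can also live in a drop-in fragment, leaving the stock /etc/systemd/journald.conf untouched. A sketch using a scratch directory in place of /etc/systemd/journald.conf.d/ (the file name 90-persistent.conf is an arbitrary choice):

```shell
# Sketch: write the journald settings as a drop-in fragment. On a real
# system the directory would be /etc/systemd/journald.conf.d/ and the
# change would be activated with: systemctl restart systemd-journald
dropin_dir=$(mktemp -d)
dropin="$dropin_dir/90-persistent.conf"

cat > "$dropin" <<'EOF'
[Journal]
Storage=persistent
SystemMaxUse=256M
SystemKeepFree=512M
SystemMaxFileSize=32M
EOF

# Verify every key we expect is present exactly once
for key in Storage SystemMaxUse SystemKeepFree SystemMaxFileSize; do
    count=$(grep -c "^$key=" "$dropin")
    [ "$count" -eq 1 ] || echo "missing or duplicated: $key"
done
```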
8.2 Configure Message Forwarding to Remote Server

Depending on your setup, open /etc/rsyslog.conf and add the following to forward messages to a remote server:

*.* @graylog.example.com:514

Here *.* stands for facility.severity. Note that a single @ sends logs over UDP, while a double @@ sends them over TCP.
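The forwarding rule can equally be kept in its own fragment under /etc/rsyslog.d/ rather than in the main config. A sketch that assembles the rule from its parts, with a temp file standing in for the fragment (the graylog.example.com target is the placeholder from above):

```shell
# Sketch: generate an rsyslog forwarding rule as a drop-in fragment.
# On a real system the file would go under /etc/rsyslog.d/ and rsyslog
# would be restarted afterwards. Using a temp file here.
fwd=$(mktemp)
proto="tcp"            # "udp" or "tcp"
target="graylog.example.com"
port=514

if [ "$proto" = "tcp" ]; then
    prefix="@@"        # double @ = TCP
else
    prefix="@"         # single @ = UDP
fi

echo "*.* ${prefix}${target}:${port}" > "$fwd"
cat "$fwd"             # -> *.* @@graylog.example.com:514
```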

8.3 Logwatch

Logwatch is a customisable log-monitoring system.

# yum install logwatch

Logwatch adds itself to /etc/cron.daily/ for daily execution, so no configuration is mandatory.

9. System Settings – Security Software

9.1 Malware Scanners

Install Rkhunter and ClamAV:

# yum install epel-release

# yum install rkhunter clamav clamav-update

# rkhunter --update

# rkhunter --propupd

# freshclam -v

Rkhunter adds itself to /etc/cron.daily/ for daily execution, so no configuration is required. ClamAV scans should be tailored to individual needs.

9.2 Arpwatch

Arpwatch is a tool for monitoring ARP activity on a local network (ARP spoofing detection). It is therefore unlikely to be of use in the cloud, but it is still worth mentioning that the tool exists.

Be aware of the configuration file /etc/sysconfig/arpwatch, which you can use to set the email address the reports are sent to.

9.3 Commercial AV

Consider installing a commercial AV product that provides real-time on-access scanning capabilities.

9.4 Grsecurity

Grsecurity is an extensive security enhancement to the Linux kernel. Although it isn't free nowadays, the software is still worth mentioning.

The company behind Grsecurity stopped publicly distributing stable patches back in 2015, with the exception of the test series, which continued to be available to the public in order to avoid impacting the Gentoo Hardened and Arch Linux communities.

Two years later, the company decided to cease free distribution of the test patches as well, so as of 2017, Grsecurity is available to paying customers only.

10. System Settings – OS Update Installation

Install the package yum-utils for better consistency checking of the package database.

# yum install yum-utils

Configure automatic package updates via yum-cron.

# yum install yum-cron

Add the following to /etc/yum/yum-cron.conf to get notified via email when new updates are available:

update_cmd = default

update_messages = yes

download_updates = no	

apply_updates = no

emit_via = email	

email_from = root@example.com

email_to = user@example.com

email_host = localhost
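The notify-only settings above can be assembled and sanity-checked before being put in place. A sketch using a scratch file in place of /etc/yum/yum-cron.conf (the email addresses are the placeholders from above):

```shell
# Sketch: assemble the notify-only yum-cron settings in a scratch file
# (the real target is /etc/yum/yum-cron.conf; addresses are placeholders).
conf=$(mktemp)
cat > "$conf" <<'EOF'
update_cmd = default
update_messages = yes
download_updates = no
apply_updates = no
emit_via = email
email_from = root@example.com
email_to = user@example.com
email_host = localhost
EOF

# In notify-only mode nothing may be downloaded or applied automatically
get() { awk -F' = ' -v k="$1" '$1 == k {print $2}' "$conf"; }
echo "download_updates=$(get download_updates) apply_updates=$(get apply_updates)"
```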

Add the following to /etc/yum/yum-cron-hourly.conf to check for security-related updates every hour and automatically download and install them:

update_cmd = security

update_messages = yes

download_updates = yes

apply_updates = yes

emit_via = stdio

Enable and start the service:

# systemctl enable yum-cron.service

# systemctl start yum-cron.service
11. System Settings – Process Accounting

The package psacct contains utilities for monitoring process activities:

  1. ac – displays statistics about how long users have been logged on.
  2. lastcomm – displays information about previously executed commands.
  3. accton – turns process accounting on or off.
  4. sa – summarises information about previously executed commands.

Install and enable the service:

# yum install psacct

# systemctl enable psacct.service

# systemctl start psacct.service
1. Services – SSH Server

Create a group for SSH access as well as some regular user account who will be a member of the group:

# groupadd ssh-users

# useradd -m -s /bin/bash -G ssh-users tomas

Generate SSH keys for the user:

# su - tomas

$ mkdir --mode=0700 ~/.ssh

$ ssh-keygen -b 4096 -t rsa -C "tomas" -f ~/.ssh/id_rsa

Generate SSH host keys:

# ssh-keygen -b 4096 -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key

# ssh-keygen -b 1024 -t dsa -N "" -f /etc/ssh/ssh_host_dsa_key

# ssh-keygen -b 521 -t ecdsa -N "" -f /etc/ssh/ssh_host_ecdsa_key

# ssh-keygen -t ed25519 -N "" -f /etc/ssh/ssh_host_ed25519_key

For RSA keys, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2.

For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. ED25519 keys have a fixed length and the -b flag is ignored.

The host can be impersonated if an unauthorised user obtains the private SSH host key file, therefore ensure that permissions of /etc/ssh/*_key are properly set:

# chmod 0600 /etc/ssh/*_key
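A small audit can confirm that no private key is readable by group or others, and tighten any that are. A sketch demonstrated on a scratch directory standing in for /etc/ssh:

```shell
# Sketch: audit private host key permissions. Demonstrated on a scratch
# directory; on a real system the path would be /etc/ssh.
keydir=$(mktemp -d)
touch "$keydir/ssh_host_rsa_key" "$keydir/ssh_host_ed25519_key"
chmod 0644 "$keydir/ssh_host_rsa_key"      # deliberately too open
chmod 0600 "$keydir/ssh_host_ed25519_key"  # already correct

# Find key files readable by group or others and tighten them
find "$keydir" -name '*_key' \( -perm -g+r -o -perm -o+r \) \
    -exec chmod 0600 {} \; -print

# Afterwards every private key should be mode 0600
stat -c '%a %n' "$keydir"/*_key
```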

Configure /etc/ssh/sshd_config with the following:

# SSH port

Port 22



# Listen on IPv4 only

ListenAddress 0.0.0.0



# Protocol version 1 is insecure and must not be used

Protocol 2



# Limit the ciphers to those which are FIPS-approved, the AES and 3DES ciphers

# Counter (CTR) mode is preferred over cipher-block chaining (CBC) mode

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc



# Use FIPS-approved MACs

MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1



# INFO is a basic logging level that will capture user login/logout activity

# DEBUG logging level is not recommended for production servers

LogLevel INFO



# Disconnect if no successful login is made in 60 seconds

LoginGraceTime 60



# Do not permit root logins via SSH

PermitRootLogin no



# Check file modes and ownership of the user's files before login

StrictModes yes



# Close TCP socket after 2 invalid login attempts

MaxAuthTries 2



# The maximum number of sessions per network connection

MaxSessions 2



# User/group permissions

AllowUsers tomas

AllowGroups ssh-users

DenyUsers root

DenyGroups root



# Password and public key authentications

PasswordAuthentication no

PermitEmptyPasswords no

PubkeyAuthentication yes

AuthorizedKeysFile  .ssh/authorized_keys



# Disable unused authentications mechanisms

# RSAAuthentication and RhostsRSAAuthentication applied to protocol
# version 1 only and are deprecated in modern OpenSSH
RSAAuthentication no
RhostsRSAAuthentication no

ChallengeResponseAuthentication no

KerberosAuthentication no

GSSAPIAuthentication no

HostbasedAuthentication no

IgnoreUserKnownHosts yes



# Disable insecure access via rhosts files

IgnoreRhosts yes



AllowAgentForwarding no

AllowTcpForwarding no



# Disable X Forwarding

X11Forwarding no



# Disable message of the day but print last log

PrintMotd no

PrintLastLog yes



# Show banner

Banner /etc/issue



# Do not send TCP keepalive messages

TCPKeepAlive no



# Default for new installations

UsePrivilegeSeparation sandbox



# Prevent users from potentially bypassing some access restrictions

PermitUserEnvironment no



# Disable compression

Compression no



# Disconnect the client if no activity has been detected for 900 seconds

ClientAliveInterval 900

ClientAliveCountMax 0



# Do not look up the remote hostname

UseDNS no



UsePAM yes
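Before restarting sshd, validate the configuration with `sshd -t`, which refuses to run on a broken file. As a lighter pre-check, a sketch that greps a config copy for the two directives marked deprecated above (a scratch file stands in for the live /etc/ssh/sshd_config):

```shell
# Sketch: scan an sshd_config for directives that modern OpenSSH no
# longer understands, before running the real validator (sshd -t).
# Demonstrated on a scratch file with a trimmed-down config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Protocol 2
PermitRootLogin no
RSAAuthentication no
RhostsRSAAuthentication no
UsePAM yes
EOF

deprecated="RSAAuthentication RhostsRSAAuthentication"
found=""
for d in $deprecated; do
    grep -qi "^$d" "$cfg" && found="$found $d"
done
echo "deprecated directives found:$found"
```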

In case you want to change the default SSH port to something else, you will need to tell SELinux about it.

# yum install policycoreutils-python

For example, to allow SSH server to listen on TCP 2222, do the following:

# semanage port -a -t ssh_port_t 2222 -p tcp

Ensure that the firewall allows incoming traffic on the new SSH port and restart the sshd service.

2. Service – Network Time Protocol

CentOS 7 ships with Chrony; make sure that the service is enabled:

# systemctl enable chronyd.service
3. Services – Mail Server

3.1 Postfix

Postfix should be installed and enabled already. In case it isn't, do the following:

# yum install postfix

# systemctl enable postfix.service

Open /etc/postfix/main.cf and configure the following to act as a null client:

smtpd_banner = $myhostname ESMTP

inet_interfaces = loopback-only

inet_protocols = ipv4

mydestination =

local_transport = error: local delivery disabled

unknown_local_recipient_reject_code = 550

mynetworks = 127.0.0.0/8

relayhost = [mail.example.com]:587

Optionally (depending on your setup), you can configure Postfix to use authentication:

# yum install cyrus-sasl-plain

Open /etc/postfix/main.cf and add the following:

smtp_sasl_auth_enable = yes

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

smtp_sasl_security_options = noanonymous

smtp_tls_CApath = /etc/ssl/certs

smtp_use_tls = yes

Open /etc/postfix/sasl_passwd and put authentication credentials in a format of:

[mail.example.com]:587 user@example.com:password

Set permissions and create a database file:

# chmod 0600 /etc/postfix/sasl_passwd

# postmap /etc/postfix/sasl_passwd
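The credentials file can be created with safe permissions from the start by setting the umask first, so it is never world-readable even briefly. A sketch with a scratch directory standing in for /etc/postfix and placeholder credentials:

```shell
# Sketch: create the SASL credentials file with safe permissions from the
# start. A scratch dir stands in for /etc/postfix; the credentials are
# placeholders.
pfdir=$(mktemp -d)
(
    umask 077   # files created in this subshell get mode 0600
    printf '[mail.example.com]:587 user@example.com:password\n' \
        > "$pfdir/sasl_passwd"
)
stat -c '%a' "$pfdir/sasl_passwd"   # -> 600
# On the real system, then build the lookup table:
#   postmap /etc/postfix/sasl_passwd
```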

Restart the service and ensure that the firewall allows outgoing traffic to the SMTP relay server.

3.2 Mail Distribution to Active Mail Accounts

Configure the file /etc/aliases to have a forward rule for the root user.

4. Services – Remove Obsolete Services

None of these should be installed on CentOS 7 minimal:

# yum erase xinetd telnet-server rsh-server \

  telnet rsh ypbind ypserv tftp-server bind \

  vsftpd dovecot squid net-snmp talk-server talk

Check all enabled services:

# systemctl list-unit-files --type=service|grep enabled

Disable kernel dump service:

# systemctl disable kdump.service

# systemctl mask kdump.service

Disable everything that is not required, e.g.:

# systemctl disable tuned.service
5. Services – Restrict at and cron to Authorised Users

If the file cron.allow exists, then only users listed in the file are allowed to use cron, and the cron.deny file is ignored.

# echo root > /etc/cron.allow

# echo root > /etc/at.allow

# rm -f /etc/at.deny /etc/cron.deny
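The three commands above can be sketched together, using a scratch directory standing in for /etc so the effect is easy to inspect:

```shell
# Sketch: the whitelist-only setup for cron and at, demonstrated in a
# scratch dir standing in for /etc.
etc=$(mktemp -d)

echo root > "$etc/cron.allow"
echo root > "$etc/at.allow"
rm -f "$etc/at.deny" "$etc/cron.deny"

# With the .allow files present, the .deny files are ignored anyway,
# but removing them keeps the policy unambiguous.
for f in cron.allow at.allow; do
    echo "$f: $(cat "$etc/$f")"
done
```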

Note that the root user can always use cron, regardless of the usernames listed in the access control files.

6. Services – Disable X Windows Startup

This can be achieved by setting a default target:

# systemctl set-default multi-user.target
7. Services – Fail2ban

Install Fail2ban from the EPEL repository:

# yum install epel-release

# yum install fail2ban

If using iptables rather than firewalld, open the file /etc/fail2ban/jail.d/00-firewalld.conf and comment out the following line:

#banaction = firewallcmd-ipset

Fail2ban uses /etc/fail2ban/jail.conf. A configuration snippet for SSH is provided below:

[sshd]

port    = ssh

enabled = true

ignoreip = 10.8.8.61

bantime  = 600

maxretry = 5

If you run SSH on a non-default port, change the port value to the port number you use, then enable the jail.
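Local overrides are best kept in /etc/fail2ban/jail.local (or a file under /etc/fail2ban/jail.d/), which survives package updates, rather than edited into jail.conf. A sketch writing the jail with an assumed custom port of 2222 into a temp file:

```shell
# Sketch: write the SSH jail as a local override. On a real system the
# file would be /etc/fail2ban/jail.local (or a file in /etc/fail2ban/jail.d/).
jail=$(mktemp)
ssh_port=2222          # assumed custom port from the SSH section

cat > "$jail" <<EOF
[sshd]
enabled  = true
port     = $ssh_port
ignoreip = 10.8.8.61
bantime  = 600
maxretry = 5
EOF

grep '^port' "$jail"   # prints the port line with the custom value
```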

# systemctl enable fail2ban.service
# systemctl start fail2ban.service
8. Services – Sysstat to Collect Performance Activity

Sysstat may provide useful insight into system usage and performance; however, unless it is actually used, the service should be disabled or not installed at all.

# yum install sysstat
# systemctl enable sysstat.service
# systemctl start sysstat.service
References

[Sep 18, 2017] Kickstart File Example

Kickstart File Example

Below is an example of a kickstart file that you can use to install and configure Parallels Cloud Server in unattended mode. You can use this file as the basis for creating your own kickstart files.

# Install Parallels Cloud Server

install

# Uncomment the line below to install Parallels Cloud Server in a completely unattended mode

# cmdline

# Use the path of http://example.com/pcs to get the installation files.

url --url http://example.com/pcs

# Use English as the language during the installation and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Uncomment the line below to remove all partitions from the SDA hard drive and create these partitions: /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

# zerombr

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Use a DHCP server to obtain network configuration.

network --bootproto dhcp

# Set the root password for the server.

rootpw xxxxxxxxx

# Use the SHA-512 encryption for user passwords and enable shadow passwords.

auth --enableshadow --passalgo=sha512

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Set sda as the first drive in the BIOS boot order and write the boot record to mbr.

bootloader --location=mbr

# Tell the Parallels Cloud Server installer to reboot the system after installation.

reboot

# Install the Parallels Cloud Server license on the server.

key XXXXXX-XXXXXX-XXXXXX-XXXXXX-XXXXXX

# Create the virt_network1 Virtual Network on the server and associate it with the network adapter eth0.

vznetcfg --net=virt_network1:eth0

# Configure the ip_tables ipt_REJECT ipt_tos ipt_limit modules to be loaded in Containers.

# Use the http://myrepository.com to handle Fedora OS and application templates.

vztturlmap $FC_SERVER http://myrepository.com

# Install the listed EZ templates. Cache all OS templates after installation. Skip the installation of pre-created templates.

nosfxtemplate

%eztemplates --cache

centos-6-x86_64

centos-6-x86

mailman-centos-6-x86_64

mailman-centos-6-x86

# Install the packages for Parallels Cloud Server on the server.

%packages

@base

@core

@vz

@ps

Kickstart file example for installing on EFI-based systems

You can use the file above to install Parallels Cloud Server on BIOS-based systems. For installation on EFI-based systems, you need to modify the following places in the file:

# The following 4 commands are used to remove all partitions from the SDA hard drive and create these partitions: /boot/efi (required for EFI-based systems), /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

part /boot/efi --fstype=efi --grow --maxsize=200 --size=20

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Configure the bootloader.

bootloader --location=partition

Kickstart file example for upgrading to Parallels Cloud Server 6.0

Below is an example of a kickstart file you can use to upgrade your system to Parallels Cloud Server 6.0.

# Upgrade to Parallels Cloud Server rather than perform a fresh installation.

upgrade

# Use the path of http://example.com/pcs to get the installation files.

url --url http://example.com/pcs

# Use English as the language during the upgrade and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Upgrade the bootloader configuration.

bootloader --upgrade

[Sep 18, 2017] RHEL ISO with kickstart file

Aug 06, 2017 | unix.stackexchange.com

Hugo , asked Jan 7 '15 at 16:36

I am trying to edit the original RHEL 6.5 DVD (rhel-server-6.5-x86_64-dvd.iso) from Red Hat in order to add a kickstart file to it. The goal is to have one 3.4 GB ISO with automatic install, rather than separate boot media and a DVD.

This technique is not officially supported by Red Hat, but I found a procedure: https://access.redhat.com/solutions/60959

My ks.cfg looks like :

install
cdrom
repo --name="Red Hat Enterprise Linux"  --baseurl=file:/mnt/source --cost=100
repo --name=HighAvailability --baseurl=file:///mnt/source/HighAvailability

I got an error when the installer started: it didn't find the "Red Hat Enterprise Linux" disc.

I guess this is because the installer is not looking on its own media.

Is there a way to achieve this? Does the cdrom command have an optional parameter to hard-link the device?

tonioc , answered Jan 7 '15 at 17:30

You don't need to set the repo URLs in ks.cfg. Here is an example of a kickstart file that I currently use with RHEL 6.
# interactive install from CD-ROM/DVD
interactive
install
cdrom

key --skip
lang en_US.UTF-8
# keyboard us

#
clearpart --all --initlabel
part /boot --fstype ext4 --size=100
part pv.100000 --size=1 --grow
volgroup vg00 --pesize=32768 pv.100000
logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=15360
logvol swap --fstype swap --name=lvswap --vgname=vg00 --size=2048
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size 5120

timezone Europe/Paris
firewall --disabled
authconfig --useshadow --passalgo=sha512
selinux --enforcing

#skipx

# pre-set list of packages/groups to install 
%packages
@core
@server-policy
acpid
device-mapper-multipath
dmidecode
# ... and so on the list of packages/groups I pre-customize (and with - those I don't want)
vsftpd
wget
xfsprogs
-autoconf
-automake
-bc
# ... and so on
#-----------------------------------------------------------------------------
# post-install, executed chroot'ed into the installed system
%post --interpreter=/bin/sh --log=/root/post_install.log
echo -e "================================================================="
echo -e "       Starting kickStart post install script "

# do some extra stuff here , like mounting cd-rom copying add-ons specific for my product


[Sep 17, 2017] RHEL6 Unable to download kickstart file

Sep 17, 2017 | unix.stackexchange.com

robertpas , asked Nov 9 '15 at 15:13

In our lab we have a set of scripts that automatically configure a kickstart installation for RHEL5 on HP ProLiant DL380p Gen8. Based on data from several configuration files, it does the following steps:
  1. mounts redhat dvd
  2. modifies isolinux.cfg accordingly
  3. creates ks.cfg
  4. creates a bootdisk with the installation data (isolinux.cfg, ks.cfg, etc)
  5. creates a http server with the bootdisk directory.
  6. mounts the bootdisk through ILO ( /dev/scd1 )
  7. installs RHEL5

Here is the line referring to the kickstart file location :

append initrd=initrd.img ks=hd:scd1:/isolinux/ks.cfg ksdevice=eth4

Everything works well for RHEL5, but there have been requests for RHEL6.

For RHEL6, everything seems to work OK until #7, where it returns the message "unable to download kickstart file" . I have commented some lines in the scripts, eliminating the installation part and leaving only the ILO mount part.

The bootdisk is mounted and accessible on /dev/scd1. The ks.cfg file is present there. I have also verified that the files on the kickstart server are accessible with wget.

I have also tried accessing the ks.cfg file through http :

append initrd=initrd.img ks=http://<ip>:<port>/boot/isolinux/ks.cfg ksdevice=eth4

The above part did not work.

But what really vexes me is that RHEL5 works in the same conditions, but RHEL6 does not.

I have been talking to redhat support for a week and they don't seem to know what is wrong.

Any help would be greatly appreciated.

robertpas , answered Nov 11 '15 at 8:11

I have figured out the problem.

There seems to be a difference between RHEL5 and RHEL6 at the installation level.

RHEL5 detects the physical cdrom and assigns it /dev/scd0, so the ILO-mounted bootdisk ends up at /dev/scd1. RHEL6 does not seem to do this, so the bootdisk is mounted at /dev/scd0 instead.

The correct way to declare the ks file location in a case like this is :

append initrd=initrd.img ks=hd:scd0:/isolinux/ks.cfg ksdevice=eth4

I hope someone will find this helpful in the future.

[Sep 17, 2017] systemd-free.org Archlinux, systemd-free

Notable quotes:
"... Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure). ..."
"... As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible. ..."
"... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout? ..."
Aug 02, 2017 | systemd-free.org

An init system must be an init system

Systemd problems

Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure).

As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible.

Reading the list of those system components is daunting: login, pam, getty, syslog, udev, cryptsetup, cron, at, dbus, acpi, cgroups, gnome-session, autofs, tcpwrappers, audit, chroot, mount ... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout?

It would seem that the only thing still missing from systemd is a decent init system.

The solution: Remove systemd, install OpenRC

"Coincidentally", there were others before me who had had similar concerns and had prepared the way.

Their efforts and experience are summarised in these pages. Sincere, warm thanks go to artoo and Aaditya who have done most of the work in Archland and, of course, the Gentoo developers who have made this possible in the first place. I administer a handsome lot of linux boxes and I've performed the migration procedure (successfully and without exception) in all of them, even remote ones.

The procedure is explained in Installation ; however you might want to read about OpenRC in the links below:

The Archlinux OpenRC wiki page doesn't contain information on the migration process anymore; it breaks things down into several different articles and provides links to other resources that are not always Arch-specific, which unnecessarily obfuscates things, not to mention the omnipresent warning not to remove systemd. The migration procedure described here is instead reliable and as plain and simple as possible, explaining what is to be done and why in clearly defined steps, despite what a FUD-spreading Arch Wiki admin says against it in his every other post. This has been proven time and again on many different boxes and setups.

For your convenience, an up-to-date OpenRC ISO image is also provided for clean installations. Go to Installation for additional information.

Other Linux distros: Escape from systemd

Here we focus on removing systemd from Arch Linux and derivatives: Manjaro, ArchBang, Antergos etc. For information about removing systemd from other Linux distributions (namely Debian and deb/apt-get based ones like Ubuntu and Mint) you can visit the Without systemd wiki .
Additionally, a list of Operating systems without systemd in the default installation might be of special interest as, ultimately, the future of the Linux init systems will be determined by the popularity (or lack thereof) of systemd-free distros like Gentoo , Slackware , PCLinuxOS , Void Linux and Devuan .

Non-Linux OSes are also a viable (if somewhat last-resort) option, especially if the situation for non-systemd setups significantly worsens; FreeBSD and DragonFlyBSD are totally worth taking a shot at.

Clean installations

Contact You may contact us at Freenode, channels #openrc, #manjaro-openrc and #arch-openrc.

[Aug 14, 2017] Are 32-bit applications supported in RHEL 7?

Aug 14, 2017 | access.redhat.com
Solution Verified - Updated June 1, 2017 at 1:22 PM

Environment

Issue Resolution

Red Hat Enterprise Linux 7 does not support installation on i686, 32 bit hardware. ISO installation media is only provided for 64-bit hardware. Refer to Red Hat Enterprise Linux technology capabilities and limits for additional details.

However, 32-bit applications are supported in a 64-bit RHEL 7 environment in the following scenarios:

While RHEL 7 will not natively support 32-bit hardware, certified hardware can be searched for in the certified hardware database .

[Aug 14, 2017] CentOS-RHEL - WineHQ Wiki

Aug 14, 2017 | wiki.winehq.org

Notes on EPEL 7

At the time of this writing, EPEL 7 still has no 32-bit packages (including wine and its dependencies). There is a 64-bit version of wine, but without the 32-bit libraries needed for WoW64 capabilities, it cannot support any 32-bit Windows apps (the vast majority) and even many 64-bit ones (that still include 32-bit components).

This is primarily because with release 7, Red Hat didn't have enough customer demand to justify an i386 build. While Red Hat itself still comes with lean multilib and 32-bit support for legacy needs, this is part of Red Hat's release process, not the packages themselves. Therefore CentOS 7 had to develop its own workflow for building an i386 release, a process that was completed in Oct 2015 .

With its i386 release, CentOS has cleared a major hurdle on the way to an EPEL with 32-bit libraries, and now the ball is in the Fedora project's court (as the maintainers of the EPEL). Once a i386 version of the EPEL becomes available, you should be able to follow the same instructions above to install a fully functional wine package for CentOS 7 and its siblings.

Thankfully, this also means that EPEL 8 shouldn't suffer from this same problem. In the meantime though, you can keep reading for some hints on getting a recent version of wine from the source code.

Special Considerations for Red Hat

Those with a Red Hat subscription should have access to enhanced help and support, but we wanted to provide some very quick notes on enabling the EPEL for your system. Before installing the epel-release package, you'll first need to activate some additional repositories.

On release 6 and older, which used the Red Hat Network Classic system for package subscriptions, you need to activate the optional repo with rhn-channel

rhn-channel -a -c rhel-6-server-optional-rpms -u <your-RHN-username> -p <your-RHN-password>

Starting with release 7 and the Subscription Manager system, you'll need to activate both the optional and extras repos with subscription-manager

subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms

As for source RPMs signed by Red Hat, there doesn't seem to be much public-facing documentation. With a subscription, you should be able to login and browse the repos ; this post on LWN also has some background.

[Aug 14, 2017] CentOS 6 or CentOS 7

Aug 14, 2017 | www.lowendtalk.com

[Aug 13, 2017] The Tar Pit - How and why systemd has won

Notable quotes:
"... However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7 . ..."
"... systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling. ..."
Sep 27, 2014 | thetarpit.org

systemd is the work of Lennart Poettering, some guy from Germany who makes free and open source software, and who's been known to rub people the wrong way more than once. In case you haven't heard of him, he's also the author of PulseAudio, also known as that piece of software people often remove from their GNU/Linux systems in order to make their sound work. Like any software engineer, or rather like one who's gotten quite a few projects up and running, Poettering has an ego. Well, this should be about systemd, not about Poettering, but it very well is.

systemd started as a replacement for the System V init process. Like everything else, operating systems have a beginning and an end, and like every other operating system, Linux also has one: the Linux kernel passes control over to user space by executing a predefined process commonly called init , but which can be whatever process the user desires. Now, the problem with the old System V approach is that, well, I don't really know what the problem is with it, other than the fact that it's based on so-called "init scripts" 1 and that this, and maybe a few other design aspects impose some fundamental performance limitations. Of course, there are other aspects, such as the fact that no one ever expects or wants the init process to die, otherwise the whole system will crash.

The broader history is that systemd isn't the first attempt to stand out as a new, "better" init system. Canonical have already tried that with Upstart; Gentoo relies on OpenRC; Android uses a combination between Busybox and their own home-made flavour of initialization scripts, but then again, Android does a lot of things differently. However, contrary to the basic tenets 2 of the Unix philosophy , systemd also aims to do a lot of things differently.

For example, it aims to integrate as many other system-critical daemons as possible: from device management, IPC and logging to session management and time-based scheduling, systemd wants to do it all. This is indeed rather stupid from a software engineering point of view 3 , as it increases software complexity and bugs and the attack surface and whatnot 4 , but I can understand the rationale behind it: the maintainers want more control over everything, so they end up requesting that all other daemons are written as systemd plugins 5 .

However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and, as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have had it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7.
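For sysadmins running both RHEL 6 and RHEL 7 boxes, the day-to-day impact is mostly a new command vocabulary. A rough, non-exhaustive mapping (httpd used as an arbitrary example service):

```
service httpd start     ->  systemctl start httpd.service
service httpd status    ->  systemctl status httpd.service
chkconfig httpd on      ->  systemctl enable httpd.service
chkconfig --list        ->  systemctl list-unit-files --type=service
telinit 3               ->  systemctl isolate multi-user.target
```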

At the end of the day, systemd has won by being integrated into the democratic ecosystem that is GNU/Linux. As much as I hate PulseAudio and as much as I don't like Poettering, I see that distribution developers and maintainers seem to desperately need it, although I must confess I don't really know why. Either way, compare this :

systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling.

to this:

Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

and this. Some of the stuff there might be downright weird or unrealistic or bullshit, but other than that, these guys (especially Poettering) have a damn good idea of what they want to do and where they're going, unlike many other free software and open source projects.

And now's one of those times when such a clear vision makes all the difference.


  1. That is, it's "imperative" instead of "declarative". Does this matter to the average guy? I haven't the vaguest idea, to be honest.
  2. Some people don't consider software engineering a science, that's why. But I guess it would be fairer to call them "principles", wouldn't it?
  3. One does not simply integrate components for the sake of "integration". There are good reasons to have isolation and well-established communication protocols between software components: for example if I want to build my own udev or cron or you-name-it, systemd won't let me do that, because it "integrates". Well, fuck that.
  4. And guess what; for system software, systemd has a shitload of bugs . This is just not acceptable for production. Not. Acceptable. Ever.
  5. That's what "having systemd as a dependency" really means, no matter how they try to sugarcoat it.
  6. Jessie, at the time of writing.
  7. Well, except Slackware.

[Aug 13, 2017] Is Modern Linux Becoming Too Complex

One man's variety is another man's hopelessly confusing goddamn mess.

Feb 11, 2015 | Slashdot

An anonymous reader writes: Debian developer John Goerzen asks whether Linux has become so complex that it has lost some of its defining characteristics. "I used to be able to say Linux was clean, logical, well put-together, and organized. I can't really say this anymore. Users and groups are not really determinative for permissions, now that we have things like polkit running around. (Yes, by the way, I am a member of plugdev.) Error messages are unhelpful (WHY was I not authorized?) and logs are nowhere to be found. Traditionally, one could twiddle who could mount devices via /etc/fstab lines and perhaps some sudo rules. Granted, you had to know where to look, but when you did, it was simple; only two pieces to fit together. I've even spent time figuring out where to look and STILL have no idea what to do."
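The /etc/fstab mechanism Goerzen refers to is the classic user/owner mount options; one line was all it took (hypothetical device and mount point):

```
# /etc/fstab -- any user may mount and unmount this DVD drive
/dev/sr0  /media/dvd  iso9660  noauto,user,ro  0 0
```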

Lodragandraoidh (639696) on Wednesday February 11, 2015 @11:21AM (#49029667)

Re:So roll your own. (Score:5, Insightful)

I think you're missing the point. Linux is the kernel - and it is very stable, and while it has modern extensions, it still keeps the POSIX interfaces consistent to allow inter-operation as desired. The issue here is not that forks and new versions of Linux distros are an aberration, but how the major distributions have changed; the article is a symptom of those changes towards homogeneity.

The Linux kernel is by definition identically complex on any distro using a given version of the kernel (the variances created by compilation switches notwithstanding). The real variance is in the distros - and I don't think variety is a bad thing, particularly in this day and age when we are having to focus more and more on security, and small applications on different types of devices - from small ARM processor systems, to virtual cluster systems in data centers.

Variety creates a strong ecosystem that is more resilient to security exploitation as a whole; variety is needed now more than ever given the security threats we are seeing. If you look at the history of Linux distributions over time - you'll see that from the very beginning it was a vibrant field with many distros - some that bombed out - some that were forked and then died, and forks and forks of forks that continued on - keeping the parts that seemed to work for those users.

Today - I think people perceive what is happening with the major distros as a reduction in choice (if Redhat is essentially identical to Debian, Ubuntu, et al - why bother having different distros?) - a bottleneck in variability; from a security perspective, I think people are worried that a monoculture is emerging that will present a very large and crystallized attack surface after the honeymoon period is over.

If people don't like what is available, if they are concerned about the security implications, then they or their friends need to do something about it. Fork an existing distro, roll your own distro, or if you are really clever - build your own operating system from scratch to provide an answer, and hopefully something better/different in the long run. Progress isn't a bad thing; sitting around doing nothing and complaining about it is.

NotDrWho (3543773) on Wednesday February 11, 2015 @11:28AM (#49029739)

Re: So roll your own. (Score:5, Funny)

One man's variety is another man's hopelessly confusing goddamn mess.

Anonymous Coward on Wednesday February 11, 2015 @09:31AM (#49028605)

Re: Yes (Score:4, Insightful)

Systemd has been the most divisive force in the Linux community lately, and perhaps ever. It has been foisted upon many unwilling victims. It has torn apart the Debian community. It has forced many long-time Linux users to the BSDs, just so they can get systems that boot properly.

Systemd has harmed the overall Linux community more than anything else has. Microsoft and SCO, for example, couldn't have dreamed of harming Linux as much as systemd has managed to do, and in such a short amount of time, too.

Re:
Amen. It's sad, but a single person has managed to kill the momentum of GNU/Linux as an operating system. Microsoft should give the guy a medal.

People are loath to publish new projects because keeping them running with systemd and all its dependencies in all possible permutations is a full time job. The whole "do one thing only and do it well" concept has been flushed down the drain.

I know that I am not the only sysadmin who refuses to install Red Hat Enterprise Linux 7, but install new systems with RHE

gmack (197796) <gmack@innerfire.nCOLAet minus caffeine> on Wednesday February 11, 2015 @11:55AM (#49030073) Homepage Journal

(Score:4, Informative)

Who modded this up?

SystemD has put in jeopardy the entire presence of Linux in the server room:

1: AFAIK, as there has been zero mention of this, SystemD appears to have had -zero- formal code testing, auditing, or other assurance that it is stable. It was foisted on people in RHEL 7 and downstreams with no ability to transition to it.

Formal code testing is pretty much what Redhat brings to the table.

2: It breaks applications that use the init.d mechanism to start with. This is very bad, since some legacy applications can not be upgraded. Contrast that to AIX where in some cases, programs written back in 1991 will run without issue on AIX 7.1. Similar with Solaris.

At worst it breaks their startup scripts, and since they are shell scripts they are easy to fix.

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program. Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Do you really understand the architecture of either SystemD or sendmail? Sendmail was a single binary written in a time before anyone cared about security. I don't recall sendmail being a bundle of programs, but then it's been a decade since I stopped using it, precisely because of its poor security track record. Contrary to your FUD, Systemd runs things as separate daemons with each component using the least amount of privileges needed to do its job, and on top of that many of the network services (ntp, dhcpd) that people complain about are completely optional addons and quite frankly, since they seem designed around the single purpose of Linux containers, I have not installed them. This is a basic FAQ entry on the systemd web site so I really don't get how you didn't know this.

4: SystemD cannot be worked around. The bash hole, I used busybox to fix. If SystemD breaks, since it encompasses everything including the bootloader, it can't be replaced. At best, the system would need major butchery to work. In the enterprise, this isn't going to happen, and the Linux box will be "upgraded" to a Windows or Solaris box.

Unlikely, it is a minority of malcontents who are upset about SystemD who have created an echo chamber of half truths and outright lies. Anyone who needs to get work done will not even notice the transition.

5: SystemD replaces many utilities that have stood 20+ years of testing, and takes a step back in security by the monolithic userland and untested code. Even AIX with its ODM has at least seen certification under FIPS, Common Criteria, and other items.

Again you use the word "monolithic" without having a shred of knowledge about how SystemD works. The previous init system, despite all of its testing, was a huge mess. There is a reason there were multiple projects that came before SystemD that tried to clean up the horrific mess that was the previous init.

6: SystemD has no real purpose, other than ego. The collective response justifying its existence is, "because we say so. Fuck you and use it." Well, this is no way to treat enterprise customers. Enterprise customers can easily move to Solaris if push comes to shove, and Solaris has a very good record of security, without major code added without actual testing being done, and a way to be compatible. I can turn Solaris 11's root role into a user, for example.

Solaris has already transitioned to its own equivalent daemon (SMF) that does roughly what SystemD does.

As for SystemD: It allows booting on more complicated hardware. Debian switched because they were losing market share on larger systems that the current init system only handles under extreme protest. As a side effect of the primary problem it was meant to solve, it happens to be faster, which is great for desktops, and uses a lot less memory, which is good for embedded systems.

So, all and all, SystemD is the worst thing that has happened with Linux, its reputation, and potentially, its future in 10 years, since the ACTA treaty was put to rest. SystemD is not production ready, and potentially can put every single box in jeopardy of a remote root hole.

Riight.. Meanwhile in the real world, none of my desktops or servers have any SystemD related network services running so no root hole here.

Dragonslicer (991472) on Wednesday February 11, 2015 @12:26PM (#49030407)

(Score:5, Insightful)

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program.

Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Because of course it's been years since anyone found any security holes in well-tested software like Bash or OpenSSL.

Anonymous Coward on Wednesday February 11, 2015 @08:24AM (#49028117)

(Score:5, Interesting)

I was reading through the article's comments and saw this thread of discussion [complete.org]. Well, it's hard to call it a thread of discussion because John apparently put an end to it right away.

The first comment in that thread is totally right though. It is systemd and Gnome 3 that are causing so many of these problems with Linux today. I don't use Debian, but I do use another distro that switched to systemd, and it is in fact the problem here. My workstation doesn't work anywhere as well as it did a couple of years ago, before systemd got installed. So when somebody blames systemd for these kinds of problems, that person is totally correct. I don't get why John would censor the discussion like that. I also don't get why he'd label somebody who points out the real problem as being a 'troll'.

John needs to admit that the real problem here is not the people who are against systemd. These people are actually the ones who are right, and who have the solution to John's problems!

The comment I linked to says 'Systemd needs to be removed from Debian immediately.', and that's totally right. But I think we need to expand it to 'Systemd needs to be removed from all Linux distros immediately.'

If we want Linux to be usable again, systemd does need to go. It's just as simple as that. Censoring any and all discussion of the real problem here, systemd, sure isn't going to get these problems resolved any quicker!

Re:Why does John shut down all systemd talk? (Score:5, Insightful)

[Aug 10, 2017] Kickstart Problem with --initlabel

Notable quotes:
"... Last edited by phatrik; 08-07-2012 at 11:17 AM . Reason: prefixed with solved ..."
Aug 07, 2012 | www.linuxquestions.org
phatrik
Kickstart: Problem with --initlabel

I'm having a problem when using kickstart to deploy CentOS 6.3 KVM guest OS and no one seems to know why so I figured I'd ask in the KVM SF :-) Details:

- Trying to install CentOS 6.3
- Doing a netinstall using a FTP site
- The installation is for a guest OS (KVM).

The install is being launched with:

virt-install -n server1.example.com -r 768 -l /media/netinstall -x "ks=ftp://192.168.100.2/pub/ks.cfg"

The install starts and gets to the point where I see "Error processing drive. This device may need to be re-initialized." The relevant part of my KS file:

clearpart --initlabel --all

# Disk partitioning information

part /boot --fstype="ext4" --size=500
part /home --fstype="ext4" --size=2048
part swap --fstype="swap" --size=2048
part / --fstype="ext4" --grow --size=1

When I switch to the 3rd terminal for information, here's what I see:

required disklabel type for sda (1) is None
default disklabel type for sda is msdos
selecting msdos disklabel for sda based on size

Based on "required disklabel type for sda (1) is None" I decided to remove the --initlabel parm; however, I still face the same problem (prompted to initialize the disk).

TIA

Erik


Last edited by phatrik; 08-07-2012 at 11:17 AM . Reason: prefixed with solved
dyasny 08-07-2012 Registered: Dec 2007 Location: Canada Distribution: RHEL,Fedora Posts: 995
I'd just abandon virt-install and deploy VMs from a template instead. Much faster and easier to do
phatrik 08-07-2012, 08:14

Thank you for your reply, but that's obviously not the answer I'm looking for. Yes I know virt-clone could be used but what I'm truly interested in is getting at the bottom of this problem.

wungad

From RedHat Knowledge base
Issue

The 'clearpart --initlabel' option in a kickstart no longer initializes drives in RHEL 6.3.

Environment
Red Hat Enterprise Linux 6.3
Anaconda (kickstart)

Resolution

Use the 'zerombr' option in the kickstart to initialize disks and create a new partition table.

Use the 'ignoredisk' option in the kickstart to limit which disks are initialized by the 'zerombr' option. The following example will limit initialization to the 'vda' disk only:
zerombr
ignoredisk --only-use=vda

phatrik

Thank you for your reply, that's exactly what I was looking for.

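Putting the resolution together with the kickstart quoted above, the partitioning section would end up roughly like this (a sketch; vda assumes the KVM guest uses a virtio disk):

```
zerombr
ignoredisk --only-use=vda
clearpart --all

# Disk partitioning information
part /boot --fstype="ext4" --size=500
part /home --fstype="ext4" --size=2048
part swap --fstype="swap" --size=2048
part / --fstype="ext4" --grow --size=1
```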

[Aug 08, 2017] Kickstart option to set GRUB drive location?

Notable quotes:
"... ignoredisk --drives=sdb ..."
Aug 08, 2017 | www.centos.org

andersbiro " 2010/03/04 12:36:32

Hello, I have successfully created a Centos USB stick installation with an automated kickstart configuration according to the instructions at http://wiki.centos.org/HowTos/InstallFromUSBkey .

Everything works flawlessly with the exception that the installation writes the GRUB Boot Loaders on the USB stick instead of the destination hard drive and hence can only be booted from the USB stick.

Afterwards I can solve this manually by editing grub.conf to point to the hard drive, and using the grub utility I can install the Grub loader on the hard drive MBR instead, and then it boots normally.

The aim is however to create a fully automated installation since the end users in question are not expected to be technically proficient, so my question is if there is a kickstart option to explicitly write GRUB correctly to the hard drive from the very beginning?

There seems to be a kickstart "bootloader" option but I can not really see any flags that would explicitly set GRUB on a specified hard drive?


AlanBartlett
Forum Moderator
Posts: 9310
Joined: 2007/10/22 11:30:09
Location: ~/Earth/UK/England/Suffolk
Contact: Contact AlanBartlett Website
Re: Kickstart option to set GRUB drive location?

Post by AlanBartlett " 2010/03/04 18:52:16

In the CentOS wiki article that you reference, under the heading Notes, there is a cherry-red block of text. Isn't that appropriate?

If not, do you have any suggestions for improvement to the article?


pschaff
Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/04 21:36:41

andersbiro wrote:
...
There seems to be a kickstart "bootloader" option but I can not really see any flags that would explicitly set the GRUB on a specified hard drive?

How about Kickstart Options:

bootloader (required)
Specifies how the boot loader should be installed. This option is required for both installations and upgrades.

* --append= - Specifies kernel parameters. To specify multiple parameters, separate them with spaces. For example:

bootloader --location=mbr --append="hdd=ide-scsi ide=nodma"

* --driveorder - Specify which drive is first in the BIOS boot order. For example:

bootloader --driveorder=sda,hda

* --location= - Specifies where the boot record is written. Valid values are the following: mbr (the default),
partition (installs the boot loader on the first sector of the partition containing the kernel), or
none (do not install the boot loader).

Still can't guarantee that a totally automated approach is possible, unless the hardware is identical, as the devices and ordering will be system-dependent.


andersbiro
Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 08:48:17

AlanBartlett wrote:
In the CentOS wiki article that you reference, under the heading Notes, there is a cherry-red block of text. Isn't that appropriate?

If not, do you have any suggestions for improvement to the article?

To my understanding this specific part of the text refers to an interactive installation, but since I deal with a fully automatic installation I do not think that part is appropriate, so that is why I am looking for corresponding kickstart options to achieve the same thing.
To be fair it also mentions the line bootloader --driveorder=cciss/c0d0,sda --location=mbr that might be appropriate, but I am not proficient enough to completely comprehend the parameters.


andersbiro
Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 08:58:16


I was aware of these parameters but I am not fully sure about how to apply them... the "--location" flag seemed easy enough, and also --driveorder, but the "--append" kernel parameters elude me; perhaps that one is not required.

I know that the kernel and Grub part should reside on the first partition of disk "sda" and the USB stick on "sdb", so would setting "--driveorder=sda,sdb" ensure that grub.conf points to the sda disk?

Also, would that automatically write the GRUB loader on "sda" as well, or do you need to use the "partition" flag for that?


andersbiro
Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 12:04:09

As a matter of fact I tried the --driveorder flag and that actually worked, as it can now boot directly without the USB stick, which is a great step forward.
The only remaining obstacle is that somehow the FAT32 partition disappears from the USB stick so it cannot be used for future installations.
This can however be fixed by using fdisk to create a new FAT32 partition in the same space, and somehow this also restores the previous files in the partition.

Since the GRUB bootloader seems to be written to the destination disk, I cannot understand at all why the FAT32 partition disappears.
Are additional flags required to prevent this from happening?


pschaff
Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Re: Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/08 12:51:41

andersbiro wrote:
...
Since the GRUB bootloader seems to be written to the destination disk I must say that I cannot understand at all why the FAT32 partition disappears?
Are additional flags required to prevent this from happening?

I have not seen that happen. You have both FAT32 and ext3 partitions, and the FAT32 one is gone after the install? I'd check the kickstart file carefully to be sure it is not inadvertently messing with the USB drive.

Thanks for reporting back, and please keep us posted. Any recommendations for the Wiki article appreciated.


andersbiro
Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 14:49:12

I managed to solve the issue by adding the "ignoredisk --drives=sdb" parameter for the USB drive, and now the installer leaves the USB stick intact and the installation works flawlessly.
I still do not know why the installer affected the disk in the first place, but this flag did at any rate solve the problem for me.
pschaff
Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Re: Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/08 14:53:40

Thanks for the additional info. Still seems that a general solution is elusive, as there's no guarantee that on a different set of hardware the USB drive will show up as /dev/sdb.
nektoid
Posts: 1
Joined: 2012/04/03 17:00:26
Re: Kickstart option to set GRUB drive location?

Post by nektoid " 2012/04/03 17:06:27

Hi. I ran into this recently kickstarting both 5.5 and 6.2 hosts. Kickstarts worked one day and the next the bootloader wanted to be on the usb key; odd. This was an example of what worked for me with 5.5, where the usb key was consistently seen as sdb. Both lines go in the preamble section of ks.cfg.

#stop writing bootloader to usb
bootloader --driveorder=sda,sdb --location=mbr

#stop erasing my usb stick
ignoredisk --drives=sdb

[Aug 08, 2017] Unattended Installation of Red Hat Enterprise Linux 7 Operating System on Dell PowerEdge Servers Using iDRAC With Lifecycle Controller

The OS Deployment feature available in Lifecycle Controller enables you to deploy standard and custom operating systems on the managed system. You can also configure RAID before installing the operating system if it is not already configured. You can deploy the operating system using any of the following methods:

The unattended installation feature requires an OS configuration or answer file. During unattended installation, the answer file is provided to the OS loader. This activity requires minimal or no user intervention. Currently, the unattended installation feature is supported only for Microsoft Windows and Red Hat Enterprise Linux 7 operating systems from Lifecycle Controller.

Note: This paper only covers unattended installation of Red Hat Enterprise Linux 7 operating system from Lifecycle Controller. For more information about unattended installation of Microsoft Windows operating systems, see the "Unattended Installation of Windows Operating Systems
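For RHEL 7 the answer file mentioned above is an ordinary kickstart; a minimal unattended one looks roughly like this (illustrative values throughout, not Dell's actual template):

```
install
cdrom
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
zerombr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```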

[Aug 06, 2017] Unable to download the kickstart file

Aug 06, 2017 | community.hpe.com
Hi there
There is a kickstart file on http://10.10.0.3/ks.cfg it opens in the browser fine and I can read the content.
Also wget http://10.10.0.3/ks.cfg works flawlessly, yet the installation fails with the following error: "Unable to download the kickstart file, please modify the kickstart parameter..."
I'm using apache as a web server.
Any idea what's causing the problem ?
Do I need some special configuration of Apache ?

Jesus is the King

12 REPLIES Michal Kapalka (mikap) Honored Contributor

11-15-2010 11:54 PM

Re: Unable to download the kickstart file
hi,

check this forum thread :

http://web.archiveorange.com/archive/v/YcynVy2jK7BBdYo5BvqY

mikap
Piotr Kirklewski Super Advisor

11-16-2010 05:50 AM

Re: Unable to download the kickstart file
Well - I'm not using NFS here.
So it seems to me that post is irrelevant, unless I don't get it.
Piotr Kirklewski Super Advisor

11-16-2010 06:08 AM

Re: Unable to download the kickstart file
Also I have only one NIC in my VM and I'm not specifying the ksdevice in my ks.cfg.
Should I do that?
Matti_Kurkela Honored Contributor

11-17-2010 01:52 AM

Re: Unable to download the kickstart file
You did not mention the name of your Linux distribution, but Kickstart suggests RedHat or one of its derivatives.

RedHat's installer sets up a shell prompt on one of the virtual consoles, and a log or two on other virtual consoles.

You might take a look at the logged messages, and/or use the shell prompt to verify the system has a working network connection.

See paragraph 4.1.1 of this document for a description of the virtual console functionality of the installer (this part of the installer has not changed significantly since RHEL 3):

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/3/html/Installation_Guide_x8664/ch-guimode.html

Another possibility is that the web server offers the kickstart file using a MIME content type the installer does not like. I guess "text/plain" would be acceptable, but this might not be Apache's default content type for .cfg files.

If this is the case, you might have to add the file type to your Apache configuration:

AddType text/plain .cfg
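The MIME-type theory is easy to sanity-check: Python's standard type table shows that .cfg has no registered MIME type, so a web server has to fall back on its configured default, which may not be text/plain:

```python
import mimetypes

# .cfg is not in the standard MIME type map, so guess_type finds nothing
print(mimetypes.guess_type("ks.cfg"))   # usually (None, None)

# .txt maps to text/plain, a type the installer is happy with
print(mimetypes.guess_type("ks.txt"))
```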

MK
Jimmy Vance HPE Pro

11-17-2010 04:06 AM

Re: Unable to download the kickstart file
You don't mention the server model or distribution version you're working with. Even though other systems can access the ks file OK, maybe the system you're trying to install on does not have proper NIC drivers during boot to access the ks file?

__________________________________________________
No support by private messages. Please ask the forum! I work for HPE

If you feel this was helpful please click the KUDOS! thumb below!
Piotr Kirklewski Super Advisor

11-17-2010 09:47 AM

Re: Unable to download the kickstart file
I'm trying to install Centos5.5 64-bit
The PXE server is Debian.

vim /tftpboot/pxelinux.cfg/default

### CENTOS 5.5 - CUSTOM ###
LABEL Centos 5.5 x86_64 Custom
kernel CentOS/vmlinuz noapic
append initrd=CentOS/initrd.img ks=http://10.10.0.3/ks.cfg text nofb
TEXT HELP
Customized, unattended installation version of Centos 5.5
ENDTEXT

Did you mean to add this to apache config ? :

DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm index.cfg ks.cfg

Jimmy Vance HPE Pro

11-17-2010 10:15 AM

Re: Unable to download the kickstart file

OK, so now we know you're working with CentOS

Does the CentOS 5.5 initrd image you're booting from contain the correct NIC drivers for the server you're trying to install on? You didn't mention the server model.

Piotr Kirklewski Super Advisor

11-17-2010 02:30 PM

Re: Unable to download the kickstart file
It's a VM on ESXi 4.1.
I'm already using the same initrd here:
Label CentOS 5 64-bit installer & rescue
kernel centos5x64/vmlinuz
append initrd=centos5x64/initrd.img text vga=791

I have just copied it to a different dir and started working on the kickstart installation.
The manual installation goes without any problems so I can't see why it would be different for the kickstart.
Jimmy Vance HPE Pro

11-17-2010 04:56 PM

Solution
Re: Unable to download the kickstart file
With a manual installation from media it doesn't matter whether it is bare iron or a VM; the network does not come into play. For a bare-iron or VM network installation, the boot installer needs to be able to access the network to pull the KS file. While you're in the installer, switch over to one of the other console screens (F2, F4, etc.) and see if the network is functional

Piotr Kirklewski (Super Advisor)

‎11-18-2010 04:52 PM

Re: Unable to download the kickstart file
Alt + F3 shows the following message:
ERROR: No network device in choose network device!
ERROR: No network drivers for doing kickstart
ERROR: Unable to bring up network

Where do I go from there?

Jimmy Vance (HPE Pro)

‎11-18-2010 05:12 PM

Re: Unable to download the kickstart file
Hopefully someone with more VMware experience than I have will chime in. I'm not sure what driver you need to load for the virtual NIC VMware presents to the guest OS.
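For the record, a workaround often used on ESXi (not confirmed in this thread, so treat it as an assumption) is to give the VM an E1000 virtual NIC, which the CentOS 5 installer drives with its stock e1000 module, and to pin the kickstart interface explicitly in the pxelinux entry:

```
append initrd=CentOS/initrd.img ks=http://10.10.0.3/ks.cfg ksdevice=eth0 text nofb
```

Here ksdevice=eth0 tells the loader which NIC to bring up instead of prompting; the IP and paths are the ones from the original post.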

DeafFrog (Valued Contributor)

‎11-22-2010 12:05 AM

Re: Unable to download the kickstart file
Hi ,

Please check this:

http://www.vmware.com/support/vsphere4/doc/vsp_41_new_feat.html

section "network" > ESXi configuration guide > page 96 .

hope this helps..

Reg. FrogIsDeaf

[Aug 06, 2017] Customising Anaconda uEFI boot menu to include kickstart parameter - Red Hat Customer Portal

Aug 06, 2017 | access.redhat.com
Customising Anaconda uEFI boot menu to include kickstart parameter

Hi

I am producing a remastered RHEL 6.5 which contains a custom kickstart. I have got the ISO to boot using the kickstart from a BIOS boot by making the standard modifications to isolinux.cfg (i.e. "append initrd=initrd.img ks=cdrom:/ks-x86_64.cfg"). However I cannot locate the correct file(s) to perform the same customisation when I boot to UEFI. I can enter the "ks=cdrom:/ks-x86_64.cfg" parameter to the UEFI anaconda boot menu by editing the kernel parameters but I cannot find a way of customising it like you can with editing isolinux.cfg.

Does anyone know how to customise the anaconda boot parameters when using UEFI?

Many thanks.

Started March 21 2014 by Aidan Beeson.

James Radtke (Community Leader), 21 March 2014: Hey Aidan - I don't know this for certain (and don't have time to validate right now), but hopefully this gets you moving forward.

Look in /EFI/BOOT in your media. It seems to resemble what is in /isolinux

Specifically, check out BOOTX64.conf

Raw
#debug --graphics
default=0
splashimage=/EFI/BOOT/splash.xpm.gz
timeout 5
hiddenmenu
title Red Hat Enterprise Linux 6.4
    kernel /images/pxeboot/vmlinuz
    initrd /images/pxeboot/initrd.img
title Install system with basic video driver
    kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
    initrd /images/pxeboot/initrd.img
title rescue
    kernel /images/pxeboot/vmlinuz rescue askmethod
    initrd /images/pxeboot/initrd.img

I'll revisit this later if I have something to update/change. But, hopefully this is correct and helpful. ;-)

Aidan Beeson, 21 March 2014: Hi James,

That does look promising. Bit of a "d'oh" moment as it was kinda obvious really! When I get a chance to get onto the UEFI hardware again I'll have a play and update this thread...

Thanks!

Aidan Beeson, 25 March 2014: James,

You're correct, the configuration is located in /EFI/BOOT/BOOTX64.conf.
Unlike the non-EFI Anaconda boot, it uses a standard grub menu rather than the (slightly) fancier one. To get it to boot Anaconda using the kickstart file I've used the following:

Raw
#debug --graphics
default=0
splashimage=/EFI/BOOT/splash.xpm.gz
timeout 60
# hiddenmenu
title Install RHEL with kickstart
    kernel /images/pxeboot/vmlinuz ks=cdrom:/ks-x86_64.cfg
    initrd /images/pxeboot/initrd.img
title Install Standard Red Hat Enterprise Linux OS
    kernel /images/pxeboot/vmlinuz
    initrd /images/pxeboot/initrd.img
title Install system with basic video driver
    kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
    initrd /images/pxeboot/initrd.img
title rescue
    kernel /images/pxeboot/vmlinuz rescue askmethod
    initrd /images/pxeboot/initrd.img

Many thanks

:)

James Radtke (Community Leader), 25 March 2014: Good to hear it. I'm glad we were on the same page. I believe this (uEFI) will become more of a hot topic as time goes on. (I know I have a lot to learn yet ;-)

Jun Li, 9 March 2015: Yes, UEFI is becoming more popular now; it is standard for HP Gen9 servers, and the IBM/Lenovo x series made UEFI standard a couple of years ago.

Satellite 6 is falling behind: so far it doesn't support PXE UEFI kickstart, for these reasons:
1. The TFTP server only has a BIOS boot image, not a UEFI boot image.
2. The DHCP server config has no definition for UEFI-based PXE requests. This actually isn't a Satellite/Foreman issue, because Foreman is missing the function of updating dhcpd.conf when a subnet is defined.
3. A UEFI-compatible PXE config file is missing when a new host is defined in Foreman/Satellite; the one defined in /var/lib/tftpboot/pxelinux.cfg only works for BIOS-based PXE.

Based on those three issues, I got PXE UEFI kickstart working this morning by addressing each of them:

  1. Manually add the UEFI boot image to the TFTP server:

[root@capsule tftpboot]# pwd
/var/lib/tftpboot
[root@capsule tftpboot]# cd efi/
[root@capsule efi]# ls
bootx64.efi efidefault images splash.xpm.gz TRANS.TBL
[root@capsule efi]#
[root@capsule efi]# pwd
/var/lib/tftpboot/efi
[root@capsule efi]# ls
bootx64.efi efidefault images splash.xpm.gz TRANS.TBL
[root@capsule efi]# ls -l images/pxeboot/
total 36644
-r--r--r-- 1 root root 33383449 Mar 6 12:27 initrd.img
-r--r--r-- 1 root root 441 Mar 6 12:27 TRANS.TBL
-r-xr-xr-x 1 root root 4128944 Mar 6 12:27 vmlinuz
[root@capsule efi]#
[root@capsule efi]# cat efidefault
#debug --graphics
default=0
splashimage=(nd)/splash.xpm.gz
timeout 5
hiddenmenu
title Red Hat Enterprise Linux 6.5
root (nd)
kernel /images/pxeboot/vmlinuz ks=http://satellite6.example.com:80/unattended/provision?token=fc25b9df-8c28-41cf-af5d-fa42b6401c29 ksdevice=bootif network kssendmac
initrd=/images/pxeboot/initrd.img
IPAPPEND 2
title Install system with basic video driver
kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img
[root@capsule efi]#

  2. Add one section to serve UEFI-based PXE requests, so the client will get the bootx64.efi boot image instead of pxelinux.0:

class "pxeclients" {
        match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
        next-server 10.1.1.1;
        if option arch = 00:06 {
                filename "efi/bootia32.efi";
        } else if option arch = 00:07 {
                filename "efi/bootx64.efi";
        } else {
                filename "pxelinux.0";
        }
}

  3. Every time a new host is defined in Satellite 6/Foreman, you will get an unattended provision URL; copy this URL and replace the ks URL in /var/lib/tftpboot/efi/efidefault.

Now you should be able to kickstart a UEFI system via PXE in a Satellite 6/capsule environment.

David Worth, 6 March 2015: Hi,

I am trying to create a custom iso for kickstart builds on servers with UEFI. From looking at the post, it looks like you were able to create a boot iso for EFI. If that is the case, what were the steps that you followed to create it? I am struggling to find good detailed info.

Thanks,
David

Aidan Beeson, 9 March 2015: Hi David,

There are probably better/different ways of doing this but this is how I got it to work:

  1. Loop mount the install iso.
  2. Create local disk copies of the "isolinux", "EFI" and "images" directories.
  3. Modify files in isolinux & EFI directories as required (e.g. isolinux.cfg and EFI/BOOT/BOOTX64.conf)
  4. Create the new ISO:
Raw
 mkisofs -o my.iso \
        -R -J -A "MyProject" \
  -hide-rr-moved \
  -v -d -N \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -b isolinux/isolinux.bin \
  -c isolinux/isolinux.boot \
  -eltorito-alt-boot -no-emul-boot  \
  -eltorito-boot images/efiboot.img \
  -x ${mountDIR}/isolinux \
  -x ${mountDIR}/images \
  -x ${mountDIR}/EFI \
  -x .svn \
  -graft-points /path/to/loopmount/install_dvd my_kickstart.cfg=my_kickstart.cfg isolinux/=isolinux images=images EFI=EFI

The isolinux.cfg and BOOTx64.conf should contain a reference to the ks file, for example:

(isolinux.cfg)

Raw
label linux
  menu label ^Install OS using kickstart
  menu default
  kernel vmlinuz
  append initrd=initrd.img ks=cdrom:/my_kickstart.cfg
label vesa
  menu label Install ^standard Red Hat Enterprise Linux OS
  kernel vmlinuz
  append initrd=initrd.img 
label rescue
  menu label ^Rescue installed system
  kernel vmlinuz
  append initrd=initrd.img rescue
label local
  menu label Boot from ^local drive
  localboot 0xffff
label memtest86
  menu label ^Memory test
  kernel memtest
  append -

(BOOTx64.conf)

Raw
  title Install OS using kickstart
        kernel /images/pxeboot/vmlinuz ks=cdrom:/my_kickstart.cfg
        initrd /images/pxeboot/initrd.img
title Install standard Red Hat Enterprise Linux OS
        kernel /images/pxeboot/vmlinuz
        initrd /images/pxeboot/initrd.img
title Install system with basic video driver
        kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
        initrd /images/pxeboot/initrd.img
title rescue
        kernel /images/pxeboot/vmlinuz rescue askmethod
        initrd /images/pxeboot/initrd.img

Hope this helps. I've tested it using a VMware EFI emulation but not in anger on any "real" EFI systems.

Aidan

David Worth, 11 March 2015: Thanks for the info. We are building new servers on HP Gen8 and Gen9 hardware. The Gen9s are defaulting to UEFI. For this go-around I reverted back to legacy, but we will have more builds to come, so it would be nice to figure out a way to boot and install using UEFI. I am guessing that is the direction hardware vendors are going. Also interesting that it was not too easy to find a lot of good info - just pieces here and there.

I will give this a try on the next hardware build using EFI.

Thanks again!

David Worth, 22 March 2015: Hi Aidan

Thanks again for the info. I was troubleshooting why boot from SAN was not working with an HP BL460 Gen9 blade. I was trying legacy mode, but it would not boot after install from kickstart. Waiting for HP on this issue.

However, I created a dual boot ISO, legacy and EFI. I was able to set the bios to EFI and image the server from kickstart. I just added the following to EFI/BOOT/BOOTX64.conf:

#debug --graphics

default=0
splashimage=/EFI/BOOT/splash.xpm.gz
timeout 360
title RHEL 6.6 l001
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver:/ifs/data/kickstart/KSConfigs/l001/l001-ks.cfg initrd=rhe6664.img ksdevice=eth0 ip= gateway= netmask=255.255.255.0 dns=
initrd /images/pxeboot/initrd.img
title RHEL 6.6 l002
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver:/ifs/data/kickstart/KSConfigs/l002/l002-ks.cfg initrd=rhe6664.img ksdevice=eth0 ip= gateway= netmask=255.255.255.0 dns=
initrd /images/pxeboot/initrd.img
title Red Hat Enterprise Linux 6.6
kernel /images/pxeboot/vmlinuz
initrd /images/pxeboot/initrd.img
title Install system with basic video driver
kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img

There was only one instance where the ISO booted up and went to a grub menu. Not sure why, but I rebooted and everything was good. So your steps work on physical hardware as well.

Regards,
David

Jose Carlos Alves, 18 June 2015: Hi David,

What do you have in your isolinux/isolinux.cfg file?

Regards,
Sara

David Worth, 18 June 2015: Hi Sara,

To set up a dual legacy and EFI boot ISO, I mount the latest RHEL 6.x DVD and copy the following directories to a work area on my server.

EFI/ images/ isolinux

If you are just setting up legacy boot, you can ignore the EFI and images directories.

Here is an example of what I am putting in the isolinux.cfg

------start of file --------
default vesamenu.c32

prompt 1

timeout 600

display boot.msg

menu background splash.jpg
menu title Welcome to Red Hat Enterprise Linux 6.6!
menu color border 0 #ffffffff #00000000
menu color sel 7 #ffffffff #ff000000
menu color title 0 #ffffffff #00000000
menu color tabmsg 0 #ffffffff #00000000
menu color unsel 0 #ffffffff #00000000
menu color hotsel 0 #ff000000 #ffffffff
menu color hotkey 7 #ffffffff #ff000000
menu color scrollbar 0 #ffffffff #00000000

label linux
menu label ^Install or upgrade an existing system
menu default
kernel vmlinuz
append initrd=initrd.img
label vesa
menu label Install system with ^basic video driver
kernel vmlinuz
append initrd=initrd.img xdriver=vesa nomodeset
label rescue
menu label ^Rescue installed system
kernel vmlinuz
append initrd=initrd.img rescue
label local
menu label Boot from ^local drive
localboot 0xffff
label servera-set-Network
kernel vmlinuz
append ks=nfs:nfsserver.example.com:/kickstart/location/servera-ks.cfg initrd=initrd.img ksdevice=eth0 ip=192.168.1.5 gateway=192.168.1.1 netmask=255.255.255.0 dns=192.168.1.2
label serverb-dhcp
kernel vmlinuz
append ks=nfs:nfsserver.example.com:/kickstart/location/serverb-ks.cfg initrd=initrd.img ksdevice=eth0

------end of file --------

Then you have to run mkisofs to create the iso.

Hope that helps.

Regards,
David

Jose Carlos Alves, 19 June 2015: Hi David,

My problem is that I have an HP Gen9 with UEFI and not EFI.

I have
images
isolinux

In the isolinux.cfg, I have this

default=0
splashimage=/EFI/BOOT/splash.xpm.gz

prompt 1

timeout 5
hiddenmenu

label ptmtshdpnopp01
kernel vmlinuz
append initrd=initrd.img ksdevice=eth0 ip=10.126.77.11 netmask=255.255.255.224 gateway=10.126.77.1 dns=10.126.26.47 ks=nfs:10.126.58.136:/apps/redhat/ks-rhel-server-6.6-x86_64_hdp.cfg

label ptmtshdpnopp02
kernel vmlinuz

append initrd=initrd.img ksdevice=eth0 ip=10.126.77.12 netmask=255.255.255.224 gateway=10.126.77.1 dns=10.126.26.47 ks=nfs:10.126.58.136:/apps/redhat/ks-rhel-server-6.6-x86_64_hdp.cfg

I create the ISO with:

mkisofs -N -J -joliet-long -D -V "HADOOP" -o rhel-server-6.6-x86_64-hadoop.iso -b "isolinux/isolinux.bin" -c "isolinux/boot.cat" -hide "isolinux/boot.cat" -no-emul-boot -boot-load-size 4 -boot-info-table isolinux-6.6-x86_64/

But the machine doesn't see the ISO.

Thanks for the help
Regards,
Sara

David Worth, 19 June 2015: Hi Sara,

I ran into the same thing earlier this year when we started using Gen9s. We had issues with boot from SAN when switching them to legacy mode, so we went with EFI, aka UEFI. EFI/UEFI boot is a different method of managing booting the OS; it originated on the Itanium server platform. You can search for the features and differences versus legacy boot.

The way I went after booting with an ISO for kickstart installs is to create a dual boot ISO, legacy and EFI boot. Anything that will boot from legacy mode will read the isolinux.cfg and EFI mode will read the Bootx64.conf.

To create a dual ISO, create a directory and copy the EFI/ images/ isolinux/ directories from an install DVD. Next, all your legacy boot entries will go into isolinux/isolinux.cfg; you can see the example I have above. For EFI, you will have to edit EFI/BOOT/BOOTx64.conf. Here is an example of what I put into mine.

-----start------

#debug --graphics

default=0
splashimage=/EFI/BOOT/splash.xpm.gz
timeout 900
title Red Hat Enterprise Linux 6.6
kernel /images/pxeboot/vmlinuz
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img
title server-d
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver.example.net:/kickstart/KSConfigs/server-d-ks.cfg ksdevice=eth0 ip=192.168.5.6 gateway=192.168.5.1 netmask=255.255.255.0 dns=192.168.2.2
initrd /images/pxeboot/initrd.img
title server-e
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver.example.net:/kickstart/KSConfigs/server-e-ks.cfg ksdevice=eth0
initrd /images/pxeboot/initrd.img

------End------
In the example above, I put in an example of setting an IP as well as using DHCP. After you edit your legacy or EFI configs, then you have to create the ISO. Here is what I run to create a dual ISO:

  1. cd into the directory that contains the 3 directories I mentioned earlier.
  2. run > mkisofs -o ../my-iso-name.iso -R -J -A "MyDualISO" -hide-rr-moved -v -d -N -no-emul-boot -boot-load-size 4 -boot-info-table -b isolinux/isolinux.bin -c isolinux/isolinux.boot -eltorito-alt-boot -no-emul-boot -eltorito-boot images/efiboot.img -x ${mountDIR}/isolinux -x ${mountDIR}/images -x ${mountDIR}/EFI .

Also, after this is done, your kickstart will need to be able to work with EFI boots. First, the partition table will need to be gpt. Next, the bootloader location needs to be partition. Last, you will need a partition /boot/efi with fstype as efi and size 200.

If you need help with that, I can send you my disk layout from my kickstart config. Just let me know.

Regards,
David

Ivan Borghetti, 5 August 2015: Hello David, how are you? I am also trying to create an ISO that supports dual boot. I am now testing the UEFI boot and it works; however, I still need to customize the menu to list my different kickstarts, etc.

After copying the directories I needed, I created the ISO by running mkisofs; however, I did not have the following lines in my command:

-x ${mountDIR}/isolinux -x ${mountDIR}/images -x ${mountDIR}/EFI .

Would you mind explaining why those are needed, and what ${mountDIR} would be? I guess it is the parent directory where those 3 subdirectories are, right? In my case it would be iso:

iso_cfg/
├── EFI
│ └── BOOT
├── images
│ └── pxeboot
└── isolinux

thanks in advance

David Worth, 8 August 2015: Hi Ivan,

I am almost sure you do not need them. The -x is similar to -m, which, if you look in the man page, allows you to exclude files. I was basing my mkisofs command off the one in this thread above. I thought I had removed it from the one I use, but I still have it in.

Try removing it and see what happens. Can you let me know the outcome? Just curious.

Ivan Borghetti, 10 August 2015: Hello David, thanks for your response. I tried without those parameters and it worked without issues.

thanks again

David Worth, 10 August 2015: Great. Thanks for letting me know. I will take it out of my build script as well.

Jose Carlos Alves, 19 June 2015: Hi David,

Yes please, let me know what you have in your kickstart config about the disk layout.

Thanks,
Sara

David Worth, 19 June 2015: Hi Sara,

I am setting up my disk partitioning in a %pre section, then writing the disk layout to a file that gets included. The script checks all drives and selects only a disk that is greater than 130000 MB, roughly 126 GB. This is to deal with the ISO, USB, or CD-ROM that you are using to image the server: it will ignore that and select my disk, which is 134 GB.

--------start---------

##Disk Configuration##

%include /tmp/partitioning

##End Disk Configuration##

%pre --log /root/ks-rhn-pre.log

##Find OS disk

tdsk=""
list-harddrives | while read DISK DSIZE
do
#Convert float into int
DSIZEI=${DSIZE%.*}
# get scsi ID for disk
scsi_id="$(/mnt/runtime/lib/udev/scsi_id -gud /dev/${DISK})"
echo "F-$DSIZE I-$DSIZEI-- ID $scsi_id"

Raw
# determine if we should partition this device
###  if disk is smaller than 130000, then change the size in MB to reflect it.
if [ ${DSIZEI} -gt 130000 ]; then
     # add device to ignoredisk --only-use list
     ##the following is for SAN disks##
     tdsk="/dev/disk/by-id/scsi-${scsi_id}"
     ##End SAN Disk###

    ##if using local disks, comment the tdsk above and use this###
    ##tdsk="/dev/$DISK"
    ###End Local disk###

    echo "DISK - $tdsk"
##Create GPT partition

echo "creating gpt on ${tdsk}"
parted -s ${tdsk} mklabel gpt

cat << EOF >> /tmp/partitioning
bootloader --location=partition --driveorder=${tdsk}
ignoredisk --only-use=${tdsk}
zerombr
clearpart --linux --drives=${tdsk}

part /boot/efi --fstype=efi --size=200 --ondisk=${tdsk}
part /boot --fstype ext4 --size=512 --ondisk=${tdsk}
part pv.4 --size=100 --grow --ondisk=${tdsk}
volgroup vg00 --pesize=32768 pv.4

logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=10240
logvol /opt --fstype ext4 --name=lvopt --vgname=vg00 --size=2048
logvol /home --fstype ext4 --name=lvhome --vgname=vg00 --size=2048
logvol /tmp --fstype ext4 --name=lvtmp --vgname=vg00 --size=6144
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size=5120
logvol /usr/local --fstype ext4 --name=lvulocal --vgname=vg00 --size=4096
logvol swap --fstype swap --name=lvswap1 --vgname=vg00 --size=16384
logvol swap --fstype swap --name=lvswap2 --vgname=vg00 --size=16384
EOF
exit
fi
done

%end

--------end---------

Also note, I am booting from SAN so this line will work for SAN disks. > tdsk="/dev/disk/by-id/scsi-${scsi_id}"

If you are using local disks or VMware, use this > tdsk="/dev/$DISK"

I added comments to the pre script so you can see where to make changes. Hope that helps.

Regards,
David

David Worth, 19 June 2015: It seems the formatting of the code is off. Let me try again.

Raw
##Disk Configuration##
%include /tmp/partitioning

##End Disk Configuration##

%pre --log /root/ks-rhn-pre.log
##Find OS disk

 tdsk=""
 list-harddrives | while read DISK DSIZE
 do
    #Convert float into int
    DSIZEI=${DSIZE%.*}
    # get scsi ID for disk
    scsi_id="$(/mnt/runtime/lib/udev/scsi_id -gud /dev/${DISK})"
        echo "F-$DSIZE I-$DSIZEI-- ID $scsi_id"

    # determine if we should partition this device
###  if disk is smaller than 130000, then change the size in MB to reflect it.
    if [ ${DSIZEI} -gt 130000 ]; then
         # add device to ignoredisk --only-use list
##the following is for SAN disks##
         tdsk="/dev/disk/by-id/scsi-${scsi_id}"
##End SAN Disk###

    ##if using local disks, comment the tdsk above and use this###
    ##tdsk="/dev/$DISK"
    ###End Local disk###

        echo "DISK - $tdsk"

##Create GPT partition
echo "creating gpt on ${tdsk}"
parted -s ${tdsk} mklabel gpt

cat << EOF >> /tmp/partitioning
bootloader --location=partition --driveorder=${tdsk}
ignoredisk --only-use=${tdsk}
zerombr
clearpart --linux --drives=${tdsk}

part /boot/efi --fstype=efi --size=200 --ondisk=${tdsk}
part /boot --fstype ext4 --size=512 --ondisk=${tdsk}
part pv.4 --size=100 --grow --ondisk=${tdsk}
volgroup vg00 --pesize=32768 pv.4

logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=10240
logvol /opt --fstype ext4 --name=lvopt --vgname=vg00 --size=2048
logvol /home --fstype ext4 --name=lvhome --vgname=vg00 --size=2048
logvol /tmp --fstype ext4 --name=lvtmp --vgname=vg00 --size=6144
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size=5120
logvol /usr/local --fstype ext4 --name=lvulocal --vgname=vg00 --size=4096
logvol swap --fstype swap --name=lvswap1 --vgname=vg00 --size=16384
logvol swap --fstype swap --name=lvswap2 --vgname=vg00 --size=16384
EOF
        exit
     fi
 done

%end
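The size filter at the heart of this %pre script can be exercised on its own. A minimal sketch (the sizes are made-up example values; select_disk is a hypothetical wrapper, not from the post):

```shell
# Decide whether a disk is the install target, the way the %pre script does.
# $1: disk size in MB as reported by list-harddrives (may be a float).
select_disk() {
  DSIZE="$1"
  DSIZEI=${DSIZE%.*}          # strip the fractional part (float -> int)
  [ "$DSIZEI" -gt 130000 ]    # succeed only for disks larger than ~126 GB
}

select_disk "136708.2" && echo "136708.2 MB: selected"
select_disk "8192.0"   || echo "8192.0 MB: skipped"
```

The `${DSIZE%.*}` expansion is what lets a plain POSIX `[ -gt ]` comparison work on the float that list-harddrives reports.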

Jose Carlos Alves, 29 June 2015: Thanks David,

I already installed my machines with your help.

Best regards,
Sara Soares

Gustavo Vegas, 1 October 2015: Hello everyone,
I have a similar requirement; in our case we use ISO images to kickstart our servers. With pieces of information from this thread as well as some other resources out there, I have been able to get the RHEL 6 side working. Now I am also trying to do this for RHEL 7, and things seem to have changed: the BOOTX64.conf seems to be getting ignored, and GRUB2's grub.cfg seems to be the one being taken into account. Any insights on how to work this out on RHEL 7? Any help would be appreciated.

Thanks.

Petr Bokoc (Red Hat), 2 October 2015: Hello,

For RHEL 7 on UEFI systems, the file you want to modify on the boot media is EFI/BOOT/grub.cfg. You can append the inst.ks= parameter to the line starting with linuxefi in any of the entries. The second entry is selected by default; you can use the set default= option at the beginning of the file to change that (entries are numbered from 0).

After you change grub.cfg to your preferences, you can follow the instructions in the Anaconda Customization Guide to create a new bootable ISO image with the modified boot menu.
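Put together, the edits described above might look like the fragment below (a sketch only; the volume label and ks path are assumptions, not taken from this thread):

```
set default="0"

menuentry 'Install Red Hat Enterprise Linux 7 with kickstart' {
  linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.0\x20Server.x86_64 inst.ks=cdrom:/ks.cfg quiet
  initrdefi /images/pxeboot/initrd.img
}
```

The inst.stage2= label must match the actual volume label of your remastered ISO, and spaces in it are escaped as \x20.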

systeembeheer beeldengeluid, 3 March 2017: Hello, just an update to help a few people along the way: on RHEL 7.3 the dual (UEFI/BIOS) boot ISO can be built with a few steps.

Use the following steps: copy the content from a boot ISO image, say "rhel-server-7.3-x86_64-boot.iso", to your own image directory, where you modify your dual-boot image. Modify EFI/BOOT/grub.cfg for your UEFI boot needs. Modify isolinux/isolinux.cfg for your BIOS-based boot needs.

Now the right mkisofs command (this one did it for me). Start this in the root of your own image directory:

mkisofs -U -A "RHEL-7.3 x86_64" -V "RHEL-7.3 x86_64" -volset "RHEL-7.3 x86_64" -J -joliet-long -r -v -T -x ./lost+found \
 -o ../Kickstart_lab_7.3-disc1-dualboot.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .

The volume name ("RHEL-7.3 x86_64") may come back to bite you (I haven't tried to change it). This label is important in the UEFI part of your dual boot and comes back in EFI/BOOT/grub.cfg in two places: first in the search line, and second after the inst.stage2 parameter in the menu entry.

------------------- snippet grub.cfg -------

search --no-floppy --set=root -l 'RHEL-7.3 x86_64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Install Red Hat Enterprise Linux 7.3' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.3\x20x86_64 quiet
    initrdefi /images/pxeboot/initrd.img
}

After the inst.stage2 parameter you can place the inst.ks parameters you need.

The isolinux/isolinux.cfg holds your BIOS-based boot parameters.

Have fun.. and good luck

[Aug 06, 2017] Nathan Mike - Senior System Engineer - LPI1-C and CLA-11

Notable quotes:
"... mkdir -p bootdisk/RHEL ..."
"... example: cp -R /mnt/isolinux/* ~/bootdisk/RHEL/ ..."
"... cd ~/bootdisk/RHEL/ ..."
"... example: cp ks.cfg ~/bootdisk/RHEL/ ..."
"... mkisofs -r -T -J -V "RedHat KSBoot" -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -v -o linuxboot.iso . ..."
"... linux ks=cdrom:/ks.cfg ..."
"... linux ks=cdrom:/ks.cfg append ip=<IPaddress> netmask=<netmask> ksdevice=<NICx> ..."
"... example: linux ks=cdrom:/ks.cfg append ip=10.10.10.10 netmask=255.255.255.0 ksdevice=eth0 ..."
Aug 06, 2017 | mikent.wordpress.com
How to create a kickstart ISO boot disk for RedHat

Posted by mikent on April 12, 2012


1) logon as root

2) create a directory named bootdisk/RHEL

mkdir -p bootdisk/RHEL

3) copy the contents of the isolinux directory from your RedHat DVD (or another location containing RedHat binaries) into bootdisk/RHEL

example: cp -R /mnt/isolinux/* ~/bootdisk/RHEL/

4) change directory to ~/bootdisk/RHEL/

cd ~/bootdisk/RHEL/

5) create (or copy) your ks.cfg in ~/bootdisk/RHEL/ (how to create a kickstart file will be discussed in another post)

example: cp ks.cfg ~/bootdisk/RHEL/

6) Now you can create the ISO boot disk as follows (make sure you run the command from ~/bootdisk/RHEL/):

mkisofs -r -T -J -V "RedHat KSBoot" -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -v -o linuxboot.iso .

7) Burn your linuxboot.iso to a blank CD-ROM, or mount it as-is on a virtual machine, for example

8) At linux boot prompt, type the following command:

linux ks=cdrom:/ks.cfg

if you need to install using a specific IP address or a specific kickstart boot device, type the following:

linux ks=cdrom:/ks.cfg append ip=<IPaddress> netmask=<netmask> ksdevice=<NICx>

example: linux ks=cdrom:/ks.cfg append ip=10.10.10.10 netmask=255.255.255.0 ksdevice=eth0

9) you are done!

[Aug 06, 2017] How can I troubleshoot a failing Linux kickstart installation

Aug 06, 2017 | serverfault.com
I am having some trouble getting my PXE-booting kickstart process working.

The script is known to work when installed via a DVD with the local media on the disk. I have updated the script to work with a remote repository and am leveraging PXE booting so this can be used across my enterprise. The script executes fine up until it starts to download packages.

Checking the log files on the web server hosting the repository, the first file is downloaded successfully with a 200 HTTP code. But the server running the kickstart indicates a failure and attempts to download the package again. I have confirmed this on the web server, as I see multiple requests for the same package repeated, all with the 200 HTTP code. But kickstart indicates that the download failed.

I am using CentOS 5. I copied the entire first DVD (I do not need the OpenOffice suite from the second) to the location on the web server, so the repodata already exists.

I have been able successfully download software from this repository using other systems that have already been built.

No errors appear in any of the log files that I have found on the kickstarted server, no messages are output to the screen. I have not been able to find any means of debugging this issue.

I am hoping that someone here can provide a link or information on how to attain more detailed debugging information to resolve this.

Thank you in advance.

asked May 22 '12 by Nick V
Try hitting ALT+F2 and ALT+F3 on the server you are kickstarting to see if it has any additional information that might help. – becomingwisest May 22 '12 at 14:03
If you download the package from that URL, do you get a valid package? – larsks May 22 '12 at 14:22
I have tried ALT+F2 and ALT+F3. F2 brings me to the BusyBox prompt, but no error messages. I have looked through the entire filesystem and nothing. The install logs under /mnt/sysimage/root/ do not even contain information. – Nick V May 22 '12 at 14:31
I am able to download a package from the repository without error. The issue only exists when I attempt to use the repository for installation. I even attempted to run a manual installation from the repository. The image/stage2.img file is loaded, but when it comes time to download packages the behavior is the same. At this point, I have loopback-mounted the first DVD but the same issue exists. (using CentOS 5.8) – Nick V May 22 '12 at 14:32
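Following up on that comment: one quick way to check whether a downloaded package is intact is to hash it and compare the digest against the checksum recorded in the repository's repodata/primary.xml. A minimal sketch (the file path is a placeholder; CentOS 5 repodata typically records SHA-1, so swap in hashlib.sha1 there):

```python
import hashlib

def sha256sum(path):
    """Return the hex SHA-256 digest of a file, read in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the <checksum> entry for the package in repodata:
# sha256sum("/tmp/some-package.rpm")
```

If the digests match, the transfer is fine and the problem lies in the installer, not the web server.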
Accepted answer: Can you post an excerpt of your kickstart? What is the relationship between the repository and the system you're building? Same subnet?

I had a period of kickstart installation problems during the middle of the CentOS 5 series. The best thing to do from your standpoint is to check the other virtual terminals during the installation. Are you running the installation in graphical (X Windows) or text mode?

Here's what the different virtual terminals display. You should be able to debug from there.

Alt-F1
The installation dialog when using text or cmdline

Alt-F2
A shell prompt

Alt-F3
The install log displaying messages from install program

Alt-F4
The system log displaying messages from kernel, etc.

Alt-F5
All other messages

Alt-F7
The installation dialog when using the graphical installer

At some point, I was unable to resolve a kickstart performance issue. I ended up changing the installation method to NFS and the issues disappeared. See: CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?
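For reference, switching the installation method from HTTP to NFS is a one-line change in the kickstart file (the server name and export path below are placeholders, not taken from the original question):

```
# before: url --url="http://installserver/centos/os/x86_64"
# after:
nfs --server=installserver --dir=/exports/centos/os/x86_64
```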

answered May 22 '12 by ewwhite; edited Apr 13 by Community
For the same mystical reason, the NFS change resolved the issue. – Nick V Aug 6 '12 at 4:33

[Aug 06, 2017] TipsAndTricks-KickStart - CentOS Wiki

Aug 06, 2017 | wiki.centos.org
Tips and tricks for anaconda and kickstart

For full documentation, please see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/ch-kickstart2.html (CentOS 5), https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/ch-kickstart2.html (CentOS 6) or https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/chap-kickstart-installations.html (CentOS 7)

Tuning the %packages section

When using %packages to define the set of packages that should be installed there are a number of more or less documented options that can be set:

--resolvedeps
Dependencies between packages will be resolved automatically. This option was deprecated in CentOS 5; dependencies are now always resolved automatically.
--excludedocs

Skips the installation of files that are marked as documentation (all files that are listed when you do rpm -qld <packagename>)

--nobase
Skips installation of @Base. Don't use this unless you know what you're doing, as it might leave out packages required by post-installation scripts
--ignoremissing
Ignore missing packages and groups instead of asking what to do.

Example of minimal package selection for CentOS 4:

%packages --resolvedeps --excludedocs --nobase
kudzu

Please note that essential stuff will be missing: there will be no rpm, no yum, no vim, no dhcp-client and no keyboard layouts. kudzu is required, because the installer fails if it is missing.

Example of minimal package selection for CentOS 5:

%packages --excludedocs --nobase
@Core

Again, this will leave you with a *very* basic system that will be missing almost every feature you might expect.

The --resolvedeps used with CentOS 4 is not required for CentOS 5 and newer releases as the newer installer always resolves dependencies.

Partitioning

If you start out with an unpartitioned disk, or a virtual machine on an unpartitioned image, use the --initlabel parameter to clearpart to make sure that the disklabel is initialized; otherwise Anaconda will ask you to confirm creation of a disklabel interactively. For instance, to clear all partitions on xvda, and initialize the disklabel if it does not exist yet, you could use:

clearpart --all --initlabel --drives=xvda
Running anaconda in real text-mode

You probably already know that you can make anaconda run with an ncurses interface instead of the X11 interface by adding the line "text" to your kickstart file. But there's another option: install in a real shell-like text mode. Replace the "text" line with a "cmdline" line in your kickstart file and anaconda will do the whole installation in text mode. Especially when you use %packages --nobase or run complex %post scripts, this will probably save hours of debugging, because you can actually see the output of all scripts that run during the installation.

Enable/disable firstboot

You all know firstboot, the druid that helps you to set up the system after install. It can be enabled and disabled by adding either "firstboot --enable" or "firstboot --disable" to the command section of your kickstart file.

What the different terminals display
Alt-F1
The installation dialog when using text or cmdline
Alt-F2
A shell prompt
Alt-F3
The install log displaying messages from install program
Alt-F4
The system log displaying messages from kernel, etc.
Alt-F5
All other messages
Alt-F7
The installation dialog when using the graphical installer
Logging %pre and %post

When using a %pre or %post script you can simply log the output to a file by using --log=/path/to/file

%post --log=/root/my-post-log
echo 'Hello, World!'

Another way of logging and displaying the results on the screen would be the following:

%post
exec < /dev/tty3 > /dev/tty3
chvt 3
echo
echo "################################"
echo "# Running Post Configuration   #"
echo "################################"
(
echo 'Hello, World!'
) 2>&1 | /usr/bin/tee /var/log/post_install.log
chvt 1
Trusted interfaces for firewall configuration

You can use the --trust option to the firewall option multiple times to trust multiple interfaces:

# Enable firewall, open port for ssh and make eth1 and eth2 trusted
firewall --enable --ssh --trust=eth1 --trust=eth2
Use a specific network interface for kickstart

When your system has more than one network interface, anaconda asks which one you'd like to use for the kickstart process. This decision can be made at boot time by adding the ksdevice parameter and setting it accordingly. To run kickstart via eth0, simply add ksdevice=eth0 to the kernel command line.

A second method is using ksdevice=link. In this case anaconda will use the first interface it finds that has an active link.

A third method works if you are doing PXE-based installations. Add IPAPPEND 2 to the PXE configuration file and use ksdevice=bootif. In this case anaconda will use the interface that did the PXE boot (which is not necessarily the first one with an active link).

Within the kickstart config itself you need to define the network interfaces using the network statement. If you are using method 2 or 3, you don't know in advance which device will actually be used. If you don't specify a device for the network statement, anaconda will configure the device used for the kickstart process and set it up according to your network statement.
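Putting the PXE method together with a device-less network statement, a minimal sketch might look like this (the label, filenames and host below are illustrative, not from the wiki page):

```
# pxelinux.cfg entry: IPAPPEND 2 makes pxelinux report the boot
# interface, which ksdevice=bootif hands to anaconda
label ks
  kernel vmlinuz
  append initrd=initrd.img ks=http://installserver/ks.cfg ksdevice=bootif
  IPAPPEND 2

# in ks.cfg: no --device given, so anaconda configures whichever
# interface performed the kickstart download
network --bootproto=dhcp --onboot=on
```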

Forcing kickstart to ask for network configuration

Starting with CentOS 5, there is an undocumented option that enables a prompt asking for network configuration during the installation. In the network statement, put the query keyword in the --bootproto= networking configuration, as shown below:

network --device=eth0 --bootproto=query

A dialog box will appear asking for IP addressing, as well as the hostname configuration.

Useful collection of ready-made kickstart files

At https://github.com/CentOS/Community-Kickstarts you can find a collection of ready-made kickstart files. Their primary goal is to provide functional sample kickstarts and snippets for the various types of deployments used in the community.

[Aug 06, 2017] Unable to get kickstart file from http webserver

Aug 06, 2017 | superuser.com
I'm trying to get a VM up using a kickstart file. However, whenever the virtual machine initializes, it says it is unable to locate the kickstart file at the location provided.

Code to build vm:

virt-install --name guest --ram 2048 --disk /vm/guest.img --location /CentOS-6.6-x86_64-bin-DVD1.iso -x "ks=http://192.168.1.72/engineer.cfg ksdevice=eth0 ip=192.168.0.1 netmask=255.255.255.0 dns=8.8.8.8 gateway=192.168.1.254"

kickstarter file:

#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Firewall configuration
firewall --disabled
# Install OS instead of upgrade
install
# Use network installation
url --url="http://192.168.1.72/"
# Root password
rootpw --iscrypted $1$AcXRM2i4$9Wzd1rjvrLNREmeIsM9.W1
# System authorization information
auth  --useshadow  --passalgo=sha512
# Use graphical install
graphical
firstboot --disable
# System keyboard
keyboard us
# System language
lang en_US
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info

# System timezone
timezone  Asia/Singapore
# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on
# System bootloader configuration
bootloader --location=mbr
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all  
# Disk partitioning information
part /boot --fstype="ext4" --size=100
part swap --fstype="swap" --size=512
part / --fstype="ext4" --grow --size=1

%post
echo "ENGINEERING WORKSTATION" > /etc/issue
%end

The file is located at the /var/www/html directory of the webserver.

Any advice on what I may have missed will be greatly appreciated.

asked Jan 28 '15 by user4985
Make sure that your .cfg file has the right permissions and is readable by other users/systems. You may try to wget it, or simply open it from any other PC in your network, and see if it works.

If you have a permissions problem, make the file readable by everyone. The often-suggested chmod 666 also grants world write, which the HTTP download does not need; read access is enough:

chmod 644 /var/www/html/engineer.cfg
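A quick way to verify the readability requirement without a second PC is to check the permission bits directly. The sketch below uses a temporary file as a stand-in for /var/www/html/engineer.cfg; the "other" read bit is what the anonymous HTTP download needs:

```shell
# Create a stand-in file and give it the suggested permissions.
f=$(mktemp)
chmod 644 "$f"

# GNU stat prints the octal mode; the last digit is the "other" class.
perms=$(stat -c '%a' "$f")
case "$perms" in
  *[4567]) echo "world-readable ($perms) - HTTP download should work" ;;
  *)       echo "NOT world-readable ($perms) - expect a 403 from the web server" ;;
esac
rm -f "$f"
```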

[Aug 02, 2017] EWONTFIX - Systemd has 6 service startup notification types, and they're all wrong

Notable quotes:
"... Socket activation is a misfeature; it doesn't matter if it can be done cross-platform, it's still just dumb. It leads to a system appearing to boot a tiny bit faster, but you can no longer guarantee that everything is actually *up* at that point. I don't care much about speeding up boot times, but I would still call that a win IF it wasn't achieved at the cost of making the entire boot process unreliable; as it is, it's not worth anything like the price you pay here. ..."
"... Anyway, the feature creep of systemd is the scary part to many of us. Why does my init process need to be my automounter, my syslog, my inetd? ..."
"... Personally, I'm just horribly upset about them changing the ordering of the start/stop/restart arguments for no good reason. what advantage does systemctl start apache have over systemctl apache start? ..."
"... You know what makes a laptop really boot faster? Without breaking anything? Suspend to disk. ..."
"... Only idiots who spend all their time rebooting and doing nothing else value boot-up times above all else. REAL people, however, have other concerns, such as system security, system integrity, and system brittleness (systemd's mandate that all daemons become dependent upon systemd makes ALL daemons brittle), which destroys long-term system stability. ..."
Aug 02, 2017 | ewontfix.com

lucius-cornelius 3 years ago

I'm just a user...

OpenSource began as a philosophical response to a mindset that was regarded as unhealthy and incompatible with freedom. The software is an expression of that philosophical response. So any new software that disregards, or appears to disregard that philosophy, is immediately suspect. It's no surprise, given the corruption of OpenSource in recent years, that we now have a generation of users and developers who don't care about freedom, or security, or simplicity. But it is rather sad.

The KISS principle is fundamental. Adding a new and very complex gadget in order to replace an older, simpler but perfectly functioning gadget, breaks every principle of good design and engineering.
Take GRUB2 (please, it's hideous), GRUB1 was fine, LILO is awesome. Linux had 2 perfectly working boot loaders so what happens? Replace one of them with a far more complex system. It's files have different names, the lines of code are much more complex and long winded. Editing it is not as easy as it was. This is not progress.
This appears to be not about anything other than developer's egos.

I see Linux being dragged into the same muddied waters that Windows and OSX inhabit, in order to placate the same greedy amoral Corporations or the same mindless squawking "journalists", and all to please a mass of people who don't give a damn what OS they run as long as they can gain access to their stuff without having to think/learn/read.
Think of it this way, it's a bit like taking the Mona Lisa down and drawing breasts on it and putting an ipod in one hand so that the masses can "appreciate it", or taking a Mozart symphony and remixing it into a 4 minute drum and bass track but insisting that "it's still a Mozart symphony and classical music lovers can rest assured". It's a tragedy for art lovers and the masses don't care anyway. So you end up trashing your own treasure in the pursuit of the appreciation of those who don't understand what treasure is and would trample it in sheer ignorance if they came near it.

So for me, a mere user, I don't understand the technical arguments, but I understand philosophy and I see the philosophy behind Linux/OpenSource being strangled, slowly and deliberately, by people who either harbour hidden agendas or who don't care about philosophy.

akulkis lucius-cornelius 3 years ago
"This appears to be not about anything other than developer's egos."

Ding! Ding! Ding! We have a winner.

Sievers & Poettering (and i'll throw gregkh in for good measure) are behaving like vandals. This is an indicator of being a psychopath -- they don't care how many people are harmed, and to what extent, as long as they get what they want. I believe we have a pretty good idea from 20th Century European history of where that sort of "will to power (and screw anybody who doesn't have the means to stop me immediately)" mentality leads to. Do I think it will cause 50 million dead like the last incarnation of this that came out of Germany? No. Will it cause a lot of needless misery, and pointless wasting of resources: Yes.

Siosm 3 years ago
Everything you said here is "debunked" by the well-written answers in the same thread:

The main method used by systemd to monitor processes is not monitoring PIDs; it's using cgroups. You need to put your hatred aside, and really start reading what's actually done by systemd to understand how it works, because those posts clearly show you don't. Yes, it is significantly different than what was there before, and yes, those changes are somewhat disruptive. This is what makes it interesting.
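The cgroup bookkeeping this comment refers to is visible in /proc/&lt;pid&gt;/cgroup on any Linux system. A small sketch, using a hypothetical cgroup-v2 line of the kind systemd writes for a service (the parsing assumes the path itself contains no colon):

```shell
# Each line is hierarchy-ID:controllers:path; an empty controllers
# field means the unified (v2) hierarchy.
echo '0::/system.slice/sshd.service' |
awk -F: '{ printf "controllers=%s path=%s\n", ($2 == "" ? "v2" : $2), $3 }'
# -> controllers=v2 path=/system.slice/sshd.service
```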

Stonoho Siosm 3 years ago
I do not want my software to be "interesting." I want it to be reliable. Part of reliability is that it does not depend on the specific operating system I run it on. Cgroups are linux-specific and not suitable for portable software.
greyfade Stonoho 3 years ago
I'm genuinely curious: What about init systems makes cross-platform portability desirable? Worded another way: Why would I want to use a Unix init system on Windows?

I sincerely cannot comprehend why using the native features of the kernel an init system is written for is a bad thing, especially when it enables the init system to (potentially) do a better job? (Let's ignore for the moment whether systemd is actually doing a good job. This is a general question that would apply even if we weren't talking about systemd specifically.)

akulkis greyfade 3 years ago

Uh, Dude, the problem is that systemd is going to make daemons non-portable between Linux and the rest of the Unix world. That's not a feature, it's a very, very, very, very, very, very serious bug.

greyfade akulkis 3 years ago ...

Why is that an issue? Almost every Linux distro and most of the BSDs have been shipping _their own_ init scripts for years, even when a given daemon includes its own. I don't see how this makes daemons non-portable.

akulkis greyfade 3 years ago
Because every daemon is now going to have to have TWO versions -- the Linux systemd-compatible version, and the sane one used by BSD and the rest of the Unix world.

Poettering summed up his attitude -- Linux isn't Unix, so don't give a shit about breaking everything and anything, as long as it feeds his narcissistic craving to control every damned thing. Basically, he's not improving Linux, he's turning it into Windows. If he wants to work on creating a system that's compatible with the Windows philosophy (it's just fabulous to write huge, unstable programs with murky boundaries that do lots and lots of things poorly and don't play well with others) then he should get out of Unix and write for the ReactOS people.

It's obvious that Sievers and Poettering want to be the big fish in the pond... so we should encourage them to go to ReactOS... ReactOS needs people like Sievers and Poettering -- the Linux world doesn't. All of this crap of moving tons of things out of /bin to /usr/bin, etc. (and then putting in soft links from /bin to /usr/bin, rather than, oh, I don't know, doing something SANE like putting the soft link in /usr/bin pointing at /bin) is an example of the absolute CONTEMPT these two have for the entire Linux community. Things that have worked well for almost half a century should not be thrown out like a baby with the bathwater, for the (dubious) goal of shaving 5 seconds (and usually not even that) off of boot-up time --- because what, we don't use our computers to get actual work done, we just sit around all day rebooting?! Who the hell cares if a computer takes 10 seconds or 2 minutes to reboot -- it's not something you do that often. And the only computers which ARE booted up frequently (laptops) aren't running dozens upon dozens of system services to make for a long boot-up anyway.

Even my laptop, I only reboot once every couple of weeks.

Frankly, I regard Sievers and Poettering as nothing less than vandals, because they are breaking far more than fixing. For a pair who profess to be systems programmers, they seem to be utterly clueless about issues and mechanisms directly linked to system stability vs. instability.

Overly complicated PID 1 is a perfect example.

digi_owl akulkis 3 years ago

Honestly I don't think Poettering and crew are gunning for Windows. They are trying to create OSX without the Apple hardware dongles. But then they also work for a company that is likely trying to supplant Sun Solaris. Just about everything that comes out of RH these days is gunning for some kind of "secure" workstation for government/corporate work. Maybe with a solid dose of cloud thrown in on top to spite Oracle (who owns Sun these days).

akulkis digi_owl 2 years ago
OS X is BSD underneath. They are certainly NOT trying to create BSD. BSD's init system is even more simplistic than SysV init.
digi_owl akulkis 2 years ago
OSX uses launchd, and some want to see that transferred to the BSDs. But at this point systemd is much more than launchd, never mind Sun's SMF.

At this point it may be as relevant to talk about Systemd/Linux as it is to talk about GNU/Linux.

But look at the projects Poettering has initiated and they are all more or less inspired by OSX. He pretty much admitted as much in an interview some years back.

svartalf digi_owl 2 years ago
The problem is... everything that they're doing ensures that it won't BE secure. That's the most laughable thing about this cruel joke Red Hat's inflicting on us. They used to be friends... but with friends like that, who needs enemies...
svartalf akulkis 2 years ago
I'd opine that neither are really systems programmers. This is evidenced by at least Poettering's partial failures over time, including PulseAudio, which honestly breaks more than it fixes and is really solving problems you and most everyone else (99.9999...%) don't have, nor will they ever have.

This is something that is very much *MISSION CRITICAL*, meaning it really needs to be this largely armor-plated, can't really ever fail, piece of software. So far, Lennart has yet to write a piece of software like this. He's got...interesting...notions... But they should've never been given the time of day in most of the cases, mainly because if you'd done the mental exercises most actual systems developers do, you'd realize that they're all solutions looking for a problem to fix.. Remote network audio is, quite frankly, something already solved and his "solution" added issues including needless latency in things.

His logging solution in systemd is execrable - and this massive mission/function creep abortion they call systemd is, quite simply, too complex - it's attempting to build an OS abstraction on top of an... heh... OS abstraction already there. It's almost like the One Ring. That's not a systems programmer there. It's a wannabe.

akulkis svartalf 2 years ago

You have summed it up very well. These guys are applications programmers posing as systems programmers.

svartalf greyfade 2 years ago
It's an issue because it's more akin to the welded shut hood of your vehicle versus what you currently have. If a *BSD variant wants something resembling systemd, the best they can hope for is the winnowed down fork labeled uselessd that does NOTHING but handle the boot dependencies, etc.

With the old way, while it was clunky and "slow" (I'd opine that "fast" is only relevant in the *EMBEDDED* and *CONSUMER DEVICE* spaces, not the server spaces -- you shouldn't NEED fast boot times for a server...) you could "port" it in the large to your system or at least HACK your needed daemon start script into the system. With systemd, you don't get that luxury. It needs to be systemd-ed and you can just pretty much kiss anything other than Linux support goodbye- or support the "old" way and the "new" way at the same time.

It's stupid. It's a damned waste of valuable time. And better yet, this doesn't even get into the lack of actual mission-criticality that is needed in the server and embedded spaces - which this damn thing DOES NOT AND CANNOT HAVE because of its poor design decisions. I keep questioning why in the hell Red Hat keeps the man on their payroll; this is that bad.

greyfade svartalf 2 years ago
Your argument is one that doesn't seem, to me, to support your case.

Why would BSD want systemd? That's the core of my main objection to the argument that systemd's lack of portability is bad: It's a Linux init system that uses Linux kernel features to do its more interesting jobs, so why would any other platform want it?

Why would you have to hack your daemon? Systemd supports launching daemons via init script, and with an extension, supports sysvinit-style scripts. The systemd-specific code that enables systemd optimization for a daemon amounts to an #ifdef for a few lines of code and a single C file that can be simply excluded from your build to support non-systemd systems. How does this mean that you have to "kiss anything other than Linux support goodbye"?

You are, to be blunt, merely regurgitating the same long-debunked arguments I've seen discussed a hundred times, and still not answering my question.

akulkis greyfade 2 years ago

The reason BSD doesn't want systemd is because systemd is a steaming pile of excrement. NO SANE PERSON wants systemd.

SomeoneWithAClue akulkis 3 years ago

systemd's socket activation support relies on features that have been available on every decent Unix-like system for ages. So does its readiness notification support (as outlined above). Therefore they can both be implemented on *every* Unix with a minimal amount of fuss. How exactly is this supposed to make a daemon unportable?
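The convention the comment refers to (the one implemented by sd_listen_fds(3)) is small enough to sketch in plain shell: the manager leaves the listening sockets open starting at file descriptor 3 and sets LISTEN_FDS to the count, with LISTEN_PID guarding against the variables leaking into unrelated child processes. The values below are simulated rather than coming from a real manager:

```shell
# Simulate what a socket-activation manager would export.
LISTEN_PID=$$ LISTEN_FDS=2

# Daemon-side check: the fds are ours only if LISTEN_PID matches our pid.
if [ "${LISTEN_PID:-}" = "$$" ] && [ "${LISTEN_FDS:-0}" -gt 0 ]; then
  first=3                          # SD_LISTEN_FDS_START
  last=$((first + LISTEN_FDS - 1))
  echo "socket-activated: inherited fds $first..$last"
else
  echo "started normally: open our own sockets"
fi
```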

Arker SomeoneWithAClue 3 years ago

Socket activation is a misfeature; it doesn't matter if it can be done cross-platform, it's still just dumb. It leads to a system appearing to boot a tiny bit faster, but you can no longer guarantee that everything is actually *up* at that point. I don't care much about speeding up boot times, but I would still call that a win IF it wasn't achieved at the cost of making the entire boot process unreliable; as it is, it's not worth anything like the price you pay here.

sarlalian greyfade 3 years ago

That's the problem with systemd: it's not a Unix init system, it's a Linux init system. If it were a Unix init system, it wouldn't depend on things like cgroups existing. cgroups aren't part of the POSIX spec; they don't exist on FreeBSD, OpenBSD, NetBSD, Solaris or really any others. There are numerous other problems with the nested dependencies of systemd; it's not friendly with running inside containers. I'm sure most people who don't like systemd would have fewer issues with it if the PID 1 portion of systemd were smaller and less complex. The problem is if PID 1 dies, or needs to be updated, this results in a crash or reboot.

Anyway, the feature creep of systemd is the scary part to many of us. Why does my init process need to be my automounter, my syslog, my inetd?

Personally, I'm just horribly upset about them changing the ordering of the start/stop/restart arguments for no good reason. what advantage does systemctl start apache have over systemctl apache start?

greyfade sarlalian 3 years ago

This does not answer my question. Why is it a bad thing for it to be a Linux init system? Why does conforming to the POSIX spec matter for a Linux init system?

We can talk about dependencies and argue about features, sure. But what I don't understand is this fetishistic obsession with POSIX when we're talking about a freaking init system. I'm not aware of POSIX imposing any requirements on the init system other than a small subset of interactions with it, so why is this such a big deal? Why do you even care?

I understand, and to some degree, agree with the concerns about feature creep. But the rest of your arguments are little more than bikeshedding, and don't really explain why systemd is objectively bad.

sarlalian greyfade 3 years ago

Technically, your question was about running a "unix" init system on Windows. My point, which did answer that question, was that it isn't a "unix" init system; it is a Linux init system. While POSIX isn't some sort of panacea, it does imply some level of portability across Unix and Unix-like systems, and for an init system that is important to a lot of people. Especially when systemd apparently provides non-portable (dbus and sd_notify) ways for daemons to implement some concept of service activation.

From a desktop perspective, systemd looks fantastic. It has a ton of neat features that will make a Linux laptop boot up faster and use fewer resources on startup; these are good things. It is also substantially more usable than the usability nightmare that is launchd, which is where the systemd authors got a lot of their ideas. I'm not really against systemd on the desktop; on the server, however, where security and stability matter, it leaves some things to be desired.

Objective problems with systemd on the server:

1) PID 1 is arguably the most important process; if it halts, your server halts. Ideally PID 1 should be the simplest, most tested program you have running, so that it never halts and never needs to be upgraded.

2) It depends on too many libraries for such a critical program. I understand that many of those libraries are well tested, best of breed, but each library increases the attack surface of critical parts of a server's infrastructure. This increased attack surface makes it more likely that there will be root exploits available for it, and decreases the overall security and stability of a system.

3) AutoFS systems suck in general, and when the automounter doesn't work correctly, services and systems hang, and this is not a good thing.

4) Not that I've spent a lot of time trying to get it to work in a docker container, but evidently it is non-trivial to get systemd to work as PID 1 in a container due to dbus and cgroup dependencies, so I end up with an init process for my overall system, and a separate init process for my containers.

5) Because it is tied so directly to Linux, I end up with one init system for my workstations (Fedora-based due to external vendor requirements) and another for my other servers, which are FreeBSD-based.

Subjective problems with systemd.

1) It violates the unix philosophy "This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."

2) Hubris, obviously we know better than the syslog/syslog-ng/rsyslog/inetd/xinetd/autofs/automounter/amd people... They are experts in their domain, who cares, we can do better, and include it in the init process.

3) The main developers of systemd appear to be jackasses.

- Lennart Poettering ( https://archive.fosdem.org/... "In fact, the way I see things the Linux API has been taking the role of the POSIX API and Linux is the focal point of all Free Software development. Due to that I can only recommend developers to try to hack with only Linux in mind and experience the freedom and the opportunities this offers you. So, get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software. It's quite relieving!" (Obviously Linux is the whole of the unix universe).

- Kay Sievers ( https://bugs.freedesktop.or... ) The whole "debug" kernel argument being parsed by systemd and screwing things up for the kernel - the blowup from a couple of months back.

4) The systemd devs seem to have the worst case of NIH (Not Invented Here) syndrome I've ever seen. We are creating an init system... need to make sure it has AutoFS, xinetd, syslog, DHCP, device detection...

5) The whole boot speed at the cost of sanity attitude drives me nuts.

Anyway, to address your bikeshedding accusation: an init system SHOULD more resemble a bike shed than a nuclear power plant, but systemd seems to be aiming for kernel-level complexity vs. bike-shed-level simplicity.

Arker sarlalian 3 years ago

You know what makes a laptop really boot faster? Without breaking anything? Suspend to disk.
akulkis Arker 2 years ago
Only idiots who spend all their time rebooting and doing nothing else value boot-up times above all else. REAL people, however, have other concerns, such as system security, system integrity, and system brittleness (systemd's mandate that all daemons become dependent upon systemd makes ALL daemons brittle), which destroys long-term system stability.
greyfade sarlalian 3 years ago
Again, you have not addressed my question.

I pointed out myself that it's not a Unix init system, and that it has no reason to be one. My point about portability to Windows was to highlight that although it has some POSIX compatibility, there's no rationale for bringing a Unix init system to it. And that highlights my core question: Why does it matter that it be a POSIX init system? It's not meant to be portable, so there is no rationale I can find to justify making it portable. (Why would you even want to use systemd on BSD?) I wasn't asking if it should be portable to Windows, I was asking *WHY* it should ever be portable to anything other than Linux. You have dodged this question four times now.

I have already conceded twice that there are valid objective problems. Your points 1 and 4 (and to a lesser degree 2) are among them. 3 I will not address, because I have not encountered such problems, and have not seen them documented. Your "objective" point 5 is what I accuse of bikeshedding, as you have not presented a case as to why a least common denominator, featureless, aging, limited init system is preferable to having an init system which takes advantage of the features a platform has to offer. I would take the same position if we were talking about some new hypothetical FreeBSD init system: If it takes advantage of FreeBSD-specific features, why should it be portable to, say, OpenBSD?

To your subjective points:

1) I disagree with taking a dogmatic stance on the Unix philosophy. All philosophies have limits to their applicability (there is no universal philosophy, IMO), and there are numerous reasonable rationales for having a more complex communication protocol between subprocesses than a text stream. I do not find this a convincing argument.

2) This appeal to authority is much less convincing. It is hubris, I would claim, to think that someone else *can't* do it better than you.

3) I can make the same accusation of Linus (Why would you use Linux? Linus is an asshole!), Theo de Raadt (Why would you use OpenBSD or OpenSSH? Theo is an asshole!), Dan J. Bernstein (Why would you use djbdns or qmail or...) and on and on. For every amazing piece of software, there's a raging asshole behind it. This argument is unconvincing, and tells me that you're scrambling for justification.

4) On balance, there are problems with the most commonly-used variants of each of those tools. hid-dbus, xinetd, syslog-ng, dhcpcd, and several others have all sent me flying into a rage about some silly thing at one time or another, and I, for one, *welcome* more alternatives. If you think you can do better, by all means, *please do.* I think you're an asshole, but I'll at least give it a shot.

5) The whole POSIX-or-nothing attitude drives me nuts. What's your point?

sarlalian greyfade 3 years ago
Your original statement was "I'm genuinely curious: What about init systems makes cross-platform portability desirable? Worded another way: Why would I want to use a Unix init system on Windows?" Maybe I'm reading that wrong, but it really does seem like you called it a Unix init system there and asked why it should be portable to Windows. If your point about Windows was that it has some POSIX compatibility, then I failed at reading between the lines, because what I read was 1) an assertion that it is a Unix init system, which to me implies some degree of portability to other Unix / Unix-like systems, and 2) a question about why you would want to use it on Windows. Yes, in retrospect it probably should have been obvious that you were making an intentionally absurd statement about using a Unix init system on Windows, but I missed that point, and for that I apologize. My first response was more about correcting the assertion that it was a Unix init system, which is a factually incorrect statement.

I don't think anyone really cares whether it's POSIX or not; some people care about the byproduct of it being POSIX, and that byproduct is some degree of portability. Personally, whether it's portable or not doesn't matter to me at all. I was wrong to include portability in my list of objective problems with systemd, as it's definitely a subjective problem, and a subjective problem that I don't have with it, though some others do. I evidently got carried away with my second response ( https://xkcd.com/386/ ), sorry. I really haven't been dodging the portability question; I just don't care whether it's portable or not as far as problems with systemd go. Portability is about as close to the bottom of the list as it gets for me.

That said, I've never asserted that SysV init is better or that I prefer it in any way, shape, or form. Should it be replaced? Absolutely. Is systemd the answer? I hope not, but evidently most people disagree with me on this one. And to be fair, I'm not a developer of any init system, so everything I say should be taken with a grain of salt. Is my point 5 bikeshedding? Probably, because upon further reflection, I don't care if it's portable or not.

For my subjective points, great, you disagree with me on my opinion and think I'm an asshole, good for you. Have a lovely day.

greyfade sarlalian 3 years ago
I'm sorry my original question caused confusion. I was trying to make the point that if portability of systemd to a *BSD is desirable, then logically, for the same reasons, an init system for a generic Unix should in turn be portable to Windows. I wasn't calling systemd a Unix init system, just pointing out the consequences of the logic as it applies to init systems in general.

And please don't take my tone as confrontational. I'm just trying to get rational explanations for the animosity towards systemd, many of which seem to me to be completely baseless complaints. (Though, as I said, there are valid complaints and I try to acknowledge them when they're mentioned, but they're few and don't really support the case that systemd is somehow bad.)

sarlalian greyfade 3 years ago
Systemd's portability isn't really that important; the only place where it causes issues is when it makes other daemons depend on systemd-specific functions like sd_notify. That said, that's really only an #ifdef away from not being an issue on a system that doesn't support sd_notify, which to me makes it a pretty trivial issue and relatively easy to port around.

As for taking your tone as confrontational, it's really difficult to identify tone in a string of text. That said, you did say you think I'm an asshole, and I'm not sure that being called an asshole can ever be interpreted as non-confrontational between strangers on the internet.

Any personal animosity (probably an overly strong word for a feeling about an init system) I have for it comes down to the simple statement that it trades stability, security, and simplicity for features that I don't care much about in a server operating system. And the absolutely stupid, annoying %$^#&^$^%%$# change of the location of start/stop/restart in the systemctl command as compared to the service command. (Yes, I'm aware that service is still available, but it's less informative than systemctl.)

greyfade sarlalian 3 years ago
The "asshole" bit was an illustrative response. I'm sorry. You depicted these developers as unsavory people, and I was trying to point out that that is a ludicrous point to be making in matters of software - plenty of assholes (many of whom I like) write great software, so demeaning the software based on their personality is simply wrong.

On a lighter note, it may not be a bad idea to add a set of functions to your shell .rc scripts to wrap the functionality of service management in a way that's more sensible to you. I agree the differences are annoying, but short of proposing changes upstream, there's little you can do about it.

Arker greyfade 3 years ago
"(Why would you even want to use systemd on BSD?)"

I would not want to run systemd on my system, regardless of kernel. But the pressure to do so, again regardless of kernel, is simply to get a program that depends on it to run. To the degree the systemd cabal gets more and more software to depend on this more and more people will be inconvenienced by it.

More generally, why would I not want to run a 'linux only' init system? The same reason I don't want to run 'linux only' anything. I have been using *nix for over 20 years; I have migrated distributions, I have migrated kernels, and having the freedom to do that again whenever necessary is important.

greyfade Arker 3 years ago
You're not the first to make this point, and I'm still frustrated by it, because it makes little sense, and still fails to answer my question.

Swapping out a kernel in an environment is not so trivial a matter. Switching between different versions of Linux is fine, since the ABI hasn't changed all that much over the years, and so there's little worry about a software distribution working across several kernel versions. But swapping out a Linux kernel for something entirely different is another matter. I understand that the BSD emulation of the Linux ABI is incomplete, and so swapping out Linux for a BSD kernel is going to cause a great number of headaches - the init system is the least of your concerns.

But I do not understand why you're so concerned about being able to do such a thing, especially when doing so necessarily means a substantial change in the operating environment, let alone binary loader behavior differences and ABI differences that will inevitably break a number of core features.

It's fine if you want to just use a software distribution where systemd is unsupported, but making such drastic changes to a running system is at best inadvisable, even ignoring the matter of the init system or the difficulties in changing libc.so or other critical infrastructure.

akulkis greyfade 2 years ago
your points have been addressed repeatedly.

If you don't want to listen, that's your problem. Now, sit the hell down and shut the hell up, and LISTEN, Lennart.

akulkis sarlalian 2 years ago
And it doesn't even deliver the faster bootup time which is supposedly the justification for all of this BS.

akulkis greyfade 2 years ago

BS, Grayfade. We have answered your question repeatedly. Stop being a concern troll.
greyfade akulkis 2 years ago
No, you really haven't. I asked why portability and POSIX compliance are so important for systemd. I have never gotten an answer. I asked why anyone would want systemd portable to systems like BSD. All I get is complaints that no one would because it sucks. That doesn't answer my questions. If you're going to say systemd sucks and then give me reasons it sucks, I'm going to expect your reasoning to make sense.

But you don't have reasoning that I can see. All you're doing is complaining that it sucks. Well, that's useless.

And then you keep bugging me several months after I've long forgotten about this post and call *me* a troll?

Please, just drop the subject. You're clearly not interested in conversation or reasonable discussion, so please don't reply to me again unless you have actual, reasoned answers to my questions.

pydsigner greyfade 2 years ago
It matters because a dev doesn't want to have to maintain separate daemon codebases to support both Linux and BSD. That might seem like an edge case, but if you write something to use all of the systemd interfaces because you aren't concerned about having to run it on other UNIX systems, the people who want to use your project on BSD will be left to reimplement daemonization themselves.
greyfade pydsigner 2 years ago
They don't have to maintain a separate daemon codebase. They can continue to use their old rc script and the systemd compatibility layer can run that script just fine. Or, a simple unit file can run the daemon directly.

It's only if the daemon needs to directly manage systemd services or take advantage of systemd-specific features that maintaining additional code becomes a concern.

Now, if a daemon uses systemd libraries, I have to question the motivation for porting that daemon to another OS. Clearly, if the daemon performs some task relevant to systemd, what reason is there for it to perform the same task on a system with no systemd?

Conversely, if a daemon does not rely on systemd, but just links the libraries, I have to question the motivation for linking to those libraries. If they're not needed, why link?

I still don't see the argument.

Sebastian Freundt SomeoneWithAClue 2 years ago

Interestingly, that's not true. See https://bugzilla.novell.com...

akulkis SomeoneWithAClue 2 years ago

And what if it crashes in the process?
What if there's a bug in the serialization?
What if there's a bug in the de-serialization in the upgraded version of systemd that just got exec()ed by the old version of systemd?

Sebastian Freundt akulkis 2 years ago

I can tell you what happens when (not if) it crashes: https://bugzilla.novell.com...

akulkis Sebastian Freundt 2 years ago

Yes, I know. It's that Someone With[OUT] a Clue doesn't seem to understand how systemd's pid 1 is overly complex, and by that, I mean, anything more complex than a process which spawns off the rest of the init system, and then sits around reaping orphans.

OrenWithAnE greyfade 3 years ago
It's fragile. If the cgroup implementation changes (i.e., Linus feels like it), then we all feel the pain. Option #1 above relies only on the guaranteed behavior of compliant POSIX systems. It's anti-fragile.

greyfade OrenWithAnE 3 years ago

That's a completely ridiculous argument. One need only look at the last few months of LKML discussion to see that Linus is absolutely dogmatic about never, under any circumstances, breaking userland. He has gone on dozens of long rants at people for changing something that broke a userland program, shouting obscenities for days about it.

If you think the cgroups ABI changing is at all a problem, then you clearly haven't paid one whit of attention to kernel development. Kernel ABIs that are in use never change without massive cooperative efforts.

sarlalian greyfade 3 years ago

Given that they were considering hiding some kernel arguments from systemd because it was screwing up their ability to debug the kernel ( http://www.phoronix.com/sca... ), I think you may be overestimating Linus's feelings about breaking systemd.

OrenWithAnE greyfade 3 years ago

Is there a normative document spelling out the specifications and contract for cgroups, equivalent to the POSIX spec for socket writes?

greyfade OrenWithAnE 3 years ago

Yes. In the kernel docs, where it's supposed to be. And if the ABI for it changes at any time and breaks anything, Linus will bite everyone's heads off and revert the change after dressing everyone down. And if the ABI actually needs to be changed, there will be extended discussion, Linus will complain that everyone is violating his Rule #1, then all of the users of the API will agree on a migration plan to the new ABI, and then it'll get changed. Remember Linux kernel development rule #1: DON'T BREAK USERSPACE.

Again, it's an absurd argument. Linux is not a Unix, and there is absolutely no reason for the init system to be portable beyond Linux. You haven't answered my question. You've only presented an ill-posed and illogical half-argument that seems to be predicated on the fact that it's not POSIX, which, as I've already pointed out, isn't relevant. Systemd is a Linux init system, not a POSIX init system. The question remains: What about init systems makes portability desirable?

[Aug 02, 2017] CentOS - RHEL 7 How to disable NetworkManager

Aug 02, 2017 | thegeekdiary.com
CentOS / RHEL 7 : How to disable NetworkManager

By Sandeep

Disabling NetworkManager

The following steps disable the NetworkManager service and allow interfaces to be managed only by the network service.

1. Check which interfaces are managed by NetworkManager:

# nmcli device status

This displays a table that lists all network interfaces along with their STATE. If Network Manager is not controlling an interface, its STATE will be listed as unmanaged . Any other value indicates the interface is under Network Manager control.

2. Stop the NetworkManager service:

# systemctl stop NetworkManager

3. Disable the service permanently:

# systemctl disable NetworkManager

4. Confirm that the NetworkManager service has been disabled:

# systemctl list-unit-files | grep NetworkManager

5. Add the parameter below to /etc/sysconfig/network-scripts/ifcfg-ethX for each interface currently managed by NetworkManager to make it unmanaged:

NM_CONTROLLED="no"
Note: Be sure to change NM_CONTROLLED="yes" to "no", or the network service may complain about "Connection activation failed" when it cannot find an interface to start.

Switching to the "network" service

When NetworkManager is disabled, the interface can be configured for use with the network service. Follow the steps below to configure an interface using the network service.

1. Set the IP address in the configuration file: /etc/sysconfig/network-scripts/ifcfg-eth0. Set the NM_CONTROLLED value to no and assign a static IP address in the file.

NAME="eth0"
HWADDR=...
ONBOOT=yes
BOOTPROTO=none
IPADDR=...
NETMASK=...
GATEWAY=...
TYPE=Ethernet
NM_CONTROLLED=no

2. Set the DNS servers to be used by adding into the file: /etc/resolv.conf :

nameserver [server 1]
nameserver [server 2]

3. Enable the network service

# systemctl enable network

4. Restart the network service

# systemctl restart network

Filed Under: CentOS/RHEL 7

Some more articles you might also be interested in
  1. CentOS / RHEL 7 : Never run the iptables service and FirewallD service at the same time!
  2. RHEL 7 – RHCSA Notes – Create and manage Access Control Lists (ACLs)
  3. CentOS / RHEL 7 : How to boot into Rescue Mode or Emergency Mode
  4. CentOS / RHEL 7 : Change default kernel (boot with old kernel)
  5. CentOS / RHEL 7 : How to start / Stop Firewalld
  6. CentOS / RHEL 7 : Unable to start/enable iptables
  7. CentOS / RHEL : Configure yum automatic updates with yum-cron service
  8. CentOS / RHEL 7 : How to Change Timezone
  9. CentOS / RHEL 7 : How to add a kernel parameter only to a specific kernel
  10. CentOS / RHEL 7 : Booting process

[Aug 02, 2017] EWONTFIX - Broken by design systemd

Notable quotes:
"... difficult not to use ..."
Aug 02, 2017 | ewontfix.com
Broken by design: systemd (09 Feb 2014 19:56:09 GMT)

Recently the topic of systemd has come up quite a bit in various communities in which I'm involved, including the musl IRC channel and on the Busybox mailing list.

While the attitude towards systemd in these communities is largely negative, much of what I've seen has been either dismissible by folks in different circles as mere conservatism, or tempered by an idea that despite its flaws, "the design is sound". This latter view comes with the notion that systemd's flaws are fixable without scrapping it or otherwise incurring major costs, and therefore not a major obstacle to adopting systemd.

My view is that this idea is wrong: systemd is broken by design , and despite offering highly enticing improvements over legacy init systems, it also brings major regressions in terms of many of the areas Linux is expected to excel: security, stability, and not having to reboot to upgrade your system.

The first big problem: PID 1

On unix systems, PID 1 is special. Orphaned processes (including a special case: daemons which orphan themselves) get reparented to PID 1. There are also some special signal semantics with respect to PID 1, and perhaps most importantly, if PID 1 crashes or exits, the whole system goes down (kernel panic).

Among the reasons systemd wants/needs to run as PID 1 is getting parenthood of badly-behaved daemons that orphan themselves, preventing their immediate parent from knowing their PID to signal or wait on them.

Unfortunately, it also gets the other properties, including bringing down the whole system when it crashes. This matters because systemd is complex. A lot more complex than traditional init systems. When I say complex, I don't mean in a lines-of-code sense. I mean in terms of the possible inputs and code paths that may be activated at runtime. While legacy init systems basically deal with no inputs except SIGCHLD from orphaned processes exiting and manual runlevel changes performed by the administrator, systemd deals with all sorts of inputs, including device insertion and removal, changes to mount points and watched points in the filesystem, and even a public DBus-based API. These in turn entail resource allocation, file parsing, message parsing, string handling, and so on. This brings us to:

The second big problem: Attack Surface

On a hardened system without systemd, you have at most one root-privileged process with any exposed surface: sshd. Everything else is either running as unprivileged users or does not have any channel for providing it input except local input from root. Using systemd then more than doubles the attack surface.

This increased and unreasonable risk is not inherent to systemd's goal of fixing legacy init. However it is inherent to the systemd design philosophy of putting everything into the init process.

The third big problem: Reboot to Upgrade

Windows Update rebooting

Fundamentally, upgrading should never require rebooting unless the component being upgraded is the kernel. Even then, for security updates, it's ideal to have a "hot-patch" that can be applied as a loadable kernel module to mitigate the security issue until rebooting with the new kernel is appropriate.

Unfortunately, by moving large amounts of functionality that's likely to need to be upgraded into PID 1, systemd makes it impossible to upgrade without rebooting. This leads to "Linux" becoming the laughing stock of Windows fans, as happened with Ubuntu a long time ago.

Possible counter-arguments

With regards to security , one could ask why can't desktop systems use systemd, and leave server systems to find something else. But I think this line of reasoning is flawed in at least three ways:

  1. Many of the selling-point features of systemd are server-oriented. State-of-the-art transaction-style handling of daemon starting and stopping is not a feature that's useful on desktop systems. The intended audience for that sort of thing is clearly servers.
  2. The desktop is quickly becoming irrelevant. The future platform is going to be mobile and is going to be dealing with the reality of running untrusted applications. While the desktop made the unix distinction of local user accounts largely irrelevant, the coming of mobile app ecosystems full of potentially-malicious apps makes "local security" more important than ever.
  3. The crowd pushing systemd, possibly including its author, is not content to have systemd be one choice among many. By providing public APIs intended to be used by other applications, systemd has set itself up to be difficult not to use once it achieves a certain adoption threshold.

With regards to upgrades , systemd's systemctl has a daemon-reexec command to make systemd serialize its state, re-exec itself, and continue uninterrupted. This could perhaps be used to switch to a new version without rebooting. Various programs already use this technique, such as the IRC client irssi which lets you /upgrade without dropping any connections. Unfortunately, this brings us back to the issue of PID 1 being special. For normal applications, if re-execing fails, the worst that happens is the process dies and gets restarted (either manually or by some monitoring process) if necessary. However for PID 1, if re-execing itself fails, the whole system goes down (kernel panic).

For common reasons it might fail, the execve syscall returns failure in the original process image, allowing the program to handle the error. However, failure of execve is not entirely atomic: the new process image can still die very early, for example during dynamic linker processing, after the point where the old image is gone and there is nothing left to return to.

In addition, systemd might fail to restore its serialized state due to resource allocation failures, or if the old and new versions have diverged sufficiently that the old state is not usable by the new version.

So if not systemd, what? Debian's discussion of whether to adopt systemd or not basically devolved into a false dichotomy between systemd and upstart. And except among grumpy old luddites, keeping legacy sysvinit is not an attractive option. So despite all its flaws, is systemd still the best option?

No.

None of the things systemd "does right" are at all revolutionary. They've been done many times before. DJB's daemontools , runit , and Supervisor , among others, have solved the "legacy init is broken" problem over and over again (though each with some of their own flaws). Their failure to displace legacy sysvinit in major distributions had nothing to do with whether they solved the problem, and everything to do with marketing. Said differently, there's nothing great and revolutionary about systemd. Its popularity is purely the result of an aggressive, dictatorial marketing strategy.

So how should init be done right?

The Unix way: with simple self-contained programs that do one thing and do it well.

First, get everything out of PID 1:

The systemd way: Take advantage of special properties of pid 1 to the maximum extent possible. This leads to ever-expanding scope creep and exacerbates all of the problems described above (and probably many more yet to be discovered).

The right way: Do away with everything special about pid 1 by making pid 1 do nothing but start the real init script and then just reap zombies:

#define _XOPEN_SOURCE 700
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    sigset_t set;
    int status;

    if (getpid() != 1) return 1;

    sigfillset(&set);
    sigprocmask(SIG_BLOCK, &set, 0);

    /* Parent: remain PID 1 forever, doing nothing but reaping orphans. */
    if (fork()) for (;;) wait(&status);

    /* Child: unblock signals and hand off to the real init script. */
    sigprocmask(SIG_UNBLOCK, &set, 0);

    setsid();
    setpgid(0, 0);
    return execve("/etc/rc", (char *[]){ "rc", 0 }, (char *[]){ 0 });
}

Yes, that's really all that belongs in PID 1. Then there's no way it can fail at runtime, and no need to upgrade it once it's successfully running.

Next, from the init script, run a process supervision system outside of PID 1 to manage daemons as immediate child processes (no backgrounding). As mentioned above, there are several existing choices here. It's not clear to me that any of them are sufficiently polished or robust to satisfy major distributions at this time. But neither is systemd; its backers are just better at sweeping that under the rug.

What the existing choices do have, though, is better design, mainly in the way of having clean, well-defined scope rather than Katamari Damacy.

If none of them are ready for prime time, then the folks eager to replace legacy init in their favorite distributions need to step up and either polish one of the existing solutions or write a better implementation based on the same principles. Either of these options would be a lot less work than fixing what's wrong with systemd.

Whatever system is chosen, the most important criterion is that it be transparent to applications. For 30+ years, the choice of init system used has been completely irrelevant to everybody but system integrators and administrators. User applications have had no reason to know or care whether you use sysvinit with runlevels, upstart, my minimal init with a hard-coded rc script or a more elaborate process-supervision system, or even /bin/sh. Ironically, this sort of modularity and interchangeability is what made systemd possible; if we were starting from the kind of monolithic, API-lock-in-oriented product systemd aims to be, swapping out the init system for something new and innovative would not even be an option.

Update: license on code

Added December 21, 2014.

There has been some interest in having a proper free software license on the trivial init code included above. I originally considered it too trivial to even care about copyright or need a license on it, but I don't want this to keep anyone from using or reusing it, so I'm explicitly licensing it under the following terms (standard MIT license):

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

[Jul 15, 2017] Red Hat 6.9 is now available

After Phase 2, RHEL 6 will only receive security updates until 2020, under Phase 3, which commences on May 10, 2017. Red Hat's current focus of development is the RHEL 7 platform, which was updated to RHEL 7.3 last year.
Mar 21, 2017

Release Notes

Red Hat Enterprise Linux 6 is now in Production Phase 2, and Red Hat Enterprise Linux 6.9 therefore provides a stable release focused on bug fixes. Red Hat Enterprise Linux 6 enters Production Phase 3 on May 10, 2017. Subsequent updates will be limited to qualified critical security fixes and business-impacting urgent issues. Please refer to the Red Hat Enterprise Linux Life Cycle for more information.

Migration to RHEL 7 is now supported

In-place Upgrade

As Red Hat Enterprise Linux subscriptions are not tied to a particular release, existing customers can update their Red Hat Enterprise Linux 6 infrastructure to Red Hat Enterprise Linux 7 at any time, free of charge, to take advantage of recent upstream innovations. To simplify the upgrade to Red Hat Enterprise Linux 7, Red Hat provides the Preupgrade Assistant and Red Hat Upgrade Tool. For more information, see Chapter 2, General Updates.

Red Hat Insights
Since Red Hat Enterprise Linux 6.7, the Red Hat Insights service has been available. Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators.

The service is hosted and delivered through the customer portal at https://access.redhat.com/insights/ or through Red Hat Satellite. To register your systems, follow the Getting Started Guide for Insights. For further information, data security and limits, refer to https://access.redhat.com/insights/splash/.

Red Hat Customer Portal Labs

Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/. The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications.
NetworkManager now supports manual DNS configuration with dns=none

With this update, the user has the option to prevent NetworkManager from modifying the /etc/resolv.conf file. This is useful for manual management of DNS settings. To protect the file from being modified, add the dns=none option to the /etc/NetworkManager/NetworkManager.conf file. (BZ#1308730)
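The setting goes in the [main] section of NetworkManager.conf; a minimal sketch of the change described above:

```ini
# /etc/NetworkManager/NetworkManager.conf
[main]
dns=none
```

After restarting NetworkManager, /etc/resolv.conf is left for the administrator to manage by hand.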

Red Hat Software Collections

Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is included as a separate Software Collection.

Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Since Red Hat Software Collections 2.3, the Eclipse development platform is provided as a separate Software Collection.

Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time.

See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections.

See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.

[Jun 01, 2017] CVE-2017-1000367 Bug in sudos get_process_ttyname. Most linux distributions are affected

Jun 01, 2017 | www.cyberciti.biz

There is a serious vulnerability in the sudo command that can grant root access to local users in certain sudo configurations. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
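The root cause is a classic /proc parsing hazard: in /proc/[pid]/stat the second field is the command name in parentheses, and since a process can name itself almost arbitrarily (including spaces and ")" characters), naive whitespace splitting can shift every later field, including the tty device number sudo was after. A defensive sketch of the parse (an illustration of the hazard, not sudo's actual patch):

```shell
# /proc/[pid]/stat looks like: "pid (comm) state ppid pgrp session tty_nr ..."
# comm may contain spaces and ')' characters, so split AFTER the last ')'.
IFS= read -r stat_line < /proc/self/stat
rest=${stat_line##*) }                       # drop everything through the last ") "
tty_nr=$(echo "$rest" | awk '{print $5}')    # fields now: state ppid pgrp session tty_nr
echo "controlling tty device number: $tty_nr"
```

Splitting on the first "(" or on plain whitespace, as the vulnerable code effectively did, lets an attacker-controlled comm string spoof the tty_nr field.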

... ... ...

A list of affected Linux distros:
  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

[May 19, 2017] Google Found Over 1,000 Bugs In 47 Open Source Projects

May 14, 2017 | it.slashdot.org
(helpnetsecurity.com)

Posted by EditorDavid on Saturday May 13, 2017 @11:34AM

Orome1 writes: In the last five months, Google's OSS-Fuzz program has unearthed over 1,000 bugs in 47 open source software projects ...

So far, OSS-Fuzz has found a total of 264 potential security vulnerabilities: 7 in Wireshark, 33 in LibreOffice, 8 in SQLite 3, 17 in FFmpeg -- and the list goes on...

Google launched the program in December and wants more open source projects to participate, so they're offering cash rewards for including "fuzz" targets for testing in their software.

"Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration" -- or twice that amount, if the proceeds are donated to a charity.

[Mar 21, 2017] systemd-redux - blog dot lusis

Mar 21, 2017 | blog.lusis.org

I encourage you STRONGLY to read the systemd-devel mailing list for the kinds of issues you'll possibly have to deal with.

Systemd-redux

Nov 20th, 2014

I figured it was about time for a followup on my systemd post. I've been meaning to do it for a while but time hasn't allowed.

The end of Linux

Some people wrongly characterized this as some sort of hyperbole. It was not. Systemd IS changing what we know as Linux today. It remains to be seen if this is a good or bad thing but Linux is becoming something different than it was.

Linux is in for a rough few years

I do honestly believe this will end up being the start of a rocky period for Linux.

Additionally, while not systemd-specific but legitimately inter-related, kdbus is coming and it's already got its fair share of issues in the first implementation, including breaking userspace.

We also have distros like SLES adopting btrfs as the default filesystem.

All of these things combined mean that Linux is pushing the bleeding edge of a lot of unbaked technologies. Time will tell if this turns people off or not. I expect that enterprise shops will probably freeze systems at RHEL6 for a good while to come (and not just the standard "we're enterprise and we don't like to upgrade" time period).

Systemd isn't going away

Systemd is here to stay. The only way you will have a system without it is to roll your own. I don't expect many distros to choose to back out. My best hope is that they'll all freeze at the current version. Maybe a few things will get backported here and there for security fixes.

SystemD components are NOT optional

I know everyone likes to tout this but, no, the various systemd components, while not pid 1, are realistically not optional. Kdbus, a single parent hierarchy for namespaces (systemd is taking this one, of course), udev changes: the kernel and distros are changing and coalescing around whatever systemd ships. Most distros will probably use systemd-networkd, for instance. Look at what happened with Debian just today: the (albeit way late to the game) recommendation to support alternate init systems was rejected. I encourage you STRONGLY to read the systemd-devel mailing list for the kinds of issues you'll possibly have to deal with.

Options

To be clear if you're going to stick with Linux, you will have to deal with systemd. It's up to you to decide if that's something you're comfortable with. Systemd is bringing some good things but, like other discussions I've been involved with, you're going to be stuck with all the other stuff that comes along with it whether you like it or not.

It's worth noting that FreeBSD just got a nice donation from the WhatsApp folks. It also ships with ZFS as part of the kernel and has jails, which are a much more baked technology and implementation than LXC. While you can't use Docker with jails now, my understanding is that there is work being done to support non-LXC operating-system-level virtualization (such as jails and Solaris zones).

Speaking of zones and Solaris, if that's an option for you it's probably the best of breed stack right now. Rich mature OS-level virtualization. SmartOS brings along KVM support for when you HAVE to run Linux but backed by Solaris tech under the hood. There's also OmniOS as a variant as well.

If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it's finally baked and production ready) can bring you an LXC based ecosystem. If they were to ever add actual virt support (i.e. KVM), then you could mix and match as needed. If you're working for a startup or a more flexible organization, you can go down this path. If you're working for a more traditional enterprise, your options are pretty limited. At least you'll have the RedHat support contract.

Posted by John E. Vincent Nov 20th, 2014

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey continues with the Linux and Unix shell, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
 
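The 1777 mode matters: the leading 1 is the sticky bit, which lets everyone create files in /tmp while only a file's owner (or root) may delete or rename entries there. The effect can be verified safely on a scratch directory before touching the real /tmp:

```shell
# Recreate the /tmp permission setup on a scratch directory and inspect it.
d=$(mktemp -d)
chmod 1777 "$d"           # rwx for owner, group, world + sticky bit
stat -c '%a %A' "$d"      # prints: 1777 drwxrwxrwt
```

The trailing "t" in the symbolic listing is the sticky bit; without it, any user could delete any other user's temporary files.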

[Jan 26, 2017] Penguins force-fed root Cruel security flaw found in systemd v228

theregister.co.uk
Some Linux distros will need to be updated following the discovery of an easily exploitable flaw in a core system management component.

The CVE-2016-10156 security hole in systemd v228 opens the door to privilege escalation attacks, creating a means for hackers to root systems locally if not across the internet. The vulnerability is fixed in systemd v229.

Essentially, it is possible to create world-readable, world-writeable setuid executable files that are root owned by setting all the mode bits in a call to touch(). The systemd changelog for the fix reads:

basic: fix touch() creating files with 07777 mode

mode_t is unsigned, so MODE_INVALID < 0 can never be true.

This fixes a possible [denial of service] where any user could fill /run by writing to a world-writable /run/systemd/show-status.

However, as pointed out by security researcher Sebastian Krahmer, the flaw is worse than a denial-of-service vulnerability – it can be exploited by a malicious program or logged-in user to gain administrator access: "Mode 07777 also contains the suid bit, so files created by touch() are world writable suids, root owned."

The security bug was quietly fixed in January 2016 back when it was thought to pose only a system-crashing risk. Now the programming blunder has been upgraded this week following a reevaluation of its severity. The bug now weighs in at a CVSS score of 7.2, towards the top end of the 1-10 scale.

It's a local root exploit, so it requires access to the system in question to exploit, but it pretty much boils down to "create a powerful file in a certain way, and gain root on the server." It's trivial to pull off.
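To see what such a file looks like, the 07777 mode can be reproduced harmlessly on a scratch file you own (the bits alone are inert here, since the file is not root-owned, but the listing shows why the combination is dangerous):

```shell
# Reproduce the 07777 mode the buggy touch() assigned: setuid + setgid +
# sticky plus rwx for owner, group, and world, on a throwaway file.
f=$(mktemp)
chmod 7777 "$f"
stat -c '%a %A' "$f"      # prints: 7777 -rwsrwsrwt
```

A root-owned file with that mode is world-writable yet runs setuid root, so any local user can drop in code of their choice and execute it with root privileges.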

"Newer" versions of systemd deployed by Fedora or Ubuntu have been secured, but Debian systems are still running an older version and therefore need updating.

systemd is a suite of building blocks for Linux systems that provides system and service management technology. Security specialists view it with suspicion, and complaints about function creep are not uncommon.

[Dec 26, 2016] Devuans Systemd-Free Linux Hits Beta 2

Notable quotes:
"... Devuan came about after some users felt [Debian] had become too desktop-friendly . The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum. ..."
"... Devuan.org now features an "init freedom" logo with the tagline, "watching your first step. ..."
Dec 26, 2016 | linux.slashdot.org
(theregister.co.uk)

Posted by EditorDavid on Saturday December 03, 2016 @11:38PM from the forking-the-road dept.

Long-time Slashdot reader Billly Gates writes,

"For all the systemd haters who want a modern distro feel free to rejoice. The Debian fork called Devuan is almost done, completing a daunting task of stripping systemd dependencies from Debian."

From The Register:

Devuan came about after some users felt [Debian] had become too desktop-friendly . The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum.

Supporters of init freedom also dispute assertions that systemd is in all ways superior to sysvinit init, arguing that Debian ignored viable alternatives like sinit , openrc , runit , s6 and shepherd . All are therefore included in Devuan.

Devuan.org now features an "init freedom" logo with the tagline, "watching your first step."

Their home page now links to the download site for Devuan Jessie 1.0 Beta2 , promising an OS that "avoids entanglement".

[Dec 26, 2016] The Linux Foundation Offers 50% Discounts On Training

Dec 26, 2016 | linux.slashdot.org
(linuxfoundation.org) 39 Posted by EditorDavid on Sunday December 18, 2016 @05:44PM from the tell-em-Linus-sent-you dept. An anonymous reader writes: The non-profit association that sponsors Linus Torvalds' work on Linux also offers self-paced online training and certification programs. And now through December 22, they're available at a 50% discount . "Make learning Linux and other open source technologies your New Year's Resolution this holiday season," reads a special page at LinuxFoundation.org. There's training in Linux security, networking, and system administration, as well as software-defined networking and OpenStack administration. (Plus a course called "Fundamentals Of Professional Open Source Management," and two certification programs that can make you a Linux Foundation-certified engineer or system administrator.)
And if you order right now, they'll also give you a free mug with a penguin on it.

[Nov 06, 2016] ascii files can be recovered b

View /tmp/somefile and see what you want to copy over from /dev/ to the original location.

If you are on an ext2 mount, maybe you can try the recover command.

My two-penny advice for the future: please always read the man page for a command and its arguments before you actually run it.

[Jun 06, 2016] 20 Linux Accounts to Follow on Twitter by Marin Todorov

www.tecmint.com

Published: November 30, 2015

System administrators often need to find new information in their field of work. Reading the latest blog posts from hundreds of different sources is a task that not everyone has the time for. If you are such a busy user, or just like to find new information about Linux, you can use a social media website like Twitter.
20 Linux Twitter Accounts to Follow

Twitter is a website where you can follow users that share information you are interested in. You can use the power of this website to get news, new ideas for solving problems, commands, links to interesting articles, new release updates, and much more. The possibilities are many, but Twitter is only as good as the people you follow on it.

If you don't follow anyone, then your Twitter wall will remain empty. But if you follow the right people, you will be presented with tons of interesting information shared by people you followed.

The fact that you came across TecMint definitely means you are a Linux user thirsty to learn new stuff. We have decided to make your Twitter wall a bit more interesting, by gathering 20 Linux accounts to follow on Twitter.
1. Linus Torvalds – @Linus__Torvalds

Of course, the number one spot is saved for the person who created Linux – Linus Torvalds. His account is not updated that frequently, but it is still good to have. The account was created in November 2012 and has over 22K followers.
2. FSF – @fsf

The Free Software Foundation has been fighting for essential rights for free software users since 1985. The FSF joined Twitter in May 2008 and has over 10.6K followers. You can find information here about new releases of free software as well as other news relevant to the free software movement.
3. The Linux Foundation – @linuxfoundation

Next on our list is the Linux Foundation. On that page you will find interesting news, the latest updates around Linux, and some useful tutorials. The account joined Twitter in May 2008 and has been active ever since. It has over 198K followers.
4. Linux Today – @linuxtoday

LinuxToday is an account that shares news and tutorials gathered from different sources around the internet. This account joined Twitter in June 2009 and has over 67K followers.
5. Distro Watch – @DistroWatch

DistroWatch will keep you updated about the latest Linux distributions available. If you are an OS maniac like us, this account is a must-follow. The account joined Twitter in February 2009 and has over 23K followers.
6. Linux – @Linux

The Linux page keeps up with the latest Linux OS releases. Follow this page if you want to know when a new Linux release is available. The account was created in September 2007 and has over 188K followers.
7. LinuxDotCom – @LinuxDotCom

LinuxDotCom is a page that covers information about Linux and everything around it, from Linux operating systems to the devices in our lives that run Linux. The account joined Twitter in January 2009 and has nearly 80K followers.
8. Linux For You – @LinuxForYou

LinuxForYou is Asia's first English magazine for free and open source software. It joined Twitter in February 2009 and has nearly 21K followers.
9. Linux Journal – @linuxjournal

Another good Twitter account for keeping up with the latest Linux news is LinuxJournal's. Their articles are always informative, and if you like to get notified about Linux news, I recommend signing up for their newsletter. The account joined in October 2007 and has over 35K followers.

10. Linux Pro – @linux_pro

The Linux_pro page belongs to the famous LinuxPro magazine. Besides Linux news, you will learn about the latest products, tools and strategies for administrators, programming in the Linux environment, and more. The account joined Twitter in September 2008 and has over 35K followers.

11. Tux Radar – @tuxradar

This is another popular account that provides interesting, yet different, Linux news. TuxRadar draws from different sources, so you will definitely want to have them in your wall stream. The account joined Twitter in February 2009 and has 11K followers.

12. CommandLineFu – @commandlinefu

If you like the Linux command line and want to find more tricks and tips, then commandlinefu is the perfect user to follow. The account posts frequent updates with different useful commands. It joined Twitter in January 2009 and has nearly 18K followers.
13. Command Line Magic – @climagic

CommandLineMagic shows command lines for advanced Linux users as well as some funny nerdy jokes. It's another fun account to follow and learn from. It joined Twitter in November 2009 and has 108K followers.

14. SadServer – @sadserver

SadServer is one of those accounts that just makes you laugh and want to check back over and over again. Fun facts and stories are shared often, so you won't be disappointed. The account joined Twitter in February 2010 and has over 54K followers.
15. Nixcraft – @nixcraft

If you enjoy Linux and DevOps work, then NixCraft is the one you should follow. The account is very popular among Linux users and has over 48K followers. It joined Twitter in November 2008.

16. Unixmen – @unixmen

Unixmen has a blog full of useful tutorials about Linux administration. It's another popular account among Linux users. The account has nearly 10K followers and joined Twitter in April 2009.

17. HowToForge – @howtoforgecom

HowToForge provides user friendly tutorials and howtos about almost every topic related to Linux. They have over 8K followers on Twitter.
18. Webupd8 – @WebUpd8

Webupd8 describes itself as an Ubuntu blog, but it covers much more than that. On their website or Twitter account you can find information about newly released Linux operating systems, open source software, howtos, and customization tips. The account has nearly 30K followers and joined Twitter in March 2009.
19. The Geek Stuff – @thegeekstuff

TheGeekStuff is another useful account where you can find Linux tutorials on different topics, covering both software and hardware. The account has over 3.5K followers and joined Twitter in December 2008.

20. Tecmint – @tecmint

Last, but definitely not least, let's not forget TecMint, the very website you're reading right now. We like to share all types of stuff about Linux: from tutorials to funny things in the terminal and jokes about Linux. Follow @tecmint to make sure you never miss another article from us.

[May 31, 2016] RHEL 6.8 is out

Notable quotes:
"... For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB. ..."
"... enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. ..."
redhat.com

Red Hat Enterprise Linux 6.8 adds improved system archiving, new visibility into storage performance and an updated open standard for secure virtual private networks

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. With nearly six years of field-proven success, Red Hat Enterprise Linux 6 has set the stage for the innovations of today, as Red Hat Enterprise Linux continues to power not only existing workloads, but also the technologies of the future, from cloud-native applications to Linux containers.

With enhancements to security features and management, Red Hat Enterprise Linux 6.8 remains a solid, proven base for modern enterprise IT operations.

Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat

Red Hat Enterprise Linux 6.8 includes a number of new and updated features to help organizations bolster platform security and enhance systems management/monitoring capabilities, including:

Enhanced Security, Authentication, and Interoperability

To enhance security for virtual private networks (VPNs), Red Hat Enterprise Linux 6.8 includes libreswan, an implementation of one of the most widely supported and standardized VPN protocols, which replaces openswan as the Red Hat Enterprise Linux 6 VPN endpoint solution, giving Red Hat Enterprise Linux 6 customers access to recent advances in VPN security.

Customers running the latest version of Red Hat Enterprise Linux 6 can see increased client-side performance and simpler management through the addition of new capabilities to the Identity Management client code (SSSD). Cached authentication lookup on the client reduces the unnecessary exchange of user credentials with Active Directory servers. Support for adcli simplifies the management of Red Hat Enterprise Linux 6 systems interoperating with an Active Directory domain. In addition, SSSD now supports user authentication via smart cards, for both system login and related functions such as sudo.

Enhanced Management and Monitoring
The inclusion of Relax-and-Recover, a system archiving tool, provides a more streamlined system administration experience, enabling systems administrators to create local backups in an ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations. An enhanced yum tool simplifies the addition of packages, adding intelligence to the process of locating required packages to add/enable new platform features.

Red Hat Enterprise Linux 6.8 provides increased visibility into storage usage and performance through dmstats, a program that displays and manages I/O statistics for user-defined regions of devices using the device-mapper driver.

Additional Enhancements and Updates

For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB.

Additionally, the general availability of Red Hat Enterprise Linux 6.8 includes the launch of an updated Red Hat Enterprise Linux 6.8 base image which enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host.

Today's release also marks the transition of Red Hat Enterprise Linux 6 into Production Phase 2, a phase which prioritizes ongoing stability and security features for critical platform deployments. More information on the Red Hat Enterprise Linux lifecycle can be found at https://access.redhat.com/support/policy/updates/errata .

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_compiler_and_tools.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_file_systems.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_networking.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_servers_and_services.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_storage.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_system_and_subscription_management.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/chap-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Red_Hat_Software_Collections.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/part-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Known_Issues.html

[May 31, 2016] Red Hat Enterprise Linux 6.8 Deprecates Btrfs

Notable quotes:
"... Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. ..."
www.phoronix.com
Buried within the release notes for today's Red Hat Enterprise Linux 6.8 release are a few interesting items.

First, RHEL has deprecated support for the Btrfs file-system.

Btrfs file system
Development of B-tree file system (Btrfs) has been discontinued, and Btrfs is considered deprecated. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures.

Huh? Since when was Btrfs development discontinued? At least upstream it's still ongoing, and Facebook (as well as other companies) continues pouring resources into stabilizing and advancing the capabilities of Btrfs, which is widely sought as a Linux alternative to ZFS. There are no signs of things stalling on the Btrfs mailing list. Especially as Red Hat hasn't been packaging ZFS for RHEL officially (but you can grab packages via ZFSOnLinux.org) as an alternative, this move doesn't make a lot of sense. While Btrfs development has dragged on for a while and, outside of openSUSE/SUSE, it hasn't been deployed by default by other tier-one Linux distributions, it's a bit odd that Red Hat seems to be throwing in the towel on Btrfs.

Red Hat's definition of "deprecated" in their RHEL context means (as shown on the same page), "Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments."

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Apr 25, 2016] Red Hat Enterprise Linux 7.2 Beta Now Available

Red Hat

With a nod to the importance of continuously maintaining stable and secure enterprise environments, the beta release of Red Hat Enterprise Linux 7.2 includes several new and enhanced security capabilities. The introduction of a new SCAP module in the installer (anaconda) allows enterprise customers to apply SCAP-based security profiles during installation. Another new capability allows for the binding of data to local networks, which lets enterprises encrypt systems at scale with centralized management. In addition, Red Hat Enterprise Linux 7.2 beta introduces support for DNSSEC for DNS zones managed by Identity Management (IdM) as well as federated identities, a mechanism that allows users to access resources using a single set of digital credentials.

Given the complexity and necessary due diligence required to efficiently and effectively manage the modern datacenter at scale, the beta release of Red Hat Enterprise Linux 7.2 includes new and improved tools to facilitate a more streamlined system administration experience. These new features and enhancements include:

As always, leveraging work in the Fedora community, Red Hat continuously monitors upstream developments and systematically incorporates select enterprise-ready features and technologies into Red Hat Enterprise Linux. The beta release of Red Hat Enterprise Linux 7.2 accomplishes this through the rebasing of the GNOME 3 desktop, the inclusion of GNOME Software, and the addition of new tuned profiles (inclusive of a profile for Red Hat Enterprise Linux for Real Time).

For more information on Red Hat Enterprise Linux 7.2, you can read the full release notes or, as an existing Red Hat customer, take Red Hat Enterprise Linux 7.2 beta for a test drive yourself via the Red Hat Customer Portal.

[Apr 25, 2016] What's Coming in Red Hat Enterprise Linux 7.2

DNSSEC for DNS zones managed by Red Hat Identity Management

RHEL 7.2 will also bring live kernel patching to RHEL, which Dumas sees as a critical security measure. Using elements of the KPATCH technology that recently landed in the upstream Linux 4.0 kernel, RHEL users will be able to patch their running kernels dynamically.

...Dumas is particularly excited about the performance gains that RHEL 7.2 introduces. In particular she noted that core networking packet performance is being accelerated by 35 percent in RHEL 7.2.

...With RHEL 7.2, Red Hat is refreshing the desktop with GNOME 3.14, which includes the GNOME software package manager and improvements to multi-monitor deployment capabilities.

[Apr 25, 2016] Red Hat Enterprise Linux 7 What's New

Jun 10, 2014 | InformationWeek

Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with embedded support for Docker containers and support for direct use of Microsoft's Active Directory. The update uses XFS as its new file system.

"[Use of XFS] opens the door to a new class of data warehouse and big data analytics applications," said Mark Coggin, senior director of product marketing, in an interview before the announcement.

The high-capacity, 64-bit XFS file system, now the default file system in Red Hat Enterprise Linux, originated in the Silicon Graphics Irix operating system. It can scale up to 500 TB per file system. In comparison, previous file systems, such as ext4, typically supported 16 TB.

RHEL 7's support for Linux containers amounts to a Docker container format integrated into the operating system so that users can begin building a "layered" application. Applications in the container can be moved around and will be optimized to run on Red Hat Atomic servers, which are hosts that use the specialized Atomic version of Enterprise Linux to manage containers.

[Want to learn more about Red Hat's commitment to Linux containers? Read Red Hat Containers OS-Centric: No Accident.]

RHEL 7 will also work with Active Directory, using cross-realm trust. Since both Linux and Windows are frequently found in the same enterprise data centers, cross-realm trust lets Linux use Active Directory as either a secondary check on a primary identity management system, or simply as a trusted source to identify users, Coggin says.

RHEL 7 also has more built-in instrumentation and tuning for optimized performance based on a selected system profile. "If you're running a compute-bound workload, you can select a profile that's better geared to it," Coggin notes.

[Dec 12, 2015] How to install and configure ZFS on Linux using Debian Jessie 8.1

www.howtoforge.com
ZFS is a combined filesystem and logical volume manager. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the filesystem and volume management concept, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.

ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).

When talking about the ZFS filesystem, we can highlight the following key concepts:

For a full overview and description of all available features see this detailed wikipedia article.

In this tutorial, I will guide you step by step through the installation of the ZFS filesystem on Debian 8.1 (Jessie). I will show you how to create and configure pools using raid0 (stripe), raid1 (mirror) and RAID-Z (raid with parity), and explain how to configure a file system with ZFS.

Based on the information from the website www.zfsonlinux.org, ZFS is only supported on the AMD64 and Intel 64 Bit architecture (amd64). Let's get started with the setup. ... ... ...

The ZFS file system is a revolutionary new file system that fundamentally changes the way file systems are administered on Unix-like operating systems. ZFS provides features and benefits that are not found in any other file system available today. ZFS is robust, scalable, and easy to administer.

[Dec 05, 2015] How to forcefully unmount a Linux disk partition

January 27, 2006 | www.cyberciti.biz

... ... ...

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or holding a file open), but the most important one is to prevent data loss. Try the following command to find out what processes have activities on the device/partition. If your device name is /dev/sdb1, enter the following command as root user:

# lsof | grep '/dev/sda1'
Output:
vi 4453       vivek    3u      BLK        8,1                 8167 /dev/sda1

The above output shows that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop the vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command:

# umount /dev/sda1
How do I list the users on the file-system /nas01/?

Type the following command:

# fuser -u /nas01/
# fuser -u /var/www/
Sample outputs:
/var/www:             3781rc(root)  3782rc(nginx)  3783rc(nginx)  3784rc(nginx)  3785rc(nginx)  3786rc(nginx)  3787rc(nginx)  3788rc(nginx)  3789rc(nginx)  3790rc(nginx)  3791rc(nginx)  3792rc(nginx)  3793rc(nginx)  3794rc(nginx)  3795rc(nginx)  3796rc(nginx)  3797rc(nginx)  3798rc(nginx)  3800rc(nginx)  3801rc(nginx)  3802rc(nginx)  3803rc(nginx)  3804rc(nginx)  3805rc(nginx)  3807rc(nginx)  3808rc(nginx)  3809rc(nginx)  3810rc(nginx)  3811rc(nginx)  3812rc(nginx)  3813rc(nginx)  3815rc(nginx)  3816rc(nginx)  3817rc(nginx)

The following sections show how to forcefully unmount a device or partition using the fuser and umount Linux commands.

Linux fuser command to forcefully unmount a disk partition

Suppose you have /dev/sda1 mounted on the /mnt directory; then you can use the fuser command as follows:

WARNING! These examples may result in data loss if not executed properly (see "Understanding the device is busy error" for more information).

Type the command to unmount /mnt forcefully:

# fuser -km /mnt
Where,

-k : Kill processes accessing the files on the mounted file system.
-m : Treat the name that follows as a mount point; all processes accessing files on that file system are acted upon.

Linux umount command to unmount a disk partition

You can also try the umount command with the -l option on a Linux based system:

# umount -l /mnt
Where,

-l : Lazy unmount. Detach the filesystem from the hierarchy now, and clean up all references to it as soon as it is no longer busy.

If you would like to unmount a NFS mount point then try following command:

# umount -f /mnt
Where,

-f : Force unmount. Useful when an NFS server is unreachable.

Please note that using these commands or options can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.
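What lsof and fuser report can also be seen directly in /proc, which is handy on minimal systems where neither tool is installed. A minimal sketch that detects a "busy" file by scanning open file descriptors, using a temporary file as a stand-in for a busy mount point:

```shell
# Create a file and hold it open in the background, simulating a busy resource
tmp=$(mktemp)
tail -f "$tmp" >/dev/null 2>&1 &
holder=$!
sleep 1

# Scan every process's fd directory; each fd is a symlink to the open file,
# so the target path shows up in the ls -l output
for p in /proc/[0-9]*; do
    if ls -l "$p/fd" 2>/dev/null | grep -q "$tmp"; then
        echo "busy: held by PID $(basename "$p")"
    fi
done

# Release the file and clean up
kill "$holder"
rm -f "$tmp"
```

The loop prints a "busy: held by PID ..." line for the tail process, which is exactly the information fuser -m would report for a mount point.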

See also:

[Nov 08, 2015] Getting Service or Asset Tags on Linux by Nick Geoghegan

Jul 3, 2015

At one point in time, you will need to find out your service or asset tag. Maybe you need to find out when your machine is out of vendor warranty, or want to find out what is actually in the machine. Popping the service tag into the Dell support site will tell you this… But what if you don't have them written down?

The Dell "tools", it should be pointed out, require you to restart the machine with a CD in the drive or to use a COM file. There is no way in hell that I'm digging out a DOS disk to try and run a COM file to get the service tag. The CD, as it turns out, is just a rebadged Ubuntu CD… Success!

So I mounted the Dell ISO, which was rather fiddly, and took a look around. A program called serviceTag was the first thing I noticed. Was this a specific Dell tool? What would happen if I ran it?

Being paranoid, I decided to see what was linked to this binary.

$ ldd serviceTag
     linux-gate.so.1 =>  (0xf773f000)
     libsmbios.so.2 => not found
     libstdc++.so.6 => /usr/lib32/libstdc++.so.6 (0xf7635000)
     libm.so.6 => /lib32/libm.so.6 (0xf760b000)
     libgcc_s.so.1 => /usr/lib32/libgcc_s.so.1 (0xf75ed000)
     libc.so.6 => /lib32/libc.so.6 (0xf7473000)
     /lib/ld-linux.so.2 (0xf7740000)

Hmmmm. Never heard of libsmbios before. A quick Google vision quest led me here.

The SMBIOS Specification addresses how motherboard and system vendors present management information about their products in a standard format by extending the BIOS interface on x86 architecture systems.

Success!

Debian (and RHEL) have these tools in their standard repos! For Debian, it's just a matter of

apt-get install libsmbios-bin

You can then simply run

[root@calculon /home/nick]$ /usr/sbin/getSystemId
Libsmbios version:      2.2.28
Product Name:           Gazelle Professional
Vendor:                 System76, Inc.
BIOS Version:           4.6.5
System ID:              XXXXXXXXXXXXXX
Service Tag:            XXXXXXXXXXXXXX
Express Service Code:   0
Asset Tag:              XXXXXXXXXXXXXX
Property Ownership Tag:
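If installing libsmbios is not an option, the kernel itself exposes the same SMBIOS fields under sysfs. A sketch that reads them directly (serial and asset tags are typically readable by root only; fields a vendor didn't populate print as "unavailable"):

```shell
# DMI/SMBIOS fields exported by the kernel under /sys/class/dmi/id
for f in sys_vendor product_name product_serial chassis_asset_tag; do
    printf '%-18s %s\n' "$f:" "$(cat /sys/class/dmi/id/$f 2>/dev/null || echo unavailable)"
done

# dmidecode (from the dmidecode package) reads the same tables, e.g.:
#   dmidecode -s system-serial-number
```

On a Dell box, product_serial is the service tag and chassis_asset_tag is the asset tag, the same values getSystemId reports.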

[Sep 02, 2015] Is systemd as bad as boycott systemd is trying to make it

October 26, 2014 | LinuxBSDos.com
Win2NIX, on October 26, 2014 at 11:04 pm

I migrated fully to NIX after 10-15 years as a Win admin and got tired of having control "hidden". Worked with ESX and used the console and loved the freedom. The trend I am noticing with the systemd debate is VERY similar to what has happened with M$. Keep It Simple Stupid is something Nix should be doing; having things modular and not depending on something else makes life easier. If one thing breaks, it's not taking everything else with it. Further, if this is all done in binary and not easily read, THIS IS NOT GOOD. I hated M$ making me download other crap to diagnose their BSODs; if you like having your system flipping out and not saving your data, then I guess systemd would be for you, given its direction.

This is also akin to making your browser part of your OS and having it intertwine with it. (Bad Voodoo) I'm using Mint and looking for a possible way to decouple from systemd. I just don't see this as a good thing and it reminds me too much of M$ tactics. Now is the time to deviate from systemd and keep a more modular approach, then watch and see if systemd starts to be an issue; at this point, if it keeps taking over more management, it's only a matter of time. I also wonder if the M$ embracing of open source has anything to do with this; it certainly smells of large corporation thinking, or lack thereof.

I like improving things, but this does not appear to be an improvement, rather a bomb waiting to go off. On these points this is a bad idea: binary, with no easy way to gain insight and correct issues, and adding multiple processes to control with more being added. I was able to patch heartbleed within 15 minutes after finding out about it. In the M$/corp world, good luck; hope it's this month.

Ummm..., on September 4, 2014 at 7:55 pm

I will admit right off, that I am not a linux designer or maintainer. I got started with linux about 20 years ago. People state that the old init system was fragile. Maybe it was, again…not building linux from scratch I wouldn't know. I don't recall ever having any issues though.

Whether right or wrong, from my (very) limited understanding, the systemd process is driven by binary files, which are not really meant to be edited or looked at by hand. So if something catastrophic happens (which granted hasn't happened yet)…how would I fix it or know what to fix? Go to my distro's forum and hope someone can fix it/release a patch soon?

Anyway, if one of the earlier commenters is correct, and there is no specific plan for systemd (which frankly is a scary thought)…how much more of the system will it continue to take over? And at what point does too much become too much?

I'm all for progress, but I think the Keep It Simple Stupid approach, which may not be "exciting" stuff to develop, is still the best approach.

"why did the people responsible for the development of the major Linux distributions accept it as a replacement for old init system?"

I can't speak for the initial decision, but at this point, I would suspect that inertia is keeping it in place. I highly doubt that any of the major linux desktop systems that most current users depend on would even function without systemd…at least not without a lot of major programming changes to make it happen. If someone did take that route, then all of those custom changes would need to be maintained.

(Simplistically thinking) Why can't things be more pluggable/portable? Distro X uses a systemd plugin for their init, and distro Y chooses to build against something else? Granted systemd is most likely now too big for that, but one can dream I suppose.

AC, on September 4, 2014 at 2:16 pm

Yes. Systemd is a trojan.

xx, on September 4, 2014 at 1:17 pm

Systemd is a perfect system for rootkits, and NSA backdoors.
Once it will be complete it will hide necessary processes even from root, it will filter unnecessary events from log, and it will do much much more.

But it seems, that only minority care about that.

Dimitri Minaev, on September 4, 2014 at 11:59 am

IMHO, the downside of systemd as a project is that its parts lack a defined stable interface. This means that you cannot replace one part with a different one, creating your own stack of tools. When you configure your desktop system, you can combine any display manager with any window manager with any panel or file manager. Can you replace networkd with another tool transparently? If yes, can you be sure that your tool will keep working after the next systemd upgrade?

T Davis, on September 4, 2014 at 11:20 am

The reason Debian (and therefore Ubuntu) adopted SystemD is that the appointed Debian tech team is now divided equally between Ubuntu devs (which were Debian devs before Ubuntu came along) and Redhat employees. Look at the voting emails and 3 months of arguments.

The biggest issue is really not one of SystemD infiltration, but more of Redhat taking over every aspect of the Linux development process. Time and again, I have seen Canonical steer in their own direction, not because they want to go rogue, but because the upstreams for the main projects (Gnome, Wayland, Pulse Audio, now SystemD and possibly OpenStack, and even the kernel to some extent) are almost exclusively owned by Redhat, and only wish to make forward progress at their own pace (wayland has had almost twice the development time and resources as mir, for example).

The REAL issue here is: who has the Linux community's best interests at heart? Do some real investigation and write a story on that.

Ericg, on September 3, 2014 at 7:12 pm
Except you, the author, have fallen into the same trap everyone else does… Confusing Systemd (the project) with systemd (the binary). Systemd, the project, is like Apache: it's an umbrella term for a lot of other things. Systemd, logind, networkd, and other utilities.

Systemd, the binary, handles service management in pid 1; that includes socket and explicit activation. Other tasks it passes off to non-pid-1 processes. For example: session management isn't handled through systemd pid 1, it's handled through logind.

Readahead is handled through a service file for systemd, just like other daemons.

syslog functionality isn't handled in pid 1, it's handled in journald, which is a separate process.

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which are done as separate processes.

Network configuration got added in networkd. What is networkd? The most minimal network userland you can have. It's for people who don't want to write by-hand config files, but for whom NetworkManager is way overkill. Is it pid 1? Nope.

Yes, systemd started off as "just an init replacement." It grew into more things. But don't assume that "systemd" (the binary) is the same as "systemd" (the project). Most things that are added to systemd in recent times AREN'T pid 1 like boycottsystemd claims, they're just small utilities that got added under the systemd umbrella project.

Peter, on September 4, 2014 at 4:42 am
Ericg, that's the problem:
systemd has become a whole integrated stack.
init.d, while not easy to use for starters, was at least within the idea of simple units which can be mixed and matched to get the results the user wants – note, user wants – not developer wants

a Linux user, on September 4, 2014 at 5:23 am

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which are done as separate processes.

Missing the point.

People talk as though prior to systemd such tasks were beyond Linux, didn't work, always crashed, were a nightmare to use or manage and that is not the case.

The only difference I see between my Linux machine now and my Linux machine of a few years ago is that it now boots faster. And that's it. And whilst that's nice, it's so meaningless as to be painful to behold the enthusiasm that some display, as though all they did all day long was sit and reboot their machines with a stop watch in one hand.

The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late.
And the other problem is that it takes a special kind of arrogance to sneer at 20+ years of development by some seriously smart people and claim that you, as a mere child, can do better. I do wonder how far systemd would have got had it not had Red Hat's weight behind it. I do realise that improvement sometimes means kicking out old 'tried and trusted' methods. But it's the way it's happening with systemd that rings alarm bells – too many sneering, nasty bullies trashing anyone who disagrees (just like anyone who thinks Corporations should pay proper taxes is sneered at, or anyone who thinks Putin is not as bad as he is made out to be gets sneered at – sneering is the new way of silencing genuine debate, so when I come across it in Linuxland, alarm bells begin to ring).

Linux is about granular power and control, not convenience.

J. Orejarena, on September 4, 2014 at 9:38 am
"The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late."

Just read http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html to find the ulterior motives.

[May 07, 2015] Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal

* The life cycle dates are subject to adjustment.

In Red Hat Enterprise Linux 4, EUS was available for the following minor releases:

In Red Hat Enterprise Linux 5, EUS is available for the following minor releases:

In Red Hat Enterprise Linux 6, EUS is available for all minor releases released during the Production 1 Phase, but not for the minor release marking transition to Production 2 or any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 6 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 6, EUS is available for the following minor releases:

Future Red Hat Enterprise Linux 6 releases for which EUS is available will be added to the above list upon their release.

In Red Hat Enterprise Linux 7, EUS will be available for all minor releases during the Production 1 Phase, but not for 7.0 or the minor release marking the transition to Production 2, or for any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 7 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 7, EUS is available for the following releases:

Future Red Hat Enterprise Linux 7 releases for which EUS is available will be added to the above list upon their release.

Please see this Knowledgebase Article for more details on EUS.

[Jun 27, 2014] What's new in Red Hat Enterprise Linux 7

Red Hat

Download

...Red Hat Enterprise Linux 7 delivers dramatic improvements in reliability, performance, and scalability. A wealth of new features provides the architect, system administrator, and developer with the resources necessary to innovate and manage more efficiently.

LINUX CONTAINERS

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat. Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
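The cgroups and namespaces mentioned above are not container-specific; every process on a modern Linux system already runs inside them, and containers simply give a process private ones. A quick way to see the raw building blocks via /proc:

```shell
# Which control-group hierarchies the current shell belongs to
cat /proc/self/cgroup

# Namespace handles for the current shell: mnt, net, pid, uts, ipc, ...
# A containerized process would show different namespace inode numbers here.
ls -l /proc/self/ns
```

Comparing /proc/PID/ns entries between a process on the host and one inside a container shows different inode numbers for the namespaces that were unshared.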
NUMA AFFINITY
With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
HARDWARE EVENT REPORTING MECHANISM
Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
VIRTUALIZATION GUEST INTEGRATION WITH VMWARE
Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:
• Open VM Tools - bundled open source virtualization utilities.
• 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
• Fast communication mechanisms between VMware ESX and the virtual machine.
PARTITIONING DEFAULTS FOR ROLLBACK
The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the "Snapper" section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
CREATING INSTALLATION MEDIA
Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
SERVER PROFILE TEMPLATES
Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.

Red Hat Red Hat Enterprise Linux 7 – Setting World Records At Launch

June 10, 2014

Today's announcement of general availability of Red Hat Enterprise Linux 7 marks a significant milestone for Red Hat. The culmination of a multi-year effort by Red Hat's engineering team and our partners, the latest major release of our flagship platform redefines the enterprise operating system, and is designed to power the spectrum of enterprise IT: applications running on physical servers, containerized applications, and also cloud services.

Since its introduction more than a decade ago, Red Hat Enterprise Linux has become the world's leading enterprise Linux platform, setting industry standards for performance along the way, with Red Hat Enterprise Linux 7 continuing this trend. On its first day of general availability, Red Hat Enterprise Linux 7 already claims multiple world record-breaking benchmark results running on HP ProLiant servers, including:

SPECjbb2013 Multi-JVM Benchmark
• One processor world record for both max-jOPS (16,252) and critical-jOPS (4,721) metrics
• Two processor world record for both max-jOPS (119,517) and critical-jOPS (36,411) metrics
• Four processor world record for both max-jOPS (202,763) and critical-jOPS (65,950) metrics

The SPECjbb2013 benchmark is an industry-standard measurement of Java-based application performance developed by the Standard Performance Evaluation Corporation (SPEC). Application performance remains an important attribute for many customers, and this set of results demonstrates Red Hat Enterprise Linux's continued ability to deliver world-class performance, alongside support from our ecosystem of partners and OEMs. With these impressive results to its name already, we like to think that this is only the tip of the iceberg for Red Hat Enterprise Linux 7's achievements, especially since the platform is designed to power a broad spectrum of enterprise IT workloads.

SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 10, 2014. See www.spec.org for more information.

For further details on SPECjbb2013 benchmark results achieved on HP ProLiant XL220a Gen8 v2 (1P), HP ProLiant DL580 Gen8 (2P), and HP ProLiant DL580 Gen8 (4P) servers, see http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA5-3283ENW&cc=us&lc=en

[Jun 27, 2014] Red Hat Enterprise Linux 7 in evaluation for Common Criteria certification

June 19, 2014

Security is a crucial component of the technology Red Hat provides for its customers and partners, especially those who operate in sensitive environments, including the military.

[Jun 27, 2014] Oracle Announces OpenStack Support for Oracle Linux and Oracle VM

A technology preview of an OpenStack distribution that allows Oracle Linux and Oracle VM to work with the open source cloud software is now available. Users can install this OpenStack technology preview in their test environments with the latest version of Oracle Linux and the beta release of Oracle VM 3.3.

Read the Press Release
Read More from Oracle Senior Vice President of Linux and Virtualization Wim Coekaerts
Read More from Oracle Product Management Director Ronen Kofman

Oracle Linux Free as in Speech AND Free as in Beer by Monica Kumar

Jan 08, 2014 | Oracle's Linux Blog

One of the biggest benefits of Oracle Linux is that binaries, patches, errata, and source are always free. Even if you don't have a support subscription, you can download and run exactly the same enterprise-grade distribution that is deployed in production by thousands of customers around the world. You can receive binaries and errata reliably and on schedule, and take advantage of the thousands of hours Oracle spends testing Oracle Linux every day. And, of course, Oracle Linux is completely compatible with Red Hat Enterprise Linux, so switching to Oracle Linux is easy.

CentOS is another Linux distribution that offers free binaries with Red Hat compatibility. Traditionally, CentOS has been used for Linux systems which do not require support, in order to reduce or avoid expensive Red Hat Enterprise Linux subscription costs. Recently, Red Hat announced it was "joining forces" with the CentOS project, hiring many of the key CentOS developers, and "building a new CentOS." This is a curious development given that the primary factors that have made CentOS popular are that it is free and Red Hat compatible.

It would be natural for existing CentOS users to wonder what Red Hat actually has in mind for the "new CentOS" when the FAQ accompanying the announcement states that Red Hat does not recommend CentOS for production deployment, is not recommending mixed CentOS and Red Hat Enterprise Linux deployments, will not support JBoss and other products on CentOS, and is not including CentOS in Red Hat's developer offerings designed to create "applications for deployment into production environments."

If Red Hat truly wished to satisfy the key requirements of most CentOS users, they would take a much simpler step: they would make Red Hat Enterprise Linux binaries, patches, and errata available for free download – just like Oracle already does.

Fortunately, no matter what future CentOS faces in Red Hat's hands, Oracle Linux offers all users a single distribution for development, testing, and deployment, for free or with a paid support subscription. Oracle does not require customers to buy a subscription for every server running Oracle Linux (or any server running Oracle Linux). If a customer wants to pay for support for production systems only, that's the customer's choice. The Oracle model is simple, economical, and well suited to environments with rapidly changing needs.

Oracle is focused on providing what we have since day one – a fast, reliable Linux distribution that is completely compatible with Red Hat Enterprise Linux, coupled with enterprise class support, indemnity, and flexible support policies. If you are a CentOS user, or a Red Hat user, why not download and try Oracle Linux today? You have nothing to lose – after all, it's free.

Al Gillen, program vice president, System Software, IDC
"CentOS is one of the major non-commercial distributions in the industry, and a key adjacent project for many Red Hat Enterprise Linux customers. This relationship helps strengthen the CentOS community, and will ensure that CentOS benefits directly from the community-centric development approach that Red Hat both understands and heavily supports. Given the growing opportunities for Linux in the market today in areas such as OpenStack, cloud, and big data, a stronger CentOS technology backed by the CentOS community, including Red Hat, is a positive development that helps the overall industry."

Stephen O'Grady, principal analyst, RedMonk
"Though it will doubtless come as a surprise, this move by Red Hat represents the logical embrace of an adjacent ecosystem. Bringing the CentOS and Red Hat communities closer together should be a win for both parties."

Additional Resources

Connect with Red Hat

Red Hat + CentOS - Red Hat Open Source Community

Red Hat + CentOS

Red Hat and the CentOS Project are building a new CentOS, capable of driving forward development and adoption of next-generation open source projects.


Red Hat will contribute its resources and expertise in building thriving open source communities to help establish more open project governance, broaden opportunities for participation, and provide new ways for CentOS users and contributors to collaborate on next-generation technologies such as cloud, virtualization, and Software-Defined Networking (SDN).


With Red Hat's contributions and investment, the CentOS Project will be better able to serve the needs of open source community members who require different or faster-moving components to be integrated with CentOS, expanding on existing efforts to collaborate with open source projects such as OpenStack, Gluster, OpenShift Origin, and oVirt.


Red Hat has worked with the CentOS Project to establish a merit-based open governance model for the CentOS Project, allowing for greater contribution and participation through increased transparency and access.

CentOS


Today, the CentOS Project produces CentOS, a popular community Linux distribution built from much of the Red Hat Enterprise Linux codebase and other sources. Over the coming year, the CentOS Project will expand its mission to establish CentOS as a leading community platform for emerging open source technologies coming from other projects such as OpenStack.


How is CentOS different from Red Hat Enterprise Linux?


CentOS is a community project that is developed, maintained, and supported by and for its users and contributors. Red Hat Enterprise Linux is a subscription product that is developed, maintained, and supported by Red Hat for its subscribers.


While CentOS is derived from the Red Hat Enterprise Linux codebase, CentOS and Red Hat Enterprise Linux are distinguished by divergent build environments, QA processes, and, in some editions, different kernels and other open source components. For this reason, the CentOS binaries are not the same as the Red Hat Enterprise Linux binaries.


The two also have very different focuses. While CentOS delivers a distribution with strong community support, Red Hat Enterprise Linux provides a stable enterprise platform with a focus on security, reliability, and performance as well as hardware, software, and government certifications for production deployments. Red Hat also delivers training, and an entire support organization ready to fix problems and deliver future flexibility by getting features worked into new versions.


Once in use, the operating systems often diverge further, as users selectively install patches to address bugs and security vulnerabilities to maintain their respective installs. In addition, the CentOS Project maintains code repositories of software that are not part of the Red Hat Enterprise Linux codebase. This includes feature changes selected by the CentOS Project. These are available as extra/additional packages and environments for CentOS users.

[Oct 26, 2013] RHEL handling of DST change

Most server hardware clocks use UTC. UTC stands for Coordinated Universal Time and is closely related to Greenwich Mean Time (GMT). Other time zones are determined by adding to or subtracting from UTC. A server typically displays local time, which is subject to DST correction twice a year.

Wikipedia defines DST as follows:

Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn.

A DST patch is only required in countries that observe DST, such as the USA. Please see this Wikipedia article.

Linux will change to and from DST when the HWCLOCK setting in /etc/sysconfig/clock is set to -u, i.e. when the hardware clock is set to UTC (which is closely related to GMT), regardless of whether Linux was running at the time DST is entered or left.

When the HWCLOCK setting is set to --localtime, Linux will not adjust the time, operating under the assumption that your system may be a dual-boot system and that the other OS takes care of the DST switch. If that is not the case, the DST change needs to be made manually.
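On RHEL 5/6 the relevant settings live in /etc/sysconfig/clock. A typical UTC configuration looks like this (the zone below is just an example):

```
# /etc/sysconfig/clock (RHEL 5/6 style)
ZONE="America/New_York"
UTC=true     # hardware clock keeps UTC; the kernel applies the DST shift
```

With UTC=true (the hardware clock on UTC), DST transitions happen automatically from the timezone rules; with UTC=false (local time), they do not.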

Note:

EST is defined as GMT-5 all year round. US/Eastern, on the other hand, means GMT-5 or GMT-4 depending on whether Daylight Saving Time (DST) is in effect or not.
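The distinction is easy to demonstrate with GNU date. The second TZ value below is the POSIX rule equivalent to US/Eastern, used here so the example works even on a box without the tzdata names installed:

```shell
# Fixed-offset EST: always UTC-5, summer or winter
TZ='EST5' date -d '2015-07-01 12:00 UTC' '+%H:%M %Z'                    # 07:00 EST

# POSIX rule equivalent to US/Eastern: UTC-4 in summer, UTC-5 in winter
TZ='EST5EDT,M3.2.0,M11.1.0' date -d '2015-07-01 12:00 UTC' '+%H:%M %Z'  # 08:00 EDT
TZ='EST5EDT,M3.2.0,M11.1.0' date -d '2015-01-15 12:00 UTC' '+%H:%M %Z'  # 07:00 EST
```

The same UTC instant lands an hour apart in July; in January the two zones agree, which is exactly why picking EST instead of US/Eastern silently breaks local times for half the year.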

The tzdata package contains data files with rules for various timezones around the world. When this package is updated, it includes all previous timezone fixes along with the new changes.

[Feb 28, 2012] Red Hat vs. Oracle Linux Support 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.

[Feb 07, 2012] Virtualization With Xen On CentOS 6.2 (x86_64)

Linux Howtos

This tutorial provides step-by-step instructions on how to install Xen (version 4.1.2) on a CentOS 6.2 (x86_64) system.

Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and, more importantly, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.
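A domU is described by a small text config file consumed by the xm/xl toolstack. A minimal paravirtualized guest might look roughly like this; every name and path below is a placeholder, not taken from the tutorial:

```
# /etc/xen/web01.cfg - minimal PV domU sketch (illustrative values)
name    = "web01"
memory  = 1024
vcpus   = 1
kernel  = "/boot/vmlinuz-xen"
ramdisk = "/boot/initrd-xen.img"
disk    = [ "file:/var/lib/xen/images/web01.img,xvda,w" ]
vif     = [ "bridge=xenbr0" ]
root    = "/dev/xvda ro"
```

The guest would then be started with something like `xl create /etc/xen/web01.cfg` and listed with `xl list` on the dom0.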

[Jan 11, 2012] Red Hat Enterprise Linux 6.2 Announcement

They continue to push KVM, which is seldom used in enterprise environments. The most important addition is Linux containers.
Dec 06, 2011 [rhelv6-announce]

Hardware support

Linux Containers

Filesystems

LVM

Performance

Error detection and reporting

X11

The X server has been re-based in this release. Updating the X server will increase system stability through the isolation of the system display drivers and will provide a better base for new features. Overall, it improves support for newer optional workstation hardware, multiple displays, and new input devices.

[Jul 31, 2011] Scientific Linux pushes RHEL clones forward by Sean Michael Kerner

July 29, 2011 | InternetNews.
From the 'Clone Wars' files:

"Scientific Linux 6.1 is now available providing users with a stable reliable Free (as in Beer) version of Red Hat Enterprise Linux 6.1.

Red Hat released RHEL 6.1 in May, providing improved driver support and hardware enablement and oh yeah security fixes too.

Scientific Linux is a joint effort by Fermilab and CERN and is targeted at the scientific community, but it's a solid RHEL version in its own right. It's also one that could now be attracting some new users, thanks to delays at the 'other' popular RHEL clone -- CentOS.

The CentOS project just released CentOS 6 and is many months behind Scientific Linux, and further still behind RHEL. That's a problem for some and could also represent a real security risk for most.

With the more rapid release cycle of Scientific Linux I will not be surprised if some disgruntled CentOS users make the switch and/or if new users just start off with Scientific Linux first.

While Scientific Linux is faster than CentOS at replicating RHEL 6.1, they aren't the fastest clone.

Oracle Linux 6.1 came out in June, barely a month after Red Hat's release.

It's somewhat ironic that Oracle is now the fastest clone tracking RHEL, since Red Hat has made it harder to clone with the way they package releases. As it turns out, it's not slowing Oracle down at all - though it might be impacting the community releases.

[May 31, 2011] RHEL Tuning and Optimization for Oracle V11

The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4 which is suitable for a wide variety of applications and provides a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum latency per request and maintains a good disk throughput which is best for disk-intensive database applications.

Hence, the Deadline scheduler is recommended for database systems. Also, at the time of this writing there is a bug in the CFQ scheduler which affects heavy I/O, see Metalink Bug:5041764. Even though this bug report talks about OCFS2 testing, this bug can also happen during heavy IO access to raw or block devices and as a consequence could evict RAC nodes.

To switch to the Deadline scheduler, the boot parameter elevator=deadline must be passed to the kernel that is being used.

Edit the /etc/grub.conf file and add the parameter to the kernel line that is being used, in this example 2.6.18-8.el5:

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
    initrd /initrd-2.6.18-8.el5.img

This entry tells the 2.6.18-8.el5 kernel to use the Deadline scheduler. Make sure to reboot the system to activate the new scheduler.
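For a quick check, the active scheduler is also visible at runtime under /sys. This is a minimal sketch that parses the bracketed-scheduler format the kernel exposes there; the device name sda and the sample line are assumptions, not output from a real system.

```shell
# The kernel lists the available I/O schedulers in
# /sys/block/<dev>/queue/scheduler and marks the active one in brackets.
# We parse a sample of that format rather than reading the live file.
line='noop anticipatory deadline [cfq]'
current=$(echo "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "active scheduler: $current"
# On a live system (root required; not persistent across reboots):
#   cat /sys/block/sda/queue/scheduler
#   echo deadline > /sys/block/sda/queue/scheduler
```

The boot parameter remains the persistent option; a write to /sys only lasts until the next reboot.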

Changing Network Adapter Settings

To check the speed and settings of network adapters, use the ethtool command, which now works for most network interface cards. To check the adapter settings of eth0, run:

# ethtool eth0

To force a speed change to 1000Mbps, full duplex mode, run:

# ethtool -s eth0 speed 1000 duplex full autoneg off

To make a speed change permanent for eth0, set or add the ETHTOOL_OPTS variable in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

This environment variable is sourced in by the network scripts each time the network service is started.
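To illustrate why this works: the ifcfg file is sourced as plain shell by the network scripts, and the value of ETHTOOL_OPTS is handed to ethtool verbatim. A hedged sketch (the /tmp path below is purely illustrative; the real file lives in /etc/sysconfig/network-scripts):

```shell
# Write an illustrative ifcfg fragment, source it the way the network
# scripts do, and show the ethtool invocation that would result.
cat > /tmp/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
EOF
. /tmp/ifcfg-eth0
echo "would run: ethtool -s $DEVICE $ETHTOOL_OPTS"
```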

Changing Network Kernel Settings

Oracle now uses User Datagram Protocol (UDP) as the default protocol on Linux for interprocess communication, such as cache fusion buffer transfers between the instances. However, starting with Oracle 10g, network settings should be adjusted for standalone databases as well.

Oracle recommends that the default and maximum send buffer size (SO_SNDBUF socket option) and receive buffer size (SO_RCVBUF socket option) be set to 256 KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. The receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. However, UDP datagrams will be discarded if they do not fit in the socket receive buffer, so a fast sender can overwhelm a slow receiver.

The default and maximum window size can be changed in the proc file system without a reboot:

The default setting in bytes of the socket receive buffer:

# sysctl -w net.core.rmem_default=262144

The default setting in bytes of the socket send buffer:

# sysctl -w net.core.wmem_default=262144

The maximum socket receive buffer size, which may be set by using the SO_RCVBUF socket option:

# sysctl -w net.core.rmem_max=262144

The maximum socket send buffer size, which may be set by using the SO_SNDBUF socket option:

# sysctl -w net.core.wmem_max=262144

To make the change permanent, add the following lines to the /etc/sysctl.conf file, which is used during the boot process:

net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
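As a throwaway sanity check, the four settings can be verified to all request the same 256 KB (262144 bytes):

```shell
# Walk the four name=value pairs and confirm each value is 262144 bytes.
ok=yes
for line in net.core.rmem_default=262144 net.core.wmem_default=262144 \
            net.core.rmem_max=262144 net.core.wmem_max=262144; do
  [ "${line#*=}" = "262144" ] || ok=no
done
echo "all four buffers set to $((262144 / 1024)) KB: $ok"
```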

To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:

net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries

The appropriate values for these settings depend heavily on your system, network, and other applications.

For suggestions, see Metalink Note:249213.1 and Note:265194.1.

On Red Hat Enterprise Linux systems, the default range of IP port numbers allowed for TCP and UDP traffic on the server is too low for 9i and 10g systems. Oracle recommends the following port range:

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

net.ipv4.ip_local_port_range=1024 65000

The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
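A quick back-of-the-envelope check of what the recommended range actually provides:

```shell
# Count the local ports available with the recommended range.
first=1024
last=65000
echo "usable local ports: $((last - first + 1))"
```

For comparison, the stock 2.6-kernel default of 32768-61000 yields 28233 ports, well under half as many.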

10.3. Flow Control for e1000 Network Interface Cards

The e1000 network interface card family does not have flow control enabled in the 2.6 kernel on Red Hat Enterprise Linux 4 and 5. If you have heavy traffic, the RAC interconnects may lose blocks; see Metalink Bug:5058952. For more information on flow control, see the Wikipedia article on flow control.

To enable Receive flow control for e1000 network interface cards, add the following line to the /etc/modprobe.conf file:

options e1000 FlowControl=1

The e1000 module needs to be reloaded for the change to take effect. Once the module is loaded with flow control, you should see e1000 flow control module messages in /var/log/messages.

Verifying Asynchronous I/O Usage

To verify whether $ORACLE_HOME/bin/oracle was linked with asynchronous I/O, you can use the Linux commands ldd and nm.

In the following example, $ORACLE_HOME/bin/oracle was relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
        libaio.so.1 => /usr/lib/libaio.so.1 (0x0093d000)
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
     w io_getevents@@LIBAIO_0.1
$

In the following example, $ORACLE_HOME/bin/oracle has NOT been relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
     w io_getevents
$

Even if $ORACLE_HOME/bin/oracle is relinked with asynchronous I/O, it does not necessarily mean that Oracle is actually using it. You also have to ensure that Oracle is configured to use asynchronous I/O calls; see Enabling Asynchronous I/O Support.

To verify whether Oracle is making asynchronous I/O calls, you can take a look at the /proc/slabinfo file, assuming there are no other applications performing asynchronous I/O calls on the system. This file shows kernel slab cache information in real time.

On a Red Hat Enterprise Linux 3 system where Oracle does not make asynchronous I/O calls, the output looks like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 0 0 128 0 0 1 : 1008 252
kiocb 0 0 128 0 0 1 : 1008 252
$

Once Oracle makes asynchronous I/O calls, the output on a Red Hat Enterprise Linux 3 system will look like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 690 690 128 23 23 1 : 1008 252
kiocb 58446 65160 128 1971 2172 1 : 1008 252
$
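The slabinfo excerpt above can be summarized mechanically. A small sketch that reads the second column (the count of active objects) from sample lines mimicking that output; nonzero counts mean asynchronous I/O contexts exist:

```shell
# Sample lines mimicking the RHEL 3 slabinfo excerpt above.
sample='kioctx 690 690 128 23 23 1 : 1008 252
kiocb 58446 65160 128 1971 2172 1 : 1008 252'
# Column 2 is the number of active objects; print the slabs that are in use.
inuse=$(echo "$sample" | awk '$2 > 0 {print $1}' | tr '\n' ' ')
echo "async I/O slabs in use: $inuse"
```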

redhat.com Red Hat Enterprise Linux 5.7 Released in Beta

Storage Drivers

4.2. Network Drivers

[May 21, 2011] 6.1 Technical Notes

Installer

[May 21, 2011] Red Hat Delivers Red Hat Enterprise Linux 6.1

RHEL 6.0 was pretty raw; hopefully they fixed the most glaring flaws.
May 19, 2011 | Red Hat

Red Hat, Inc. (NYSE: RHT) today announced the general availability of Red Hat Enterprise Linux 6.1, the first update to the platform since the delivery of Red Hat Enterprise Linux 6 in November 2010.
... ... ... ...

Red Hat Enterprise Linux 6.1 is already established as a performance leader serving both as a virtual machine guest and hypervisor host in SPECvirt benchmarks. Red Hat and HP recently announced that the combination of Red Hat Enterprise Linux with KVM running on an HP ProLiant BL620c G7 20-core Blade server delivered a record-setting SPECvirt_sc2010 benchmark result. Red Hat and IBM also recently announced that the companies submitted a benchmark to SPEC in which a combination of Red Hat Enterprise Linux, Red Hat Enterprise Virtualization and IBM systems delivered 45% better consolidation capability than competitors in performance tests conducted by Red Hat and IBM. See www.spec.org for details.

"Building on our decade-long partnership to optimize Red Hat Enterprise Linux for IBM platforms, our companies have collaborated closely on the development of Red Hat Enterprise Linux 6.1," said Jean Staten Healy, director, Cross-IBM Linux and Open Virtualization. "Red Hat Enterprise Linux 6.1 combined with IBM hardware capabilities offers our customers expanded flexibility, performance and scalability across their bare metal, virtualized and cloud environments. Our collaboration continues to drive innovation and leading results in the industry."

In addition to performance improvements, Red Hat Enterprise Linux 6.1 also provides numerous technology updates, including:

[May 19, 2011] CentOS 6? by David Sumsky

Oracle Linux might be an alternative...

dsumsky lines

I'm a big fan of the CentOS project. I use it in production and I recommend it to others as an enterprise-ready Linux distro. I have to admit that I was quite disappointed by the behaviour of the project developers, who weren't able to tell the community why the upcoming releases were and are so overdue. I was used to downloading CentOS images one or two months after the corresponding RHEL release was announced. The situation changed with RHEL 5.6, which has been available since January 2011, while the corresponding CentOS release did not appear until April 2011. It took about three months to release it instead of the usual one or two. By the way, the main news in RHEL 5.6 is:

More details on RHEL 5.6 are officially available here.

The situation around the release date of CentOS 6 was similar, or perhaps worse. As you know, RHEL 6 has been available since November 2010. I considered CentOS 6 almost dead after I read about transitions to Scientific Linux, or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule, the people around CentOS seem to be working hard again, and CentOS 6 should be available at the end of May.

I hope the project will continue, as I don't know of a better alternative to RHEL (i.e. a RHEL clone) than CentOS. The question is how this whole, IMO unnecessary, situation will influence the reputation of the project.

[Nov 14, 2010] Red Hat releases RHEL 6

"Red Hat on Wednesday released version 6 of its Red Hat Enterprise Linux (RHEL) distribution. 'RHEL 6 is the culmination of 10 years of learning and partnering,' said Paul Cormier, Red Hat's president of products and technologies, in a webcast announcing the launch. Cormier positioned the OS both as a foundation for cloud deployments and a potential replacement for Windows Server. 'We want to drive Linux deeper into every single IT organization. It is a great product to erode the Microsoft Server ecosystem,' he said. Overall, RHEL 6 has more than 2,000 packages, and an 85 percent increase in the amount of code from the previous version, said Jim Totton, vice president of Red Hat's platform business unit. The company has added 1,800 features to the OS and resolved more than 14,000 bug issues."

5.6 Release Notes

Fourth Extended Filesystem (ext4) Support

The fourth extended filesystem (ext4) is now a fully supported feature in Red Hat Enterprise Linux 5.6. ext4 is based on the third extended filesystem (ext3) and features a number of improvements, including: support for larger file size and offset, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling.

To complement the addition of ext4 as a fully supported filesystem in Red Hat Enterprise Linux 5.6, the e4fsprogs package has been updated to the latest upstream version. e4fsprogs contains utilities to create, modify, verify, and correct the ext4 filesystem.

Logical Volume Manager (LVM)

Volume management creates a layer of abstraction over physical storage by creating logical storage volumes. This provides greater flexibility over just using physical storage directly. Red Hat Enterprise Linux 5.6 manages logical volumes using the Logical Volume Manager (LVM). Further reading: the Logical Volume Manager Administration document describes the LVM logical volume manager, including information on running LVM in a clustered environment.

[Apr 20, 2009] Sun goes to Oracle for $7.4B

Oracle+Sun has the power to seriously harm IBM. Solaris still has the highest market share among proprietary Unixes. And AIX is only third after HP-UX. Wonder if Solaris will become Oracle's main development platform again. Oracle is a top contributor to Linux and that might help to bridge the gap in shell and packaging. Telecommunications and database administrators always preferred Solaris over Linux.
Yahoo! Finance

Oracle Corp. snapped up computer server and software maker Sun Microsystems Inc. for $7.4 billion Monday, trumping rival IBM Corp.'s attempt to buy one of Silicon Valley's best known -- and most troubled -- companies.

... ... ...

Jonathan Schwartz, Sun's CEO, predicted the combination will create a "systems and software powerhouse" that "redefines the industry, redrawing the boundaries that have frustrated the industry's ability to solve." Among other things, he predicted Oracle will be able to offer its customers simpler computing solutions at less expensive prices by drawing upon Sun's technology.

... ... ...

Yet Oracle says it can run Sun more efficiently. It expects the purchase to add at least 15 cents per share to its adjusted earnings in the first year after the deal closes. The company estimated Santa Clara, Calif.-based Sun will contribute more than $1.5 billion to Oracle's adjusted profit in the first year and more than $2 billion in the second year.

If Oracle can hit those targets, Sun would yield more profit than the combined contributions of three other major acquisitions -- PeopleSoft Inc., Siebel Systems Inc. and BEA Systems -- that cost Oracle a total of more than $25 billion.

A deal with Oracle might not be plagued by the same antitrust issues that could have loomed over IBM and Sun, since there is significantly less overlap between the two companies. Still, Oracle could be able to use Sun's products to enhance its own software.

Oracle's main business is database software. Sun's Solaris operating system is a leading platform for that software. The company also makes "middleware," which allows business computing applications to work together. Oracle's middleware is built on Sun's Java language and software.

Calling Java the "single most important software asset we have ever acquired," Ellison predicted it would eventually help make Oracle's middleware products generate as much revenue as its database line does.

Sun's takeover is a reminder that a few missteps and bad timing can cause a star to come crashing down.

Sun was founded in 1982 by men who would become legendary Silicon Valley figures: Andy Bechtolsheim, a graduate student whose computer "workstation" for the Stanford University Network (SUN) led to the company's first product; Bill Joy, whose work formed the basis for Sun's computer operating system; and Stanford MBAs Vinod Khosla and Scott McNealy.

Sun was a pioneer in the concept of networked computing, the idea that computers could do more when lots of them were linked together. Sun's computers took off at universities and in the government, and became part of the backbone of the early Internet. Then the 1990s boom made Sun a star. It claimed to put "the dot in dot-com," considered buying a struggling Apple Computer Inc. and saw its market value peak around $200 billion.

[Apr 17, 2009] Adobe Reader 9 released - Linux and Solaris x86

Tabbed viewing was added
Ashutosh Sharma

Adobe Reader 9.1 for Linux and Solaris x86 has been released today. Solaris x86 support was one of the features most requested by users. As per the Reader team's announcement, this release includes the following major features:

- Support for Tabbed Viewing (preview)
- Super fast launch, and better performance than previous releases
- Integration with Acrobat.com
- IPv6 support
- Enhanced support for PDF portfolios (preview)

The complete list is available here.

Adobe Reader 9.1 is now available for download and works on OpenSolaris, Solaris 10 and most modern Linux distributions such as Ubuntu 8.04, PCLinuxOS, Mandriva 2009, SLED 10, Mint Linux 6 and Fedora 10.

See also Sneak Preview of the Tabbed Viewing interface in Adobe Reader 9.x (on Ubuntu)

[Feb 22, 2009] 10 shortcuts to master bash - Program - Linux - Builder AU By Guest Contributor, TechRepublic | 2007/06/25 18:30:02

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands: press Ctrl-R and type the first few letters of the command; bash will scan the command history for matching commands and display the most recent match on the console. Press Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    to create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. Bash will scan the current directory (and, for the command word, the directories in the search path) for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, bash completes the longest common prefix; press Tab again to list the candidates.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    causes bash to print a notification on john's console every time a new message is appended to john's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
    8
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.
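Shortcut 7 above can be demonstrated end to end with a scratch directory (the /tmp/cdpath-demo path below is made up for the example):

```shell
# Create a scratch tree, point CDPATH at its parent, and cd by bare name.
mkdir -p /tmp/cdpath-demo/htdocs
CDPATH=/tmp/cdpath-demo
# cd resolves "htdocs" via CDPATH and prints the full path it chose.
cd htdocs > /dev/null
pwd
```

Note that when cd finds the target through a CDPATH entry (rather than the current directory), it prints the resolved path, which is why the output is redirected above.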

[Feb 22, 2009] Installation Guide for RHEL 5

2. Steps to Get You Started
2.1. Upgrade or Install?
2.2. Is Your Hardware Compatible?
2.3. Do You Have Enough Disk Space?
2.4. Can You Install Using the CD-ROM or DVD?
2.4.1. Alternative Boot Methods
2.4.2. Making an Installation Boot CD-ROM
2.5. Preparing for a Network Installation
2.5.1. Preparing for FTP and HTTP installation
2.5.2. Preparing for an NFS install
2.6. Preparing for a Hard Drive Installation
3. System Specifications List
4. Installing on Intel® and AMD Systems
4.1. The Graphical Installation Program User Interface
4.1.1. A Note about Virtual Consoles
4.2. The Text Mode Installation Program User Interface
4.2.1. Using the Keyboard to Navigate
4.3. Starting the Installation Program
4.3.1. Booting the Installation Program on x86, AMD64, and Intel® 64 Systems
4.3.2. Booting the Installation Program on Itanium Systems
4.3.3. Additional Boot Options
4.4. Selecting an Installation Method
4.5. Installing from DVD/CD-ROM
4.5.1. What If the IDE CD-ROM Was Not Found?
4.6. Installing from a Hard Drive
4.7. Performing a Network Installation
4.8. Installing via NFS
4.9. Installing via FTP
4.10. Installing via HTTP
4.11. Welcome to Red Hat Enterprise Linux
4.12. Language Selection
4.13. Keyboard Configuration
4.14. Enter the Installation Number
4.15. Disk Partitioning Setup
4.16. Advanced Storage Options
4.17. Create Default Layout
4.18. Partitioning Your System
4.18.1. Graphical Display of Hard Drive(s)
4.18.2. Disk Druid's Buttons
4.18.3. Partition Fields
4.18.4. Recommended Partitioning Scheme
4.18.5. Adding Partitions
4.18.6. Editing Partitions
4.18.7. Deleting a Partition
4.19. x86, AMD64, and Intel® 64 Boot Loader Configuration
4.19.1. Advanced Boot Loader Configuration
4.19.2. Rescue Mode
4.19.3. Alternative Boot Loaders
4.19.4. SMP Motherboards and GRUB
4.20. Network Configuration
4.21. Time Zone Configuration
4.22. Set Root Password
4.23. Package Group Selection
4.24. Preparing to Install
4.24.1. Prepare to Install
4.25. Installing Packages
4.26. Installation Complete
4.27. Itanium Systems - Booting Your Machine and Post-Installation Setup
4.27.1. Post-Installation Boot Loader Options
4.27.2. Booting Red Hat Enterprise Linux Automatically
5. Removing Red Hat Enterprise Linux
6. Troubleshooting Installation on an Intel® or AMD System
6.1. You are Unable to Boot Red Hat Enterprise Linux
6.1.1. Are You Unable to Boot With Your RAID Card?
6.1.2. Is Your System Displaying Signal 11 Errors?
6.2. Trouble Beginning the Installation
6.2.1. Problems with Booting into the Graphical Installation
6.3. Trouble During the Installation
6.3.1. No devices found to install Red Hat Enterprise Linux Error Message
6.3.2. Saving Traceback Messages Without a Diskette Drive
6.3.3. Trouble with Partition Tables
6.3.4. Using Remaining Space
6.3.5. Other Partitioning Problems
6.3.6. Other Partitioning Problems for Itanium System Users
6.3.7. Are You Seeing Python Errors?
6.4. Problems After Installation
6.4.1. Trouble With the Graphical GRUB Screen on an x86-based System?
6.4.2. Booting into a Graphical Environment
6.4.3. Problems with the X Window System (GUI)
6.4.4. Problems with the X Server Crashing and Non-Root Users
6.4.5. Problems When You Try to Log In
6.4.6. Is Your RAM Not Being Recognized?
6.4.7. Your Printer Does Not Work
6.4.8. Problems with Sound Configuration
6.4.9. Apache-based httpd service/Sendmail Hangs During Startup
7. Driver Media for Intel® and AMD Systems
7.1. Why Do I Need Driver Media?
7.2. So What Is Driver Media Anyway?
7.3. How Do I Obtain Driver Media?
7.3.1. Creating a Driver Diskette from an Image File
7.4. Using a Driver Image During Installation
8. Additional Boot Options for Intel® and AMD Systems
9. The GRUB Boot Loader
9.1. Boot Loaders and System Architecture
9.2. GRUB
9.2.1. GRUB and the x86 Boot Process
9.2.2. Features of GRUB
9.3. Installing GRUB
9.4. GRUB Terminology
9.4.1. Device Names
9.4.2. File Names and Blocklists
9.4.3. The Root File System and GRUB
9.5. GRUB Interfaces
9.5.1. Interfaces Load Order
9.6. GRUB Commands
9.7. GRUB Menu Configuration File
9.7.1. Configuration File Structure
9.7.2. Configuration File Directives
9.8. Changing Runlevels at Boot Time
9.9. Additional Resources
9.9.1. Installed Documentation