Red Hat Enterprise Linux Administration

 Red Hat Linux exists in several incarnations: RHEL itself, the community Fedora distribution, and free rebuilds such as CentOS, Scientific Linux, and Oracle Linux.

RHEL is a mess

The architectural quality of RHEL is low or very low. It is a brittle and very complex system, with some examples of extremely sloppy programming.

A nice example of Red Hat ineptitude is how they handle proxy settings. Even for software fully controlled by Red Hat, such as yum and subscription-manager, the proxy setting is duplicated in each and every configuration file. Why not a single setting in /etc/sysconfig/network? God knows.

Also, some programs pick up settings from the environment (such as http_proxy), and those settings override the configuration files; others do not, and for them the configuration file is the final word.

Those giants of system programming even managed to embed the proxy settings from /etc/rhsm/rhsm.conf into the yum file /etc/yum.repos.d/redhat.repo, so the proxy value is taken from that file, not from your /etc/yum.conf settings, as you would expect. Moreover, this is done without any elementary consistency checks: you can make a pretty innocent mistake and specify the proxy setting in /etc/rhsm/rhsm.conf with an http:// or https:// prefix.
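As a sketch of that innocent mistake (the proxy hostname here is hypothetical), the offending /etc/rhsm/rhsm.conf entry looks like:

```ini
# /etc/rhsm/rhsm.conf -- the mistake: a scheme prefix in proxy_hostname
[server]
proxy_hostname = http://proxy.example.com
proxy_port = 8080
# correct form: proxy_hostname = proxy.example.com
```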


The Red Hat registration manager will accept this and will work fine. But for yum to work properly, the proxy specification in /etc/rhsm/rhsm.conf must be just the DNS name, without the http:// or https:// prefix: the https:// prefix is added blindly (and that's wrong) when redhat.repo is generated, without checking whether you already specified a prefix. This SNAFU leads to a proxy statement in redhat.repo of the form https://http://proxyhost, which is invalid.
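A hedged sketch of the normalization you end up doing by hand: strip any scheme prefix before putting the value into rhsm.conf (the hostname below is purely illustrative).

```shell
#!/bin/sh
# Strip a leading scheme (http:// or https://) from a proxy value,
# which is the form /etc/rhsm/rhsm.conf actually expects.
# The hostname below is purely illustrative.
raw="http://proxy.example.com"
clean=$(printf '%s\n' "$raw" | sed 's|^[a-z]*://||')
echo "$clean"
```

After registration, running grep '^proxy' /etc/yum.repos.d/redhat.repo is a quick way to see whether a doubled prefix sneaked into the generated file.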

At this point you are in for a nasty surprise: yum will not work with any Red Hat repository, and there are no meaningful diagnostic messages. Looks like RHEL managers are either engaged in binge drinking, or watch too much porn on the job ;-).

Yum, which started as a helpful utility, gradually turned into a complex monster that requires quite a bit of study, and it has a set of very complex bugs, some of which are almost features.

SELinux was never a transparent security subsystem and has a lot of quirks of its own. Its key idea is far from elegant, unlike the key ideas of AppArmor, which has largely disappeared from the Linux security landscape. Many sysadmins simply disable SELinux, leaving only the firewall to protect the server. Some applications require disabling SELinux to function properly.
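The usual (blunt) way out is to switch SELinux to permissive or disabled mode. The sketch below performs the standard edit on a temporary copy of /etc/selinux/config, so it is safe to run anywhere; on a real box you would edit the real file and run setenforce 0 as root.

```shell
#!/bin/sh
# Demonstrate the standard "disable SELinux" edit on a temp copy of
# /etc/selinux/config (so this example has no side effects).
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
mode=$(grep '^SELINUX=' "$cfg")
echo "$mode"
rm -f "$cfg"
```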

The deterioration of architectural vision within Red Hat as a company is clearly visible in the terrible (simply terrible) quality of the customer portal, which is probably the worst I have ever encountered. Sometimes I open tickets just to understand how to perform a particular operation. The old ("Classic", as they call it) RHEL customer portal was actually OK and even had some useful features. Then for some reason they tried to introduce something new and completely messed things up. Sometimes I wonder why I am using the distribution at all, if the company that produces it (and charges substantial money for it) is so tremendously architecturally inept that it is unable to create a usable customer portal.

This is an expensive open source, my friend

RHEL is a very expensive distribution for small and medium firms. There is no free lunch: if you use a commercial distribution you pay annual maintenance; if not, you get used to some delays in the availability of new versions and security patches. In most cases the delay is acceptable, so if CentOS or Scientific Linux works OK for a particular application, it should be used instead of the commercial version, just to avoid troubles with licensing.

For HPC clusters Red Hat provides a discounted version (for computation nodes only; the head node is licensed at full price) with a limited number of packages, called Red Hat Enterprise Linux for High Performance Computing. Red Hat's product pages give a more or less OK explanation of what you should expect.

Essentially, you pay the full price for the head node and a discounted price for each computational node. I am not sure that Oracle Linux is not a better deal, since there you get the same distribution on both the head node and the computational nodes for roughly the same price as the Red Hat HPC license with its two different distributions. Truth be told, Red Hat does provide an optimized networking stack with the HPC compute node license. The question is what the difference really is, and whether you should pay such a price for it.


RHEL support deteriorated recently, while prices almost doubled from RHEL 5 to 6 (especially if you use virtual guests a lot; see the discussion "RHEL 6 how much for your package" at The Register), and it is now not very clear what you are paying for.

Now, to resolve most tickets, all tech support does is search the database of previous cases and post as a solution something related to your case (or maybe not ;-). Premium support is still above average, and they can switch you to a live engineer on a different continent in critical cases during late hours, so if server downtime is important this is a kind of (weak) insurance.

In any case, Red Hat support, even for subsystems fully developed by Red Hat such as subscription-manager and yum, is usually dismal, unless you are lucky and get a knowledgeable guy (I once did). In most cases they search the database and recommend something from the article they found closest to your case, often without even understanding what problem you are experiencing. Sometimes this "quote service" from their database, which they sell instead of customer support, helps; but often it is completely detached from reality. In the past (in the time of RHEL 4) support was much better. Now it is unclear what we are paying for.

Despite several levels of support included in the licenses (with premium supposedly the highest level), technical support for really complex cases is uniformly weak, with mostly "monkey looking in the database" type of service. If you have a complex problem, you are usually stuck, although the premium service provides an opportunity to talk with a live person, which might help. In a way, unless you buy a premium license, the only way to use RHEL is "as is". And with RHEL 7 even this is not a very attractive proposition, as the switch to systemd creates its own set of problems and a learning curve for sysadmins.

Some of this deterioration is connected with the fact that Linux became a very complex, Byzantine OS that nobody actually knows in full. The number of utilities alone is such that probably nobody knows more than 30% or 50% of them, and even if you learn some utility during a particular troubleshooting case, you will soon forget it, as you probably will not see the same case again for a year or two. In this sense the title "Red Hat Certified Engineer" became a sad joke.

Even if you learn something important today, you will soon forget it if you do not use it: there are way too many utilities, applications, configuration files. You name it.

Licensing model: four different types of RHEL licenses

RHEL is struggling to fence off "copycats" by complicating access to the source of patches, but the problem is that its licensing model is Byzantine. It is based on a half-dozen different types of subscriptions, some of them pretty expensive. In the past I resented paying Red Hat for our 4-socket servers to the extent that I stopped using this type of server and completely switched to two-socket servers, which, with Intel's rising core counts, was an easy escape from RHEL restrictions. Currently Red Hat probably has the most complex, most Byzantine system of subscriptions after IBM (which is probably the leader in licensing obscurantism ;-).

And there are at least four different RHEL licenses for real (hardware-based) servers:

  1. Self-support. If you have many identical or almost identical servers or virtual machines, it does not make sense to buy standard or premium licenses for all of them. A standard or premium license can be bought for one server, and all the others can run on self-support licenses, which provide access to patches.
  2. Standard (web and phone during business hours, if you manage to get to a support specialist, but mostly web).
  3. Premium (web and phone, with phone 24x7 for severity 1 and 2 problems).
  4. HPC computational nodes, with a limited number of packages in the distribution and repositories (I wonder if Oracle Linux is not a better deal for computational nodes than this type of RHEL license; sometimes CentOS can be used too, which eliminates the problem entirely).
  5. No-cost RHEL Developer Subscription, available since March 2016.

The RHEL licensing scheme is based on so-called "entitlements", which, oversimplifying, means one license per 2-socket server. In the past they were "mergeable", so if your 4-socket license expired and you had two spare two-socket licenses, RHEL was happy to accommodate your needs. Now they are not.

But that does not assure the right mix if you need different types of licenses for different classes of servers. All is fine until you use a mixture of licenses (for example some cluster licenses, some patch-only (aka self-support), some premium: four types of licenses altogether). In this case it is almost impossible to understand where a particular license landed. If you use uniform licenses the scheme works reasonably well, but it breaks the moment you buy several types of licenses. One way to avoid this is to buy RHEL via resellers such as Dell and HP. In this case Dell or HP engineers provide support for RHEL, and naturally they know their hardware much better than RHEL engineers, so, for example, driver problems are much easier to debug.

This path works well, but to cut costs you need to buy a five-year license with the server, which is a lot of money, and you lose the ability to switch Linux flavors. This is also a problem with buying a cluster license: Dell and HP can install basic cluster software on the enclosure for a minimal fee, but they force upon you additional software which you might not want or need. And believe me, this HPC setup can be used outside computational tasks; it is actually an interesting paradigm for managing a heterogeneous datacenter. The only problem is that you need to learn to use it :-). For example, SGE is a very well engineered scheduler (originally from Sun, later open-sourced). While it is free software, it beats many commercial offerings; and while it lacks calendar scheduling, any calendar scheduler can be used to compensate (even cron: in this case each cron task becomes an SGE submit script).

Still, using an HPC-like configuration might be an option to lower the fees if you use multiple similar servers (for example a blade enclosure with 16 identical blades). The idea is to organize a particular set of servers as a cluster, with SGE (or another similar scheduler) installed on the head node. Hadoop is now a fashionable thing (while being just a simple case of distributed search), and you can always claim that this is a Hadoop-type service. In this case you pay twice the price for the head node, but all computation nodes are around $100 a year each. Although you can get the same self-support license from Oracle for the same price without the Red Hat restrictions, so from another point of view, why bother?

Two licensing systems: RHN and RHSM

There are two licensing systems used by Red Hat:

  1. Classic (RHN): the old system, to be phased out in mid-2017.
  2. "New" (RHSM): the new system, used predominantly on RHEL 6 and 7.

Both are complex and require study. Many hours of sysadmin time are wasted on mastering their complexities, while in reality this is just overhead that allows Red Hat to charge money. The fact that they support this machinery so poorly tells us something about the level of deterioration of the company.

All in all, Red Hat successfully created an almost impenetrable mess of obsolete and semi-obsolete notes, poorly written and incomplete documentation, dismal diagnostics, and poor troubleshooting tools. The level of frustration sometimes reaches such heights that people just abandon RHEL. I did, for several non-critical systems. If CentOS or Academic Linux works, there is no reason to suffer from Red Hat licensing issues. That also makes Oracle Linux, surprisingly, a more attractive option :-). Oracle Linux is also cheaper. But usually you are bound by corporate policy here.

"New" subscription system (RHSM) is slightly better then RHN for large organizations.  It allows to assign specific license to specific box and list the current status of licensing.  But like RHN it requires to use proxy setting in configuration file, it does not take them from the environment. If the company has several proxies and you have mismatch you can be royally screwed. In general you need already to check consistently of your environment with conf file settings.  The level of understanding of proxies environment by RHEL tech support is basic of worse, so they are  using the database of articles instead of actually troubleshooting based on sosreport data. Moreover each day there might a new person working on your ticket, so there no continuity.

The RHEL System Registration Guide is weak and does not cover more complex cases and typical mishaps.

The RHN system of RHEL licenses can also cover various numbers of sockets (the default is 2). A 4-socket server will consume two licenses. This is not the case with RHSM.

In general, licensing by physical socket or even by core is the old dirty IBM trick that many companies now reuse (and now Red Hat simply can't claim that they are not greedy).

In RHN, at least, licenses were eventually converted into some kind of uniform licensing tokens that were assigned to unlicensed systems more or less automatically (for example, a 4-socket system consumed two tokens). With RHSM this is not true, which creates a set of complex problems for large enterprises.

But the major drawback of RHN for large enterprises is that there is no way (or at least I do not know how) to specify which type of license a particular system requires.

In its current state the classic licensing system is simply not functional enough for a large enterprise with a complex mix of systems (HPC clusters, servers that require premium support, regular support (most of the servers), self-support systems (patching only), etc.). You can slightly improve things by running your own patch distribution server, but the licensing system remains complex and sysadmin-unfriendly. Using multiple accounts with RHN (one for each type of license) might help, but I never tried that. There might be better ways to use RHN, but as far as I know most organizations use the most primitive "flat license space" model. And most companies have a single account with Red Hat.

"New" subscription system (RHSM) is slightly better.  It allows to assign specific license to specific box and list the current status of licensing.  But like RHN it requires to use proxy setting in configuration file, it does not take them from the environment. If the company has several proxies and you have mismatch you can be royally screwed. In general you need already to check consistently of your environment with conf file settings.  The level of understanding of proxies environment by RHEL tech support is basic of worse, so they are  using the database of articles instead of actually troubleshooting based on sosreport data. Moreover each day there might a new person working on your ticket, so there no continuity. RHEL System Registration Guide ( is weak and does not cover more complex cases and typical mishaps.


Learn More

So those Red Hat honchos with high salaries essentially created a new job: license administrator. Congratulations!

If you are an unlucky guy without such a person, then you need to read and understand at least the RHEL System Registration Guide yourself, which outlines the major options available for registering a system (and carefully avoids mentioning the bugs and pitfalls, of which there are many). For some reason migration from RHN to RHSM usually works well, so it might make sense to register a system in RHN first and then migrate it.

Also useful (to the extent any poorly written Red Hat documentation is useful) is "How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager". At least it tries to answer some of the most basic questions.

There is also an online tool to assist you in selecting the most appropriate registration technology for your system: the Red Hat Labs Registration Assistant.

Pretty convoluted RPM packaging system which creates problems

The idea of RPM was to simplify installation of complex packages, but RPMs create a set of problems of their own, especially connected with libraries (which is not exactly a Red Hat problem; it is a Linux problem). One example is the so-called multilib problem, which is detected by yum:

--> Finished Dependency Resolution

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for libicu which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of libicu of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude libicu.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of libicu installed, but
            yum can only see an upgrade for one of those arcitectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of libicu installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: libicu-4.2.1-14.el6.x86_64 != libicu-4.2.1-11.el6.i686
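The message itself points at the usual way out. Below is a small sketch of building the suggested --exclude argument (the package and architecture are taken from the error text above):

```shell
#!/bin/sh
# Build the --exclude argument that the multilib error message suggests,
# then show the yum commands one would typically run next.
pkg=libicu
badarch=i686
exclude_arg="--exclude=${pkg}.${badarch}"
echo "yum check"                    # list duplicate/broken rpms first
echo "yum update $exclude_arg"      # retry the update without the odd arch
```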

Selecting packages for installation

You can improve the typical RHEL situation of a lot of useless daemons installed by carefully selecting packages and then reusing the generated kickstart file. That can be done via the advanced menu for one box, and then this kickstart file can be used for all other boxes with minor modifications. Kickstart still works, despite the trend toward overcomplexity in other parts of the distribution ;-)
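A minimal %packages section of the kind such a kickstart file ends up with (package names are illustrative; a leading minus removes a package from the selection):

```
%packages --nobase
@core
openssh-server
ntp
-avahi-daemon
%end
```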

Problems with architectural vision of Red Hat brass

Both the architectural level of thinking of the Red Hat brass (with daemons like avahi and systemd installed by default) and clear attempts along the lines of "not invented here" in virtualization create some concerns. It is clear that Red Hat by itself can't become a major virtualization player like VMware. It just does not have enough money for development.

You would think that the safest bet is to reuse the leader among open source offerings, which was Xen. But the Red Hat brass thinks differently and wants to play a more dangerous poker game by promoting KVM: Red Hat released Enterprise Linux 5 with integrated virtualization (Xen) and then changed their mind after RHEL 5.5 or so. In RHEL 6, Xen is replaced by KVM.

What is good is that after ten years they eventually managed to re-implement Solaris 10 zones. In RHEL 7 they are usable.

Security overkill with SELinux

RHEL contains a security layer called SELinux, but in most corporate deployments it is either disabled or operating in permissive mode. The reason is that it is notoriously difficult to configure correctly, and in most cases the game is not worth the candle.

The firewall is more usable in corporate deployments, especially when you have an obnoxious or incompetent security department (a pretty typical situation in a large corporation ;-), as it prevents a lot of stupid questions from utterly incompetent "security gurus" about open ports, and it can stop dead the scanning attempts of tools that test for known vulnerabilities, by means of which security departments try to justify their miserable existence. Generally it is dangerous to allow the exploits used in such tools, which local script kiddies (aka the "security team") recklessly launch against your production servers (as if checking for a particular vulnerability with an internal script were an inferior solution). There have been reports of production servers crashing due to such games. Some "security script kiddies" who understand very little about Unix even try to prove their worth by downloading exploits from hacker sites and using them against production servers on the internal corporate network. Unfortunately, they are not always fired for such valiant efforts.

To get an idea of the level of complexity, try to read the SELinux sections of the Deployment Guide; the full set of documentation is available from the Red Hat documentation site.

So it is not accidental that in many cases SELinux is disabled in enterprise installations. Some commercial software packages explicitly recommend disabling it in their installation manuals.

There is an alternative to SELinux with a more elegant, usable, and understandable approach: AppArmor, which is used in SLES (which despite that suffers from overcomplexity even more than RHEL ;-). But it did not get enough traction. Still, IMHO, if you need a really high level of security for a particular server, this is the preferable path. Or you can use Solaris, if you have a knowledgeable Solaris sysadmin on the floor (security via obscurity actually works pretty well in this case).

RHEL became a kind of Microsoft of the Linux world, and as such it is the most attacked flavor of Linux, just due to its general popularity. It is not a good idea to use RHEL if security is of vital importance, although with SELinux enabled it is definitely a more hardened variant of the OS than without. See Potemkin Villages of Computer Security for a more detailed discussion.

Current versions and year of end of support

The supported versions of RHEL (as of April 2016) are 5.11, 6.7, and 7.2. Usually a large enterprise uses a mixture of versions, often all three of them. Compatibility within a single major version is usually very good (I would say on par with Solaris), and the risk of upgrading from, say, 6.2 to 6.7 is minimal. Not so with major versions; here your mileage may vary.

See the Red Hat Enterprise Linux article on Wikipedia.


In Linux there is no single convention for determining which flavor of Linux you are running. For Red Hat, to determine which version is installed on the server, you can use the command:

cat /etc/redhat-release

Oracle Linux adds its own release file while preserving the RHEL one, so a more appropriate command would be:

cat /etc/*release
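A small sketch that automates the distinction (the file paths are the standard ones; the classification strings are mine):

```shell
#!/bin/sh
# Classify the flavor by which release files are present.
if [ -f /etc/oracle-release ]; then
    flavor="Oracle Linux"
elif [ -f /etc/redhat-release ]; then
    flavor="RHEL or a rebuild (CentOS, Scientific Linux)"
else
    flavor="not a Red Hat family system"
fi
echo "$flavor"
```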

End of support issues

See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal.

For more package version information see Red Hat Enterprise Linux.

Updates in RHEL 5

RHEL 5, especially versions 5.6-5.9, is probably one of the most stable versions of Red Hat I have ever encountered. It still supports more or less recent hardware (Oracle provides an updated kernel if you want it). This is a very conservative distribution. For example, it still uses such really old (or obsolete, if you wish) versions as bash 3.2.25, Perl 5.8.8, and Python 2.4.3.

Oracle produced an improved kernel for 5.x versions, based on a later version of the Linux kernel than the "stock" RHEL kernel. It might benefit stability if you are running Oracle applications. It is 64-bit only and is more capricious toward hardware than the Red Hat stock kernel, so your mileage may vary.

RHEL 5 suffers from a proliferation of useless or semi-useless daemons, and as such it is not secure and probably can't be made secure in the default installation. You need to carefully minimize the system to get a usable server.

Systemtap is a GPL-based infrastructure that simplifies information gathering on a running Linux system, which assists in diagnosing performance or functional problems. With systemtap, the tedious and disruptive "instrument, recompile, install, and reboot" sequence is no longer needed to collect diagnostic data. Systemtap is now fully supported.
The Internet Storage Name Service for Linux (isns-utils) is now supported. This allows you to register iSCSI and iFCP storage devices on the network. isns-utils allows dynamic discovery of available storage targets by storage initiators.

isns-utils provides intelligent storage discovery and management services comparable to those found in fibre-channel networks. This allows an IP network to function in a similar capacity to a storage area network.

With its ability to emulate fibre-channel fabric services, isns-utils allows for seamless integration of IP and fibre-channel networks. In addition, isns-utils also provides utilities for managing both iSCSI and fibre-channel devices within the network.

For more information about the iSNS specification, refer to RFC 4171. For usage instructions, refer to /usr/share/docs/isns-utils-[version]/README and /usr/share/docs/isns-utils-[version]/README.redhat.setup.

rsyslog is an enhanced multi-threaded syslogd daemon that supports, among other things, writing to databases, syslog over TCP, and filtering on any part of the message.

rsyslog is compatible with the stock sysklogd, and can be used as a replacement in most cases. Its advanced features make it suitable for enterprise-class, encrypted syslog relay chains; at the same time, its user-friendly interface is designed to make setup easy for novice users.

For more information, refer to the rsyslog project documentation.

Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) for Linux. IPsec uses strong cryptography to provide authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Everything passing through the untrusted network is encrypted by the IPsec gateway machine and decrypted by the gateway at the other end of the tunnel. The resulting tunnel is a virtual private network (VPN).

This release of Openswan supports IKEv2 (RFC 4306, RFC 4718) and contains an IKEv2 daemon that conforms to the IETF RFCs. For more information, refer to the Openswan project documentation.

Password Hashing Using SHA-256/SHA-512
Password hashing using the SHA-256 and SHA-512 hash functions is now supported.

To switch to SHA-256 or SHA-512 on an installed system, run authconfig --passalgo=sha256 --update or authconfig --passalgo=sha512 --update. To configure the hashing method through a GUI, use authconfig-gtk. Existing user accounts will not be affected until their passwords are changed.

For newly installed systems, using SHA-256 or SHA-512 can be configured only for kickstart installations. To do so, use the --passalgo=sha256 or --passalgo=sha512 options of the kickstart command auth; also, remove the --enablemd5 option if present.
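For reference, the corresponding kickstart line would look something like the fragment below (a sketch; check it against the kickstart documentation for your RHEL version):

```
# kickstart fragment: request SHA-512 password hashing at install time
auth --enableshadow --passalgo=sha512
```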

If your installation does not use kickstart, use authconfig as described above. After installation, change all created passwords, including the root password.

Appropriate options were also added to libuser, pam, and shadow-utils to support these password hashing algorithms. authconfig configures the necessary options automatically, so it is usually not necessary to modify them manually.

OFED in comps.xml
The group OpenFabrics Enterprise Distribution is now included in comps.xml. This group contains components used for high-performance networking and clustering (for example, InfiniBand and Remote Direct Memory Access).

Further, the Workstation group has been removed from comps.xml in the Red Hat Enterprise Linux 5.2 Client version. This group only contained the openib package, which is now part of the OpenFabrics Enterprise Distribution group.

system-config-netboot is now included in this update. This is a GUI-based tool for enabling, configuring, and disabling network booting. It is also useful for configuring PXE booting for network installations and diskless clients.
In order to accommodate the use of compilers other than gcc for specific applications that use the Message Passing Interface (MPI), updates have been applied to the openmpi and lam packages.

Note that when upgrading to this release's version of openmpi, you should migrate any default parameters set for lam or openmpi to /usr/lib(64)/lam/etc/ and /usr/lib(64)/openmpi/[openmpi version]-[compiler name]/etc/. All configurations for either openmpi or lam should be set in these directories.

lvm2 Snapshot Volume Warning
lvm2 will now warn if a snapshot volume is near its maximum capacity. However, this feature is not enabled by default. To enable it, uncomment the following line in /etc/lvm/lvm.conf:
snapshot_library = ""

Ensure that the dmeventd section and its delimiters ({ }) are also uncommented.

bash has been updated to version 3.2. This version fixes a number of outstanding bugs.

Note that with this update, the output of ulimit -a has also changed from the Red Hat Enterprise Linux 5.1 version. This may cause problems with some automated scripts. If you have any scripts that parse ulimit -a output strings, you should revise them accordingly.

Updates in RHEL 6

RHEL 6 was released in November 2010, so technical support and patching will last until 2020. See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal.

RHEL 6 cut the number of daemons installed by default in comparison with RHEL 5. NFSv4 became the default, and that causes problems: if the NFS server shuts down, the client often can't recover and enters a zombie state. Luckily they spared us from systemd in this version ;-)

RHEL 6 initially gave me the impression of a half-baked distribution rushed to customers, and it may signal an internal crisis in RHEL development, as in some areas it is worse than RHEL 5.6. It stabilized around version 6.5. Some changes were arbitrary and just make the distribution look "new" without bringing anything significant to the table. For example, the partitioning procedure during installation changed, and probably not for the better. Some "mostly desktop or home network" daemons are present by default, for example the complex and potentially insecure avahi daemon (an implementation of Zeroconf).

The Avahi daemon discovers network resources, allocates IP addresses without a DHCP server, and makes the computer accessible by its local hostname via multicast DNS.

As RHEL is targeted at corporate environments, which typically use static IPs for servers, Avahi makes little or no sense there. It is better to disable it during installation. See Disabling the Avahi daemon

Also, the ability of the distribution to select the right set of daemons is compromised in RHEL 6 even more than in RHEL 5, despite the addition of the useful concept of "server roles": by default there are a lot of useless daemons. If you install, for example, the "database server" role, you then need to check for redundant daemons and delete or disable them manually.
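A dry-run sketch of that cleanup: print the sysvinit commands that would disable daemons commonly unneeded on a static-IP server. The list here is illustrative; review it against your own environment before running anything for real as root.

```shell
# Candidate daemons to disable on a RHEL 6 server; this block only PRINTS
# the commands so they can be reviewed before execution.
cmds=$(for svc in avahi-daemon cups bluetooth NetworkManager; do
    printf 'chkconfig %s off; service %s stop\n' "$svc" "$svc"
done)
printf '%s\n' "$cmds"
```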

Documentation for version 6

Red Hat Enterprise Linux 6 Technical Details : What's New

... ... ...


* Red Hat Enterprise Linux 6 supports more sockets, more cores, more threads, and more memory.

Efficient Scheduling

* The CFS schedules the next task to be run based on which task has consumed the least time, task prioritization, and other factors. Using hardware awareness and multi-core topologies, the CFS optimizes task performance and power consumption.

Reliability, Availability, and Serviceability (RAS)

* RAS hardware-based hot add of CPUs and memory is enabled.
* When supported by machine check hardware, the system can recover from some previously fatal hardware errors with minimal disruption.
* Memory pages with errors can be declared as "poisoned", and will be avoided.


* The new default file system, ext4, is faster, more robust, and scales to 16TB.
* The Scalable File System Add-On contains the XFS file system which scales to 100TB.
* The Resilient Storage Add-On includes the high availability, clustered GFS2 file system.
* NFSv4 is significantly improved over NFSv3, and backwards compatible.
* FUSE allows filesystems to run in user space, allowing testing and development of newer FUSE-based filesystems (such as cloud filesystems).

High Availability

* The web interface based on Conga has been re-designed for added functionality and ease of use.
* The cluster group communication system, Corosync, is mature, secure, high performance, and light-weight.
* Nodes can re-enable themselves after failure without administrative intervention using unfencing.
* Unified logging and debugging simplifies administrative work.
* Virtualized KVM guests can be run as managed services which enables fail-over, including between physical and virtual hosts.
* Centralized configuration and management is provided by Conga.
* A single cluster command can be used to manage system logs from different services, and the logs have a consistent format that is easier to parse.

Power Management

* The tickless kernel feature keeps systems in the idle state longer, resulting in net power savings.
* Active State Power Management and Aggressive Link Power Management provide enhanced system control, reducing the power consumption of I/O subsystems. Administrators can actively throttle power levels to reduce consumption.
* Realtime drive access optimization reduces filesystem metadata write overhead.

System Resource Allocation

* Cgroups organize system tasks so that they can be tracked, and so that other system services can control the resources that cgroup tasks may consume (Partitioning). Two user-space tools, cgexec and cgclassify, provide easy configuration and management of cgroups.
* Cpuset applies CPU resource limits to cgroups, allowing processing performance to be allocated across tasks.
* The memory resource controller applies memory resource limits to cgroups.
* The network resource controller applies network traffic limits to cgroups.


* A snapshot of a logical volume may be merged back into the original logical volume, reverting changes that occurred after the snapshot.
* Mirror logs of regions that need to be synchronized can be replicated, supporting high availability.
* LVM hot spare allows the behavior of a mirrored logical volume after a device failure to be explicitly defined.
* DM-Multipath allows paths to be dynamically selected based on queue size or I/O time data.
* Very large SAN-based storage is supported.
* Automated I/O alignment and self-tuning is supported.
* Filesystem usage information is provided to the storage device, allowing administrators to use thin provisioning to allocate storage on-demand.
* SCSI and ATA standards have been extended to provide alignment and I/O hints, allowing automated tuning and I/O alignment.
* DIF/DIX provides better integrity checks for application data.


* UDP Lite tolerates partially corrupted packets to provide better service for multimedia protocols, such as VOIP, where partial packets are better than none.
* Multiqueue Networking increases processing parallelism for better performance from multiple processors and CPU cores.
* Large Receive Offload (LRO) and Generic Receive Offload (GRO) aggregate packets for better performance.
* Support for Data Center Bridging includes data traffic priorities and flow control for increased Quality of Service.
* New support for software Fibre Channel over Ethernet (FCoE) is provided.
* iSCSI partitions may be used as either root or boot filesystems.
* IPv6 is supported.

Security and Access Control

* SELinux policies have been extended to more system services.
* SELinux sandboxing allows users to run untrusted applications safely and securely.
* File and process permissions have been systematically reduced whenever possible to reduce the risk of privilege escalation.
* New utilities and system libraries provide more control over process privileges for easily managing reduced capabilities.
* Walk-up kiosks (as in banks, HR departments, etc.) are protected by SELinux access control, with on-the-fly environment setup and take-down, for secure public use.
* Openswan includes a general implementation of IPsec that works with Cisco IPsec.

Enforcement and Verification of Security Policies

* OpenScap standardizes system security information, enabling automatic patch verification and system compromise evaluation.

Identity and Authentication

* The new System Security Services Daemon (SSSD) provides centralized access to identity and authentication resources, and enables caching and offline support.
* OpenLDAP is a compliant LDAP client with high availability from N-way MultiMaster replication, and performance improvements.
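A minimal sssd.conf sketch for an LDAP-backed identity domain with credential caching for offline logins; the server URI and search base are placeholders:

```ini
# /etc/sssd/sssd.conf (must be mode 0600 and owned by root)
[sssd]
services = nss, pam
domains = EXAMPLE

[domain/EXAMPLE]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
# allow cached logins when the LDAP server is unreachable
cache_credentials = true
```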

Web Infrastructure

* This release of Apache includes many improvements, see Overview of new features in Apache 2.2
* A major revision of Squid includes manageability and IPv6 support
* Memcached 1.4.4 is a high-performance and highly scalable, distributed, memory-based object caching system which enhances the speed of dynamic web applications.


* OpenJDK 6 is an open source implementation of the Java Platform Standard Edition (SE) 6 specification. It is TCK-certified based on the IcedTea project, and the implementation of a Java Web Browser plugin and Java web start removes the need for proprietary plugins.
* Tight integration of OpenJDK and Red Hat Enterprise Linux includes support for Java probes in SystemTap to enable better debugging for Java.
* Tomcat 6 is an open source and best-of-breed application server running on the Java platform. With support for Java Servlets and Java Server Pages (JSP), tomcat provides a robust environment for developing and deploying dynamic web applications.


* Ruby 1.8.7 is included, and Rails 3 supports dependencies.
* Version 4.4 of gcc includes OpenMP3 conformance for portable parallel programs, Integrated Register Allocator, Tuples, additional C++0x conformance implementations, and debuginfo handling improvements.
* Improvements to the libraries include malloc optimizations, improved speed and efficiency for large blocks, NUMA considerations, lock-free C++ class libraries, NSS crypto consolidation for LSB 4.0 and FIPS level 2, and improved automatic parallel mode in the C++ library.
* Gdb 7.1.29 improvements include C++ function, class, templates, variables, constructor / destructor improvements, catch / throw and exception improvements, large program debugging optimizations, and non-blocking thread debugging (threads can be stopped and continued independently).
* TurboGears 2 is a powerful internet-enabled framework that enables rapid web application development and deployment in Python.
* Updates to the popular web scripting and programming languages PHP (5.3.2), Perl (5.10.1) include many improvements.

Application Tuning

* SystemTap uses the kernel to generate non-intrusive debugging information about running applications.
* The tuned daemon monitors system use and uses that information to automatically and dynamically adjust system settings for better performance.
* SELinux can be used to observe, then tighten application access to system resources, leading to greater security.


* PostgreSQL 8.4.4 includes many improvements, please see PostgreSQL 8.4 Feature List for details.
* MySQL 5.1.47 improvements are listed here: What Is New in MySQL 5.1.
* SQLite 3.6.20 includes significant performance improvements, and many important bug fixes. Note that this release has made incompatible changes to the internal OS interface and VFS layers (compared to earlier releases).

System API / ABI Stability

* The API / ABI Compatibility Commitment defines stable, public, system interfaces for the full ten-year life cycle of Red Hat Enterprise Linux 6. During that time, applications will not be affected by security errata or service packs, and will not require re-certification. Backward compatibility for the core ABI is maintained across major releases, allowing applications to span subsequent releases.

Integrated Virtualization, Kernel-Based Virtualization

* The KVM hypervisor is fully integrated into the kernel, so all RHEL system improvements benefit the virtualized environment.
* The application environment is consistent for physical and virtual systems.
* Deployment flexibility, provided by the ability to easily move guests between hosts, allows administrators to consolidate resources onto fewer machines during quiet times, or free up hardware for maintenance downtime.

Leverages Kernel Features

* Hardware abstraction enables applications to move from physical to virtualized environments independently of the underlying hardware.
* Increased scalability of CPUs and memory provides more guests per server.
* Block storage benefits from selectable I/O schedulers and support for asynchronous I/O.
* Cgroups and related CPU, memory, and networking resource controls provide the ability to reduce resource contention and improve overall system performance.
* Reliability, Availability, and Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime.
* Multicast bridging includes the first release of IGMP snooping (in IPv4) to build intelligent packet routing and enhance network efficiency.
* CPU affinity assigns guests to specific CPUs.

Guest Acceleration

* CPU masking allows all guests to use the same type of CPU.
* SR-IOV virtualizes physical I/O card resources, primarily networking, allowing multiple guests to share a single physical resource.
* Message signaled interrupts deliver interrupts as specific signals, increasing the number of interrupts.
* Transparent hugepages provides significant performance improvements for guest memory allocation.
* Kernel Same Page (KSM) provides reuse of identical pages across virtual machines (known as deduplication in the storage context).
* The tickless kernel defines a stable time model for guests, avoiding clock drift.
* Advanced paravirtualization interfaces include non-traditional devices such as the clock (enabled by the tickless kernel), interrupt controller, spinlock subsystem, and vmchannel.


* In virtualized environments, sVirt (powered by SELinux) protects guests from one another

Microsoft Windows Support

* Windows WHQL-certified drivers enable virtualized Windows systems, and allow Microsoft customers to receive technical support for virtualized instances of Windows Server.

Installation, Updates, and Deployment

* Anaconda supports installation of a “minimal platform” as a specific server installation, or as a strategy for reducing the number of software packages to increase security.
* Red Hat Network (RHN) and Satellite continue to provide management, provisioning and monitoring for large deployments.
* Installation options have been reorganized into “workload profiles” so that each system installation will provide the right software for specific tasks.
* Dracut, a replacement for mkinitrd, minimizes the impact of underlying hardware changes, is more maintainable, and makes it easier to support third party drivers.
* The new yum history command provides information about yum transactions, and supports undo and redo of selected operations.
* Yum and RPM offer significantly improved performance.
* RPM signatures use the Secure Hash Algorithm (SHA256) for data verification and authentication, improving security.
* Storage devices can be designated for encryption at installation time, protecting user and system data. Key escrow allows recovery of lost keys.
* Standards Based Linux Instrumentation for Manageability (SBLIM) manages systems using Web-Based Enterprise Management (WBEM).
* ABRT enhanced error reporting speeds triage and resolution of software failures.
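The yum history workflow mentioned above, printed as a crib sheet (the transaction ID 42 is illustrative; run the real commands as root):

```shell
# Review and roll back yum transactions on RHEL 6.
yum_crib=$(cat <<'EOF'
yum history list          # recent transactions with their IDs
yum history info 42       # packages touched by transaction 42
yum history undo 42       # roll transaction 42 back
yum history redo 42       # repeat transaction 42
EOF
)
printf '%s\n' "$yum_crib"
```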

Routine Task Delegation

* PolicyKit allows administrators to provide users access to privileged operations, such as adding a printer or rebooting a desktop, without granting administrative privileges.
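On RHEL 6 such delegation is configured with .pkla files under /etc/polkit-1/localauthority/. The action ID below is purely illustrative; real IDs on a given system can be listed with pkaction:

```ini
# /etc/polkit-1/localauthority/50-local.d/10-printing.pkla (illustrative)
[Let the staff group manage printers]
Identity=unix-group:staff
Action=org.example.printing.*
ResultActive=yes
```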


* Improvements include better printing, printer discovery, and printer configuration services from cups and system-config-printer.
* SNMP-based monitoring of ink and toner supply levels and printer status provides easier monitoring to enable efficient inventory management of ink and toner cartridges.
* Automatic PPD configuration for PostScript printers, where PPD option values are queried from the printer, is available in the CUPS web interface.

Microsoft Interoperability

* Samba improvements include support for Windows 2008R2 trust relationships: Windows cross-forest, transitive trust, and one-way domain trust.
* Applications can use OpenChange to gain access to Microsoft Exchange servers using native protocols, allowing mail clients like Evolution to have tighter integration with Exchange servers.

RHEL 7 and systemd invasion into server space

RHEL 7 was released in June 2014. With it we see a hard push toward systemd exclusivity.  Runlevels are gone. The release of RHEL 7 with systemd as the only option for system and process management has reignited the old debate about whether Red Hat is trying to establish a Microsoft-style monopoly over enterprise Linux and move Linux closer to Windows: a closed but user-friendly system.

For server sysadmins, systemd is a massive, fundamental change to core Linux administration for no perceivable gain. So while there is a high level of support for systemd among Linux users who run Linux on their laptops and maybe as a home server, there is a strong backlash against systemd from Linux system administrators who are responsible for significant numbers of Linux servers in enterprise environments.

After all, runlevels were used in production environments, if only to run systems with or without X11.  Please read an interesting essay on systemd (ProSystemdAntiSystemd).

Often initiated by opponents, who lament the horrors of PulseAudio and point out their scorn for Lennart Poettering. This later became a common canard for proponents to dismiss criticism as Lennart-bashing. Futile to even discuss, but it’s a staple.

Lennart’s character is actually, at times, relevant. Trying to have a large discussion about systemd without ever invoking him is like discussing glibc in detail without ever mentioning Ulrich Drepper. Most people take it overboard, however.

A lot of systemd opponents will express their opinions regarding a supposed takeover of the Linux ecosystem by systemd, as its auxiliaries (all requiring governance by the systemd init) expose APIs, which are then used by various software in the desktop stack, creating dependency chains between it and systemd that the opponents deem unwarranted. They will also point out the udev debacle and occasionally quote Lennart. Opponents see this as anti-competitive behavior and liken it to “embrace, extend, extinguish”. They often exaggerate and go all out with their vitriol though, as they start to contemplate shadowy backroom conspiracies at Red Hat (admittedly it is pretty fun to pretend that anyone defending a given piece of software is actually a shill who secretly works for it, but I digress), leaving many of their concerns to be ignored and deemed ridiculous altogether.

... ... ...

In addition, the Linux community is known for reinventing the square wheel over and over again. Chaos is both Linux’s greatest strength and its greatest weakness. Remember HAL? Distro adoption is not an indicator of something being good, so much as something having sufficient mindshare.

... ... ...

The observation that sysvinit is dumb and heavily flawed, with its clunky inittab and runlevel abstractions, is absolutely nothing new. Richard Gooch wrote a paper back in 2002 entitled “Linux Boot Scripts”, which criticized both the SysV and BSD approaches, based on his earlier work on simpleinit(8). That said, his solution is still firmly rooted in the SysV and BSD philosophies, but he makes it more elegant by supplying primitives for modularity and expressing dependencies.

Even before that, DJB wrote the famous daemontools suite which has had many successors influenced by its approach, including s6, perp, runit and daemontools-encore. The former two are completely independent implementations, but based on similar principles, though with significant improvements. An article dated to 2007 entitled “Init Scripts Considered Harmful” encourages this approach and criticizes initscripts.

Around 2002, Richard Lightman wrote depinit(8), which introduced parallel service start, a dependency system, named service groups rather than runlevels (similar to systemd targets), its own unmount logic on shutdown, arbitrary pipelines between daemons for logging purposes, and more. It failed to gain traction and is now a historical relic.

Other systems like initng and eINIT came afterward, which were based on highly modular plugin-based architectures, implementing large parts of their logic as plugins, for a wide variety of actions that software like systemd implements as an inseparable part of its core. Initmacs, anyone?

Even Fefe, anti-bloat activist extraordinaire, wrote his own system called minit early on, which could handle dependencies and autorestart. As is typical of Fefe’s software, it is painful to read and makes you want to contemplate seppuku with a pizza cutter.

And that’s just Linux. Partial list, obviously.

At the end of the day, all comparing to sysvinit does is show that you’ve been living under a rock for years. What’s more, it is no secret to a lot of people that the way distros have been writing initscripts has been totally anathema to basic software development practices, like modularizing and reusing common functions, for years. Among other concerns such as inadequate use of already leaky abstractions like start-stop-daemon(8). Though sysvinit does encourage poor work like this to an extent, it’s distro maintainers who do share a deal of the blame for the mess. See the BSDs for a sane example of writing initscripts. OpenRC was directly inspired by the BSDs’ example. Hint: it’s in the name - “RC”.

The rather huge scope and opinionated nature of systemd leads to people yearning for the days of sysvinit. A lot of this is ignorance about good design principles, but a good part may also be motivated from an inability to properly convey desires of simple and transparent systems. In this way, proponents and opponents get caught in feedback loops of incessantly going nowhere with flame wars over one initd implementation (that happened to be dominant), completely ignoring all the previous research on improving init, as it all gets left to bite the dust. Even further, most people fail to differentiate init from rc scripts, and sort of hold sysvinit to be equivalent to the shoddy initscripts that distros have written, and all the hacks they bolted on top like LSB headers and startpar(2). This is a huge misunderstanding that leads to a lot of wasted energy.

Don’t talk about sysvinit. Talk about systemd on its own merits and the advantages or disadvantages of how it solves problems, potentially contrasting them to other init systems. But don’t immediately go “SysV initscripts were way better and more configurable, I don’t see what systemd helps solve beyond faster boot times.”, or from the other side “systemd is way better than sysvinit, look at how clean unit files are compared to this horribly written initscript I cherrypicked! Why wouldn’t you switch?”

... ... ...

Now that we have pointed out how most systemd debates play out in practice, and why it’s usually a colossal waste of time to partake in them, let’s do a crude overview of the personalities that make this clusterfuck possible.

The technically competent sides tend to largely fall in these two broad categories:

a) Proponents are usually part of the modern Desktop Linux bandwagon. They run contemporary mainstream distributions with the latest software, use and contribute to large desktop environment initiatives and related standards like the *kits. They’re not necessarily purely focused on the Linux desktop. They’ll often work on features ostensibly meant for enterprise server management, cloud computing, embedded systems and other needs, but the rhetoric of needing a better desktop and following the example set by Windows and OS X is largely pervasive amongst their ranks. They will decry what they perceive as “integration failures”, “fragmentation” and are generally hostile towards research projects and anything they see as “toy projects”. They are hackers, but their mindset is largely geared towards reducing interface complexity, instead of implementation complexity, and will frequently argue against the alleged pitfalls of too much configurability, while seeing computers as appliances instead of tools.

b) Opponents are a bit more varied in their backgrounds, but they typically hail from more niche distributions like Slackware, Gentoo, CRUX and others. They are largely uninterested in many of the Desktop Linux “advancements”, value configuration, minimalism and care about malleability more than user friendliness. They’re often familiar with many other Unix-like environments besides Linux, though they retain a fondness for the latter. They have their own pet projects and are likely to use, contribute to or at least follow a lot of small projects in the low-level system plumbing area. They can likely name at least a dozen alternatives to the GNU coreutils (I can name about 7, I think), generally favor traditional Unix principles and see computers as tools. These are the people more likely to be sympathetic to things like the suckless philosophy.

It should really come as no surprise that the former group dominates. They’re the ones that largely shape the end user experience. The latter are pretty apathetic or even critical of it, in contrast. Additionally, the former group simply has far more manpower in the right places. Red Hat’s employees alone dominate much of the Linux kernel, the GNU base system, GNOME, NetworkManager, many projects affiliated with standards (including Polkit) and more. There’s no way to compete with a vast group of paid individuals like those.


The “Year of the Linux Desktop” has become a meme at this point, one that is used most often sarcastically. Yet there are still a lot of people who deeply hold onto it and think that if only Linux had a good abstraction engine for package manager backends, those Windows users will be running Fedora in no time.

What we’re seeing is undoubtedly a cultural clash by two polar opposites that coexist in the Linux community. We can see it in action through the vitriol against Red Hat developers, and conversely the derision against Gentoo users on part of Lennart Poettering, Greg K-H and others. Though it appears in this case “Gentoo user” is meant as a metonym for Linux users whose needs fall outside the mainstream application set. Theo de Raadt infamously quipped that Linux is “for people who hate Microsoft”, but that quote is starting to appear outdated.

Many of the more technically competent people with views critical of systemd have been rather quiet in public, for some reason. Likely it’s a realization that the Linux desktop’s direction is inevitable, and thus trying to criticize it is a futile endeavor. There are people who still think GNOME abandoning Sawfish was a mistake, so yes.

The non-desktop people still have their own turf, but they feel threatened by systemd to one degree or another. Still, I personally do not see them dwindling down. What I believe will happen is that they will become even more segregated than they already are from mainstream Linux and that using their software will feel more otherworldly as time goes on.

There are many who are predicting a huge renaissance for BSD in the aftermath of systemd, but I’m skeptical of this. No doubt there will be increased interest, but as a whole it seems most of the anti-systemd crowd is still deeply invested in sticking to Linux.

Ultimately, the cruel irony is that in systemd’s attempt to supposedly unify the distributions, it has created a huge rift unlike any other and is exacerbating the long-present hostilities between desktop Linux and minimalist Linux sides at rates that are absolutely atypical. What will end up of systemd remains unknown. Given Linux’s tendency for chaos, it might end up the new HAL, though with a significantly more painful aftermath, or it might continue on its merry way and become a Linux standard set in stone, in which case the Linux community will see a sharp ideological divide. Or perhaps it won’t. Perhaps things will go on as usual, on an endless spiral of reinvention without climax. Perhaps we will be doomed to flame on systemd for all eternity. Perhaps we’ll eventually get sick of it and just part our own ways into different corners.

Either way, I’ve become less and less fond of politics for uselessd and see systemd debates as being metaphorically like car crashes. I likely won’t help but chime in at times, though I intend uselessd to branch off into its own direction with time.


RHEL 7 adopts systemd, a very controversial subsystem. systemd is a suite of system management daemons, libraries, and utilities designed for Linux and programmed exclusively for the Linux API. There are no more runlevels. For servers systemd makes little sense. Sysadmins now need to learn new systemd commands for starting and stopping various services. The ‘service’ command is still included for backwards compatibility, but it may go away in future releases. See CentOS 7 - RHEL 7 systemd commands (Linux Brigade)
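For sysadmins making the jump, the common translations look like this; printed here as a crib sheet, since the real commands need root on a systemd host:

```shell
# SysV-to-systemd command translations; this block only PRINTS the crib
# sheet rather than executing anything.
sd_crib=$(cat <<'EOF'
service httpd start    ->  systemctl start httpd.service
service httpd status   ->  systemctl status httpd.service
chkconfig httpd on     ->  systemctl enable httpd.service
chkconfig --list       ->  systemctl list-unit-files --type=service
telinit 3              ->  systemctl isolate multi-user.target
EOF
)
printf '%s\n' "$sd_crib"
```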

From Wikipedia (systemd)

In a 2012 interview, Slackware's founder Patrick Volkerding expressed the following reservations about the systemd architecture, which are fully applicable to the server environment:

Concerning systemd, I do like the idea of a faster boot time (obviously), but I also like controlling the startup of the system with shell scripts that are readable, and I'm guessing that's what most Slackware users prefer too. I don't spend all day rebooting my machine, and having looked at systemd config files it seems to me a very foreign way of controlling a system to me, and attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well.
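For contrast with the readable shell scripts Volkerding describes, a systemd unit is purely declarative; a minimal illustrative unit file (the daemon path and name are hypothetical):

```ini
# /etc/systemd/system/mydaemon.service -- illustrative unit file
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```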

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong."[42] The article also characterizes the architecture of systemd as more similar to that of Microsoft Windows software:[42]

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux – which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system – from authentication to mounting shares to network configuration to syslog to cron.


About ten years after Solaris 10 introduced zones, Linux at last got containers.

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat.

Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.


With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.


Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.


Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:

* Open VM Tools — bundled open source virtualization utilities.
* 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
* Fast communication mechanisms between VMware ESX and the virtual machine.


The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the “Snapper” section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
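The revert workflow described above, sketched as a crib sheet (volume-group and volume names are illustrative, and the snapshot must be large enough to absorb all changes made during the upgrade):

```shell
# LVM snapshot rollback cycle; this block only PRINTS the commands, since
# the real ones need root and an actual volume group.
lvm_crib=$(cat <<'EOF'
lvcreate --size 5G --snapshot --name rootsnap /dev/vg0/root  # capture state
# ... perform the in-place upgrade, test the result ...
lvconvert --merge /dev/vg0/rootsnap  # revert root to the snapshot state
EOF
)
printf '%s\n' "$lvm_crib"
```

The merge completes on the next activation if the origin volume is in use (e.g., for the root filesystem it finishes after a reboot).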


Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.


Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.



Old News ;-)

[Jun 09, 2017] Sneaky hackers use Intel management tools to bypass Windows firewall

Notable quotes:
"... the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. ..."
"... Using the AMT serial port, for example, is detectable. ..."
"... Do people really admin a machine through AMT through an external firewall? ..."
"... Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution. ..."
Jun 09, 2017 |
When you're a bad guy breaking into a network, the first problem you need to solve is, of course, getting into the remote system and running your malware on it. But once you're there, the next challenge is usually to make sure that your activity is as hard to detect as possible. Microsoft has detailed a neat technique used by a group in Southeast Asia that abuses legitimate management tools to evade firewalls and other endpoint-based network monitoring.

The group, which Microsoft has named PLATINUM, has developed a system for sending files -- such as new payloads to run and new versions of their malware -- to compromised machines. PLATINUM's technique leverages Intel's Active Management Technology (AMT) to do an end-run around the built-in Windows firewall. The AMT firmware runs at a low level, below the operating system, and it has access to not just the processor, but also the network interface.

The AMT needs this low-level access for some of the legitimate things it's used for. It can, for example, power cycle systems, and it can serve as an IP-based KVM (keyboard/video/mouse) solution, enabling a remote user to send mouse and keyboard input to a machine and see what's on its display. This, in turn, can be used for tasks such as remotely installing operating systems on bare machines. To do this, AMT not only needs to access the network interface, it also needs to simulate hardware, such as the mouse and keyboard, to provide input to the operating system.

But this low-level operation is what makes AMT attractive for hackers: the network traffic that AMT uses is handled entirely within AMT itself. That traffic never gets passed up to the operating system's own IP stack and, as such, is invisible to the operating system's own firewall or other network monitoring software. The PLATINUM software uses another piece of virtual hardware -- an AMT-provided virtual serial port -- to provide a link between the network itself and the malware application running on the infected PC.

Communication between machines uses serial-over-LAN traffic, which is handled by AMT in firmware. The malware connects to the virtual AMT serial port to send and receive data. Meanwhile, the operating system and its firewall are none the wiser. In this way, PLATINUM's malware can move files between machines on the network while being largely undetectable to those machines.

PLATINUM uses AMT's serial-over-LAN (SOL) to bypass the operating system's network stack and firewall.


AMT has been under scrutiny recently after the discovery of a long-standing remote authentication flaw that enabled attackers to use AMT features without needing to know the AMT password. This in turn could be used to enable features such as the remote KVM to control systems and run code on them.

However, that's not what PLATINUM is doing: the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. This isn't exploiting any flaw in AMT; the malware just uses the AMT as it's designed in order to do something undesirable.

Both the PLATINUM malware and the AMT security flaw require AMT to be enabled in the first place; if it's not turned on at all, there's no remote access. Microsoft's write-up of the malware expressed uncertainty about this part; it's possible that the PLATINUM malware itself enabled AMT -- if the malware has Administrator privileges, it can enable many AMT features from within Windows -- or that AMT was already enabled and the malware managed to steal the credentials.

While this novel use of AMT is useful for transferring files while evading firewalls, it's not undetectable. Using the AMT serial port, for example, is detectable. Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity.

potato44819 , Ars Legatus Legionis Jun 8, 2017 8:59 PM Popular

"Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity."

It's worth noting that this is NOT Windows Defender.

Windows Defender Advanced Threat Protection is an enterprise product.

aexcorp , Ars Scholae Palatinae Jun 8, 2017 9:04 PM Popular
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmins, but it's proved to be a massive PITA from the security perspective. Intel needs to really reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT -- if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had one machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10-year mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI). Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

bothered , Ars Scholae Palatinae Jun 8, 2017 9:16 PM
Always on and undetectable. What more can you ask for? I have to imagine that an IDS system at the egress point would help here.
faz , Ars Praefectus Jun 8, 2017 9:18 PM
Using SOL and AMT to bypass the OS sounds like it would work over IPMI SOL as well.

I only have one server that supports AMT, and I just double-checked: the AMT webui does not allow you to enable/disable SOL, at least on my version. But my IPMI servers do allow someone to enable SOL from the web interface.

xxx, Jun 8, 2017 9:24 PM
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat.

Do people really admin a machine through AMT through an external firewall?

zogus , Ars Tribunus Militum Jun 8, 2017 9:26 PM
fake-name wrote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?

bthylafh , Ars Tribunus Angusticlavius Jun 8, 2017 9:34 PM Popular
zogus wrote:
Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
tomca13 , Wise, Aged Ars Veteran Jun 8, 2017 9:53 PM
This PLATINUM group must be pissed about the INTEL-SA-00075 vulnerability being headline news: all those perfectly vulnerable systems having AMT disabled, limiting their hack.
Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:41 PM
Causality wrote:
Intel AMT is a fucking disaster from a security standpoint. It is utterly dependent on security through obscurity with its "secret" coding, and anybody should know that security through obscurity is no security at all.
Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution.

Hopefully, either Intel will start looking into improving this and/or MSFT will make enough noise that businesses might learn to do their updates and provisioning in a more secure manner.

Nah, that ain't happening. Who am I kidding?

Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:45 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The interconnect is via W*. We ran this dog into the ground last month. Other OSs (all as far as I know (okay, !MSDOS)) keep them separate. Lan0 and lan1 as it were. However it is possible to access the supposedly closed off Lan0/AMT via W*. Which is probably why this was caught in the first place.

Note that MSFT has stepped up to the plate here. This is much better than their traditional silence-until-forced approach, which is just the same security-through-plugging-your-fingers-in-your-ears that Intel is supporting.

rasheverak , Wise, Aged Ars Veteran Jun 8, 2017 11:05 PM
Hardly surprising: ... armful.pdf

This is why I adamantly refuse to use any processor with Intel management features on any of my personal systems.

michaelar , Smack-Fu Master, in training Jun 8, 2017 11:12 PM
Brilliant. Also, manifestly evil.

Is there a word for that? Perhaps "bastardly"?

JDinKC , Smack-Fu Master, in training Jun 8, 2017 11:23 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The catch would be any machine that leaves your network with AMT enabled. Say, perhaps, an AMT-managed laptop plugged into a hotel wired network. While still a smaller attack surface, any cabled network an AMT computer is plugged into, and not managed by you, would be a source of concern.
Anonymouspock , Wise, Aged Ars Veteran Jun 8, 2017 11:42 PM
Serial ports are great. They're so easy to drive that they work really early in the boot process. You can fix issues with machines that are otherwise impossible to debug.
sphigel , Ars Centurion Jun 9, 2017 12:57 AM
aexcorp wrote:
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmin, but it's proved to be a massive PITA from the security perspective. Intel needs to really reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT-if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had 1 machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10yrs mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI.) Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

I'm not even sure it's THAT convenient for sysadmins. I'm one of a couple hundred sysadmins at a large organization and none that I've talked with actually use Intel's AMT feature. We have an enterprise KVM (Raritan) that we use to access servers pre-OS boot, and if we have a desktop that we can't remote into after sending a WoL packet then it's time to just hunt down the desktop physically. If you're just pushing out a new image to a desktop you can do that remotely via SCCM with no local KVM access necessary. I'm sure there are some sysadmins that make use of AMT, but I wouldn't be surprised if the numbers were quite small.
gigaplex , Ars Scholae Palatinae Jun 9, 2017 3:53 AM
zogus wrote:
fake-name wrote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
We just got some new Dell workstations at work recently. They have serial ports. We avoid the consumer machines.

GekkePrutser , Ars Centurion Jun 9, 2017 4:18 AM
Physical serial ports (the blue ones) are fortunately a relic of a lost era and are nowadays quite rare to find on PCs.
Not that fortunate. Serial ports are still very useful for management tasks. They're simple and they work when everything else fails. The low speeds impose few restrictions on cables.

Sure, they don't have much security, but that is partly mitigated by their usually running over only a few metres of cable, so they're covered by the same physical security as the server itself. Making this into a LAN protocol without any additional security is where the problem was introduced. Wherever long-distance lines were involved (modems), security was added at the application level.

[Jun 01, 2017] CVE-2017-1000367 Bug in sudos get_process_ttyname. Most linux distributions are affected

Jun 01, 2017 |

There is a serious vulnerability in the sudo command that can grant root access to a local user with a shell account. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that Sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
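The parsing mistake is easy to illustrate in plain shell. The stat line below is a made-up example for a process whose command name contains a space; the field numbers follow the proc(5) layout, where field 7 is the tty device number:

```shell
# sudo located its controlling tty by splitting /proc/[pid]/stat on
# whitespace and taking field 7 (tty_nr). But field 2 is the command name
# in parentheses and may itself contain spaces, shifting every later field.
line='1234 (evil cmd) S 1 5678 5678 34816'

# Naive whitespace split, as vulnerable sudo effectively did: picks 5678,
# not the real tty_nr.
naive=$(echo "$line" | awk '{print $7}')

# Robust parse: strip everything through the closing ") " first, then count.
rest=${line##*) }
set -- $rest            # $1=state $2=ppid $3=pgrp $4=session $5=tty_nr
echo "naive=$naive correct=$5"
```

Because sudo trusted the naively parsed device number, an attacker could name their own process to steer sudo toward a device they controlled.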

... ... ...

A list of affected Linux distros:
  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

[May 19, 2017] Google Found Over 1,000 Bugs In 47 Open Source Projects

May 14, 2017 |

Posted by EditorDavid on Saturday May 13, 2017 @11:34AM

Orome1 writes: In the last five months, Google's OSS-Fuzz program has unearthed over 1,000 bugs in 47 open source software projects ...

So far, OSS-Fuzz has found a total of 264 potential security vulnerabilities: 7 in Wireshark, 33 in LibreOffice, 8 in SQLite 3, 17 in FFmpeg -- and the list goes on...

Google launched the program in December and wants more open source projects to participate, so they're offering cash rewards for including "fuzz" targets for testing in their software.

"Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration" -- or twice that amount, if the proceeds are donated to a charity.

[Mar 21, 2017] systemd-redux - blog dot lusis

Mar 21, 2017 |

I encourage you STRONGLY to read the systemd-devel mailing list for the kinds of issues you'll possibly have to deal with.


Nov 20th, 2014

I figured it was about time for a followup on my systemd post. I've been meaning to do it for a while but time hasn't allowed.

The end of Linux

Some people wrongly characterized this as some sort of hyperbole. It was not. Systemd IS changing what we know as Linux today. It remains to be seen if this is a good or bad thing but Linux is becoming something different than it was.

Linux is in for a rough few years

I do honestly believe this will end up being the start of a rocky period for Linux.

Additionally, while not systemd specific but legitimately all inter-related, kdbus is coming and it's already got its fair share of issues in its first implementation, including breaking userspace.

We also have distros like SLES adopting btrfs as the default filesystem.

All of these things combined mean that Linux is pushing the bleeding edge of a lot of unbaked technologies. Time will tell if this turns people off or not. I expect that enterprise shops will probably freeze systems at RHEL6 for a good while to come (and not just the standard "we're enterprise and we don't like to upgrade" time period).

Systemd isn't going away

Systemd is here to stay. The only way you will have a system without it is to roll your own. I don't expect many distros to choose to back out. My best hope is that they'll all freeze at the current version. Maybe a few things will get backported here and there for security fixes.

SystemD components are NOT optional

I know everyone likes to tout this but, no, the various systemd components, while not PID 1, are realistically not optional. Kdbus, a single parent hierarchy for namespaces (systemd is taking this one, of course), udev changes – the kernel and distros are changing and coalescing around whatever systemd ships. Most distros will probably use systemd-networkd, for instance. Look at what happened with Debian just today. The (albeit way late to the game) recommendation to support alternate init systems was rejected. I encourage you STRONGLY to read the systemd-devel mailing list for the kinds of issues you'll possibly have to deal with.


To be clear if you're going to stick with Linux, you will have to deal with systemd. It's up to you to decide if that's something you're comfortable with. Systemd is bringing some good things but, like other discussions I've been involved with, you're going to be stuck with all the other stuff that comes along with it whether you like it or not.

It's worth noting that FreeBSD just got a nice donation from the WhatsApp folks. It also ships with ZFS as part of the kernel and has jails, a much more baked technology and implementation than LXC. While you can't use docker now with jails, my understanding is that there is work being done to support NON-LXC operating-system-level virtualization (such as jails and Solaris zones).

Speaking of zones and Solaris, if that's an option for you it's probably the best of breed stack right now. Rich mature OS-level virtualization. SmartOS brings along KVM support for when you HAVE to run Linux but backed by Solaris tech under the hood. There's also OmniOS as a variant as well.

If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it's finally baked and production ready) can bring you an LXC based ecosystem. If they were to ever add actual virt support (i.e. KVM), then you could mix and match as needed. If you're working for a startup or a more flexible organization, you can go down this path. If you're working for a more traditional enterprise, your options are pretty limited. At least you'll have the RedHat support contract.

Posted by John E. Vincent Nov 20th, 2014

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 |

As my journey continues with the Linux and Unix shell, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
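The same steps can be exercised on a scratch directory, which is safe to run anywhere. The key detail is the sticky bit (the leading 1 in 1777), which stops users deleting each other's temp files:

```shell
# Recreate a tmp-style directory with the sticky bit set.
scratch=$(mktemp -d)
mkdir "$scratch/tmp"
chmod 1777 "$scratch/tmp"
chown root:root "$scratch/tmp" 2>/dev/null || true   # needs root; harmless otherwise
stat -c '%a %A' "$scratch/tmp"   # 1777 drwxrwxrwt -- the trailing t is the sticky bit
```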

[Jan 26, 2017] Penguins force-fed root: Cruel security flaw found in systemd v228
Some Linux distros will need to be updated following the discovery of an easily exploitable flaw in a core system management component.

The CVE-2016-10156 security hole in systemd v228 opens the door to privilege escalation attacks, creating a means for hackers to root systems locally if not across the internet. The vulnerability is fixed in systemd v229.

Essentially, it is possible to create root-owned setuid executable files that are world-readable and world-writable by setting all the mode bits in a call to touch(). The systemd changelog for the fix reads:

basic: fix touch() creating files with 07777 mode

mode_t is unsigned, so MODE_INVALID < 0 can never be true.

This fixes a possible [denial of service] where any user could fill /run by writing to a world-writable /run/systemd/show-status.

However, as pointed out by security researcher Sebastian Krahmer, the flaw is worse than a denial-of-service vulnerability – it can be exploited by a malicious program or logged-in user to gain administrator access: "Mode 07777 also contains the suid bit, so files created by touch() are world writable suids, root owned."
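Krahmer's point is easy to see with ordinary shell tools: mode 07777 sets the setuid (4), setgid (2), and sticky (1) bits on top of rwx for everyone. A safe demonstration on a scratch file (GNU stat assumed for the `-c` option):

```shell
# A file given mode 07777 becomes a world-writable setuid file.
f=$(mktemp)
chmod 07777 "$f"
stat -c '%a %A' "$f"   # 7777 -rwsrwsrwt -- the s bits are setuid/setgid
rm -f "$f"
```

The C-side bug compounds this: `mode_t` is unsigned, so the `MODE_INVALID < 0` sentinel check in systemd could never be true, and the bogus 07777 mode was passed straight through to the file.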

The security bug was quietly fixed in January 2016, back when it was thought to pose only a system-crashing risk. This week the programming blunder was upgraded following a reevaluation of its severity; it now weighs in at a CVSS score of 7.2, towards the top end of the 0-10 scale.

It's a local root exploit, so it requires access to the system in question, but it pretty much boils down to "create a powerful file in a certain way, and gain root on the server." It's trivial to pull off.

"Newer" versions of systemd deployed by Fedora or Ubuntu have been secured, but Debian systems are still running an older version and therefore need updating.

systemd is a suite of building blocks for Linux systems that provides system and service management technology. Security specialists view it with suspicion, and complaints about function creep are not uncommon. ®

[Dec 26, 2016] Devuans Systemd-Free Linux Hits Beta 2

Notable quotes:
"... Devuan came about after some users felt [Debian] had become too desktop-friendly . The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum. ..."
"... now features an "init freedom" logo with the tagline, "watching your first step. ..."
Dec 26, 2016 |

Posted by EditorDavid on Saturday December 03, 2016 @11:38PM from the forking-the-road dept.

Long-time Slashdot reader Billly Gates writes,

"For all the systemd haters who want a modern distro feel free to rejoice. The Debian fork called Devuan is almost done, completing a daunting task of stripping systemd dependencies from Debian."

From The Register:

Devuan came about after some users felt [Debian] had become too desktop-friendly . The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum.

Supporters of init freedom also dispute assertions that systemd is in all ways superior to sysvinit init, arguing that Debian ignored viable alternatives like sinit , openrc , runit , s6 and shepherd . All are therefore included in Devuan. The project's site now features an "init freedom" logo with the tagline, "watching your first step."

Their home page now links to the download site for Devuan Jessie 1.0 Beta2, promising an OS that "avoids entanglement".

[Dec 26, 2016] The Linux Foundation Offers 50% Discounts On Training

Dec 26, 2016 |
Posted by EditorDavid on Sunday December 18, 2016 @05:44PM from the tell-em-Linus-sent-you dept.

An anonymous reader writes: The non-profit association that sponsors Linus Torvalds' work on Linux also offers self-paced online training and certification programs. And now through December 22, they're available at a 50% discount. "Make learning Linux and other open source technologies your New Year's Resolution this holiday season," reads a special page on their training site. There's training in Linux security, networking, and system administration, as well as software-defined networking and OpenStack administration. (Plus a course called "Fundamentals Of Professional Open Source Management," and two certification programs that can make you a Linux Foundation-certified engineer or system administrator.)
And if you order right now, they'll also give you a free mug with a penguin on it.

[Nov 06, 2016] ASCII files can be recovered

view /tmp/somefile and see what you want to copy over from /dev/ to the original location.

If you are on an ext2 mount, maybe you can try the recover command.

My two-penny advice for the future: always read the man page for a command and its arguments before you actually run it.

[Jun 06, 2016] 20 Linux Accounts to Follow on Twitter by Marin Todorov

Published: November 30, 2015

System administrators often need to find new information in their field of work. Reading the latest blog posts from hundreds of different sources is a task that not everyone has the time for. If you are such a busy user, or just like to find new information about Linux, you can use a social media website like Twitter.
20 Linux Twitter Accounts to Follow

Twitter is a website where you can follow users that share information you are interested in. You can use the power of this website to get news, new ideas for solving problems, commands, links to interesting articles, new release updates and much more. The possibilities are many, but Twitter is only as good as the people you follow on it.

If you don't follow anyone, then your Twitter wall will remain empty. But if you follow the right people, you will be presented with tons of interesting information shared by people you followed.

The fact that you came across TecMint definitely means you are a Linux user thirsty to learn new stuff. We have decided to make your Twitter wall a bit more interesting, by gathering 20 Linux accounts to follow on Twitter.
1. Linus Torvalds – @Linus__Torvalds

Of course, the number one spot is saved for the person who created Linux – Linus Torvalds. His account is not updated that frequently, but it is still good to have. The account was created in November 2012 and has over 22k followers.
2. FSF – @fsf

The Free Software Foundation has been fighting for essential rights for free software since 1985. The FSF joined Twitter in May 2008 and has over 10.6K followers. You can find different information here about new releases of free software as well as other information relevant to free software.
3. The Linux Foundation – @linuxfoundation

Next in our list is the Linux Foundation. On that page you will find many interesting news items, the latest updates around Linux and some useful tutorials. The account joined Twitter in May 2008 and has been active ever since. It has over 198K followers.
4. Linux Today – @linuxtoday

LinuxToday is an account that shares different news and tutorials gathered from different sources around the internet. The account joined Twitter in June 2009 and has over 67K followers.
5. Distro Watch – @DistroWatch

DistroWatch will keep you updated about the latest Linux distributions available. If you are an OS maniac like us, this account is a must-follow. The account joined Twitter in February 2009 and has over 23K followers.
6. Linux – @Linux

The Linux page likes to follow up on the latest Linux OS releases. You can follow this page if you want to know when a new Linux release is available. The account was created in September 2007 and has over 188K followers.
7. LinuxDotCom – @LinuxDotCom

LinuxDotCom is a page that covers information about Linux and everything around it – from Linux operating systems to devices in our lives that use Linux. The account joined Twitter in January 2009 and has nearly 80K followers.
8. Linux For You – @LinuxForYou

LinuxForYou is Asia's first English magazine for free and open source software. It joined Twitter in February 2009 and has nearly 21K followers.
9. Linux Journal – @linuxjournal

Another good Twitter account for keeping up with the latest Linux news is LinuxJournal's. Their articles are always informative, and if you like to get notified about new information about Linux, I recommend signing up for their newsletter. The account joined in October 2007 and has over 35K followers.

10. Linux Pro – @linux_pro

The Linux_pro page is the page of the famous LinuxPro magazine. Besides Linux news, you will learn about the latest products, tools and strategies for administrators, programming in the Linux environment and more. The account joined Twitter in September 2008 and has over 35K followers.

11. Tux Radar – @tuxradar

This is another popular account that provides interesting, yet different, Linux news. TuxRadar uses different sources, so you will definitely want to have them in your wall stream. The account joined Twitter in February 2009 and has 11K followers.

12. CommandLineFu – @commandlinefu

If you like the Linux command line and want to find more tricks and tips, then commandlinefu is the perfect user to follow. The account posts frequent updates with different useful commands. It joined Twitter in January 2009 and has nearly 18K followers.
13. Command Line Magic – @climagic

CommandLineMagic shows command lines for advanced Linux users as well as some funny nerdy jokes. It's another fun account to follow and learn from. It joined Twitter in November 2009 and has 108K followers.

14. SadServer – @sadserver

SadServer is one of those accounts that just makes you laugh and want to check over and over again. Fun facts and stories are shared often, so you won't be disappointed. The account joined Twitter in February 2010 and has over 54K followers.
15. Nixcraft – @nixcraft

If you enjoy Linux and DevOps work then NixCraft is the one you should follow. The account is very popular among Linux users and has over 48K followers. It joined Twitter in November 2008.

16. Unixmen – @unixmen

Unixmen has a blog full of useful tutorials about Linux administration. It's another popular account among Linux users. The account has nearly 10K followers and joined Twitter in April 2009.

17. HowToForge – @howtoforgecom

HowToForge provides user-friendly tutorials and howtos about almost every topic related to Linux. They have over 8K followers on Twitter.
18. Webupd8 – @WebUpd8

Webupd8 describe themselves as Ubuntu blog, but they cover much more than that. On their website or twitter account you can find information about newly released Linux operating systems, open source software, howto's as well as customization tips. The account has nearly 30K followers and joined Twitter on March 2009.
Follow @WebUpd8
19. The Geek Stuff – @thegeekstuff

TheGeekStuff is another useful account where you can find Linux tutorials on different topics, covering both software and hardware. The account has over 3.5K followers and joined Twitter in December 2008.

20. Tecmint – @tecmint

Last, but definitely not least, let's not forget TecMint, the very website that you're reading right now. We like to share all types of different stuff about Linux – from tutorials to funny things in the terminal and jokes about Linux. Follow Tecmint on Twitter to make sure that you never miss another article from us.
Follow @tecmint

[May 31, 2016] RHEL 6.8 is out

Notable quotes:
"... For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB. ..."
"... enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. ..."

Red Hat Enterprise Linux 6.8 adds improved system archiving, new visibility into storage performance and an updated open standard for secure virtual private networks

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. With nearly six years of field-proven success, Red Hat Enterprise Linux 6 has set the stage for the innovations of today, as Red Hat Enterprise Linux continues to power not only existing workloads, but also the technologies of the future, from cloud-native applications to Linux containers.

With enhancements to security features and management, Red Hat Enterprise Linux 6.8 remains a solid, proven base for modern enterprise IT operations.

Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat

Red Hat Enterprise Linux 6.8 includes a number of new and updated features to help organizations bolster platform security and enhance systems management/monitoring capabilities, including:

Enhanced Security, Authentication, and Interoperability

To enhance security for virtual private networks (VPNs), Red Hat Enterprise Linux 6.8 includes libreswan, an implementation of one of the most widely supported and standardized VPN protocols, which replaces openswan as the Red Hat Enterprise Linux 6 VPN endpoint solution, giving Red Hat Enterprise Linux 6 customers access to recent advances in VPN security.

Customers running the latest version of Red Hat Enterprise Linux 6 can see increased client-side performance and simpler management through the addition of new capabilities to the Identity Management client code (SSSD). Cached authentication lookup on the client reduces the unnecessary exchange of user credentials with Active Directory servers. Support for adcli simplifies the management of Red Hat Enterprise Linux 6 systems interoperating with an Active Directory domain. In addition, SSSD now supports user authentication via smart cards, for both system login and related functions such as sudo.
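The adcli workflow mentioned above can be sketched as follows; the domain name and account are placeholders, not values from the article:

```shell
# Inspect the Active Directory domain before joining
# (ad.example.com is a placeholder)
adcli info ad.example.com

# Join the machine to the domain; -U names an AD account
# that is allowed to create computer objects
adcli join ad.example.com -U Administrator
```

After the join, SSSD can authenticate domain users against the joined realm.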

Enhanced Management and Monitoring
The inclusion of Relax-and-Recover, a system archiving tool, provides a more streamlined system administration experience, enabling systems administrators to create local backups in an ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations. An enhanced yum tool simplifies the addition of packages, adding intelligence to the process of locating required packages to add/enable new platform features.
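As an illustration of the Relax-and-Recover workflow described above, a minimal configuration might look like this; the NFS server name and export path are placeholders:

```shell
# /etc/rear/local.conf -- minimal sketch
# OUTPUT=ISO writes the rescue system as a bootable ISO image;
# BACKUP=NETFS with an nfs:// URL stores the backup on a remote share
cat > /etc/rear/local.conf <<'EOF'
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backupserver.example.com/export/rear
EOF

# Create the rescue ISO and back up the local filesystems in one step
rear mkbackup
```

The resulting ISO can then be archived centrally and replicated remotely, as the release notes suggest.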

Red Hat Enterprise Linux 6.8 provides increased visibility into storage usage and performance through dmstats, a program that displays and manages I/O statistics for user-defined regions of devices using the device-mapper driver.
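A minimal dmstats session, assuming a device-mapper volume with a placeholder name, might look like:

```shell
# Define a statistics region covering the whole device
# (/dev/mapper/vg0-data is a placeholder name)
dmstats create /dev/mapper/vg0-data

# Display I/O counters for all defined regions
dmstats report
```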

Additional Enhancements and Updates

For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB.

Additionally, the general availability of Red Hat Enterprise Linux 6.8 includes the launch of an updated Red Hat Enterprise Linux 6.8 base image which enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host.

Today's release also marks the transition of Red Hat Enterprise Linux 6 into Production Phase 2, a phase which prioritizes ongoing stability and security features for critical platform deployments. More information on the Red Hat Enterprise Linux lifecycle can be found at .

[May 31, 2016] Red Hat Enterprise Linux 6.8 Deprecates Btrfs

Notable quotes:
"... Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. ..."
Buried within the release notes for today's Red Hat Enterprise Linux 6.8 release are a few interesting items.

First, RHEL has deprecated support for the Btrfs file-system.

Btrfs file system
Development of B-tree file system (Btrfs) has been discontinued, and Btrfs is considered deprecated. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures.

Huh? Since when was Btrfs development discontinued? At least upstream it's still ongoing, and Facebook (as well as other companies) continues pouring resources into stabilizing and advancing the capabilities of Btrfs, which is widely sought as a Linux alternative to ZFS. There are no signs of things stalling on the Btrfs mailing list. Especially as Red Hat hasn't been packaging ZFS for RHEL officially (though packages can be obtained elsewhere as an alternative), this move doesn't make a lot of sense. While Btrfs development has dragged on for a while and, outside of openSUSE/SUSE, it hasn't been deployed by default by other tier-one Linux distributions, it's a bit odd that Red Hat seems to be throwing in the towel on Btrfs.

Red Hat's definition of "deprecated" in their RHEL context means (as shown on the same page), "Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments."

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Apr 25, 2016] Red Hat Enterprise Linux 7.2 Beta Now Available

Red Hat

With a nod to the importance of continuously maintaining stable and secure enterprise environments, the beta release of Red Hat Enterprise Linux 7.2 includes several new and enhanced security capabilities. The introduction of a new SCAP module in the installer (anaconda) allows enterprise customers to apply SCAP-based security profiles during installation. Another new capability allows for the binding of data to local networks. This allows enterprises to encrypt systems at scale with centralized management. In addition, Red Hat Enterprise Linux 7.2 beta introduces support for DNSSEC for DNS zones managed by Identity Management (IdM) as well as federated identities, a mechanism that allows users to access resources using a single set of digital credentials.

Given the complexity and necessary due diligence required to efficiently and effectively manage the modern datacenter at scale, the beta release of Red Hat Enterprise Linux 7.2 includes new and improved tools to facilitate a more streamlined system administration experience. These new features and enhancements include:

As always, leveraging work in the Fedora community, Red Hat continuously monitors upstream developments and systematically incorporates select enterprise-ready features and technologies into Red Hat Enterprise Linux. The beta release of Red Hat Enterprise Linux 7.2 accomplishes this through the rebasing of the GNOME 3 desktop, the inclusion of GNOME Software, and the addition of new tuned profiles (inclusive of a profile for Red Hat Enterprise Linux for Real Time).

For more information on Red Hat Enterprise Linux 7.2, you can read the full release notes or, as an existing Red Hat customer, take Red Hat Enterprise Linux 7.2 beta for a test drive yourself via the Red Hat Customer Portal.

[Apr 25, 2016] What's Coming in Red Hat Enterprise Linux 7.2

DNSSEC for DNS zones managed by Red Hat Identity Management

RHEL 7.2 will also bring live kernel patching to RHEL, which Dumas sees as a critical security measure. Using elements of the KPATCH technology that recently landed in the upstream Linux 4.0 kernel, RHEL users will be able to patch their running kernels dynamically.
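A hedged sketch of the kpatch workflow described above; the patch-module path is a placeholder:

```shell
# Load a previously built live-patch module into the running kernel
kpatch load /var/lib/kpatch/patches/fix-example.ko

# List the patch modules that are currently loaded or installed
kpatch list
```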

...Dumas is particularly excited about the performance gains that RHEL 7.2 introduces. In particular, she noted that core networking packet performance is being accelerated by 35 percent for RHEL 7.2.

...With RHEL 7.2, Red Hat is refreshing the desktop with GNOME 3.14, which includes the GNOME software package manager and improvements to multi-monitor deployment capabilities.

[Apr 25, 2016] Red Hat Enterprise Linux 7 What's New

Jun 10, 2014 | InformationWeek

Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with embedded support for Docker containers and support for direct use of Microsoft's Active Directory. The update uses XFS as its new file system.

"[Use of XFS] opens the door to a new class of data warehouse and big data analytics applications," said Mark Coggin, senior director of product marketing, in an interview before the announcement.

The high-capacity, 64-bit XFS file system, now the default file system in Red Hat Enterprise Linux, originated in the Silicon Graphics IRIX operating system. It can scale up to 500 TB of addressable storage. In comparison, previous file systems, such as ext4, typically supported 16 TB.
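A minimal XFS sketch; the device name and mount point are placeholders:

```shell
# Create an XFS filesystem on a block device
mkfs.xfs /dev/sdb1

# Mount it and check the reported size
mount /dev/sdb1 /data
df -h /data

# XFS can be grown online (but never shrunk); after extending
# the underlying device, grow the filesystem to fill it:
xfs_growfs /data
```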

RHEL 7's support for Linux containers amounts to a Docker container format integrated into the operating system so that users can begin building a "layered" application. Applications in the container can be moved around and will be optimized to run on Red Hat Atomic servers, which are hosts that use the specialized Atomic version of Enterprise Linux to manage containers.

[Want to learn more about Red Hat's commitment to Linux containers? Read Red Hat Containers OS-Centric: No Accident.]

RHEL 7 will also work with Active Directory, using cross-realm trust. Since both Linux and Windows are frequently found in the same enterprise data centers, cross-realm trust lets Linux use Active Directory as either a secondary check on a primary identity management system, or simply as a trusted source to identify users, Coggin says.
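On RHEL 7 the cross-realm trust setup Coggin describes is typically driven through realmd; a sketch with a placeholder domain:

```shell
# Discover what the domain offers
realm discover ad.example.com

# Join the domain; this creates the computer account and configures sssd
realm join --user=Administrator ad.example.com

# Verify that AD users now resolve
# (the exact name format depends on the sssd configuration)
id someuser@ad.example.com
```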

RHEL 7 also has more built-in instrumentation and tuning for optimized performance based on a selected system profile. "If you're running a compute-bound workload, you can select a profile that's better geared to it," Coggin notes.

[Dec 12, 2015] How to install and configure ZFS on Linux using Debian Jessie 8.1
ZFS is a combined filesystem and logical volume manager. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the filesystem and volume management concept, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.

ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).

When we talk about the ZFS filesystem, we can highlight the following key concepts:

For a full overview and description of all available features, see this detailed Wikipedia article.

In this tutorial, I will guide you step by step through the installation of the ZFS filesystem on Debian 8.1 (Jessie). I will show you how to create and configure pools using raid0 (stripe), raid1 (mirror) and RAID-Z (raid with parity), and explain how to configure a file system with ZFS.

Based on the information from the website, ZFS is only supported on the AMD64 and Intel 64-bit architectures (amd64). Let's get started with the setup. ... ... ...
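The pool layouts named above can be sketched with zpool as follows; the pool names and device names are placeholders:

```shell
# raid0 (stripe): data spread across two disks, no redundancy
zpool create tank0 /dev/sdb /dev/sdc

# raid1 (mirror): each block stored on both disks
zpool create tank1 mirror /dev/sdd /dev/sde

# RAID-Z (single parity): survives the loss of one disk
zpool create tank2 raidz /dev/sdf /dev/sdg /dev/sdh

# Create a filesystem inside a pool, with compression enabled
zfs create -o compression=lz4 tank1/data

# Check pool health and layout
zpool status
```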

The ZFS file system is a revolutionary new file system that fundamentally changes the way file systems are administered on Unix-like operating systems. ZFS provides features and benefits not found in any other file system available today. ZFS is robust, scalable, and easy to administer.

[Dec 05, 2015] How to forcefully unmount a Linux disk partition

January 27, 2006 |

... ... ...

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or an open file), but the most important one is preventing data loss. Try the following command to find out what processes have activities on the device/partition. If your device name is /dev/sda1, enter the following command as the root user:

# lsof | grep '/dev/sda1'
vi 4453       vivek    3u      BLK        8,1                 8167 /dev/sda1

Above output tells that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command:

# umount /dev/sda1
How do I list the users on the file-system /nas01/?

Type the following command:

# fuser -u /nas01/
# fuser -u /var/www/
Sample outputs:
/var/www:             3781rc(root)  3782rc(nginx)  3783rc(nginx)  3784rc(nginx)  3785rc(nginx)  3786rc(nginx)  3787rc(nginx)  3788rc(nginx)  3789rc(nginx)  3790rc(nginx)  3791rc(nginx)  3792rc(nginx)  3793rc(nginx)  3794rc(nginx)  3795rc(nginx)  3796rc(nginx)  3797rc(nginx)  3798rc(nginx)  3800rc(nginx)  3801rc(nginx)  3802rc(nginx)  3803rc(nginx)  3804rc(nginx)  3805rc(nginx)  3807rc(nginx)  3808rc(nginx)  3809rc(nginx)  3810rc(nginx)  3811rc(nginx)  3812rc(nginx)  3813rc(nginx)  3815rc(nginx)  3816rc(nginx)  3817rc(nginx)

The following discussion shows how to unmount a device or partition forcefully using the umount or fuser Linux commands.

Linux fuser command to forcefully unmount a disk partition

Suppose you have /dev/sda1 mounted on the /mnt directory; then you can use the fuser command as follows:

WARNING! These examples may result in data loss if not executed properly (see "Understanding the device is busy error" for more information).

Type the command to unmount /mnt forcefully:

# fuser -km /mnt

Linux umount command to unmount a disk partition.

You can also try the umount command with the -l (lazy) option on a Linux based system:

# umount -l /mnt

If you would like to unmount an NFS mount point, then try the following command:

# umount -f /mnt

Please note that using these commands or options can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.
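Putting the commands above together, a typical forced-unmount sequence looks like this; the /mnt mount point is an example:

```shell
# Identify what is holding the filesystem
lsof +D /mnt       # list open files under /mnt
fuser -vm /mnt     # list processes using the mount, with owners

# Kill the processes using the mount, then retry the unmount
fuser -km /mnt
umount /mnt

# As a last resort, detach lazily: the mount point is freed
# immediately and cleanup happens once the device is no longer busy
umount -l /mnt
```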

See also:

[Nov 08, 2015] Getting Service or Asset Tags on Linux by Nick Geoghegan

Jul 3, 2015

At one point in time, you will need to find out your service or asset tag. Maybe you need to find out when your machine is out of vendor warranty, or you are actually finding out what is in the machine. Popping the service tag into the Dell support site will tell you this… But what if you don't have it written down?

The Dell “tools”, it should be pointed out, require you to restart the machine with a CD in the drive or to use a COM file. There is no way in hell that I'm digging out a DOS disk to try and run a COM file to get the service tag. The CD, as it turns out, is just a rebadged Ubuntu CD… Success!

So I mounted the Dell ISO, which was rather fiddly, and took a look around. A program called serviceTag was the first thing I noticed. Was this a specific Dell tool? What would happen if I ran it?

Being paranoid, I decided to see what was linked to this binary.

$ ldd serviceTag
	=>  (0xf773f000)
	libsmbios.so.2 => not found
	=> /usr/lib32/ (0xf7635000)
	=> /lib32/ (0xf760b000)
	=> /usr/lib32/ (0xf75ed000)
	=> /lib32/ (0xf7473000)
	/lib/ (0xf7740000)

Hmmmm. Never heard of libsmbios before. A quick Google vision quest led me here.

The SMBIOS Specification addresses how motherboard and system vendors present management information about their products in a standard format by extending the BIOS interface on x86 architecture systems.


Debian (and RHEL) have these tools in their standard repos! For Debian, it’s just a matter of

apt-get install libsmbios-bin

You can then simply run:

[root@calculon /home/nick]$ /usr/sbin/getSystemId
Libsmbios version:      2.2.28
Product Name:           Gazelle Professional
Vendor:                 System76, Inc.
BIOS Version:           4.6.5
System ID:              XXXXXXXXXXXXXX
Service Tag:            XXXXXXXXXXXXXX
Express Service Code:   0
Asset Tag:              XXXXXXXXXXXXXX
Property Ownership Tag:
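If libsmbios is not available, the dmidecode utility (also in the standard repos) reads the same SMBIOS tables; run it as root:

```shell
# Query individual SMBIOS strings
dmidecode -s system-manufacturer
dmidecode -s system-product-name
dmidecode -s system-serial-number   # the service tag on Dell hardware
```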

[Sep 02, 2015] Is systemd as bad as boycott systemd is trying to make it

October 26, 2014 |
Win2NIX, on October 26, 2014 at 11:04 pm

I migrated fully to NIX after 10-15 years as a Win admin and got tired of having control "hidden". Worked with ESX and used the console and loved the freedom. The trend I am noticing with the systemd debate is VERY similar to what has happened with M$. Keep It Simple Stupid is something Nix should be doing; having things modular and not depending on something else makes life easier. If one thing breaks, it's not taking everything else with it. Further, if this is all done in binary and not easily read, THIS IS NOT GOOD. I hated M$ making me download other crap to diagnose their BSODs; if you like having your system flipping out and not saving your data, then I guess systemd would be for you, given its direction.

This is also akin to making your browser part of your OS and having it intertwine with it. (Bad Voodoo) I'm using Mint and looking for a possible way to decouple from systemd. I just don't see this as a good thing and it reminds me too much of M$ tactics. Now is the time to deviate from systemd and keep a more modular approach, then watch and see if systemd starts to be an issue, which at this point, if it keeps taking over more management, is only a matter of time. I also wonder if the M$ embracing open source has anything to do with this; it certainly smells of large corporation thinking, or lack thereof.

I like improving things, but this does not appear to be an improvement, rather a bomb waiting to go off. On these points this is a bad idea: binary, with no easy way to gain insight and correct issues, and adding multiple processes to control with more being added. I was able to patch heartbleed within 15 minutes after finding out about it. In the M$/corp world, good luck; hope it's this month.

Ummm..., on September 4, 2014 at 7:55 pm

I will admit right off, that I am not a linux designer or maintainer. I got started with linux about 20 years ago. People state that the old init system was fragile. Maybe it was, again…not building linux from scratch I wouldn't know. I don't recall ever having any issues though.

Whether right or wrong, from my (very) limited understanding, the systemd process is driven by binary files, which are not really meant to be edited or looked at by hand. So if something catastrophic happens (which granted hasn't happened yet)…how would I fix it or know what to fix? Go to my distro's forum and hope someone can fix it/release a patch soon?

Anyway, if one of the earlier commenters is correct, and there is no specific plan for systemd (which frankly is a scary thought)…how much more of the system will it continue to take over? And at what point does too much become too much?

I'm all for progress, but I think the Keep It Simple Stupid approach, which may not be "exciting" stuff to develop, is still the best approach.

"why did the people responsible for the development of the major Linux distributions accept it as a replacement for old init system?"

I can't speak for the initial decision, but at this point, I would suspect that inertia is keeping it in place. I highly doubt that any of the major Linux desktop systems that most current users depend on would even function without systemd…at least not without a lot of major programming changes to make it happen. If someone did take that route, then all of those custom changes would then need to be maintained.

(Simplistically thinking) Why can't things be more pluggable/portable? Distro X uses a systemd plugin for their init, and distro Y chooses to build against something else? Granted systemd is most likely now too big for that, but one can dream I suppose.

AC, on September 4, 2014 at 2:16 pm

Yes. Systemd is a trojan.

xx, on September 4, 2014 at 1:17 pm

Systemd is a perfect system for rootkits, and NSA backdoors.
Once it is complete, it will hide necessary processes even from root, it will filter unnecessary events from the log, and it will do much, much more.

But it seems that only a minority care about that.

Dimitri Minaev, on September 4, 2014 at 11:59 am

IMHO, the downside of systemd as a project is that its parts lack a defined stable interface. This means that you cannot replace one part with a different one, creating your own stack of tools. When you configure your desktop system, you can combine any display manager with any window manager with any panel or file manager. Can you replace networkd with another tool transparently? If yes, can you be sure that your tool will keep working after the next systemd upgrade?

T Davis, on September 4, 2014 at 11:20 am

The reason Debian (and therefore Ubuntu) adopted SystemD is that the appointed Debian tech team is now divided equally between Ubuntu devs (who were Debian devs before Ubuntu came along) and Redhat employees. Look at the voting emails and 3 months of arguments.

The biggest issue is really not one of SystemD infiltration, but more of Redhat taking over every aspect of the Linux development process. Time and again, I have seen Canonical steer in their own direction, not because they want to go rogue, but because the upstreams for the main projects (Gnome, Wayland, Pulse Audio, now SystemD and possibly OpenStack, and even the kernel to some extent) are almost exclusively owned by Redhat, and only wish to make forward progress at their own pace (Wayland has had almost twice the development time and resources as Mir, for example).

The REAL issue here is; who has the Linux community in their best interests? Do some real investigation and write a story on that.

Ericg, on September 3, 2014 at 7:12 pm
Except you, the author, have fallen into the same trap everyone else does… Confusing Systemd (the project) with systemd (the binary). Systemd, the project, is like Apache: it's an umbrella term for a lot of other things, including systemd, logind, networkd, and other utilities.

Systemd, the binary, handles service management in pid 1, which includes socket and explicit activation. Other tasks it passes off to non-pid-1 processes. For example: session management isn't handled through systemd pid 1, it's handled through logind.

Readahead is handled through a service file for systemd, just like other daemons.

syslog functionality isn't handled in pid 1; it's handled in journald, which is a separate process.

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which run as separate processes.

Network configuration got added in networkd. What is networkd? The most minimal network userland you can have. It's for people who don't want to write config files by hand, but for whom NetworkManager is way overkill. Is it pid 1? Nope.

Yes, systemd started off as "just an init replacement." It grew into more things. But don't assume that "systemd" (the binary) is the same as "systemd" (the project). Most things that are added to systemd in recent times AREN'T pid 1 like boycottsystemd claims, they're just small utilities that got added under the systemd umbrella project.

Peter, on September 4, 2014 at 4:42 am
Ericg, that's the problem:
systemd has become a whole integrated stack.
init.d, while not easy to use for starters, was at least within the idea of simple units which can be mixed and matched to get the results the user wants – note: user wants, not developer wants

a Linux user, on September 4, 2014 at 5:23 am

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which are done as separate processes.

Missing the point.

People talk as though prior to systemd such tasks were beyond Linux, didn't work, always crashed, were a nightmare to use or manage and that is not the case.

The only difference I see between my Linux machine now and my Linux machine of a few years ago is that it now boots faster. And that's it. And whilst that's nice, it's so meaningless as to be painful to behold the enthusiasm that some display, as though all they did all day long was sit and reboot their machines with a stop watch in one hand.

The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late.
And the other problem is that it takes a special kind of arrogance to sneer at 20+ years of development by some seriously smart people and claim that you, as a mere child, can do better. I do wonder how far systemd would have got had it not had Red Hat's weight behind it. I do realise that improvement sometimes means kicking out old 'tried and trusted' methods. But it's the way it's happening with systemd that rings alarm bells – too many sneering, nasty bullies trashing anyone who disagrees (just like anyone who thinks Corporations should pay proper taxes is sneered at, or anyone who thinks Putin is not as bad as he is made out to be gets sneered at – sneering is the new way of silencing genuine debate, so when I come across it in Linuxland, alarm bells begin to ring).

Linux is about granular power and control, not convenience.

J. Orejarena, on September 4, 2014 at 9:38 am
"The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late."

Just read (without the blank space before ".net") to find the ulterior motives.

[May 07, 2015] Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal

* The life cycle dates are subject to adjustment.

In Red Hat Enterprise Linux 4, EUS was available for the following minor releases:

In Red Hat Enterprise Linux 5, EUS is available for the following minor releases:

In Red Hat Enterprise Linux 6, EUS is available for all minor releases released during the Production 1 Phase, but not for the minor release marking transition to Production 2 or any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 6 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 6, EUS is available for the following minor releases:

Future Red Hat Enterprise Linux 6 releases for which EUS is available will be added to the above list upon their release.

In Red Hat Enterprise Linux 7, EUS will be available for all minor releases during the Production 1 Phase, but not for 7.0 or the minor release marking the transition to Production 2, or for any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 7 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 7, EUS is available for the following releases:

Future Red Hat Enterprise Linux 7 releases for which EUS is available will be added to the above list upon their release.

Please see this Knowledgebase Article for more details on EUS.

[Jun 27, 2014] What's new in Red Hat Enterprise Linux 7

Red Hat


...Red Hat Enterprise Linux 7 delivers dramatic improvements in reliability, performance, and scalability. A wealth of new features provides the architect, system administrator, and developer with the resources necessary to innovate and manage more efficiently.


Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat. Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
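The automatic NUMA affinity mechanism described above replaces the kind of manual pinning that numactl provides; a sketch with a placeholder workload command:

```shell
# Show the NUMA topology of the machine
numactl --hardware

# Manually pin a process to node 0's CPUs and memory
numactl --cpunodebind=0 --membind=0 ./my-workload

# Per-node memory allocation statistics
numastat
```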
Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:
• Open VM Tools — bundled open source virtualization utilities.
• 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
• Fast communication mechanisms between VMware ESX and the virtual machine.
The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the “Snapper” section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.

Red Hat Enterprise Linux 7 – Setting World Records At Launch

June 10, 2014

Today’s announcement of general availability of Red Hat Enterprise Linux 7 marks a significant milestone for Red Hat. The culmination of a multi-year effort by Red Hat’s engineering team and our partners, the latest major release of our flagship platform redefines the enterprise operating system, and is designed to power the spectrum of enterprise IT: applications running on physical servers, containerized applications, and also cloud services.

Since its introduction more than a decade ago, Red Hat Enterprise Linux has become the world’s leading enterprise Linux platform, setting industry standards for performance along the way, with Red Hat Enterprise Linux 7 continuing this trend. On its first day of general availability, Red Hat Enterprise Linux 7 already claims multiple world record-breaking benchmark results running on HP ProLiant servers, including:

SPECjbb2013 Multi-JVM Benchmark
• One processor world record for both max-jOPS (16,252) and critical-jOPS (4,721) metrics
• Two processor world record for both max-jOPS (119,517) and critical-jOPS (36,411) metrics
• Four processor world record for both max-jOPS (202,763) and critical-jOPS (65,950) metrics

The SPECjbb2013 benchmark is an industry-standard measurement of Java-based application performance developed by the Standard Performance Evaluation Corporation (SPEC). Application performance remains an important attribute for many customers, and this set of results demonstrates Red Hat Enterprise Linux’s continued ability to deliver world-class performance, alongside support from our ecosystem of partners and OEMs. With these impressive results to its name already, we like to think that this is only the tip of the iceberg for Red Hat Enterprise Linux 7’s achievements, especially since the platform is designed to power a broad spectrum of enterprise IT workloads.

SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 10, 2014. See for more information.

For further details on SPECjbb2013 benchmark results achieved on HP ProLiant XL220a Gen8 v2 (1P), HP ProLiant DL580 Gen8 (2P), and HP ProLiant DL580 Gen8 (4P) servers, see

[Jun 27, 2014] Red Hat Enterprise Linux 7 in evaluation for Common Criteria certification

June 19, 2014

Security is a crucial component of the technology Red Hat provides for its customers and partners, especially those who operate in sensitive environments, including the military.

[Jun 27, 2014] Oracle Announces OpenStack Support for Oracle Linux and Oracle VM

A technology preview of an OpenStack distribution that allows Oracle Linux and Oracle VM to work with the open source cloud software is now available. Users can install this OpenStack technology preview in their test environments with the latest version of Oracle Linux and the beta release of Oracle VM 3.3.

Read the Press Release
Read More from Oracle Senior Vice President of Linux and Virtualization Wim Coekaerts
Read More from Oracle Product Management Director Ronen Kofman

Oracle Linux Free as in Speech AND Free as in Beer by Monica Kumar

Jan 08, 2014 | Oracle's Linux Blog

One of the biggest benefits of Oracle Linux is that binaries, patches, errata, and source are always free. Even if you don’t have a support subscription, you can download and run exactly the same enterprise-grade distribution that is deployed in production by thousands of customers around the world. You can receive binaries and errata reliably and on schedule, and take advantage of the thousands of hours Oracle spends testing Oracle Linux every day. And, of course, Oracle Linux is completely compatible with Red Hat Enterprise Linux, so switching to Oracle Linux is easy.

CentOS is another Linux distribution that offers free binaries with Red Hat compatibility. Traditionally, CentOS has been used for Linux systems which do not require support in order to reduce or avoid expensive Red Hat Enterprise Linux subscription costs. Recently, Red Hat announced it was “joining forces” with the CentOS project, hiring many of the key CentOS developers, and “building a new CentOS.” This is a curious development given that the primary factors that have made CentOS popular are that it is free and Red Hat compatible.

It would be natural for existing CentOS users to wonder what Red Hat actually has in mind for the “new CentOS” when the FAQ accompanying the announcement states that Red Hat does not recommend CentOS for production deployment, is not recommending mixed CentOS and Red Hat Enterprise Linux deployments, will not support JBoss and other products on CentOS, and is not including CentOS in Red Hat’s developer offerings designed to create “applications for deployment into production environments.”

If Red Hat truly wished to satisfy the key requirements of most CentOS users, they would take a much simpler step: they would make Red Hat Enterprise Linux binaries, patches, and errata available for free download – just like Oracle already does.

Fortunately, no matter what future CentOS faces in Red Hat’s hands, Oracle Linux offers all users a single distribution for development, testing, and deployment, for free or with a paid support subscription. Oracle does not require customers to buy a subscription for every server running Oracle Linux (or any server running Oracle Linux). If a customer wants to pay for support for production systems only, that’s the customer's choice. The Oracle model is simple, economical, and well suited to environments with rapidly changing needs.

Oracle is focused on providing what we have since day one – a fast, reliable Linux distribution that is completely compatible with Red Hat Enterprise Linux, coupled with enterprise class support, indemnity, and flexible support policies. If you are a CentOS user, or a Red Hat user, why not download and try Oracle Linux today? You have nothing to lose – after all, it's free.

Al Gillen, program vice president, System Software, IDC
"CentOS is one of the major non-commercial distributions in the industry, and a key adjacent project for many Red Hat Enterprise Linux customers. This relationship helps strengthen the CentOS community, and will ensure that CentOS benefits directly from the community-centric development approach that Red Hat both understands and heavily supports. Given the growing opportunities for Linux in the market today in areas such as OpenStack, cloud and big data, a stronger CentOS technology backed by the CentOS community—including Red Hat—is a positive development that helps the overall industry."

Stephen O'Grady, principal analyst, RedMonk
"Though it will doubtless come as a surprise, this move by Red Hat represents the logical embrace of an adjacent ecosystem. Bringing the CentOS and Red Hat communities closer together should be a win for both parties."

Additional Resources

Connect with Red Hat

Red Hat + CentOS — Red Hat Open Source Community

Red Hat + CentOS

Red Hat and the CentOS Project are building a new CentOS, capable of driving forward development and adoption of next-generation open source projects.

Red Hat will contribute its resources and expertise in building thriving open source communities to help establish more open project governance, broaden opportunities for participation, and provide new ways for CentOS users and contributors to collaborate on next-generation technologies such as cloud, virtualization, and Software-Defined Networking (SDN).

With Red Hat’s contributions and investment, the CentOS Project will be better able to serve the needs of open source community members who require different or faster-moving components to be integrated with CentOS, expanding on existing efforts to collaborate with open source projects such as OpenStack, Gluster, OpenShift Origin, and oVirt.

Red Hat has worked with the CentOS Project to establish a merit-based open governance model for the CentOS Project, allowing for greater contribution and participation through increased transparency and access.


Today, the CentOS Project produces CentOS, a popular community Linux distribution built from much of the Red Hat Enterprise Linux codebase and other sources. Over the coming year, the CentOS Project will expand its mission to establish CentOS as a leading community platform for emerging open source technologies coming from other projects such as OpenStack.

How is CentOS different from Red Hat Enterprise Linux?

CentOS is a community project that is developed, maintained, and supported by and for its users and contributors. Red Hat Enterprise Linux is a subscription product that is developed, maintained, and supported by Red Hat for its subscribers.

While CentOS is derived from the Red Hat Enterprise Linux codebase, CentOS and Red Hat Enterprise Linux are distinguished by divergent build environments, QA processes, and, in some editions, different kernels and other open source components. For this reason, the CentOS binaries are not the same as the Red Hat Enterprise Linux binaries.

The two also have very different focuses. While CentOS delivers a distribution with strong community support, Red Hat Enterprise Linux provides a stable enterprise platform with a focus on security, reliability, and performance as well as hardware, software, and government certifications for production deployments. Red Hat also delivers training, and an entire support organization ready to fix problems and deliver future flexibility by getting features worked into new versions.

Once in use, the operating systems often diverge further, as users selectively install patches to address bugs and security vulnerabilities to maintain their respective installs. In addition, the CentOS Project maintains code repositories of software that are not part of the Red Hat Enterprise Linux codebase. This includes feature changes selected by the CentOS Project. These are available as extra/additional packages and environments for CentOS users.

[Oct 26, 2013] RHEL handling of DST change

Most server hardware clocks use UTC. UTC stands for Coordinated Universal Time and is closely related to Greenwich Mean Time (GMT). Other time zones are determined by adding to or subtracting from UTC. A server typically displays local time, which is subject to DST correction twice a year.

Wikipedia defines DST as follows:

Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn.

A DST adjustment is only required in countries that observe DST, such as the USA. Please see the Wikipedia article on daylight saving time for the full list.

Linux will change to and from DST when the HWCLOCK setting in /etc/sysconfig/clock is set to `-u`, i.e. when the hardware clock is set to UTC (which is closely related to GMT), regardless of whether Linux was running when DST began or ended.

When the HWCLOCK setting is set to `--localtime`, Linux will not adjust the time, operating under the assumption that your system may be a dual-boot system and that the other OS takes care of the DST switch. If that is not the case, the DST change needs to be made manually.


EST is defined as being GMT -5 all year round. US/Eastern, on the other hand, means GMT-5 or GMT-4 depending on whether Daylight Savings Time (DST) is in effect or not.
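The difference is easy to demonstrate with GNU date and the TZ variable (a quick sketch; it assumes the tzdata package and GNU coreutils are installed):

```shell
# Noon UTC on a July day: the fixed EST zone stays at GMT-5,
# while US/Eastern observes DST and shifts to GMT-4 (EDT).
TZ=EST date -d '2014-07-01 12:00 UTC' '+%H:%M %Z'         # 07:00 EST
TZ=US/Eastern date -d '2014-07-01 12:00 UTC' '+%H:%M %Z'  # 08:00 EDT
```

Repeating the same commands with a January date shows both zones at GMT-5.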

The tzdata package contains data files with rules for various timezones around the world. When this package is updated, it picks up the latest timezone rule changes along with all previous timezone fixes.

[Feb 28, 2012] Red Hat vs. Oracle Linux Support 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.

[Feb 07, 2012] Virtualization With Xen On CentOS 6.2 (x86_64)

Linux Howtos

This tutorial provides step-by-step instructions on how to install Xen (version 4.1.2) on a CentOS 6.2 (x86_64) system.

Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and what is even more important, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.

[Jan 11, 2012] Red Hat Enterprise Linux 6.2 Announcement

They continue to push KVM, which is seldom used in enterprise environments. The most important addition is Linux containers.
Dec 06, 2011 [rhelv6-announce]

Hardware support

Linux Containers




Error detection and reporting


The X server has been re-based in this release. Updating the X server will increase system stability through the isolation of the system display drivers and will provide a better base for new features. It also brings overall improved support for newer optional workstation hardware, multiple displays, and new input devices.

[Jul 31, 2011] Scientific Linux pushes RHEL clones forward by Sean Michael Kerner

July 29, 2011 | InternetNews.
From the 'Clone Wars' files:

"Scientific Linux 6.1 is now available providing users with a stable reliable Free (as in Beer) version of Red Hat Enterprise Linux 6.1.

Red Hat released RHEL 6.1 in May, providing improved driver support and hardware enablement and oh yeah security fixes too.

Scientific Linux is a joint effort by Fermilab and CERN and is targeted at the scientific community, but it's a solid RHEL version in its own right. It's also one that could now be attracting some new users, thanks to delays at the 'other' popular RHEL clone -- CentOS.

The CentOS project just released CentOS 6 and is many months behind Scientific Linux and even further behind RHEL. That's a problem for some and could also represent a real security risk for most.

With the more rapid release cycle of Scientific Linux I will not be surprised if some disgruntled CentOS users make the switch and/or if new users just start off with Scientific Linux first.

While Scientific Linux is faster than CentOS at replicating RHEL 6.1, it isn't the fastest clone.

Oracle Linux 6.1 came out in June, barely a month after Red Hat's release.

It's somewhat ironic that Oracle is now the fastest clone tracking RHEL, since Red Hat has made it harder to clone with the way they package releases. As it turns out, it's not slowing Oracle down at all - though it might be impacting the community releases.

[May 31, 2011] RHEL Tuning and Optimization for Oracle V11

The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4 which is suitable for a wide variety of applications and provides a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum latency per request and maintains a good disk throughput which is best for disk-intensive database applications.

Hence, the Deadline scheduler is recommended for database systems. Also, at the time of this writing there is a bug in the CFQ scheduler which affects heavy I/O, see Metalink Bug:5041764. Even though this bug report talks about OCFS2 testing, this bug can also happen during heavy IO access to raw or block devices and as a consequence could evict RAC nodes.

To switch to the Deadline scheduler, the boot parameter elevator=deadline must be passed to the kernel that is being used.

Edit the /etc/grub.conf file and add the parameter to the kernel line that is being used, in this example 2.6.18-8.el5:

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
    initrd /initrd-2.6.18-8.el5.img

This entry tells the 2.6.18-8.el5 kernel to use the Deadline scheduler. Make sure to reboot the system to activate the new scheduler.
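The scheduler can also be inspected and switched per device at runtime through sysfs, without a reboot. This is a sketch only: the device name sda is an assumption, root is needed to actually switch, and the change is lost at the next reboot (which is why the elevator= boot parameter is the permanent method):

```shell
# Inspect the active I/O scheduler for one disk and switch it at runtime.
dev=sda                                   # assumption: adjust to your disk
sched="/sys/block/$dev/queue/scheduler"
if [ -r "$sched" ]; then
    cat "$sched"                          # active scheduler is shown in [brackets]
    echo deadline > "$sched" 2>/dev/null || echo "not switched (need root?)"
fi
```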

Changing Network Adapter Settings

To check the speed and settings of network adapters, use the ethtool command, which now works for most network interface cards. To check the adapter settings of eth0 run:

# ethtool eth0

To force a speed change to 1000Mbps, full duplex mode, run:

# ethtool -s eth0 speed 1000 duplex full autoneg off

To make a speed change permanent for eth0, set or add the ETHTOOL_OPTS environment variable in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
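In context, the interface configuration file might look like the following minimal sketch; the DEVICE, BOOTPROTO, and ONBOOT lines are illustrative assumptions, and only the ETHTOOL_OPTS line comes from the text above:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (fragment)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
```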

This environment variable is sourced in by the network scripts each time the network service is started.

Changing Network Kernel Settings

Oracle now uses User Datagram Protocol (UDP) as the default protocol on Linux for interprocess communication, such as cache fusion buffer transfers between the instances. However, starting with Oracle 10g, network settings should be adjusted for standalone databases as well.

Oracle recommends the default and maximum send buffer size (SO_SNDBUF socket option) and receive buffer size (SO_RCVBUF socket option) to be set to 256 KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. The receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. This means that datagrams will be discarded if they do not fit in the socket receive buffer, which could cause the sender to overwhelm the receiver.

The default and maximum window size can be changed in the proc file system without reboot:

The default setting in bytes of the socket receive buffer:

# sysctl -w net.core.rmem_default=262144

The default setting in bytes of the socket send buffer:

# sysctl -w net.core.wmem_default=262144

The maximum socket receive buffer size which may be set by using the SO_RCVBUF socket option:

# sysctl -w net.core.rmem_max=262144

The maximum socket send buffer size which may be set by using the SO_SNDBUF socket option:

# sysctl -w net.core.wmem_max=262144

To make the change permanent, add the following lines to the /etc/sysctl.conf file, which is used during the boot process:

net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:

Changing these settings may be highly dependent on your system, network, and other applications. For suggestions, see Metalink Note:249213.1 and Note:265194.1.

On Red Hat Enterprise Linux systems the default range of IP port numbers that are allowed for TCP and UDP traffic on the server is too low for 9i and 10g systems. Oracle recommends the following port range:

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

net.ipv4.ip_local_port_range=1024 65000

The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
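To confirm which range is actually in effect (for example after applying the change with sysctl -p), the kernel can be queried directly:

```shell
# Show the ephemeral port range currently in effect (two numbers: low high)
cat /proc/sys/net/ipv4/ip_local_port_range
```

A stock install typically shows a narrower default range; after the change above it should print 1024 65000.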

10.3. Flow Control for e1000 Network Interface Cards

The e1000 network interface card family does not have flow control enabled in the 2.6 kernel on Red Hat Enterprise Linux 4 and 5. If you have heavy traffic, then the RAC interconnects may lose blocks; see Metalink Bug:5058952. For more information on flow control, see the Wikipedia article on flow control.

To enable receive flow control for e1000 network interface cards, add the following line to the /etc/modprobe.conf file:

options e1000 FlowControl=1

The e1000 module needs to be reloaded for the change to take effect. Once the module is loaded with flow control, you should see e1000 flow control messages in /var/log/messages.

Verifying Asynchronous I/O Usage

To verify whether $ORACLE_HOME/bin/oracle was linked with asynchronous I/O, you can use the Linux commands ldd and nm.

In the following example, $ORACLE_HOME/bin/oracle was relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
        libaio.so.1 => /usr/lib/libaio.so.1 (0x0093d000)
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
        w io_getevents@@LIBAIO_0.1


In the following example, $ORACLE_HOME/bin/oracle has NOT been relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
        w io_getevents


If $ORACLE_HOME/bin/oracle is relinked with asynchronous I/O, it does not necessarily mean that Oracle is really using it. You also have to ensure that Oracle is configured to use asynchronous I/O calls; see Enabling Asynchronous I/O Support.

To verify whether Oracle is making asynchronous I/O calls, you can take a look at the /proc/slabinfo file, assuming there are no other applications performing asynchronous I/O calls on the system. This file shows kernel slab cache information in real time.
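On systems where inspecting /proc/slabinfo is inconvenient, the kernel's global AIO counters offer a related quick check (a sketch; it assumes a kernel with AIO support, which is standard):

```shell
# Kernel-wide AIO bookkeeping: aio-nr is the number of AIO requests
# currently allocated; aio-max-nr is the system-wide ceiling.
cat /proc/sys/fs/aio-nr
cat /proc/sys/fs/aio-max-nr
```

aio-nr stays at 0 until some application allocates AIO contexts; the slabinfo examples below show the same distinction at the slab-cache level.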

On a Red Hat Enterprise Linux 3 system where Oracle does not make asynchronous I/O calls, the output looks like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 0 0 128 0 0 1 : 1008 252
kiocb 0 0 128 0 0 1 : 1008 252

Once Oracle makes asynchronous I/O calls, the output on a Red Hat Enterprise Linux 3 system will look like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 690 690 128 23 23 1 : 1008 252
kiocb 58446 65160 128 1971 2172 1 : 1008 252

Red Hat Enterprise Linux 5.7 Released in Beta

Storage Drivers

4.2. Network Drivers

[May 21, 2011] 6.1 Technical Notes


[May 21, 2011] Red Hat Delivers Red Hat Enterprise Linux 6.1

RHEL 6.0 was pretty raw; hopefully they fixed the most glaring flaws.
May 19, 2011 | Red Hat

Red Hat, Inc. (NYSE: RHT) today announced the general availability of Red Hat Enterprise Linux 6.1, the first update to the platform since the delivery of Red Hat Enterprise Linux 6 in November 2010.
... ... ... ...

Red Hat Enterprise Linux 6.1 is already established as a performance leader serving both as a virtual machine guest and hypervisor host in SPECvirt benchmarks. Red Hat and HP recently announced that the combination of Red Hat Enterprise Linux with KVM running on a HP ProLiant BL620c G7 20-core Blade server delivered a record-setting SPECvirt_sc2010 benchmark result. Red Hat and IBM also recently announced that the companies submitted a benchmark to SPEC in which a combination of Red Hat Enterprise Linux, Red Hat Enterprise Virtualization and IBM systems delivered 45% better consolidation capability than competitors in performance tests conducted by Red Hat and IBM. See for details.

“Building on our decade-long partnership to optimize Red Hat Enterprise Linux for IBM platforms, our companies have collaborated closely on the development of Red Hat Enterprise Linux 6.1,” said Jean Staten Healy, director, Cross-IBM Linux and Open Virtualization. “Red Hat Enterprise Linux 6.1 combined with IBM hardware capabilities offers our customers expanded flexibility, performance and scalability across their bare metal, virtualized and cloud environments. Our collaboration continues to drive innovation and leading results in the industry.”

In addition to performance improvements, Red Hat Enterprise Linux 6.1 also provides numerous technology updates, including:

[May 19, 2011] CentOS 6? by David Sumsky

Oracle Linux might be an alternative...

dsumsky lines

I'm a big fan of CentOS project. I use it in production and I recommend it to the others as an enterprise ready Linux distro. I have to admit that I was quite disappointed by the behaviour of project developers who weren't able to tell the community the reasons why the upcoming releases were and are so overdue. I was used to downloading CentOS images one or two months after the current RHEL release was announced. The situation has changed with RHEL 5.6 which is available since January, 2011 but the corresponding CentOS was released not before April, 2011. It took about 3 months to release it instead of one or two as usual. By the way, the main news in RHEL 5.6 are:

More details on RHEL 5.6 are officially available here.

A similar or perhaps worse situation arose around the release date of CentOS 6. As you know, RHEL 6 has been available since November, 2010. I considered CentOS 6 almost dead after I read about transitions to Scientific Linux or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule people around CentOS seem to be working hard again and CentOS 6 should be available at the end of May.

I hope the project will continue, as I don't know of a better alternative to RHEL (a RHEL clone) than CentOS. The question is how this whole, IMO unnecessary, situation will influence the reputation of the project.

[Nov 14, 2010] Red Hat releases RHEL 6

"Red Hat on Wednesday released version 6 of its Red Hat Enterprise Linux (RHEL) distribution. 'RHEL 6 is the culmination of 10 years of learning and partnering,' said Paul Cormier, Red Hat's president of products and technologies, in a webcast announcing the launch. Cormier positioned the OS both as a foundation for cloud deployments and a potential replacement for Windows Server. 'We want to drive Linux deeper into every single IT organization. It is a great product to erode the Microsoft Server ecosystem,' he said. Overall, RHEL 6 has more than 2,000 packages, and an 85 percent increase in the amount of code from the previous version, said Jim Totton, vice president of Red Hat's platform business unit. The company has added 1,800 features to the OS and resolved more than 14,000 bug issues."

5.6 Release Notes

Fourth Extended Filesystem (ext4) Support

The fourth extended filesystem (ext4) is now a fully supported feature in Red Hat Enterprise Linux 5.6. ext4 is based on the third extended filesystem (ext3) and features a number of improvements, including: support for larger file size and offset, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling.

To complement the addition of ext4 as a fully supported filesystem in Red Hat Enterprise Linux 5.6, the e4fsprogs package has been updated to the latest upstream version. e4fsprogs contains utilities to create, modify, verify, and correct the ext4 filesystem.

Logical Volume Manager (LVM)

Volume management creates a layer of abstraction over physical storage by creating logical storage volumes. This provides greater flexibility than using physical storage directly. Red Hat Enterprise Linux 5.6 manages logical volumes using the Logical Volume Manager (LVM).

Further reading: the Logical Volume Manager Administration document describes the LVM logical volume manager, including information on running LVM in a clustered environment.

[Apr 20, 2009] Sun goes to Oracle for $7.4B

Oracle+Sun has the power to seriously harm IBM. Solaris still has the highest market share among proprietary Unixes. And AIX is only third after HP-UX. Wonder if Solaris will become Oracle's main development platform again. Oracle is a top contributor to Linux and that might help to bridge the gap in shell and packaging. Telecommunications and database administrators always preferred Solaris over Linux.
Yahoo! Finance

Oracle Corp. snapped up computer server and software maker Sun Microsystems Inc. for $7.4 billion Monday, trumping rival IBM Corp.'s attempt to buy one of Silicon Valley's best known -- and most troubled -- companies.

... ... ...

Jonathan Schwartz, Sun's CEO, predicted the combination will create a "systems and software powerhouse" that "redefines the industry, redrawing the boundaries that have frustrated the industry's ability to solve." Among other things, he predicted Oracle will be able to offer its customers simpler computing solutions at less expensive prices by drawing upon Sun's technology.

... ... ...

Yet Oracle says it can run Sun more efficiently. It expects the purchase to add at least 15 cents per share to its adjusted earnings in the first year after the deal closes. The company estimated Santa Clara, Calif.-based Sun will contribute more than $1.5 billion to Oracle's adjusted profit in the first year and more than $2 billion in the second year.

If Oracle can hit those targets, Sun would yield more profit than the combined contributions of three other major acquisitions -- PeopleSoft Inc., Siebel Systems Inc. and BEA Systems -- that cost Oracle a total of more than $25 billion.

A deal with Oracle might not be plagued by the same antitrust issues that could have loomed over IBM and Sun, since there is significantly less overlap between the two companies. Still, Oracle could be able to use Sun's products to enhance its own software.

Oracle's main business is database software. Sun's Solaris operating system is a leading platform for that software. The company also makes "middleware," which allows business computing applications to work together. Oracle's middleware is built on Sun's Java language and software.

Calling Java the "single most important software asset we have ever acquired," Ellison predicted it would eventually help make Oracle's middleware products generate as much revenue as its database line does.

Sun's takeover is a reminder that a few missteps and bad timing can cause a star to come crashing down.

Sun was founded in 1982 by men who would become legendary Silicon Valley figures: Andy Bechtolsheim, a graduate student whose computer "workstation" for the Stanford University Network (SUN) led to the company's first product; Bill Joy, whose work formed the basis for Sun's computer operating system; and Stanford MBAs Vinod Khosla and Scott McNealy.

Sun was a pioneer in the concept of networked computing, the idea that computers could do more when lots of them were linked together. Sun's computers took off at universities and in the government, and became part of the backbone of the early Internet. Then the 1990s boom made Sun a star. It claimed to put "the dot in dot-com," considered buying a struggling Apple Computer Inc. and saw its market value peak around $200 billion.

[Apr 17, 2009] Adobe Reader 9 released - Linux and Solaris x86

Tabbed viewing was added
Ashutosh Sharma

Adobe Reader 9.1 for Linux and Solaris x86 has been released today. Solaris x86 support was one of the most requested features by users. As per the Reader team's announcement, this release includes the following major features:

- Support for Tabbed Viewing (preview)
- Super fast launch, and better performance than previous releases
- Integration with
- IPv6 support
- Enhanced support for PDF portfolios (preview)

The complete list is available here.

Adobe Reader 9.1 is now available for download and works on OpenSolaris, Solaris 10 and most modern Linux distributions such as Ubuntu 8.04, PCLinuxOS, Mandriva 2009, SLED 10, Mint Linux 6 and Fedora 10.

See also Sneak Preview of the Tabbed Viewing interface in Adobe Reader 9.x (on Ubuntu)

[Feb 22, 2009] 10 shortcuts to master bash - Program - Linux - Builder AU By Guest Contributor, TechRepublic | 2007/06/25 18:30:02

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands by pressing Ctrl-R and then typing the first few letters of the command; bash will scan the command history for matching commands and display them on the console. Press Ctrl-R repeatedly to cycle through the entire list of matches.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    This creates an alias that automatically includes the -l option. Once the alias has been created, typing ls at the bash prompt will invoke it and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.
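    Aliases defined at the prompt vanish when the session ends. To make one permanent, append it to your shell startup file; a minimal sketch, assuming the usual ~/.bashrc location (the alias name "ll" is just an example):

    ```shell
    # Append the alias definition to ~/.bashrc so every new shell picks it up,
    # then re-read the file so the current shell gets it too.
    echo "alias ll='ls -l'" >> ~/.bashrc
    . ~/.bashrc
    alias ll    # prints: alias ll='ls -l'
    ```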

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. Bash will scan the current directory for matches (and, for the first word on the line, the directories in your search path for matching command names). If a single match is found, bash completes the name for you; if several names match, it completes as far as it can, and pressing Tab again lists all the possibilities.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A shortcut moves the cursor to the beginning of the command line, while Ctrl-E moves it to the end. Ctrl-W deletes the word immediately before the cursor, and Ctrl-K deletes everything from the cursor to the end of the line. Ctrl-Y pastes back (yanks) the most recently deleted text.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    This causes bash to print a notification on john's console every time a new message is appended to john's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it searches when looking for a command. Bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
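    The same $(( )) arithmetic expansion also works with variables and the usual integer operators; a few quick examples (note that bash arithmetic is integer-only, so division truncates):

    ```shell
    echo $((16/2))        # prints 8
    echo $((7 % 3))       # modulus: prints 1
    x=5
    echo $((x * x + 1))   # variables need no $ inside (( )): prints 26
    echo $((7 / 2))       # integer division truncates: prints 3
    ```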
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

[Feb 22, 2009] Installation Guide for RHEL 5

2. Steps to Get You Started
2.1. Upgrade or Install?
2.2. Is Your Hardware Compatible?
2.3. Do You Have Enough Disk Space?
2.4. Can You Install Using the CD-ROM or DVD?
2.4.1. Alternative Boot Methods
2.4.2. Making an Installation Boot CD-ROM
2.5. Preparing for a Network Installation
2.5.1. Preparing for FTP and HTTP installation
2.5.2. Preparing for an NFS install
2.6. Preparing for a Hard Drive Installation
3. System Specifications List
4. Installing on Intel® and AMD Systems
4.1. The Graphical Installation Program User Interface
4.1.1. A Note about Virtual Consoles
4.2. The Text Mode Installation Program User Interface
4.2.1. Using the Keyboard to Navigate
4.3. Starting the Installation Program
4.3.1. Booting the Installation Program on x86, AMD64, and Intel® 64 Systems
4.3.2. Booting the Installation Program on Itanium Systems
4.3.3. Additional Boot Options
4.4. Selecting an Installation Method
4.5. Installing from DVD/CD-ROM
4.5.1. What If the IDE CD-ROM Was Not Found?
4.6. Installing from a Hard Drive
4.7. Performing a Network Installation
4.8. Installing via NFS
4.9. Installing via FTP
4.10. Installing via HTTP
4.11. Welcome to Red Hat Enterprise Linux
4.12. Language Selection
4.13. Keyboard Configuration
4.14. Enter the Installation Number
4.15. Disk Partitioning Setup
4.16. Advanced Storage Options
4.17. Create Default Layout
4.18. Partitioning Your System
4.18.1. Graphical Display of Hard Drive(s)
4.18.2. Disk Druid's Buttons
4.18.3. Partition Fields
4.18.4. Recommended Partitioning Scheme
4.18.5. Adding Partitions
4.18.6. Editing Partitions
4.18.7. Deleting a Partition
4.19. x86, AMD64, and Intel® 64 Boot Loader Configuration
4.19.1. Advanced Boot Loader Configuration
4.19.2. Rescue Mode
4.19.3. Alternative Boot Loaders
4.19.4. SMP Motherboards and GRUB
4.20. Network Configuration
4.21. Time Zone Configuration
4.22. Set Root Password
4.23. Package Group Selection
4.24. Preparing to Install
4.24.1. Prepare to Install
4.25. Installing Packages
4.26. Installation Complete
4.27. Itanium Systems — Booting Your Machine and Post-Installation Setup
4.27.1. Post-Installation Boot Loader Options
4.27.2. Booting Red Hat Enterprise Linux Automatically
5. Removing Red Hat Enterprise Linux
6. Troubleshooting Installation on an Intel® or AMD System
6.1. You are Unable to Boot Red Hat Enterprise Linux
6.1.1. Are You Unable to Boot With Your RAID Card?
6.1.2. Is Your System Displaying Signal 11 Errors?
6.2. Trouble Beginning the Installation
6.2.1. Problems with Booting into the Graphical Installation
6.3. Trouble During the Installation
6.3.1. No devices found to install Red Hat Enterprise Linux Error Message
6.3.2. Saving Traceback Messages Without a Diskette Drive
6.3.3. Trouble with Partition Tables
6.3.4. Using Remaining Space
6.3.5. Other Partitioning Problems
6.3.6. Other Partitioning Problems for Itanium System Users
6.3.7. Are You Seeing Python Errors?
6.4. Problems After Installation
6.4.1. Trouble With the Graphical GRUB Screen on an x86-based System?
6.4.2. Booting into a Graphical Environment
6.4.3. Problems with the X Window System (GUI)
6.4.4. Problems with the X Server Crashing and Non-Root Users
6.4.5. Problems When You Try to Log In
6.4.6. Is Your RAM Not Being Recognized?
6.4.7. Your Printer Does Not Work
6.4.8. Problems with Sound Configuration
6.4.9. Apache-based httpd service/Sendmail Hangs During Startup
7. Driver Media for Intel® and AMD Systems
7.1. Why Do I Need Driver Media?
7.2. So What Is Driver Media Anyway?
7.3. How Do I Obtain Driver Media?
7.3.1. Creating a Driver Diskette from an Image File
7.4. Using a Driver Image During Installation
8. Additional Boot Options for Intel® and AMD Systems
9. The GRUB Boot Loader
9.1. Boot Loaders and System Architecture
9.2. GRUB
9.2.1. GRUB and the x86 Boot Process
9.2.2. Features of GRUB
9.3. Installing GRUB
9.4. GRUB Terminology
9.4.1. Device Names
9.4.2. File Names and Blocklists
9.4.3. The Root File System and GRUB
9.5. GRUB Interfaces
9.5.1. Interfaces Load Order
9.6. GRUB Commands
9.7. GRUB Menu Configuration File
9.7.1. Configuration File Structure
9.7.2. Configuration File Directives
9.8. Changing Runlevels at Boot Time
9.9. Additional Resources
9.9.1. Installed Documentation
9.9.2. Useful Websites
9.9.3. Related Books
10. Additional Resources about Itanium and Linux
IV. Common Tasks
23. Upgrading Your Current System
23.1. Determining Whether to Upgrade or Re-Install
23.2. Upgrading Your System
24. Activate Your Subscription
24.1. RHN Registration
24.1.1. Provide a Red Hat Login
24.1.2. Provide Your Installation Number
24.1.3. Connect Your System
25. An Introduction to Disk Partitions
25.1. Hard Disk Basic Concepts
25.1.1. It is Not What You Write, it is How You Write It
25.1.2. Partitions: Turning One Drive Into Many
25.1.3. Partitions within Partitions — An Overview of Extended Partitions
25.1.4. Making Room For Red Hat Enterprise Linux
25.1.5. Partition Naming Scheme
25.1.6. Disk Partitions and Other Operating Systems
25.1.7. Disk Partitions and Mount Points
25.1.8. How Many Partitions?
V. Basic System Recovery
26. Basic System Recovery
26.1. Common Problems
26.1.1. Unable to Boot into Red Hat Enterprise Linux
26.1.2. Hardware/Software Problems
26.1.3. Root Password
26.2. Booting into Rescue Mode
26.2.1. Reinstalling the Boot Loader
26.3. Booting into Single-User Mode
26.4. Booting into Emergency Mode
27. Rescue Mode on POWER Systems
27.1. Special Considerations for Accessing the SCSI Utilities from Rescue Mode
VI. Advanced Installation and Deployment
28. Kickstart Installations
28.1. What are Kickstart Installations?
28.2. How Do You Perform a Kickstart Installation?
28.3. Creating the Kickstart File
28.4. Kickstart Options
28.4.1. Advanced Partitioning Example
28.5. Package Selection
28.6. Pre-installation Script
28.6.1. Example
28.7. Post-installation Script
28.7.1. Examples
28.8. Making the Kickstart File Available
28.8.1. Creating Kickstart Boot Media
28.8.2. Making the Kickstart File Available on the Network
28.9. Making the Installation Tree Available
28.10. Starting a Kickstart Installation
29. Kickstart Configurator
29.1. Basic Configuration
29.2. Installation Method
29.3. Boot Loader Options
29.4. Partition Information
29.4.1. Creating Partitions
29.5. Network Configuration
29.6. Authentication
29.7. Firewall Configuration
29.7.1. SELinux Configuration
29.8. Display Configuration
29.8.1. General
29.8.2. Video Card
29.8.3. Monitor
29.9. Package Selection
29.10. Pre-Installation Script
29.11. Post-Installation Script
29.11.1. Chroot Environment
29.11.2. Use an Interpreter
29.12. Saving the File
30. Boot Process, Init, and Shutdown
30.1. The Boot Process
30.2. A Detailed Look at the Boot Process
30.2.1. The BIOS
30.2.2. The Boot Loader
30.2.3. The Kernel
30.2.4. The /sbin/init Program
30.3. Running Additional Programs at Boot Time
30.4. SysV Init Runlevels
30.4.1. Runlevels
30.4.2. Runlevel Utilities
30.5. Shutting Down
31. PXE Network Installations
31.1. Setting up the Network Server
31.2. PXE Boot Configuration
31.2.1. Command Line Configuration
31.3. Adding PXE Hosts
31.3.1. Command Line Configuration
31.4. TFTPD
31.4.1. Starting the tftp Server
31.5. Configuring the DHCP Server
31.6. Adding a Custom Boot Message
31.7. Performing the PXE Installation

[Feb 3, 2009] Using The Red Hat Rescue Environment LG #159

There are several different rescue CDs out there, and they all provide slightly different rescue environments. The requirement here at Red Hat Academy is, perhaps unsurprisingly, an intimate knowledge of how to use the Red Hat Enterprise Linux (RHEL) 5 boot CD.

All these procedures should work exactly the same way with Fedora and CentOS. As with any rescue environment, it provides a set of useful tools; it also allows you to configure your network interfaces. This can be helpful if you have an NFS install tree to mount, or if you have an RPM that was corrupted and needs to be replaced. There are LVM tools for manipulating Logical Volumes, "fdisk" for partitioning devices, and a number of other tools making up a small but capable toolkit.

The Red Hat rescue environment provided by the first CD or DVD can really come in handy in many situations. With it you can solve boot problems, bypass forgotten GRUB bootloader passwords, replace corrupted RPMs, and more. I will go over some of the most important and common issues. I also suggest reviewing a password recovery article written by Suramya Tomar that deals with recovering lost root passwords in a variety of ways for different distributions; I will not be covering that here since his article is a very good resource for those problems.

Start by getting familiar with using GRUB and booting into single user mode. After you learn to overcome and repair a variety of boot problems, what initially appears to be a non-bootable system may be fully recoverable. The best way to get practice recovering non-bootable systems is by using a non-production machine or a virtual machine and trying out various scenarios. I used Michael Jang's book, "Red Hat Certified Engineer Linux Study Guide", to review non-booting scenarios and rehearse how to recover from various situations. I would highly recommend getting comfortable with recovering non-booting systems because dealing with them in real life without any practice beforehand can be very stressful. Many of these problems are really easy to fix but only if you have had previous experience and know the steps to take.

When you are troubleshooting a non-booting system, there are certain things that you should be on the alert for. For example, an error in /boot/grub/grub.conf, /etc/fstab, or /etc/inittab can cause the system to not boot properly; so can an overwritten boot sector. In going through the process of troubleshooting with the RHEL rescue environment, I'll point out some things that may be of help in these situations.
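As one concrete example of that kind of check: a malformed /etc/fstab line is a classic cause of a failed boot, and you can spot candidates from the rescue shell with a one-liner like the sketch below (the default path is an assumption; in the rescue environment the real file typically lives under /mnt/sysimage/etc/fstab):

```shell
# Print any non-comment line of an fstab file that has fewer than the
# four mandatory fields (device, mount point, filesystem type, options).
FSTAB=${1:-/etc/fstab}
awk 'NF && $1 !~ /^#/ && NF < 4 { printf "suspect line %d: %s\n", NR, $0 }' "$FSTAB"
```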

[Jan 22, 2009] The World's Open Source Leader

The Intel Core i7 (Nehalem) processor is now supported, which increases scalability for database loads. Nehalem is a quad-core, hyperthreaded 45 nm processor. Unaudited results show gains of 1.7x for commercial applications and up to 3.5x for high-performance technical computing applications compared to the previous generation of Intel processors.

The Nehalem architecture has many new features. According to Wikipedia the most significant changes from the Core 2 include:

[Dec 24, 2008] Alan Cox and the End of an Era - ComputerworldUK

And now, it seems, after ten years at the company, Cox is leaving Red Hat:

I will be departing Red Hat mid January having handed in my notice. I'm not going to be spending more time with the family, gardening or other such wonderous things. I'm leaving on good terms and strongly supporting the work Red Hat is doing.

I've been at Red Hat for ten years as contractor and employee and now have an opportunity to get even closer to the low level stuff that interests me most. Barring last minute glitches I shall be relocating to Intel (logically at least, physically I'm not going anywhere) and still be working on Linux and free software stuff.

I know some people will wonder what it means for Red Hat engineering. Red Hat has a solid, world class, engineering team and my departure will have no effect on their ability to deliver.

[Sep 11, 2008] The LXF Guide: 10 tips for lazy sysadmins - Linux Format

A lazy sysadmin is a good sysadmin. Time spent in finding more-efficient shortcuts is time saved later on for that ongoing project of "reading the whole of the internet", so try Juliet Kemp's 10 handy tips to make your admin life easier...

  1. Cache your password with ssh-agent
  2. Speed up logins using Kerberos
  3. screen: detach to avoid repeat logins
  4. screen: connect multiple users
  5. Expand Bash's tab completion
  6. Automate your installations
  7. Roll out changes to multiple systems
  8. Automate Debian updates
  9. Sanely reboot a locked-up box
  10. Send commands to several PCs

[Sep 9, 2008] The Fedora-Red Hat Crisis by Bruce Byfield

September 9, 2008 |

A few weeks ago, when I wrote that, "forced to choose, the average FOSS-based business is going to choose business interests over FOSS [free and open source software] every time," many people, including Matthew Aslett and Matt Asay, politely accused me of being too cynical. Unhappily, you only have to look at the relations between Red Hat and Fedora, the distribution Red Hat sponsors, during the recent security crisis for evidence that I might be all too accurate.

That this evidence should come from Red Hat and Fedora is particularly dismaying. Until last month, most observers would have described the Red Hat-Fedora relationship as a model of how corporate and community interests could work together for mutual benefit.

Although Fedora was initially dismissed as Red Hat's beta release when it was founded in 2003, in the last few years it had developed laudably open processes and become increasingly independent of Red Hat. As Max Spevack, the former chair of the Fedora Board, said in 2006, the Red Hat-Fedora relationship seemed a "good example of how to have a project that serves the interests of a company that also is valuable and gives value to community members."

Yet it seems that, faced with a problem, Red Hat moved to protect its corporate interests at the expense of Fedora's interests and expectations as a community -- and that Fedora leaders were as surprised by the response as the general community.

Outline of a crisis

What happened last month is still unclear. My request a couple of weeks ago to discuss events with Paul W. Frields, the current Fedora Chair, was answered by a Red Hat publicist, who told me that the official statements on the crisis were all that any one at Red Hat or Fedora was prepared to say in public -- a response so stereotypically corporate in its caution that it only emphasizes the conflict of interests.

However, the Fedora announcements mailing list gave the essentials. On August 14, Frields sent out a notice that Fedora was "currently investigating an issue in the infrastructure systems." He warned that the entire Fedora site might become temporarily unavailable and warned that users should "not download or update any additional packages on your Fedora systems." As might be expected, the cryptic nature of this corporate-sounding announcement caused considerable curiosity, both within and without Fedora, with most people wanting to know more.

A day later, Frields' name was on another notice, saying that the situation was continuing, and pleading for Fedora users to be patient. A third notice followed on August 19, announcing that some Fedora services were again available, and providing the first real clue to what was happening when a new SSH fingerprint was released.

It was only on August 22 that Frields was permitted to announce that, "Last week we discovered that some Fedora servers were illegally accessed. The intrusion into the servers was quickly discovered, and the servers were taken offline. [...] One of the compromised Fedora servers was a system used for signing Fedora packages. However, based on our efforts, we have high confidence that the intruder was not able to capture the passphrase used to secure the Fedora package signing key."

Since then, plans for changing security keys have been announced. However, as of September 8, the crisis continues, with Fedora users still unable to get security updates or bug-fixes. Three weeks without these services might seem trivial to Windows users, but for Fedora users, like those of other GNU/Linux distributions, many of whom are used to daily updates to their system, the crisis amounts to a major disruption of service.

A conflict of cultures

From a corporate viewpoint, Red Hat's close-lipped reaction to the crisis is understandable. Like any company based on free and open source software, Red Hat derives its income from delivering services to customers, and obviously its ability to deliver services is handicapped (if not completely curtailed) when its servers are compromised. Under these circumstances, the company's wish to proceed cautiously and with as little publicity as possible is perfectly natural.

The problem is that, in moving to defend its own credibility, Red Hat has neglected Fedora's. While secrecy about the crisis may be second nature to Red Hat's legal counsel, the FOSS community expects openness.

In this respect, Red Hat's handling of the crisis could not contrast more strongly with the reaction of the community-based Debian distribution when a major security flaw was discovered in its openssl package last May. In keeping with Debian's policy of openness, the first public announcement followed hard on the discovery, and included an explanation of the scope, what users could do, and the sites where users could find tools and instructions for protecting themselves.

[Aug 23, 2008] OpenSSH blacklist script

That's sad -- RHN was compromised and some trojanized OpenSSH packages were uploaded.
22nd August 2008

Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action. While the investigation into the intrusion is on-going, our initial focus was to review and test the distribution channel we use with our customers, Red Hat Network (RHN) and its associated security measures. Based on these efforts, we remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk. We are issuing this alert primarily for those who may obtain Red Hat binary packages via channels other than those of official Red Hat subscribers.

In connection with the incident, the intruder was able to get a small number of OpenSSH packages relating only to Red Hat Enterprise Linux 4 (i386 and x86_64 architectures only) and Red Hat Enterprise Linux 5 (x86_64 architecture only) signed. As a precautionary measure, we are releasing an updated version of these packages and have published a list of the tampered packages and how to detect them.

To reiterate, our processes and efforts to date indicate that packages obtained by Red Hat Enterprise Linux subscribers via Red Hat Network are not at risk.

We have provided a shell script which lists the affected packages and can verify that none of them are installed on a system:

The script has a detached GPG signature from the Red Hat Security Response Team (key) so you can verify its integrity:

This script can be executed either as a non-root user or as root. To execute the script after downloading it and saving it to your system, run the command:

         bash ./

If the script output includes any lines beginning with "ALERT" then a tampered package has been installed on the system. Otherwise, if no tampered packages were found, the script should produce only a single line of output beginning with the word "PASS", as shown below:

         bash ./
   PASS: no suspect packages were found on this system

The script can also check a set of packages by passing it a list of source or binary RPM filenames. In this mode, a "PASS" or "ALERT" line will be printed for each filename passed; for example:

         bash ./ openssh-4.3p2-16.el5.i386.rpm
   PASS: signature of package "openssh-4.3p2-16.el5.i386.rpm" not on blacklist

Red Hat customers who discover any tampered packages, need help with running this script, or have any questions should log into the Red Hat support website and file a support ticket, call their local support center, or contact their Technical Account Manager.

[Aug 7, 2008] rsyslog 2.0.6 (v2 Stable) by Rainer Gerhards

This is the new syslog daemon used by RHEL.

About: Rsyslog is an enhanced multi-threaded syslogd. Among others, it offers support for on-demand disk buffering, reliable syslog over TCP, SSL, TLS, and RELP, writing to databases (MySQL, PostgreSQL, Oracle, and many more), email alerting, fully configurable output formats (including high-precision timestamps), the ability to filter on any part of the syslog message, on-the-wire message compression, and the ability to convert text files to syslog. It is a drop-in replacement for stock syslogd and able to work with the same configuration file syntax.

Changes: IPv6 addresses could not be specified in forwarding actions, because they contain colons and the colon character was already used for some other purpose. IPv6 addresses can now be specified inside of square brackets. This is a recommended update for all v2-stable branch users.
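The colon conflict mentioned in the changelog arises in forwarding actions, where a colon also separates the host from the port. A minimal rsyslog.conf fragment illustrating the bracket syntax (the addresses and port here are examples, not values from the release notes):

```
# Forward everything over TCP (@@) to an IPv6 host; the square brackets
# keep the address's own colons from being parsed as the port separator.
*.*   @@[2001:db8::1]:514

# UDP (@) forwarding to an IPv4 host, for comparison:
*.*   @192.0.2.10:514
```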

[Mar 26, 2008] InternetNews Realtime IT News – Oracle Expands Its Linux Base by Sean Michael Kerner

Oracle claims that it continues to pick up users for its Linux offering and now is set to add new clustering capabilities to the mix.

So how is Oracle doing with its Oracle Unbreakable Linux? Pretty well. According to Monica Kumar, senior director Linux and open source product marketing at Oracle, there are now 2,000 customers for Oracle's Linux. Those customers will now be getting a bonus from Oracle: free clustering software.

Oracle's Clusterware software previously had only been available to Oracle's Real Application Clusters (RAC) customers, but now will also be part of the Unbreakable Linux support offering at no additional cost.

Clusterware is the core Oracle (NASDAQ: ORCL) software offering that enables the grouping of individual servers into a cluster system. Kumar explained that the full RAC offering provides additional components beyond Clusterware that are useful for managing and deploying Oracle databases on clusters.

The new offering for Linux users, however, does not necessarily replace the need for RAC.

"We're not saying that this [Clusterware] replaces RAC," Kumar noted. "We are taking it out of RAC for other general purpose uses as well. Clusterware is general purpose software that is part of RAC but that isn't the full solution."

The Clusterware addition to the Oracle Unbreakable Linux support offering is expected by Kumar to add further impetus for users to adopt Oracle's Linux support program.

Oracle Unbreakable Linux was first announced in October 2006 and takes Red Hat's Enterprise Linux as a base. To date, Red Hat has steadfastly denied on its quarterly investor calls that Oracle's Linux offering has had any tangible impact on its customer base.

In 2007, Oracle and Red Hat both publicly traded barbs over Yahoo, which apparently is a customer of both Oracle's Unbreakable Linux as well as Red Hat Enterprise Linux.

"We can't comment on them [Red Hat] and what they're saying," Kumar said. "I can tell you that we're seeing a large number of Oracle customers who were running on Linux before coming to Unbreakable Linux. It's difficult to say if they're moving all of their Linux servers to Oracle or not."

That said, Kumar added that Linux customers are coming to Oracle for more than just running Oracle on Linux, they're also coming with other application loads as well.

"Since there are no migration issues we do see a lot of RHEL [Red Hat Enterprise Linux] customers because it's easy for them to transition," Kumar claimed.

Ever since Oracle's Linux first appeared, Oracle has claimed that it was fully compatible with RHEL and it's a claim that Kumar reiterated.

"In the beginning, people had questions about how does compatibility work, but we have been able to address all those questions," Kumar said. "In the least 15 months, Oracle has proved that we're fully compatible and that we're not here to fork Linux but to make it stronger."

[Feb 26, 2008] Role-based access control in SELinux

Learn how to work with RBAC in SELinux, and see how the SELinux policy, kernel, and userspace work together to enforce the RBAC and tie users to a type enforcement policy.

[Jan 24, 2008] Project details for cgipaf

The package also contains a Solaris binary of a chpasswd clone, which is extremely useful for mass password changes in mixed corporate environments that, along with Linux and AIX (both of which have native chpasswd implementations), include Solaris or other Unixes without a chpasswd utility (HP-UX is another example in this category). Version 1.3.2 includes a Solaris binary of chpasswd which works on Solaris 9 and 10.

cgipaf is a combination of three CGI programs.

All programs use PAM for user authentication. It is possible to run a script to update SAMBA passwords or NIS configuration when a password is changed. mailcfg.cgi creates a .procmailrc in the user's home directory. A user with too many invalid logins can be locked. The minimum and maximum UID can be set in the configuration file, so you can specify a range of UIDs that are allowed to use cgipaf.
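For readers unfamiliar with the chpasswd interface the clone mimics: chpasswd reads "user:password" pairs, one per line, on standard input. A sketch of the mass-change workflow (the user names, passwords, and file path are made up; the chpasswd run itself requires root):

```shell
# Build the "user:password" input stream that chpasswd expects.
cat > /tmp/passlist <<'EOF'
alice:NewPass1
bob:NewPass2
EOF
wc -l < /tmp/passlist      # prints 2 (two accounts to update)
# As root, one would then apply it with:
#   chpasswd < /tmp/passlist
```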

[Dec 21, 2007] LXER interview with John Hull - the manager of the Dell Linux engineering team

The original sales estimates for Ubuntu computers were around 1% of total sales, or about 20,000 systems annually. Have the expectations been met so far? Will Dell ever release sales figures for Ubuntu systems?

The program so far is meeting expectations. Customers are certainly showing their interest and buying systems preloaded with Ubuntu, but it certainly won't overtake Microsoft Windows anytime soon. Dell has a policy not to release sales numbers, so I don't expect us to make Ubuntu sales figures available publicly.

[Dec 21, 2007] Red Hat to get new CEO from Delta Air Lines Underexposed - CNET

"When you take them out of the big buildings, without the imprimatur of Hewlett-Packard, IBM and Oracle, or HP around them, they just didn't hold up."

Szulik, who took over as CEO from Bob Young in 1999 just a few months after its initial public offering, said he's stepping down because of family health issues.

"For the last nine months, I've struggled with health issues in my family," and that priority couldn't be balanced with work, Szulik said in an interview. "This job requires a 7x24, 110 percent commitment."

Szulik, who remains chairman of the board, praised Whitehurst in a statement, saying he's a "hands-on guy who will be a strong cultural fit at Red Hat" and "a talented executive who has successfully led a global technology-focused organization at Delta."

On a conference call, Szulik said Whitehurst stood "head and shoulders" above other candidates interviewed in a recruiting process. He was a programmer earlier in his career and runs four versions of Linux at home, he said.

Moreover, Szulik said he wasn't satisfied with more traditional tech executives who were interviewed.

"What we encountered was in many cases was a lack of understanding of open-source software development and of our model," he said. During the interview, he added about the tech industry candidates, "When you take them out of the big buildings, without the imprimatur of Hewlett-Packard, IBM and Oracle, or HP around them, they just didn't hold up."

The surprise move was announced as the leading Linux seller announced results for its third quarter of fiscal 2008. Its revenue increased 28 percent to $135.4 million and net income went up 12 percent to $20.3 million, or 10 cents per share. The company also raised estimates for full-year results to revenue of $521 million to $523 million and earnings of about 70 cents per share.

[Oct 29, 2007] Oracle's Linux Unbreakable Or Just A Necessary Adjustment - Open Source Blog - InformationWeek

.. In fact, Coekaerts has to say this often because Oracle is widely viewed as an opportunistic supporter of Linux, taking Red Hat's product, stripping out its trademarks, and offering it as its own. Coekaerts says what's more important is that Oracle is a contributor to Linux. It contributed the cluster file system and hasn't really generated a competing distribution.

Yet, in some cases, there is an Oracle distribution. Most customers Coekaerts deals with get their Linux from Red Hat and then ask for Oracle's technical support in connection with the Oracle database. But Oracle has been asked often enough to supply Linux with its applications or database that it makes available a version of Red Hat Enterprise Linux, with the Red Hat logos and labels stripped out. Oracle's version of Linux has a "cute" penguin inserted and is optimized to work with Oracle database applications. It may also have a few Oracle-added "bug fixes," Coekaerts says.

The bug fixes, however, lead to confusion about Coekaerts' relatively simple formulation of Oracle enterprise support, not an Oracle fork. And that confusion stems from Oracle CEO Larry Ellison's attention-getting way of introducing Unbreakable Linux at the October 2006 Oracle OpenWorld.

When enterprise customers call with a problem, Oracle's technical support finds the problem and supplies a fix. If it's a change in the Linux kernel, the customer would normally have to wait for the fix to be submitted to kernel maintainers for review, get merged into the kernel, and then get included in an updated version of an enterprise edition from Red Hat or Novell. Such a process can take up to two years, observers inside and outside the kernel process say.

The pace of bug fixes "is the most serious problem facing the Linux community today," Ellison explained during an Oracle OpenWorld keynote a year ago.

When Oracle's Linux technical support team has a fix, it gives that fix to the customer without waiting for Red Hat's uptake or the kernel process itself, Ellison said.

Red Hat's Berman argues that when it comes to the size of the problem, Oracle makes too much of too little.

When Red Hat learns of bugs, it retrofits the fixes into its current and older versions of Red Hat Enterprise Linux. That's one of Red Hat's main engineering investments in Linux, Berman said in an interview.

Coekaerts responds, "There are disagreements on what is considered critical by the distribution vendors and us or our customers."

Berman acknowledges that several judgment calls are involved. Some bugs affect only a few enterprise customers. They may apply to an old RHEL version. "Three or four times a year" a proposed fix may not be deemed important enough to undergo this retrofit, he says.

But Coekaerts told InformationWeek: "Oracle customers encounter this problem more than three or four times a year. I cannot give a number, it tends to vary. But it does happen rather frequently."

Berman counters that when Oracle changes Red Hat's tested code with its own bug fixes, it breaks the certification that Red Hat offers on its distribution, so it's no longer guaranteed to work with other software. "Oracle claims they will patch things for a customer. That's a fork," he says.

What Red Hat calls a fork is what Oracle calls a "one-off fix to customers at the time of the problem. … If the customer runs version 5 but Red Hat is at version 8, and the customer runs into a bug, does he want to go into [the next release with a fix] version 9? Likely not. He wants to minimize the amount of change. Oracle will fix the customer's problem in version 5…" Coekaerts says.

I think it's fair to characterize what Oracle does as technical support, not a fork. There's no attempt to sustain the aberration through a succession of Linux kernels offered to the general public as an alternative to the mainstream kernel.

But the Oracle/Red Hat debate defines a gray area in a fast-moving kernel development process. Bugs that affect many users get addressed through the kernel process or the Red Hat and Novell (NSDQ: NOVL) retrofits. That still may not always cover a problem for an individual user or a set of users sitting on a particular piece of aging hardware or caught in a specific hardware/software configuration.

If Oracle fixes some of these problems, I say more power to it.

But if they are problems that are isolated in nature or limited in scope, as I suspect they are, that makes them something less than Ellison's "most serious problem facing the Linux community today."

Ellison needed air cover to take Red Hat's product and do what he wanted with it. In the long run, he's probably increasing the use of Linux in the enterprise and keeping Red Hat on its toes as a support organization. That's less benefit than claimed, but still something.

[Oct 23, 2007] Oracle makes Yast (Yet Another Setup Tool) part of its distribution

Oracle Enterprise Linux became more compatible with Suse

Yet Another Setup Tool. Yast helps make system administration easier by providing a single utility for configuring and maintaining Linux systems. The version of Yast available here is modified to work with all Enterprise Linux distributions including Enterprise Linux and SuSE.

Special note to Oracle Management Pack for Linux users:

[Oct 23, 2007] UK Unix group newsletter

Oracle hasn't "talked about how our Linux is better than anyone else's Linux. Oracle has not forked and has no desire to fork Red Hat Enterprise Linux and maintain its own version. We don't differentiate on the distribution because we use source code provided by Red Hat to produce Oracle Enterprise Linux and errata. We don't care whether you run Red Hat Enterprise Linux or Enterprise Linux from Oracle and we'll support you in either case because the two are fully binary- and source-compatible. Instead, we focus on the nature and the quality of our support and the way we test Linux using real-world test cases and workloads."


data=writeback

While the writeback option provides lower data consistency guarantees than the journal or ordered modes, some applications show very significant speed improvement when it is used. For example, speed improvements can be seen when heavy synchronous writes are performed, or when applications create and delete large volumes of small files, such as delivering a large flow of short email messages. The results of the testing effort described in Chapter 3 illustrate this topic.

When the writeback option is used, data consistency is similar to that provided by the ext2 file system. However, file system integrity is maintained continuously during normal operation in the ext3 file system.

In the event of a power failure or system crash, the file system may not be recoverable if a significant portion of data was held only in system memory and not on permanent storage. In this case, the filesystem must be recreated from backups. Often, changes made since the file system was last backed up are inevitably lost.
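As a concrete illustration, the journaling mode described above is normally selected per filesystem at mount time; the device and mount point in this sketch of an /etc/fstab entry are placeholders:

```
# /etc/fstab -- ext3 volume mounted with writeback journaling
# (device and mount point are illustrative)
/dev/sdb1   /var/spool/mail   ext3   defaults,data=writeback   1 2
```

Because the data journaling mode generally cannot be changed on a live remount, it is set in fstab (or recorded as a default mount option with tune2fs -o journal_data_writeback) before the filesystem is mounted.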

[Aug 7, 2007] Linux Replacing atime

August 7, 2007 | KernelTrap

Submitted by Jeremy on August 7, 2007 - 9:26am.

In a recent lkml thread, Linus Torvalds was involved in a discussion about mounting filesystems with the noatime option for better performance, "'noatime,data=writeback' will quite likely be *quite* noticeable (with different effects for different loads), but almost nobody actually runs that way."

He noted that he set O_NOATIME when writing git, "and it was an absolutely huge time-saver for the case of not having 'noatime' in the mount options. Certainly more than your estimated 10% under some loads."

The discussion then looked at using the relatime mount option to improve the situation, "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified."

Ingo Molnar stressed the significance of fixing this performance issue, "I cannot over-emphasize how much of a deal it is in practice. Atime updates are by far the biggest IO performance deficiency that Linux has today. Getting rid of atime updates would give us more everyday Linux performance than all the pagecache speedups of the past 10 years, _combined_." He submitted some patches to improve relatime, and noted about atime:

"It's also perhaps the most stupid Unix design idea of all times. Unix is really nice and well done, but think about this a bit: 'For every file that is read from the disk, lets do a ... write to the disk! And, for every file that is already cached and which we read from the cache ... do a write to the disk!'"

[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

Jul 31, 2007 |

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves the hours that common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX® or other operating systems:

Suppose you have logins on several UNIX® or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long—probably only a minute, in most cases. And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that takes over precisely this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and you avoid multiple opportunities for the frustration of mistyping something you've already entered.

The limits of Expect

This passmass application is an excellent model—it illustrates many of Expect's general properties:

You probably know enough already to begin to write or modify your own Expect tools. As it turns out, the passmass distribution actually includes code to log in by means of ssh, but omits the command-line parsing to reach that code. Here's one way you might modify the distribution source to put ssh on the same footing as telnet and the other protocols:
Listing 1. Modified passmass fragment that accepts the -ssh argument

} "-rlogin" {
set login "rlogin"
} "-slogin" {
set login "slogin"
} "-ssh" {
set login "ssh"
} "-telnet" {
set login "telnet"

In my own code, I actually factor out more of this "boilerplate." For now, though, this cascade of tests, in the vicinity of line #100 of passmass, gives a good idea of Expect's readability. There's no deep programming here—no need for object-orientation, monadic application, co-routines, or other subtleties. You just ask the computer to take over the typing you usually do for yourself. As it happens, this small step represents many minutes or hours of human effort saved.

[Jul 30, 2007] Due to problems under high load, the process scheduler in the Linux 2.6.23 kernel has been completely ripped out and replaced with a new one, the Completely Fair Scheduler (CFS), modeled after the Solaris 10 scheduler.

This will not affect the current Linux distributions (Suse 9, 10 and RHEL 4.x), as they forked the kernel and essentially develop it as a separate tree.

But it will affect any future Red Hat or Suse distribution (Suse 11 and RHEL 6 respectively).

How it will fare in comparison with the Solaris 10 scheduler remains to be seen:

The main idea of CFS's design can be summed up in a single sentence: CFS basically models an "ideal, precise multi-tasking CPU" on real hardware.

Ideal multi-tasking CPU" is a (non-existent) CPU that has 100% physical power and which can run each task at precise equal speed, in parallel, each at 1/n running speed. For example: if there are 2 tasks running then it runs each at exactly 50% speed.

[Apr 10, 2007] Here come the RHEL 5 clones

Of course, if you go with a cloned RHEL, while you get the code goodies, you don't get Red Hat's support. Various Red Hat clone distributions, such as StartCom AS-5, CentOS, and White Box Enterprise Linux, are built from Red Hat's source code, which is freely available at the Raleigh, NC company's FTP site. The "cloned" versions alter or otherwise remove non-free packages within the RHEL distribution, or non-redistributable bits such as the Red Hat logo.

StartCom Enterprise Linux AS-5 is specifically positioned as a low-cost, server alternative to RHEL 5. This is typical of the RHEL clones.

These distributions, which usually don't offer support options, are meant for expert Linux users who want Red Hat's Linux distribution, but don't feel the need for Red Hat's support.

[Apr 10, 2007] Red Hat Enterprise Linux 5 Some Assembly Required

With RHEL 5, Red Hat has shuffled its SKUs around a bit—what had previously been the entry-level ES server version is now just called Red Hat Enterprise Linux. This version is limited to two CPU sockets, and is priced, per year, at $349 for a basic support plan, $799 for a standard support plan and $1,299 for a premium support plan.

This version comes with an allowance for running up to four guest instances of RHEL. You can run more than that, as well as other operating systems, but only four get updates from, and may be managed through, RHN (Red Hat Network). We thought it was interesting how RHN recognized the difference between guests and hosts on its own and tracked our entitlements accordingly.

What had been the higher-end, AS version of RHEL is now called Red Hat Enterprise Linux Advanced Platform. This version lacks arbitrary hardware limitations and allows for an unlimited number of RHEL guest instances per host. RHEL's Advanced Platform edition is priced, per year, at $1,499 with a standard support plan and $2,499 with a premium plan.

[Mar 23, 2007] Using YUM in RHEL5 for RPM systems

There is more to Red Hat Enterprise Linux 5 (RHEL5) than Xen. I, for one, think people will develop a real taste for YUM (Yellow dog Updater Modified), an automatic update and package installer/remover for RPM systems.

YUM has already been used in the last few Fedora Core releases, but RHEL4 uses the up2date package manager. RHEL5 will use YUM 3.0. Up2date is used as a wrapper around YUM in RHEL5. Third-party code repositories, prepared directories or websites that contain software packages and index files, will also make use of the Anaconda-YUM combination.

... ... ...

Using YUM makes it much easier to maintain groups of machines without having to manually update each one using RPM. Some of its features include:

RHEL5 moves the entire stack of tools which install and update software to YUM. This includes everything from the initial install (through Anaconda) to host-based software management tools, like system-config-packages, to even the updating of your system via Red Hat Network (RHN). New functionality will include the ability to use a YUM repository to supplement the packages provided with your in-house software, as well as plugins to provide additional behavior tweaks.

YUM automatically locates and obtains the correct RPM packages from repositories. It frees you from having to manually find and install new applications or updates. You can use one single command to update all system software, or search for new software by specifying criteria.
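A few representative commands, assuming a configured repository; the package names are illustrative:

```
yum update              # update every installed package in one step
yum search mail         # search repository metadata by keyword
yum install postfix     # fetch and install a package plus its dependencies
yum remove sendmail     # remove a package
```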

[Dec 7, 2006] Survey Finds Red Hat Customers Willing To Stay With Company if it Cuts Prices

(SeekingAlpha) Eric Savitz submits: Red Hat customers are mulling their options. But they can be bought.

That’s one of the takeaways from a fascinating report today from Pacific Crest’s Brendan Barnicle based on a survey he did of 118 enterprise operating system buyers, including 86 Red Hat support customers. The goal of the survey was to see how Linux users are responding to the new offerings from Oracle and the Microsoft (MSFT)/Novell (NOVL) partnership.

Reading the results of the study, you reach several conclusions. One, most customers are seriously considering the new offerings. Two, Red Hat can hold on to most of them, if they are willing to cut prices far enough. And three, customers seem a little more interested in the Microsoft/Novell offerings than those from Oracle.

Here are a few details:

[Dec 1, 2006] Red Hat From 'Cuddly Penguin' to Public Enemy No. 1

We have suffered from that image in the past. And some of our competitors have played up the fact that the JBoss guys are behaving like a sect. When, in fact, if you look at the composition of our community, we have an order of magnitude more committers than our direct open-source competitors.

But the perception is still there. Bull even said something about that perception. And we'd been thinking about opening up the governance. So when Bull provided us with a great study case, we decided to put the pedal to the metal. But make no mistake this is not going to be a free-for-all. We care a lot about the quality of what gets committed. We invest very heavily in all our projects. We're serious about this so we expect the same level of seriousness from our collaborators.

There is going to be a hybrid model where there is an opening up of the governance. In terms of code contributions it's always been there. But now it's been made explicit instead of implicit and open to attacks of "closedness." JBoss has always been an open community, but we've hired most of our primary committers.

Well, you seem more willing to compromise and evolve your stance on things. Like SCA [Service Component Architecture]—initially you were against it, but it seems like you've changed your mind.

Well, yeah, the specific SCA stance today is there is no reason for us to be for or against it. If it plays out in the market, we'll support it. And I think Mark Little [a JBoss core developer] said it very well that the ESB implementations usually outlive standards.

So what you're seeing from us is mostly due to Mark Little's influence. Mark has been around in the standards arena and has seen all these standards come and go. So it's not about the standards, it's about our implementation in support of all these standards. And it's not our place to be waging a standards war. It's our place to implement and let the market decide and we'll follow the market.

So where I'll agree with you is that it's less of a dogmatic position in terms of perceived competition and more focus on what we do well, which is implementations.

Another thing is JBoss four years ago was very much Marc Fleury and the competitive stance against Sun and things like that. Today I don't do anything. In fact, I actively stay out in terms of not getting in the way of my guys.

So it's both a sign of maturity and of a more diverse organization. I'm representing more than leading the technical direction these days. And that's a very good thing.

You said you approached David Heinemeier Hansson, the creator of Ruby on Rails, to work at JBoss. What other types of developers are you interested in hiring?

Yeah, we did approach him. There is a lot of talent around the Web framework. One of the problems is it's a very fragmented community at a personal level. You have one guy and his framework. Though, this is not the case with Ruby on Rails. But there's a lot of innovation that's going on that would benefit from unification under a bigger distribution umbrella and bigger R&D umbrella. And I think JBoss/Red Hat is in a position to offer that. So we're always talking about new guys.

One of the things I like to do is talk to the core developers and say, "Where are you in terms of recruitment?" And we're talking to scripting guys. I think scripting is the next frontier as [Ruby on Rails] has showed. We have a unique opportunity of bringing under one big branded umbrella a diverse group of folks that today are doing excellent work, be it the scripting crowd, REST, Web framework, or the Faces, or the guys integrating with Seam. All of the work we're doing is going to take more people and we're always on the lookout for the right talent and the right fit.

[Sep 14, 2005] Dr. Dobb's Red Hat Releases Enterprise Linux 5 Beta September 13

... The Red Hat Enterprise Linux 5 Beta 1 release contains virtualization on the i386 and x86_64 architectures as well as a technology preview for IA64.

... ... ...

Aside from Xen, Red Hat Enterprise Linux 5 Beta 1 features AutoFS and iSCSI network storage support, smart card integration, SELinux security, clustering and a cluster file system, Infiniband and RDMA support, and Kexec and Kdump, which replace the current Diskdump and Netdump. Beta 1 also incorporates improvements to the installation process, analysis and development tools SystemTap and Frysk, a new driver model and enablers for stateless Linux.

Linux Client Migration Cookbook A Practical Planning and Implementation Guide for Migrating to Desktop Linux

IBM Redbooks

The goal of this IBM Redbook is to provide a technical planning reference for IT organizations large or small that are now considering a migration to Linux-based personal computers. For Linux, there is a tremendous amount of “how to” information available online that addresses specific and very technical operating system configuration issues, platform-specific installation methods, user interface customizations, etc. This book includes some technical “how to” as well, but the overall focus of the content in this book is to walk the reader through some of the important considerations and planning issues you could encounter during a migration project. Within the context of a pre-existing Microsoft Windows-based environment, we attempt to present a more holistic, end-to-end view of the technical challenges and methods necessary to complete a successful migration to Linux-based clients.

[Jun 24, 2004] Open Source Blog: Open Sourcery by Blane Warrene

I recently spent some time speaking with a popular Yankee Group analyst who covers the enterprise sector in the US, focusing in on open source and where the movement may go in the next few years.

Just to be clear, I differentiate, as most industry watchers do, between Linux and open source. While Linux is open source, the primary Linux distributors have caught on to how they need to position themselves for success and are starting to run their businesses just as any proprietary software company does.

Red Hat and SUSE make prime examples, realizing the path to long term success and revenue streams resided in proving themselves enterprise worthy to larger businesses and institutions, have shifted business models or been acquired by organizations with roots in the enterprise.

Her views, while not always popular in the open source community, are right on point if open source seeks widespread adoption and a permanent seat at the table for longer term financial success.

There are a few obstacles open source proponents need to accept and move forward on:

  1. It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. While I have not always agreed that the costs are higher, having migrated corporate systems to Linux in the past, their research showed it to be true in many cases -- especially when migrating beyond standard web hosting and email systems. The costs are higher when factoring in re-certifying drivers, application integrity and training.
  2. To truly become entrenched as a viable, financially rewarding option (meaning open source companies make money and create jobs), a shift toward commercial software models is necessary. This does not mean forgoing open source. What it does mean is developing a structure for development, distribution, patching and support that passes muster with corporate IT managers who could be investing substantial amounts of money in open source.

What it boils down to is that while open source has definitely revolutionized software, and it is found internationally in companies large and small, businesses still pick software because it provides a solution not just because it is open source.

The fact that it is cheaper or free simply means the user will save money, but this does not win the favor of those buyers who could be injecting millions into open source projects rather than proprietary software makers.

I would use Firebird as a model. In an interview with Helen Borrie, forthcoming in my July column on SitePoint, she noted that the fact that many Fortune 500 companies are using an open source database like Firebird speaks volumes about the maturing of their project and of open source at large.

The reason, as I see it, is that Firebird is treated like an enterprise-scale proprietary software project. They have a well-managed developer community and active support lists, commercial offerings for support through partnerships with several companies, and commercial development projects for corporate clients.

If more open source projects looked at Borrie's team model and discipline in development and support, we just might see more penetration that attracts longer and more profitable contracts and work for those like us in the SitePoint community.

Selected Comments


It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. You mean relative to staying with Windows? Does this include recurring costs of Windows licensing / upgrades?

The costs are higher when factoring in re-certifying drivers, application integrity and training.

On the drivers front, that assumes (if we're saying Linux cf. Windows) that systems need upgrades as frequently. There's generally less need to keep upgrading Linux, when used as a server.

Re application integrity, I think that's very hard to research accurately - kind of a wooly comment that needs qualification.

On the training side, it's an interesting area where it's kind of like comparing Apples with Pears.

Windows generally hides administrators from much of what's really happening, so it's probably easier to train someone to the point where they're feeling confident but given serious problems, who do you turn to?

*Nix effectively exposes administrators to everything so more time is required to reach the point where sysadmins are confident. Once they reach that point though, they're typically capable of handling anything. The result is stable systems. I'd also argue that a single *Nix sysadmin is capable of maintaining a greater number of systems (scripts / automation etc.) although no figures to back that.

Firebird is an interesting example. The flip side of Firebird's way of doing things seems to be that the Open Source "community" is largely unaware of it (compared to, say, MySQL).

Posted by: HarryF from Jun 24th, 2004 @ 8:03 AM MDT


Yes - on costs - Linux was actually found to be more expensive in numerous cases compared to staying with Windows. This is unfortunate as I am a proponent of finding migration paths from Windows to Linux for stability and administration automation. However, the research did show the total cost of ownership eventually balances out, it simply is much more expensive at the outset than staying on a Windows upgrade path.

This survey (partially on site with staff and others via questionnaire) - 1000 companies with 5000 or more employees - found that they did have to certify drivers at the initial migration, certify all new disk images, provide training or certification to adhere to corporate policy, buy indemnification insurance, perform migrations, test, establish support contracts and, finally, pay about a 15 percent premium when bringing in certified Linux staff.

The benefit if the company decided to take the financial hit: over an extended period they experienced the benefits of Linux - uptime, experienced admins and flexibility of the platform.

Application integrity was ambiguous in the study - however - managers cited it constantly when trying to retire commercial Unix and move apps to Linux, needing certification that an entire applications runs exactly as before.

Perhaps it is time for the open source community to begin establishing central organizational points that act as clearinghouses - like Open Source Development labs does for Linux - to certify open source applications on a major scale.

Posted by: bwarrene from Jun 24th, 2004 @ 1:12 PM MDT


I beg to differ on Harry's view about Firebird. Firebird is not as popular as MySQL because 1) it's a newer project (project, not software) and 2) MySQL support comes built into PHP; no need for additional software. Firebird requires either recompiling PHP or loading an additional DLL into the extension space.

Posted by: andrecruz Jun 24th, 2004 @ 9:37 PM MDT


It was nice to read about your chat with L... DiD... (why are we keeping her name secret?).

Second, I don't understand your distinction between Linux and Open Source. Maybe I'm slow or something, but what it seems to boil down to is:

"Open Source = unprofessional Proprietary = professional (unstated) Linux = open source, but starting to become professional despite itself by acting like proprietary."

Well, I'll grant you there are a lot of unprofessional Free Software projects out there; but the same is true of proprietary. Bad proprietary programs are slightly less likely to see the light of day, but there's still a bevy of them out there.

Now, on the assertion that Linux companies are succeeding by acting like proprietary companies: there's truth and non-truth to it. On the one hand, Red Hat and SuSE have no doubt learned a lot about management, marketing, and good business practices from established companies. On the other hand, an effective open source player does not act the same as an effective proprietary player: there are all kinds of issues with dealing with the developer community that are not an issue in the proprietary world: they bring plusses and minuses, but have to be dealt with rather than ignored.

And I will note that Red Hat, the most successful Linux distributor, is a pure-play Open Source vendor: they do not ship proprietary code. In fact, they devote a lot of developer time to a community distribution that they make no direct money on (but do get free testing from). Likewise, one of the first things Novell did after its so-far successful acquisition of SuSE was to GPL SuSE's proprietary installer. This suggests that while good management is indispensable in anything, Open Source ventures should not be running off and trying to ape proprietary vendors blindly.

Finally, there's a big difference between the way mass-market shrinkwrapped proprietary software is developed and the way big-iron stuff is. With big-iron stuff you often have consultants in the field, lots of direct customer feedback, maybe even code sharing under NDA with the client: in short, it works a lot like an Open Source project. And that's where Open Source has shone: *nix boxes, web servers, network infrastructure, compilers, developer tools, and increasingly RDBMSes. With mass shrinkwrap you have to do much more seeking out of customer needs on your own and also be prepared to tell customers to shove it and wait for the next release. On stuff like this (desktop GUIs and apps) Open Source has been less successful.

At least one high-profile OSS desktop project (Mozilla) was a legendary quagmire for a long time and is only beginning to claw its way back. Many of the mistakes came from not being open to community input ("dammit, we don't need a whole platform, just a good browser") as any good project of any kind should be. Thing is, no one has a clear idea of how to be usefully open to community input on a mass-market OSS project yet: the twin dangers of adding every requested feature or my-way-or-the-highway-ism have been so far hard to avoid.

Personally, I think the question of the Open Source desktop is given too much importance. Windows server shipments still account for 60% of the market, so it's not like that area is all sewn up. A company that wants to avoid vendor lock-in would do best to migrate its server infrastructure first - that's gonna be least painful and probably highest long-term benefit. Then maybe desktop apps, then maybe the desktop operating system.

On MySQL vs. Firebird: yes, MySQL is more widespread, but they're used for entirely different things.

Posted by: jmcginty Jun 25th, 2004 @ 12:34 PM MDT

Dag Wieers

I'm a bit confused as to why you want to differentiate between Linux (e.g. Red Hat) and Open Source.

Red Hat releases source packages and contributes largely to Open Source projects, both in resources and in code. Improvements by Red Hat are included in SuSE and vice versa. Everybody wins.

This ensures that Red Hat will have to be the best on its own merits. Competition will always be lurking around the corner to take over. Despite that, Red Hat is doing a good job.

You cannot compare this to proprietary vendors, where your money goes into the big company bucket to be used for the next version that you have to pay for again.

If I could choose, I'd rather pay for services, if it guarantees that the money is used for Open Source development. If my Open Source vendor goes belly-up, its work is still available for anyone to use.

Paying for Open Source just guarantees that you have freedom and are never tied to any vendor. Red Hat is just one example showing that the money is used for the good of the public.

And if you don't have deep pockets, there's still Fedora, CentOS, TaoLinux or Whitebox. Plenty of competition in the same vendor segment. Hard to beat IMO.

Posted by: Dag Wieers Jun 26th, 2004 @ 3:57 AM MDT

Ron Johnson

One thing I notice that is never mentioned when talking about Windows vs. Linux TCO is virus & worm costs: both the cost of AV software and the cost of clean-up after an infection sneaks into the corporate LAN. That *huge* expense will never be borne by a Linux shop.

Posted by: Ron Johnson Jun 26th, 2004 @ 7:56 AM MDT

HP Throws Weight Behind MySQL, JBoss By Clint Boulton

HP stepped up its commitment to open source software Monday by pledging to offer and support the MySQL database server and JBoss application server software on its servers.

The Palo Alto, Calif. systems vendor said it has inked agreements with those open source purveyors to certify and support MySQL and JBoss software on its servers.

Jeffrey Wade, manager of Linux Marketing Communications at HP, said the certifications factor into the company's Linux reference architecture, a software stack that covers everything from the hardware to the operating system, drivers, and management agents.

Deployed on HP ProLiant servers, the open source Linux Reference Architectures are based on software from MySQL, JBoss, Apache, and OpenLDAP. The company's commercial Linux Reference Architectures are based on products from Oracle, BEA, and SAP.

Both MySQL and JBoss will join the HP Partner Program and receive joint testing and engineering support on HP's hardware systems.

Wade said the added layer of MySQL and JBoss support addresses one of the largest concerns customers have today in opting for open source technology over mainstay proprietary products such as Microsoft Windows, Sun Microsystems' Solaris, or UNIX.

"We can provide support for that entire solution stack and we're also now giving our customers flexibility in choice and the types of solutions they want to deploy whether that's a commercial or open source application," Wade said.

Bob Bickel, vice president of strategy and corporate development at JBoss, said commercial use remains somewhat constrained because a CIO doesn't know whom they can turn to for support.

"They don't know who they can turn to for indemnification," Bickel told "Yeah, it works great and it's cheap but what happens in the middle of their big selling season if something goes down. Who do they turn to and get it from. What HP's doing is taking an all encompassing view of this with certification and testing."

Testing keeps customers from guessing which versions of a Java virtual machine, operating system, MySQL, or JBoss product can all work together in a guaranteed way, Bickel explained.

MySQL Vice President of Marketing Zack Urlocker said companies such as Sabre are using an open source stack for business applications. Partnering with HP, then, provides great validation for MySQL and JBoss software.

"A couple of years ago the big knock on open source was that it might be good on the periphery or Web applications, but was not quite ready for business critical applications," Urlocker told "Now, the No. 1 issues have been support. People who have had a lot of success with Linux are now looking at how to use a whole open source stack."

The deal is truly symbiotic. While MySQL and JBoss get backing from a technology driver such as HP, HP gets the added credibility of being cozy with open source, a label many enterprises and HP rivals, such as IBM and Dell, are working toward.

Linux sales are trending upward regardless, according to recent hardware server and database software studies from high-tech research outfit Gartner.

Despite legal threats from SCO Group and competition from Microsoft, Gartner said Linux continued to be the growth powerhouse in the operating systems server market, with a revenue increase of 57.3 percent in the first quarter of 2004.

Gartner also found that Linux siphoned market share from UNIX in the relational database management system (RDBMS) market, a niche that grew 158 percent from $116 million in new license revenue in 2002 to nearly $300 million in 2003.

Recommended Links


Red Hat Enterprise Linux 5.2 Documentation (all published May 21, 2008 and available as PDF, except where noted):

Software Package Manifest
Deployment Guide
Installation Guide
Virtualization Guide
Cluster Suite Overview
Cluster Administration
LVM Administrator's Guide
Global File System
Using GNBD with GFS
Linux Virtual Server Administration
Using Device-Mapper Multipath
Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases (published Nov)

Differences with Solaris

Migration toolkits


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. The site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case the site is down you can use the mirror.


The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: May 20, 2017