The file system is one of the most important parts of an operating system. The file system stores and manages user data on disk drives, and ensures that what's read from storage is identical to what was originally written. In addition to storing user data in files, the file system also creates and manages information about files and about itself. Besides guaranteeing the integrity of all that data, file systems are also expected to be extremely reliable and have very good performance.
File systems update their structural information (called metadata) with synchronous writes. Each metadata update may require many separate writes, and if the system crashes partway through the sequence, the metadata may be left in an inconsistent state. At the next boot the filesystem check utility (called fsck) must walk through the metadata structures, examining and repairing them. This operation can take a very long time on large filesystems, and the disk may not contain sufficient information to correct the structure, resulting in misplaced or removed files. A journaling file system uses a separate area called a log or journal. Before metadata changes are actually performed, they are logged to this separate area; only then is the operation performed. If the system crashes during the operation, there is enough information in the log to "replay" the log record and complete the operation. This approach does not require a full scan of the file system, yielding a very quick filesystem check on large file systems, generally a few seconds for a multiple-gigabyte file system. In addition, because all information for the pending operation is saved, no removals or lost-and-found moves are required. The main disadvantage of journaling filesystems is that they are slower than comparable non-journaled filesystems. Some journaling filesystems: BeFS, HTFS, JFS, NSS, Ext3, VxFS and XFS.
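The log-then-act-then-commit cycle can be sketched in a few lines of shell. This is a toy illustration only; the journal file, its record format, and the recovery rule are invented for the example and do not come from any real filesystem:

```shell
# Toy write-ahead journal: record the intent, do the work, mark it done.
dir=$(mktemp -d)
journal="$dir/journal"

echo "BEGIN create $dir/newfile" >> "$journal"   # 1. log the pending change
touch "$dir/newfile"                             # 2. perform the operation
echo "COMMIT" >> "$journal"                      # 3. mark it complete

# Recovery after a crash: any BEGIN without a matching COMMIT is replayed.
if ! grep -q '^COMMIT' "$journal"; then
    awk '/^BEGIN create/ {print $3}' "$journal" | while read -r f; do
        touch "$f"                               # redo the logged operation
    done
fi
```

A real journal also records block contents and sequence numbers, but the control flow is the same: because the intent record hits the log before the metadata is touched, replaying the log after a crash always brings the structures back to a consistent state.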
Fortunately, a number of other Linux file systems take up where Ext2 leaves off. Indeed, Linux now offers four alternatives to Ext2.
In addition to meeting some or all of the requirements listed above, each of these alternative file systems also supports journaling, a feature certainly demanded by enterprises, but beneficial to anyone running Linux. A journaling file system can simplify restarts, reduce fragmentation, and accelerate I/O. Better yet, journaling file systems make fscks a thing of the past.
If you maintain a system of fair complexity or require high availability, you should seriously consider a journaling file system. Let's find out how journaling file systems work, look at the four journaling file systems available for Linux, and walk through the steps of installing one of the newer systems, JFS. Switching to a journaling file system is easier than you might think, and once you switch, well, you'll be glad you did.
Fun with File Systems
To better appreciate the benefits of journaling file systems, let's start by looking at how files are saved in a non-journaled file system like Ext2. To do that, it's helpful to speak the vernacular of file systems.
A file system consists of blocks of data; the number of bytes constituting a block varies depending on the OS. Physically, a hard disk is organized into cylinders. The cylinders are grouped into cylinder groups, which are further divided into blocks.
The file system comprises several kinds of blocks: the boot block, the superblock, inode blocks, and data blocks.
The super block plays an important role during the system boot-up and shutdown process. When the system boots, the details in the super block are loaded into memory to improve the speed of processing, and the on-disk super block is then updated at regular intervals from the data in memory. During system shutdown, a program called sync writes the updated data in memory back to the super block. This process is crucial, because an inaccurate super block can leave the file system unusable. This is precisely why the proper shutdown of a Solaris system is essential.
Because of the critical nature of the super block, it is replicated at the beginning of every cylinder group. These blocks are known as surrogate super blocks. A damaged or corrupted super block is recovered from one of the surrogate super blocks.
Each inode has a unique number associated with it, called the inode number. The -li option of the ls command displays the inode number of a file:
# ls -li
When a user creates a file in a directory or modifies it, the file's inode, its data blocks, and the directory that references it are updated accordingly.
The data block is the storage unit of data in the Solaris file system. The default size of a data block in the Solaris file system is 8192 bytes. After a block is full, the file is allotted another block. The addresses of these blocks are stored as an array in the Inode.
The first 12 pointers in the array are direct addresses of the file; that is, they point to the first 12 data blocks where the file contents are stored. If the file grows larger than these 12 blocks, then a 13th block is added, which does not contain data. This block, called an indirect block, contains pointers to the addresses of the next set of direct blocks.
If the file grows still larger, then a 14th block is added, which contains pointers to the addresses of a set of indirect blocks. This block is called the double indirect block. If the file grows still larger, then a 15th block is added, which contains pointers to the addresses of a set of double indirect blocks. This block is called the triple indirect block.
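With the 8192-byte blocks described above, and assuming 4-byte block addresses (a typical figure, used here purely for illustration), the reach of each pointer level works out as follows:

```shell
bs=8192                      # bytes per data block
ptrs=$((bs / 4))             # 4-byte addresses per indirect block: 2048

direct=$((12 * bs))                  # 12 direct pointers
single=$((ptrs * bs))                # 13th pointer: one indirect block
double=$((ptrs * ptrs * bs))         # 14th pointer: double indirect
triple=$((ptrs * ptrs * ptrs * bs))  # 15th pointer: triple indirect

echo "direct blocks:   $direct bytes"
echo "single indirect: $single bytes"
echo "double indirect: $double bytes"
echo "triple indirect: $triple bytes"
```

Under these assumptions the 12 direct blocks cover only 96 KB, so almost all of a large file is reached through the indirect chains.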
Hard and soft links are among the great features of Unix. A link is a reference in a directory to a file stored in another directory; in the case of a soft link, it can also refer to a directory. There may be multiple links to a file. Links eliminate redundancy because you do not need to store multiple copies of the same file.
Links are of two types: hard and soft (also known as symbolic).
To create a symbolic link, you must use the -s option with the ln command. Files that are soft linked show an l as the file-type character (the first character of the permissions field) in ls -l output, whereas hard-linked files do not. A directory can be symbolically linked, but it cannot be hard linked.
Obviously, no file exists with a link count of less than one. The relative pathnames . and .. are nothing but links to the current directory and its parent directory. They are present in every directory: each directory stores these two links along with the inode numbers of its files, and they can be listed with the ls -lia option. A directory therefore has a minimum of two links, and its link count increases as the number of subdirectories increases. Whenever you issue a command that lists file attributes, the inode block is consulted via the inode number and the corresponding data is retrieved.
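These points are easy to verify in any POSIX shell (the filenames below are arbitrary examples):

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1

echo hello > original
ln original hardlink      # hard link: same inode, link count rises to 2
ln -s original softlink   # soft link: its own inode, file type 'l'

ls -li original hardlink softlink   # inode numbers and link counts
```

The listing shows original and hardlink sharing one inode number with a link count of 2, while softlink has its own inode and an l as the first character of its mode field.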
Each file system used in Solaris is intended for a specific purpose.
The root file system is at the top of an inverted tree structure. It is the first file system that the kernel mounts during booting. It contains the kernel and device drivers. The / directory is also called the mount point directory of the file system. All references in the file system are relative to this directory. The entire file system structure is attached to the main system tree at the root directory during the process of mounting, and hence the name. During the creation of the file system, a lost+found directory is created within the mount point directory. This directory is used to hold any orphaned files found during the customary file system check, which you perform with the fsck command.
/ (root)
The directory located at the top of the Unix file system. It is represented by the "/" (forward slash) character.
/usr Contains commands and programs for system-level usage and administration.
/var Contains system log files and spooling files, which grow in size with system usage.
/home Contains user home directories.
/opt Contains optional third-party software and applications.
/tmp Contains temporary files, which are cleared each time the system is booted.
/proc Contains information about all active processes.
You create file systems with the newfs command. The newfs command accepts only logical raw device names. The syntax is as follows:
newfs [ -v ] [ mkfs-options ] raw-special-device
For example, to create a file system on the disk slice c0t3d0s4, the following command is used:
# newfs -v /dev/rdsk/c0t3d0s4
The -v option prints the actions in verbose mode. The newfs command calls the mkfs command to create a file system. You can invoke the mkfs command directly by specifying a -F option followed by the type of file system.
Mounting file systems is the next logical step to creating file systems. Mounting refers to naming the file system and attaching it to the inverted tree structure. This enables access from any point in the structure. A file system can be mounted during booting, manually from the command line, or automatically if you have enabled the automount feature.
With remote file systems, the server shares the file system over the network and the client mounts it.
The / and /usr file systems, as mentioned earlier, are mounted during booting. To mount a file system, attach it to a directory anywhere in the main inverted tree structure. This directory is known as the mount point. The syntax of the mount command is as follows:
# mount <logical block device name> <mount point>
The following steps mount a file system c0t2d0s7 on the /export/home directory:
# mkdir /export/home
# mount /dev/dsk/c0t2d0s7 /export/home
You can verify the mounting by using the mount command, which lists all the mounted file systems.
Note: If the mount point directory has any content prior to the mounting operation, it is hidden and remains inaccessible until the file system is unmounted.
Data is stored on and retrieved from the physical disk where the file system is mounted. Although there are no defined specifications for creating the file systems on the physical disk, slices are usually allocated as follows:
0. Root or / - Files and directories of the OS.
The slices shown above are all allocated on a single disk. However, there is no restriction that all file systems be located on a single disk; they can also span multiple disks. Slice 2 refers to the entire disk. Hence, if you want to allocate an entire disk to a file system, you can do so by creating it on slice 2. The mount command supports a variety of useful options.
Option | Description |
---|---|
-o largefiles | Files larger than 2GB are supported in the file system. |
-o nolargefiles | Does not mount file systems with files larger than 2GB. |
-o rw | File system is mounted with read and write permissions. |
-o ro | File system is mounted with read-only permission. |
-o bg | Repeats mount attempts in the background. Used with non-critical file systems. |
-o fg | Repeats mount attempts in the foreground. Used with critical file systems. |
-p | Prints the list of mounted file systems in /etc/vfstab format. |
-m | Mounts without making an entry in the /etc/mnttab file. |
-O | Performs an Overlay mount. Mounts over an existing mount point. |
A file system can be unmounted with the umount command. The following is the syntax for umount:
umount <mount-point or logical block device name>

File systems cannot be unmounted when they are in use or when the umount command is issued from any subdirectory within the file system mount point.

Note: A file system can be unmounted forcibly if you use the -f option of the umount command. Please refer to the man page to learn about the use of this option.
The umountall command is used to unmount a group of file systems. The umountall command unmounts all file systems in the /etc/mnttab file except the /, /usr, /var, and /proc file systems. If you want to unmount all the file systems from a specified host, use the -h option. If you want to unmount all the file systems mounted from remote hosts, use the -r option.
The /etc/vfstab (Virtual File System Table) file plays a very important role in system operations. This file contains one record for every device that has to be automatically mounted when the system enters run level 2.
Column Name | Description |
---|---|
device to mount | The logical block name of the device to be mounted. It can also be a remote resource name for NFS. |
device to fsck | The logical raw device name to be subjected to the fsck check during booting. It is not applicable for read-only file systems, such as the High Sierra File System (HSFS), and network file systems such as NFS. |
Mount point | The mount point directory. |
FS type | The type of the file system. |
fsck pass |
The number used by fsck to decide whether the file system is to be checked. 0 - File system is not checked. 1 - File system is checked sequentially. 2 - File system is checked simultaneously along with other file systems where this field is set to 2. |
Mount at boot | This field determines whether the file system is mounted by the mountall command at boot time. The options are either yes or no. |
Mount options | The mount options to be supported by the mount command while the particular file system is mounted. |
Note the no values in this field for the root, /usr, and /var file systems. These are mounted by default. The fd field refers to the floppy disk and the swap field refers to the tmpfs in the /tmp directory.
A sample vfstab file looks like:
#device             device              mount           FS      fsck    mount    mount
#to mount           to fsck             point           type    pass    at boot  options
#
fd                  -                   /dev/fd         fd      -       no       -
/proc               -                   /proc           proc    -       no       -
/dev/dsk/c0t0d0s4   -                   -               swap    -       no       -
/dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0  /               ufs     1       no       -
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /usr            ufs     1       no       -
/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3  /var            ufs     1       no       -
/dev/dsk/c0t0d0s7   /dev/rdsk/c0t0d0s7  /export/home    ufs     2       yes      -
/dev/dsk/c0t0d0s5   /dev/rdsk/c0t0d0s5  /opt            ufs     2       yes      -
/dev/dsk/c0t0d0s1   /dev/rdsk/c0t0d0s1  /usr/openwin    ufs     2       yes      -
swap                -                   /tmp            tmpfs   -       yes      -
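Since each vfstab record is just whitespace-separated fields, standard text tools can pick it apart. A small sketch using one sample record of the same shape (the field positions follow the column descriptions above):

```shell
# Fields: device-to-mount, device-to-fsck, mount point, FS type,
#         fsck pass, mount-at-boot, mount options.
line='/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes -'

mountpt=$(echo "$line" | awk '{print $3}')
fstype=$(echo "$line" | awk '{print $4}')
atboot=$(echo "$line" | awk '{print $6}')

echo "mount $mountpt as $fstype (mount at boot: $atboot)"
```

The same awk one-liners are handy for auditing a real vfstab, for example listing every file system with the mount-at-boot flag set to yes.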
The /etc/mnttab file comprises a table that defines which partitions and/or disks are currently mounted by the system.
The /etc/mnttab file contains the following details about each mounted file system:
A sample mnttab file:
/dev/dsk/c0t0d0s0 / ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200000 1014366934
/dev/dsk/c0t0d0s6 /usr ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200006 1014366934
/proc /proc proc dev=4300000 1014366933
mnttab /etc/mnttab mntfs dev=43c0000 1014366933
fd /dev/fd fd rw,suid,dev=4400000 1014366935
/dev/dsk/c0t0d0s3 /var ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200003 1014366937
swap /var/run tmpfs xattr,dev=1 1014366937
swap /tmp tmpfs xattr,dev=2 1014366939
/dev/dsk/c0t0d0s5 /opt ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200005 1014366939
/dev/dsk/c0t0d0s7 /export/home ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200007 1014366939
/dev/dsk/c0t0d0s1 /usr/openwin ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=2200001 1014366939
-hosts /net autofs indirect,nosuid,ignore,nobrowse,dev=4580001 1014366944
auto_home /home autofs indirect,ignore,nobrowse,dev=4580002 1014366944
-xfn /xfn autofs indirect,ignore,dev=4580003 1014366944
sun:vold(pid295) /vol nfs ignore,dev=4540001 1014366950
Some applications and processes create temporary files that occupy a lot of hard disk space. As a result, it is necessary to impose a restriction on the size of the files that are created.
Solaris provides tools to control storage use, such as the ulimit command.
The ulimit command is a built-in shell command that displays the current file size limit. The default value for the maximum file size, set inside the kernel, is 1500 blocks. The following command displays the current limits:
$ ulimit -a
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          unlimited
stack(kbytes)         8192
coredump(blocks)      unlimited
nofiles(descriptors)  256
memory(kbytes)        unlimited
If the limit is not set, it reports as unlimited.
The system administrator and the individual users change this value to set the file size at the system level and at the user level, respectively. The following is the syntax of the ulimit command:
ulimit <value>
For example, the following syntax sets the file size limit to 1600 blocks:
# ulimit 1600
# ulimit -a
time(seconds)         unlimited
file(blocks)          1600
data(kbytes)          unlimited
stack(kbytes)         8192
coredump(blocks)      unlimited
nofiles(descriptors)  256
memory(kbytes)        unlimited
The file size can be limited at the system level or the user level. To set it at the system level, change the value of the ulimit variable in the /etc/profile file. To set it at the user level, change the value in the .profile file present in the user's home directory. The user-level setting always takes precedence over the system-level setting. It is the user's profile file that sets the working environment.
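The scoping described above can be seen by setting the limit in a child shell, which is essentially what a profile file does for a login shell; the parent shell's limit is untouched:

```shell
parent_limit=$(ulimit)                      # file-size limit in this shell
child_limit=$(sh -c 'ulimit 1600; ulimit')  # lowered only in the child

echo "parent: $parent_limit"
echo "child:  $child_limit"
```

This assumes the parent's limit is at least 1600 blocks to begin with; a process may lower its limit freely, but raising it back is restricted.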
Note: The ulimit values set at the user level and system level cannot exceed the default ulimit value set in the kernel.
Jun 09, 2021 | www.tecmint.com
List all Detected Devices
To discover which hard disks have been detected by the kernel, you can search for the keyword "sda" with "grep", as shown below.
# dmesg | grep sda
[    1.280971] sd 2:0:0:0: [sda] 488281250 512-byte logical blocks: (250 GB/232 GiB)
[    1.281014] sd 2:0:0:0: [sda] Write Protect is off
[    1.281016] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.281039] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.359585] sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
[    1.360052] sd 2:0:0:0: [sda] Attached SCSI disk
[    2.347887] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   22.928440] Adding 3905532k swap on /dev/sda6. Priority:-1 extents:1 across:3905532k FS
[   23.950543] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[   24.134016] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[   24.330762] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[   24.561015] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)

NOTE: 'sda' is the first SATA hard drive, 'sdb' is the second SATA hard drive, and so on. Search for 'hda' or 'hdb' in the case of an IDE hard drive.
May 24, 2021 | blog.dougco.com
Recovery LVM Data from RAID – Doug's Blog
- Post author By doug
- Post date March 1, 2018
We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned out.
Since these drives were in a RAID configuration, you could not hook any one of them up to a Windows or Linux box to see the data. You have to hook them all up to one box and reassemble the RAID.
We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS. You can use parted to see what drives are now being seen by your linux system:
parted -l | grep 'raid\|sd'
Then using that output, we assembled the drives into a software array:
mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2
If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.
If the last command went well, you can see the built array like so:
root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding

Note the personality shows the RAID type; in our case it was linear, which is probably the worst RAID, since if any one drive fails, your data is lost. So good thing these drives outlasted the power supply! Now we find the physical volume:
pvdisplay /dev/md0
Gives us:
-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3

Then we find the logical volume:
lvdisplay /dev/vg0
Gives us:
-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384

-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384

We want to focus on the lv0 volume. You cannot mount it yet; first check its state with lvscan.
lvscan
This shows us that things are currently inactive:
inactive '/dev/vg0/syslv' [1.00 GB] inherit
inactive '/dev/vg0/lv0' [928.00 GB] inherit

So we set them active with:
vgchange vg0 -a y
And doing lvscan again shows:
ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit

Now we can mount with:
mount /dev/vg0/lv0 /mnt
And voilà! We have our data up and accessible in /mnt to recover! Of course, your setup is most likely going to look different from what I have shown you above, but hopefully this gives some helpful information for you to recover your own data.
Nov 14, 2019 | www.redhat.com
If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.
It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:
But, don't worry, in this article I'll show you how to set the NIC to connect on every boot and I'll show you how to disable/enable your NIC on demand.
If your NIC isn't enabled at startup, you have to edit the
/etc/sysconfig/network-scripts/ifcfg-NIC_name
file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr
command:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

and change the
ONBOOT="no"
entry to ONBOOT="yes"
as shown below:TYPE="Ethernet" PROXY_METHOD="none" BROWSER_ONLY="no" BOOTPROTO="dhcp" DEFROUTE="yes" IPV4_FAILURE_FATAL="no" IPV6INIT="yes" IPV6_AUTOCONF="yes" IPV6_DEFROUTE="yes" IPV6_FAILURE_FATAL="no" IPV6_ADDR_GEN_MODE="stable-privacy" NAME="enp0s3" UUID="77cb083f-2ad3-42e2-9070-697cb24edf94" DEVICE="enp0s3" ONBOOT="yes"Save and exit the file.
You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
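If you prefer not to open an editor, the same one-line change can be made with sed. The sketch below works on a throwaway copy so it can be tried safely; run it against the real file under /etc/sysconfig/network-scripts/ only as root:

```shell
cfg=$(mktemp)   # stand-in for /etc/sysconfig/network-scripts/ifcfg-enp0s3
printf 'NAME="enp0s3"\nDEVICE="enp0s3"\nONBOOT="no"\n' > "$cfg"

sed -i 's/^ONBOOT="no"/ONBOOT="yes"/' "$cfg"   # flip the boot flag

grep ONBOOT "$cfg"
```

The anchored ^ONBOOT pattern keeps the substitution from touching any other line that might mention the string.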
To enable the NIC, use the
ifup
command:

ifup enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Now the
ip addr
command displays the enp0s3 device with an IP address:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86266sec preferred_lft 86266sec
    inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute
       valid_lft 3467sec preferred_lft 3467sec
    inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

To disable a NIC, use the
ifdown
command. Please note that issuing this command from a remote system will terminate your session:

ifdown enp0s3
Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

That's a wrap
It's frustrating to encounter a Linux system that has no network connection. It's more frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy to miss the switch during installation, I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the
ifup
command to see if that's your easy fix.

Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name
Nov 24, 2020 | www.redhat.com
The need for an initrd

When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader. The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an
init
process. This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.
First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the user for the passphrase and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configured IP addresses and routes on a network interface.
To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.
The dracut infrastructure

On Fedora and RHEL, the initrd is built through dracut. From its home page, dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in
/usr/lib/dracut/modules.d
."A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.
Kernel command-line

Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the
root=dhcp
argument. The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux.
The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains
root=dhcp
. After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path (which could be, for example, 192.168.50.1:/nfs/client
). Dracut then mounts the NFS share at this location and proceeds with the boot.If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:
root=nfs:192.168.50.1:/nfs/client ip=192.168.50.101:::24::ens2:none

Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.
There are two syntaxes to specify network configuration for an interface:
ip=<interface>:{dhcp|on|any|dhcp6|auto6}[:[<mtu>][:<macaddr>]]
ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}[:[<mtu>][:<macaddr>]]

The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here are some examples:
ip=enp1s0:dhcp
ip=192.168.10.30::192.168.10.1:24::enp1s0:none
ip=[2001:0db8::02]::[2001:0db8::01]:64::enp1s0:none

Note that if you pass an
ip=
option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1
to the command line.

You probably noticed that among automatic configuration methods, there is also ibft. iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing
ip=ibft
to the kernel, the network configuration is retrieved from the firmware.

Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.
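The colon-separated static syntax can also be split mechanically, which helps when debugging a hand-written ip= argument. A sketch using the static example from above; the field names follow the second syntax shown earlier:

```shell
arg='192.168.10.30::192.168.10.1:24::enp1s0:none'

old_ifs=$IFS; IFS=:
set -- $arg      # split on ':'; empty fields are preserved
IFS=$old_ifs

client=$1; peer=$2; gateway=$3; netmask=$4
hostname=$5; iface=$6; method=$7

echo "client=$client gateway=$gateway netmask=$netmask iface=$iface method=$method"
```

Empty positions (peer and client_hostname here) simply come out as empty fields, which mirrors how the argument is written with adjacent colons.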
Network modules

The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command
dracut --list-modules
.At the moment, there are two modules to configure the network:
network-legacy
andnetwork-manager
. You might wonder why different modules provide the same functionality.
network-legacy is older and uses shell scripts calling utilities like iproute2, dhclient, and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module intended to do or of the current state of each interface, which can lead to problems maintaining state across the root switch boundary. A prominent example of state that must be kept is the DHCP lease: if an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.
To ensure a seamless transition, a mechanism is needed to pass state between the two environments. However, passing state between services with different configuration models can be a problem.
The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root. Fedora 31 was the first distribution to switch to network-manager in the initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.
Enabling a different network module
While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.
To rebuild the initrd using a specific network module, use one of the following commands:
# dracut --add network-legacy --force --verbose
# dracut --add network-manager --force --verbose
Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:
# echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
# dracut --regenerate-all --force --verbose
The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.
The network-manager dracut module
As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).
The first stage parses the kernel command line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections. The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.
When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.
The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.
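If you are unsure which of the two network modules a given initramfs image was built with, lsinitrd can list the dracut modules it contains. This is a sketch; the -m (--mod) option is described in the lsinitrd man page and may differ between versions:

```
# List the dracut modules included in the default initramfs image
# and filter for the network-related ones.
lsinitrd -m | grep network
```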
Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root, so that the new NetworkManager process running there knows exactly what to do.
Troubleshooting
If you have network issues in dracut, this section contains some suggestions for investigating the problem.
The first thing to do is add rd.debug to the kernel command line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.
If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is the rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:
rd.break={cmdline|pre-udev|pre-trigger|initqueue|pre-mount|mount|pre-pivot|cleanup}
The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:
# dracut -f --install "ping tcpdump"
and then optionally verify that they were included successfully:
# lsinitrd | grep "ping\|tcpdump"
Arguments: -f --install 'ping tcpdump'
-rwxr-xr-x   1 root root   82960 May 18 10:26 usr/bin/ping
lrwxrwxrwx   1 root root      11 May 29 20:35 usr/sbin/ping -> ../bin/ping
-rwxr-xr-x   1 root root 1065224 May 29 20:35 usr/sbin/tcpdump
The generator
If you are familiar with NetworkManager configuration, you might want to know how a given kernel command line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command line without having to boot the machine.
The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:
$ /usr/libexec/nm-initrd-generator --stdout -- \
    ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0
802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC address
In this example, the generator reports an error because the MTU field after enp1s0 is missing: '00' is consumed as the MTU, leaving a truncated, invalid MAC address. Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:
$ /usr/libexec/nm-initrd-generator --stdout -- \
    ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0
*** Connection 'enp1s0' ***
[connection]
id=enp1s0
uuid=e1fac965-4319-4354-8ed2-39f7f6931966
type=ethernet
interface-name=enp1s0
multi-connect=1
permissions=
[ethernet]
cloned-mac-address=00:99:88:77:66:55
mac-address-blacklist=
[ipv4]
dns-search=
ignore-auto-dns=true
may-fail=false
method=auto
[ipv6]
addr-gen-mode=eui64
dns-search=
ignore-auto-dns=true
method=auto
[proxy]
Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found in the nm-settings man page.
Conclusion
The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.
While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager would run in the same way as in the real root, reducing the code to be maintained and tested.
To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it, as it is needed to implement some new features.
With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.
Dec 28, 2020 | www.servethehome.com
The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase and is looking to convert a percentage of it to paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...
Dec 28, 2020 | freedomben.medium.com
It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:
If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.
That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod." (cue evil mustache-stroking capitalist villain).
... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release .
... ... ...
I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, for example powering my home router. Generally speaking, I want my router to be as boring as absolutely possible. That said, even that may not stay true forever if, for example, CentOS gets good WireGuard support.
Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea about the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.
Dec 27, 2020 | freedomben.medium.com
There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.
Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.
This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. For those who want to use Ubuntu, the Debian derivative, that is probably also a good idea.
Dec 21, 2020 | www.zdnet.com
On Hacker News , the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."
Over at Reddit/Linux , another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."
A popular tweet from The Best Linux Blog In the Unixverse, nixcraft, an account with over 200-thousand subscribers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstations, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null. Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."
Many others joined this chorus of annoyed CentOS users blaming IBM for their favorite Linux being taken away from them. Still others screamed that Red Hat was betraying open source itself.
... ... ...
Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."
... ... ...
Dec 23, 2020 | www.zdnet.com
A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."
Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."
Yet another Red Hat staffer snapped, "Look at the CentOS FAQ. It says right there:
CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.
CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.
CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.
We don't owe you anything."
Dec 10, 2020 | blog.centos.org
Matthew Stier says: December 8, 2020 at 8:11 pm
My office switched the bulk of our RHEL to OL years ago, and find it a great product, with great support, and only needing to buy support for systems we actually want support on.
Oracle provided scripts to convert EL5, EL6, and EL7 systems, and was able to convert some EL4 systems I still have running. (Its a matter of going through the list of installed packages, use 'rpm -e --justdb' to remove the package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)
art_ok 1 point · just now
We have been using Oracle Linux exclusively for the last 5-6 years for everything - thousands of servers, both for internal use and for a hundred or so customers.
Not once have we regretted it, had any issues, or been tempted to move to RedHat, let alone CentOS.
I found Oracle Linux has several advantages over RedHat/CentOS:
If you need official support, Oracle support is generally cheaper than RedHat's.
You can legally run OL free and have access to patches/repositories.
Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.
It is very easy to switch between supported and free setups (say, you have a proof-of-concept setup running free OL, but then it is promoted to production status - it's just a matter of registering the box with Oracle, no need to reinstall or reconfigure anything).
You can easily move licenses/support from one box to another, so you always run the same OS and do not have to think and decide (RedHat for production / CentOS for dev/test).
You have a choice to run the good old RedHat kernel or the newer Oracle kernel (which is pretty much a vanilla kernel with minimal modifications - just newer). We generally run Oracle kernels on all boxes unless we have to support a particularly pedantic customer who insists on using the old RedHat kernel.
Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.
Overall, it is a pleasure to work with and support OL.
Negatives:
I found the RedHat knowledge base / documentation is much better than Oracle's.
Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers customers to use them.
Some complain about the quality of Oracle's support. Can't really comment on that; had little exposure to RedHat support, maybe used it a couple of times and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over the last few years support quality for Linux has improved noticeably - guess Oracle pushes their cloud very aggressively and as a result invests in Linux support (as Oracle cloud aka OCI runs on Oracle Linux).
Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just a matter of updating the yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle).
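The repository repointing the commenter describes can be sketched roughly as follows. This is an illustration only, not Oracle's official procedure; the repo URL follows the yum.oracle.com layout but should be treated as an assumption, and Oracle's published conversion script (centos2ol.sh) is the safer route on real systems:

```
# Illustration only: point dnf at an Oracle Linux repo instead of the RHEL ones.
# The URL is an assumption based on the yum.oracle.com layout; prefer Oracle's
# official centos2ol.sh conversion script on real systems.
dnf config-manager --add-repo \
    https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64
dnf distro-sync
```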
Dec 10, 2020 | blog.centos.org
Internet User says: December 8, 2020 at 5:13 pm
This is a pretty clear indication that you people are completely out of touch with your users.
Joel B. D. says: December 8, 2020 at 5:17 pm
Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting users to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?
"If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?
Michael says: December 8, 2020 at 8:31 pm
That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.
OS says: December 8, 2020 at 6:20 pm
First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.
JD says: December 8, 2020 at 6:35 pm
Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large university will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.
ShameOnIBM says: December 8, 2020 at 7:07 pm
IBM is declining, hence they need more profit from "useless" product lines. So disgusting.
MLF says: December 8, 2020 at 7:15 pm
An entire team worked for months on a CentOS 8 transition at the uni I work at. I assume a small portion can be salvaged, but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping CentOS 8 is final?
MM says: December 8, 2020 at 7:28 pm
Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian, as it is quite close to CentOS in stability terms.
It is an already existing, functioning distribution ecosystem, and it can probably do well with an influx of resources to enhance the missing bits, such as further improving SELinux support and expanding the Debian security team.
I say this without any official or unofficial involvement with the Debian project, other than being a user.
And we have just launched hundreds of CentOS 8 servers.
Faisal Sehbai says: December 8, 2020 at 7:32 pm
Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!
Disgusting!
William Smith says: December 8, 2020 at 7:39 pm
This is a big mistake. RedHat did this with RedHat Linux 9, the market-leading Linux, and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid converting from earlier versions to 8, and now will have to review that work with my customer.
Daniele Brunengo says: December 8, 2020 at 7:48 pm
I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.
David Potterveld says: December 8, 2020 at 8:08 pm
What a colossal blunder: a pivot from the long-standing mission of an OS providing stability to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near-clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.
a says: December 8, 2020 at 9:08 pm
From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is being run in the first place is... troubling.
From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody
I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.
Ralf says: December 8, 2020 at 9:08 pm
Don't trust Red Hat. A year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/
I'm a current user of old school CentOS, so keep your promise, Mr CTO.
Tamas says: December 8, 2020 at 10:01 pm
That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, nothing changes for current users of CentOS."
https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/
Konstantin says: December 9, 2020 at 3:36 pm
From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'
Read again: CentOS Stream is not a production operating system. 'Nuff said.
Samuel C. says: December 8, 2020 at 10:53 pm
This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to go with Puppet or Chef. IBM did what I thought they would: screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.
Brendan says: December 9, 2020 at 12:15 am
Ironic, and it puts those of us who have recently migrated many of our development servers to CentOS 8 in a really bad spot. Luckily we haven't licensed RHEL 8 production servers yet - and now that's never going to happen.
vinci says: December 8, 2020 at 11:45 pm
I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. Ansible Tower). You're going against everything that open source stands for. And on top of that you choose to stop offering support for CentOS 8 all of a sudden! What a horrid move on your part. The only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...
Peter Vonway says: December 8, 2020 at 11:56 pm
What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.
Scott says: December 9, 2020 at 8:38 am
This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers, while not taking into account the fact that Red Hat's strong adoption in the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.
Having an open source, non-support-contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.
They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.
OSLover says: December 9, 2020 at 12:09 am
So sad. Not only breaking the support promise, but so quickly (2021!).
Business-wise, a lot of business software provides CentOS packages and support - hosting panels, backup software, virtualization, management. I mean, A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.
Community-wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours on something that will in the end benefit only IBM "subscribers"? I don't think they are.
What a sad way to end 2020.
technick says: December 9, 2020 at 12:09 am
I don't want to give up on CentOS, but this is a strong, life-changing decision. My background is Linux engineering with over 15+ years of hardcore experience. CentOS has always been my go-to when an organization didn't have the appetite for RHEL and the $75-a-year license fee per instance. I successfully fought off Ubuntu takeovers at 2 of the last 3 organizations I've been with. I can't and won't fight off any more, and will start advocating for Ubuntu or pure Debian moving forward.
RIP CentOS. Red Hat killed a great project. I wonder if Ansible will be next?
ConcernedAdmin says: December 9, 2020 at 12:47 am
Hoping that stabbing the Open Source community in the back will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they are from reality and consumed by greed, and it will simply backfire on them when we switch to Debian or any other LTS alternative. I can't imagine moving everything I so caressed and loved to a mess like Ubuntu.
John says: December 9, 2020 at 1:32 am
Asinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently, with more to go. We also have a RHEL subscription for outward-facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. This is garbage, saying 1 year from now when it was supposed to be till 2029. A complete betrayal. One year to move everything??? Stupid.
Now I'm going to be looking at a couple of other options, but it won't be RHEL after this type of move. This has destroyed my trust in RHEL, as I'm sure IBM pushed for this. You will be losing my RHEL money once I choose and migrate. I get that companies exist to make money, and that's fine. This, though, is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe, saying "f you, customers." You will no longer get my money for doing that to me.
Concerned Fren says: December 9, 2020 at 1:52 am
In hindsight it's clear to see that the only reason RHEL took over CentOS was to kill the competition.
This is also highly frustrating as I just completed new CentOS 8 and RHEL 8 builds for non-production and production servers and had already begun deployments. Now I'm left in the situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no-go; their enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up by Microsoft in the next few years.
Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of admins who used CentOS at home and in the enterprise also advocated and drove sales towards RedHat. Now with this announcement I'm afraid the damage is done, and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is, however, an opportunity for other corporations such as SUSE (owned by Micro Focus) to capitalize on this epic blunder simply by announcing an LTS version of openSUSE Leap. This would in turn move people/corporations to the SUSE platform, which in turn would drive sales for SLES.
William Ashford says: December 9, 2020 at 2:02 am
So the inevitable has come to pass; what was once a useful distro will disappear like others have. CentOS was handy for education and training purposes, and for production when you couldn't afford the fees for "support". Now it will just be a shadow of Fedora.
Christian Reiss says: December 9, 2020 at 6:28 am
This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all datacenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.
For the software (CentOS) I hope for a quick death that will not drag on for years.
Ian says: December 9, 2020 at 2:10 am
This is a bit sad. There was always a conflict of interest associated with Red Hat managing the CentOS project, and this is the end result of that conflict of interest.
There is a genuine benefit for Red Hat in the existence of CentOS, but it would appear that that benefit isn't great enough, and some arse clown thought that forcing users to migrate would increase Red Hat's revenue.
The reality is that someone will repackage Red Hat and make it just like CentOS. The only difference is that Red Hat now lives in the same camp as Oracle.
cody says: December 9, 2020 at 4:53 am
Everyone predicted this when Red Hat bought CentOS, and when IBM bought Red Hat it cemented everyone's notion.
Ganesan Rajagopal says: December 9, 2020 at 5:09 am
Thankfully we just started our migration from CentOS 7 to 8, and this surely puts a stop to that. Even if CentOS backtracks on this decision because of the community backlash, the reality is the trust is lost. You've just given a huge leg up to Ubuntu/Debian in the enterprise. Congratulations!
Bomel says: December 9, 2020 at 6:22 am
I am a senior system admin in my organization, which spends millions of dollars a year on RH & IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM products under my watch.
Steve says: December 9, 2020 at 8:57 am
IBM are seeing every CentOS install as a missed RHEL subscription...
Ralf says: December 9, 2020 at 10:29 am
Some years ago IBM bought Informix. We switched to PostgreSQL when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?
Michel-André says: December 9, 2020 at 5:18 pm
Hi all,
Remember when RedHat, around RH-7.x, wanted to charge for the distro? The community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.
Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (which had 85% of the market), disappearing into darkness!
Michel-André
PeteVM says: December 9, 2020 at 5:27 pm
JadeK says: December 9, 2020 at 6:36 pmAs I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.
CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.
Godimir Kroczweck says: December 9, 2020 at 8:21 pmI am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.
Congrats fellas, you did it. You perfected the transition to Debian from CentOS.
Paul R says: December 9, 2020 at 9:14 pm
I find it kind of funny, I find it kind of sad. The dreams in which I'm moving 1.5K+ machines to whatever distro I have yet to find fitting as a replacement are the...
Wait. How could one in all seriousness consider cutting short an already published EOL a good idea?
I literally had to convince people to move from Ubuntu and Debian installations to CentOS for the sake of stability and longer support, just to end up looking like a clown now, because with a single move the distro was deprived of both.
Nicholas Knight says: December 9, 2020 at 9:34 pm
Happy to donate and be part of the revolution away from the corporate vampire squid that is IBM.
Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money because we relied on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.
Dec 10, 2020 | blog.centos.org
Orsiris de Jong says: December 9, 2020 at 9:41 am
Dear IBM,
As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
The reasons for that choice were stability, long term support and good hardware vendor support.
Like many others, I've built much of my skills upon this Linux flavor over the years, and have been involved in the community through numerous bug reports, bug fixes, and howto writeups.
Using CentOS was a good alternative to RHEL on a lot of non-critical systems, and for smaller companies like the one I work for.
The moral contract has always been a rock solid "Community Enterprise OS" in exchange for community support, bug reports & fixes, and growing interest from developers.
Red Hat endorsed that moral contract when it brought official support to CentOS back in 2014.
Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.
Also, a lot of smaller developers won't support RHEL anymore because their target wasn't big companies, making fewer and fewer products available without the need for self-supported RPM builds.
This will make RHEL less and less widely used by startups, enthusiasts and others.
CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.
Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.
As a lot of people here, I won't go the Oracle way; they already did a very good job destroying other companies' legacies.
Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.
This means that we'll now have to turn to other Linux flavors, like Debian or OpenSUSE, of which at least some have hardware vendor support too, but with a shorter lifecycle.
I think you destroyed a large part of the RHEL / CentOS community with this move today.
Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.
... ... ...
Dec 10, 2020 | www.zdnet.com
I'm far from alone. By W3Techs' count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third with 17.5%. RHEL? It's a distant fourth with 1.8%.
If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.
Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle. That means if you're using CentOS 7, you'll see support through June 30, 2024.
Dec 10, 2020 | www.reddit.com
I bet Fermilab are thrilled; back in 2019 they announced that they wouldn't develop Scientific Linux 8, and would focus on CentOS 8 instead. https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX-ANNOUNCE;11d6001.1904 l
clickwir 19 points · 1 day ago
Time to bring back Scientific Linux.
Dec 10, 2020 | www.reddit.com
KugelKurt 18 points · 1 day ago
I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.
The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands, but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL without removing all trademarks first.
I don't think a rolling release distribution is what BMD wants.
My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.
Dec 10, 2020 | linux.oracle.com
Oracle Linux: A better alternative to CentOS
We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.
But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?
We're putting Oracle Linux in your hands by doing two things:
- We've made the Oracle Linux software available free of charge
- We've created a simple script to switch your CentOS systems to Oracle Linux
We think you'll like what you find, and we'd love for you to give it a try.
FAQ
- Wait, doesn't Oracle Linux cost money?
- Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at yum.oracle.com . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
- Is this just another CentOS?
- Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
- How is this better than CentOS?
- Well, for one, you're getting the exact same bits our paying enterprise customers are getting. So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does.
So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.
- What about the code quality?
- Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
- What if I want support?
- If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.
Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.
- Why are you doing this?
- This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.
At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.
- How do I make the switch?
- Run the following as root:
curl -O https://linux.oracle.com/switch/centos2ol.sh
sh centos2ol.sh
- What versions of CentOS can I switch?
- centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
- What does the script do?
- The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
- Is it safe?
- The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing [email protected] .
Dec 10, 2020 | blog.centos.org
Joe says: December 9, 2020 at 1:03 pm
IBM is messing up RedHat after the takeover last year. This is the most unfortunate news for the free open-source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.
We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.
TechSmurf says: December 9, 2020 at 12:38 am
You can use Oracle Linux in exactly the same way as you did CentOS, except that you have the option of buying support without reinstalling a "commercial" variant.
Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from yum.oracle.com
David Anderson says: December 8, 2020 at 7:16 pm
Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.
Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base if they don't do anything Oracle-ish, myself included.
I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.
Bill Murmor says: December 9, 2020 at 5:04 pm
"OEL is literally a rebranded RH."
So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - https://www.oracle.com/ar/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf - "Oracle Linux is easy to download and completely free to use, distribute, and update."
k1 says: December 9, 2020 at 7:58 pm
So, what's the problem?
IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.
Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.
Dec 10, 2020 | blog.centos.org
Charlie F. says: December 8, 2020 at 6:37 pm
David Anderson says: December 8, 2020 at 7:15 pm
Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it.
Max Grü says: December 9, 2020 at 2:05 pm
The link says that you don't have to pay for Oracle Linux. So switching to it from CentOS 8 could be a very easy option.
Phil says: December 9, 2020 at 2:10 pm
Oracle Linux is free. The only thing that costs money is support for it. I quote: "Yes, we know that this is Oracle, but it's actually free. Seriously."
This quick'n'dirty hack worked fine to convert CentOS 8 to Oracle Linux 8, YMMV:
repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
  ${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
  ${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
  ${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
  ${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel
Dec 10, 2020 | blog.centos.org
Ward Mundy says: December 9, 2020 at 3:12 am
Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.
Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.
Dec 10, 2020 | aws.amazon.com
Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.
Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.
Dec 10, 2020 | blog.centos.org
Sam Callis says: December 8, 2020 at 3:58 pm
Sieciowski says: December 9, 2020 at 11:19 am
I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is becoming the beta testing ground for RHEL 8.
And instead of 10 years of support, you need to update to the latest dot release. This has me very concerned.
Joe says: December 9, 2020 at 11:47 am
Well, 10 years - have you ever contributed anything to the CentOS community, or paid them a wage, or at least donated some decent hardware for development? Or have you just been a parasite all the time, and now you're surprised that someone expects you to buy your own lunches for a change?
If you think you could have done it better, why not take the RH sources and make your own FreeRHos-whatever distro, then support, maintain and patch all the subsequent versions for free?
Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm
That's ridiculous. Red Hat has benefitted from the free testing and corner-case usage by CentOS users and made money hand over fist on RHEL. Shed no tears for people using CentOS for free. That is the benefit of opening the core of your product.
Matt Phelps says: December 8, 2020 at 4:12 pm
You are missing a very important point. The goal of the CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations, and it would have been clear whether there could be financial support for the rebuild or not.
Presenting the entire community with a done deal is disheartening, and no one will trust Red Hat's claim to be pro-community. Not to mention the Red Hat employees who sit on the CentOS board: who can trust their integrity after this fiasco?
fahrradflucht says: December 8, 2020 at 5:37 pm
This is a breach of the already published timeline for CentOS 8, where the EOL was May 2029. One year's notice for such a massive change is unacceptable.
Move this approach to CentOS 9
A says: December 8, 2020 at 7:11 pm
This! People already started deploying CentOS 8 with the expectation of 10 years of updates. Even a migration to RHEL 8 would imply completely reprovisioning the systems, which is a big ask for systems deployed in the field.
Gregory Kurtzer says: December 8, 2020 at 4:27 pm
I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng slack (link on the website hpcng.org).
Greg (original founder of CentOS)
Michael says: December 8, 2020 at 8:26 pm
Not a programmer, but I'd certainly use it. I hope you get it off the ground.
Bond Masuda says: December 8, 2020 at 11:53 pm
This sounds like a great idea, and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?
Rex says: December 9, 2020 at 3:46 am
Feel free to contact me. I'm a long time RH user (since pre-RHEL, when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for private projects that used CentOS as a foundation. I can contribute compute and storage resources. I can program in a few different languages.
dovla091 says: December 9, 2020 at 10:47 am
Dear Greg,
Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.
The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.
dan says: December 9, 2020 at 4:00 am
Wikipedia. The so-called organization that gets tons of money from tech oligarchs and yet whines about needing money and support? (If you don't believe me, just check their biggest donors.) They also tend to be insanely biased and let whoever pays the most write on their site... Seriously, find another organisation to donate your money to.
Chad Gregory says: December 9, 2020 at 7:21 pm
Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.
Vasile M says: December 8, 2020 at 8:43 pm
Not sure what I could do, but I will keep an eye out for things I could help with. This change to CentOS really pisses me off, as I have stood up 2 CentOS servers for my work's production environment in the last year.
LOL... CentOS is RH from 2014 to date. What did you expect? As long as CentOS is so good and stable that it cuts into RHEL sales... RH and now IBM just think of profit. It was expected; search the net for comments back in 2014.
Apr 30, 2020 | www.redhat.com
Red Hat Sysadmin
Listing partitions with parted
The first thing that you want to do any time you need to make changes to your disk is to find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It will default to your first listed drive. You will then use the print command to display the disk information.

[root@rhel ~]# parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
(parted)

Creating new partitions with parted
Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc. You can see in the output above that there is no partition table for this disk, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with the ext4 filesystem type. For demonstration purposes, I chose to create a 50 MB partition.

(parted) mklabel msdos
(parted) mkpart
Partition type?  primary/extended? primary
File system type?  [ext2]? ext4
Start? 1
End? 50
(parted)
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  50.3MB  49.3MB  primary  ext4         lba

Modifying existing partitions with parted
Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information in the print output. You will then use the resizepart command to make the modifications.

(parted) resizepart
Partition number? 1
End?  [50.3MB]? 100
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End    Size    Type     File system  Flags
 1      1049kB  100MB  99.0MB  primary

You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command.

(parted) resizepart
Partition number? 1
End?  [100MB]? 50
Warning: Shrinking a partition can cause data loss, are you sure you want to continue?
Yes/No? yes
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  50.0MB  49.0MB  primary

Removing partitions with parted
Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

NOTE: Be sure that you have all of the information correct here; there are no safeguards or "are you sure?" questions asked. When you run the rm command, it will delete the partition number you give it.

(parted) rm 1
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start  End  Size  Type  File system  Flags
Jul 13, 2019 | www.linuxtoday.com
- Hardening Linux for Production Use (Jul 12, 2019)
- Quick and Dirty MySQL Performance Troubleshooting (May 09, 2019)
Feb 05, 2020 | forums.centos.org
disable startup graphic
Post by neuronetv » 2014/08/20 22:24:51
I can't figure out how to disable the startup graphic in CentOS 7 64-bit. In CentOS 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf, but there is no grub.conf in CentOS 7. I also tried yum remove rhgb but that wasn't present either.
<moan> I've never understood why the devs include this startup graphic; I see loads of users like me who want a text scroll instead.</moan>
Thanks for any help.
Re: disable startup graphic
- TrevorH
- Forum Moderator
- Posts: 27492
- Joined: 2009/09/24 10:40:56
- Location: Brighton, UK
Post by TrevorH » 2014/08/20 23:09:40
The file to amend now is /boot/grub2/grub.cfg and also /etc/default/grub. If you only amend the defaults file then you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new file generated. You can also edit the grub.cfg file directly, though your changes will be wiped out on the next kernel install if you don't also edit the 'default' file.
CentOS 6 will die in November 2020 - migrate sooner rather than later!
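The two-step procedure described here (edit /etc/default/grub, then regenerate grub.cfg) can be scripted. This is a minimal sketch that demonstrates only the text substitution, on a throwaway file: grub.demo and its contents are assumptions for demonstration. On a real CentOS 7 system you would run the same sed against /etc/default/grub as root and then regenerate the config.

```shell
# Create a stand-in for /etc/default/grub (assumed sample content).
cat > grub.demo <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
EOF

# Strip the "rhgb" and "quiet" tokens from the kernel command line only.
sed -i '/^GRUB_CMDLINE_LINUX/{s/ *\brhgb\b//; s/ *\bquiet\b//;}' grub.demo

grep '^GRUB_CMDLINE_LINUX' grub.demo
# On a real system, regenerate the config afterwards:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```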
CentOS 5 has been EOL for nearly 3 years and should no longer be used for anything!
Full time Geek, part time moderator. Use the FAQ Luke
Re: disable startup graphic
- neuronetv
- Posts: 76
- Joined: 2012/01/08 21:53:07
Post by neuronetv » 2014/08/21 13:12:45
Thanks for that, I did the edits and now the scroll is back.
Re: disable startup graphic
- larryg
- Posts: 3
- Joined: 2014/07/17 04:48:28
Post by larryg » 2014/08/21 19:27:16
The preferred method to do this is using the command plymouth-set-default-theme. If you enter this command, without parameters, as user root you'll see something like

>plymouth-set-default-theme
charge
details
text

This lists the themes installed on your computer. The default is 'charge'. If you want to see the boot up details you used to see in version 6, try

>plymouth-set-default-theme details

Followed by the command

>dracut -f

Then reboot.
This process modifies the boot loader so you won't have to update your grub.conf file manually every time for each new kernel update.
There are numerous themes available you can download from CentOS or in general. Just google 'plymouth themes' to see other possibilities, if you're looking for graphics-type screens.
Re: disable startup graphic
- TrevorH
- Forum Moderator
- Posts: 27492
- Joined: 2009/09/24 10:40:56
- Location: Brighton, UK
Post by TrevorH » 2014/08/21 22:47:49
Editing /etc/default/grub to remove rhgb quiet makes it permanent too.
CentOS 6 will die in November 2020 - migrate sooner rather than later!
CentOS 5 has been EOL for nearly 3 years and should no longer be used for anything!
Full time Geek, part time moderator. Use the FAQ Luke
Re: disable startup graphic
- MalAdept
- Posts: 1
- Joined: 2014/11/02 20:06:27
Post by MalAdept » 2014/11/02 20:23:37
I tried both TrevorH's and LarryG's methods, and LarryG wins.
Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want, but it reduced maximum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put "rhgb quiet" back in and got my 1920x1080 back.
Then I tried "plymouth-set-default-theme details; dracut -f", and got verbose booting without loss of display resolution. Thanks LarryG!
Re: disable startup graphic
- dunwell
- Posts: 116
- Joined: 2010/12/20 18:49:52
- Location: Colorado
- Contact: Contact dunwell
Post by dunwell » 2015/12/13 00:17:18
I have used this mod to get back the details for grub boot, thanks to all for that info.
However when I am watching, it fills the page and then, rather than scrolling up as it did in V5, it blanks and starts again at the top. Of course there is a FAIL message right before it blanks that I want to see, and I can't slam the Scroll Lock fast enough to catch it. Anyone know how to get the details to scroll up rather than blanking and re-writing?
Alan D.
Re: disable startup graphic
- aks
- Posts: 2915
- Joined: 2014/09/20 11:22:14
Post by aks » 2015/12/13 09:15:51
Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
If you do boot, you can always use journalctl to view the logs.
In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.
Re: disable startup graphic
- dunwell
- Posts: 116
- Joined: 2010/12/20 18:49:52
- Location: Colorado
- Contact: Contact dunwell
Post by dunwell » 2015/12/13 14:18:29
aks wrote: Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
Thanks for the followup aks. Actually I have found that the Scroll Lock does pause (Ctrl-S/Q does not), but it all goes by so fast that I'm not quick enough to stop it before the screen blanks and starts writing again. What I am really wondering is how to get the screen to scroll up when it gets to the bottom rather than blanking and starting to write again at the top. That is annoying!
Alan D.
Re: disable startup graphic
Yes it is, and no you can't. Kudos to Lennart for making our lives so much shittier....
- aks
- Posts: 2915
- Joined: 2014/09/20 11:22:14
Jan 01, 2012 | askubuntu.com
Jo-Erlend Schinstad, 2012-01-25 22:06:57
Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using Precise, but I suspect a solution for 11.10 would work just as well.
Did you try: sudo update-initramfs – mgajda Jun 19 '12 at 0:54
Panther:
Easiest quick fix is to edit the grub line as you boot.
Hold down the shift key so you see the menu. Hit the e key to edit.
Edit the 'linux' line, remove the 'quiet' and 'splash'.
To disable it in the long run:
Edit /etc/default/grub
Change the line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
to
GRUB_CMDLINE_LINUX_DEFAULT=""
And then update grub:
sudo update-grub
Panther, 2016-10-27 15:43:04
Removing quiet and splash removes the splash, but I still only have a purple screen with no text. What I want to do is to see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at 22:25
Tuminoid:
How about pressing CTRL+ALT+F2 for a console, allowing you to see what's going on? You can go back to GUI/Plymouth with CTRL+ALT+F7.
Don't have my laptop here right now, but IIRC Plymouth has an upstart job in /etc/init, named plymouth???.conf; renaming that probably achieves what you want in a more permanent manner too.
No, there's nothing on the other consoles. – Jo-Erlend Schinstad Jan 25 '12 at 22:22
Nov 09, 2019 | blogs.oracle.com
Mirroring a running system into a ramdisk Greg Marsden
- October 29, 2019
In this blog post, Oracle Linux kernel developer William Roche presents a method to mirror a running system into a ramdisk.
A RAM-mirrored system?
There are cases where a system can boot correctly but, after some time, lose its system disk access - for example an iSCSI system disk configuration that has network issues, or any other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang situation followed by I/O failures, without the possibility of local investigation on this machine. I/O errors can be reported on the console:
XFS (dm-0): Log I/O Error Detected...

Or losing access to basic commands like:

# ls
-bash: /bin/ls: Input/output error

The approach presented here allows a small system disk space to be mirrored in memory to avoid the above I/O failure situation, which provides the ability to investigate the reasons for the disk loss. The system disk loss will be noticed as an I/O hang, at which point there will be a transition to use only the ram-disk.
To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the following method (more details will follow):
- Create a "small enough" system disk image using LVM (a minimized Oracle Linux installation does that)
- After the system is started, create a ramdisk and use it as a mirror for the system volume
- When/if the (primary) system disk access is lost, the ramdisk continues to provide all necessary system functions.

Disk and memory sizes:
As we are going to mirror the entire system installation to the memory, this system installation image has to fit in a fraction of the memory - giving enough memory room to hold the mirror image and necessary running space.
Of course this is a trade-off between the memory available to the server and the minimal disk size needed to run the system. For example a 12GB disk space can be used for a minimal system installation on a 16GB memory machine.
A standard Oracle Linux installation uses XFS as the root fs, which (currently) can't be shrunk. In order to generate a usable "small enough" system, it is recommended to proceed with the OS installation on a correctly sized disk space. Of course, a correctly sized installation location can be created using partitions of a large physical disk. Then, the needed application filesystems can be mounted from their current installation disk(s). Some system adjustments may also be required (services added, configuration changes, etc...).
This configuration phase should not be underestimated as it can be difficult to separate the system from the needed applications, and keeping both on the same space could be too large for a RAM disk mirroring.
The idea is not to keep an entire system load active when losing disks access, but to be able to have enough system to avoid system commands access failure and analyze the situation.
We also avoid the use of swap. When system disk access is lost, we don't want to depend on it for swap data, and we don't want to spend more memory holding a mirror of the swap space; that memory is better used directly by the system itself.
The system installation can have a swap space (for example a 1.2GB space on our 12GB disk example) but we are neither going to mirror it nor use it.
Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap volume, 9.8 GB root volume).
Ramdisk memory footprint
The ramdisk has to be a little larger (8M) than the root volume we are going to mirror, making room for metadata. We can use two types of ramdisk:
- A classical Block Ram Disk (brd) device
- A memory compressed Ram Block Device (zram)
We can expect roughly 30% to 50% memory savings from zram compared to brd, but zram only accepts 4k I/O blocks. This means the root filesystem must issue only I/Os that are multiples of 4k.
Basic commands
Here is a simple list of commands to manually create and use a ramdisk and mirror the root filesystem onto it. This creates a temporary configuration that must be undone before the next reboot, or the reboot will fail; a way of automating it at startup and shutdown is provided further below.
Note the root volume size (considered to be ol/root in this example):
# lvs --units k -o lv_size ol/root
  LSize
  10268672.00k
Create a ramdisk a little larger than that (at least 8M larger):
# modprobe brd rd_nr=1 rd_size=$((10268672 + 8*1024))
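The sizing above can be sketched as a small calculation (the LV size is taken from the lvs output above; the 8 MiB of headroom for RAID metadata is the figure this article uses):

```shell
# Sketch: compute the rd_size value (in KiB) for "modprobe brd",
# i.e. the root LV size plus 8 MiB of headroom for RAID metadata.
ROOT_KIB=10268672                 # from: lvs --units k -o lv_size ol/root
RD_SIZE=$((ROOT_KIB + 8 * 1024))  # 8 MiB expressed in KiB
echo "$RD_SIZE"                   # prints 10276864
```

This matches the "10276864 K of memory" figure reported by the start script further below.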
Verify the created disk:
# lsblk /dev/ram0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
ram0   1:0   0  9.8G  0 disk
Put the disk under LVM control:
# pvcreate /dev/ram0
  Physical volume "/dev/ram0" successfully created.
# vgextend ol /dev/ram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
We now have ol/root mirrored onto our /dev/ram0 disk:
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor--- 9.79g                                    40.70            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor--- 9.79g                                                     /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor--- 9.79g                                                     /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor--- 4.00m                                                     /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor--- 4.00m                                                     /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
A few minutes (or seconds) later, the synchronization is complete:
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor--- 9.79g                                    100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor--- 9.79g                                                     /dev/sda2(307)
  [root_rimage_1] ol iwi-aor--- 9.79g                                                     /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor--- 4.00m                                                     /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor--- 4.00m                                                     /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
We have our mirrored configuration running!
For safety, we can also disable swap and unmount the /boot and /boot/efi (if it exists) mount points:
# swapoff -a
# umount /boot/efi
# umount /boot
Stopping the system also requires some actions: the configuration must be cleaned up so that the next boot does not look for a ramdisk that is gone.
# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression?
As indicated above, zRAM devices can compress data in memory, but two main problems need to be fixed:
Make LVM work with zram:
- LVM does not take zRAM devices into account by default
- zRAM only works with 4K I/Os
The LVM configuration file has to be changed to accept the "zram" device type, by including the following "types" entry in the "devices" section of /etc/lvm/lvm.conf:
devices {
    types = [ "zram", 16 ]
}
Root file system I/Os
A standard Oracle Linux installation uses XFS, and we can check the sector size in use (which depends on the disk type) with:
# xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Note that the sector size (sectsz) used on this root fs is the standard 512 bytes. Such a filesystem cannot be mirrored onto a zRAM device; it needs to be recreated with a 4k sector size.
Transforming the root file system to 4k sector size
This is simply a backup (to a zram disk) and restore procedure, recreating the root FS in between. To do so, the system has to be booted from another system image; booting from an installation DVD is a good option.
- Boot from an OL installation DVD [choose "Troubleshooting", "Rescue an Oracle Linux system", "3) Skip to shell"]
- Activate and mount the root volume:
sh-4.2# vgchange -a y ol
  2 logical volume(s) in volume group "ol" now active
sh-4.2# mount /dev/mapper/ol-root /mnt
- Create a zram device to store our disk backup:
sh-4.2# modprobe zram
sh-4.2# echo 10G > /sys/block/zram0/disksize
sh-4.2# mkfs.xfs /dev/zram0
meta-data=/dev/zram0             isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mkdir /mnt2
sh-4.2# mount /dev/zram0 /mnt2
sh-4.2# xfsdump -L BckUp -M dump -f /mnt2/ROOT /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of localhost:/mnt
...
xfsdump: dump complete: 130 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /mnt2/ROOT OK (success)
xfsdump: Dump Status: SUCCESS
sh-4.2# umount /mnt
- Recreate the XFS on the disk with a 4k sector size:
sh-4.2# mkfs.xfs -f -s size=4096 /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mount /dev/mapper/ol-root /mnt
- Restore the backup:
sh-4.2# xfsrestore -f /mnt2/ROOT /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
...
xfsrestore: restore complete: 337 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /mnt2/ROOT OK (success)
xfsrestore: Restore Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# umount /mnt2
- Reboot the machine from its disk (the DVD may need to be removed):
sh-4.2# reboot
- Log in and verify the root filesystem:
$ xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
With sectsz=4096, our system is now ready for zRAM mirroring.
Basic commands with a zRAM device
# modprobe zram
# zramctl --find --size 10G
/dev/zram0
# pvcreate /dev/zram0
  Physical volume "/dev/zram0" successfully created.
# vgextend ol /dev/zram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/zram0
  Logical volume ol/root successfully converted.
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor--- 9.79g                                    12.38            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor--- 9.79g                                                     /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor--- 9.79g                                                     /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor--- 4.00m                                                     /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor--- 4.00m                                                     /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor--- 9.79g                                    100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor--- 9.79g                                                     /dev/sda2(307)
  [root_rimage_1] ol iwi-aor--- 9.79g                                                     /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor--- 4.00m                                                     /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor--- 4.00m                                                     /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo            10G  9.8G  5.3G  5.5G       1
The compressed device uses a total of 5.5GB of memory to mirror a 9.8G volume size (in this case using 8.5G).
Removal is performed the same way as with brd, except that the device is /dev/zram0 instead of /dev/ram0.
Automating the process
Fortunately, the procedure can be automated on system boot and shutdown with the following scripts (given as examples).
The start method: /usr/sbin/start-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/start-raid1-ramdisk ]
After a chmod 555 /usr/sbin/start-raid1-ramdisk, running this script on a 4k xfs root file system should show something like:
# /usr/sbin/start-raid1-ramdisk
  Volume group "ol" is already consistent.
RAID1 ramdisk: intending to use 10276864 K of memory for facilitation of [ / ]
  Physical volume "/dev/zram0" successfully created.
  Volume group "ol" successfully extended
  Logical volume ol/root successfully converted.
Waiting for mirror to synchronize...
LVM RAID1 sync of [ / ] took 00:01:53 sec
  Logical volume ol/root changed.
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           9.8G  9.8G  5.5G  5.8G       1
The stop method: /usr/sbin/stop-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/stop-raid1-ramdisk ]
After a chmod 555 /usr/sbin/stop-raid1-ramdisk, running this script should show something like:
# /usr/sbin/stop-raid1-ramdisk
  Volume group "ol" is already consistent.
  Logical volume ol/root changed.
  Logical volume ol/root successfully converted.
  Removed "/dev/zram0" from volume group "ol"
  Labels on physical volume "/dev/zram0" successfully wiped.
A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service [https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]
[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target

Conclusion
When the system disk access problem manifests itself, the ramdisk mirror leg makes it possible to investigate the situation. The goal of this procedure is not to keep the system running on the memory mirror, but to help analyze a bad situation.
When the problem is identified and fixed, I really recommend returning to a standard configuration: enjoying the entire memory of the system, a standard system disk, a possible swap space, etc.
I hope the method described here can help. I also want to thank Philip "Bryce" Copeland, who created the first prototype of the above scripts, and Mark Kanda, who helped test many aspects of this work, for their reviews.
Nov 08, 2019 | opensource.com
In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as an EXT3 or EXT4 filesystem, has been created on each of the two logical volumes.
Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.
Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.
Adding a new logical volume
There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.
The basic steps for adding a new logical volume are as follows.
- If necessary, install a new hard drive.
- Optional: Create a partition on the hard drive.
- Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
- Assign the new physical volume to an existing volume group (VG) or create a new volume group.
- Create a new logical volume (LV) from the space in the volume group.
- Create a filesystem on the new logical volume.
- Add appropriate entries to /etc/fstab for mounting the filesystem.
- Mount the filesystem.
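Using the device and names from the example below (/dev/hdd, volume group MyVG01, logical volume Stuff, which are this article's example values), the steps above can be sketched as a dry run; the commands are echoed rather than executed so the plan can be reviewed before running it as root:

```shell
# Dry-run sketch of the add-a-logical-volume procedure.
# /dev/hdd, MyVG01, Stuff, and 50G are example values; substitute your own.
DEV=/dev/hdd
VG=MyVG01
LV=Stuff
SIZE=50G
PLAN="pvcreate $DEV
vgextend $VG $DEV
lvcreate -L $SIZE --name $LV $VG
mkfs -t ext4 /dev/$VG/$LV
mkdir /$LV
echo /dev/$VG/$LV /$LV ext4 defaults 1 2 >> /etc/fstab
mount /$LV"
echo "$PLAN"   # review, then run each line as root
```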
Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.
Example
This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems cannot be resized this way; check your filesystem's documentation before trying.
Install hard drive
If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space, it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.
Create Physical Volume from hard drive
It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. Creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first offers no particular advantage and consumes disk space for metadata that could otherwise be used as part of the PV.
Extend the existing Volume Group
In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume
Next, create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates an LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume name is Stuff.
lvcreate -L 50G --name Stuff MyVG01
Create the filesystem
Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.
mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label
Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk-related problems.
e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem
At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
You should also check to verify the volume has been created correctly. You can use the df, lvs, and vgs commands to do this.
Resizing a logical volume in an LVM filesystem
The need to resize a filesystem has been around since the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.
Example
- If necessary, install a new hard drive.
- Optional: Create a partition on the hard drive.
- Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
- Assign the new physical volume to an existing volume group (VG) or create a new volume group.
- Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
- If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
- Add appropriate entries to /etc/fstab for mounting the filesystem.
- Mount the filesystem.
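As with the previous procedure, the expand-an-existing-LV path can be sketched as a dry run (the names are this article's example values; the commands are echoed, not executed):

```shell
# Dry-run sketch of extending an existing LV and its ext4 filesystem.
DEV=/dev/hdd
VG=MyVG01
LV=Stuff
PLAN="pvcreate $DEV
vgextend $VG $DEV
lvextend -L +50G /dev/$VG/$LV
resize2fs /dev/$VG/$LV"
echo "$PLAN"   # note: lvextend -r would combine the last two steps
```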
This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 kernel (and higher) and the EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times, even on the root (/) filesystem. Use your judgment.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems cannot be resized this way; check your filesystem's documentation before trying.
Install the hard drive
If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space, it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.
Create a Physical Volume from the hard drive
It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. Creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first offers no particular advantage and consumes disk space for metadata that could otherwise be used as part of the PV.
Add PV to existing Volume Group
For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume
Extend the Logical Volume (LV) into the existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume name is Stuff.
lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem
Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.
resize2fs /dev/MyVG01/Stuff
You should check to verify the resizing has been performed correctly. You can use the df, lvs, and vgs commands to do this.
Tips
Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.
- Use the Extended filesystems unless you have a clear reason to use another filesystem. Not all filesystems support resizing, but EXT2, 3, and 4 do. The EXT filesystems are also very fast and efficient. In any event, a knowledgeable sysadmin can tune them to meet the needs of most environments if the default tuning parameters do not.
- Use meaningful volume and volume group names.
- Use EXT filesystem labels.
I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.
About the author: David Both is an Open Source Software and GNU/Linux advocate, trainer, writer, and speaker who lives in Raleigh, North Carolina. He has been in the IT industry for nearly 50 years, has taught RHCE classes for Red Hat, has worked at MCI Worldcom, Cisco, and the State of North Carolina, and has been working with Linux and Open Source Software for over 20 years.
Mar 16, 2015 | serverfault.com
LVM spanning over multiple disks: What disk is a file on? Can I lose a drive without total loss?
I have three 990GB partitions over three drives in my server. Using LVM, I can create one ~3TB partition for file storage.
1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just the data physically on that disk? (asked Dec 2 '10 by Luke has no name)
Answer by Peter Grace:
- The system fills from the first disk in the volume group to the last, unless you configure striping with extents.
- I don't think this is possible, but where I'd start looking is the lvs/vgs man pages.
- If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LVs that were contained on the dead PV, whether in whole or in part.
- So, if you had for instance 10 LVs, with 1-3 entirely on the first drive, #4 partially on the first and second drives, 5-7 wholly on drive #2, and 8-10 on drive #3, then losing drive #2 would potentially let you force the VG online and recover LVs 1, 2, 3, 8, 9, and 10; LVs 4, 5, 6, and 7 would be completely lost.
Comment: To #3, that's what I was afraid of. Thank you. – Luke has no name, Dec 2 '10
1) How does the system determine what partition to use first?
LVM doesn't really have the concept of a partition; it uses PVs (Physical Volumes), which can be partitions. These PVs are broken up into extents, which are then mapped to the LVs (Logical Volumes). When you create the LVs you can specify whether the data is striped or mirrored, but the default is linear allocation. So it would use the extents in the first PV, then the second, then the third.
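To illustrate the difference (the VG name vg0 is an assumption here, and the commands are echoed rather than run):

```shell
# Linear (default): extents fill the first PV, then the second, and so on.
LINEAR="lvcreate -L 100G -n linear_lv vg0"
# Striped: -i 3 spreads extents across 3 PVs, in 64 KiB stripes (-I 64).
STRIPED="lvcreate -L 100G -i 3 -I 64 -n striped_lv vg0"
echo "$LINEAR"
echo "$STRIPED"
```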
2) Can I find what disk a file or folder is physically on?
You can determine which PVs an LV has allocated extents on, but I don't know of a way to get that information for an individual file.
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?
As Peter has said, the blocks appear as 0s if a PV goes missing, so you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAID for this reason. – 3dinfluence
I don't know the answer to #2, so I'll leave that to someone else. I suspect "no", but I'm willing to be happily surprised.
- So here's a derivative of my question: I have 3x1 TB drives and I want to use 3TB of that space. What's the best way to configure the drives so I am not splitting my data over folders/mountpoints? or is there a way at all, other than what I've implied above? – Luke has no name Dec 2 '10 at 5:12
- If you want to use 3TB and aren't willing to split data over folders/mount points I don't see any other way. There may be some virtual filesystem solution to this problem like unionfs although I'm not sure if it would solve your particular problem. But LVM is certainly the most straight forward and simple solution as such it's the one I'd go with. – 3dinfluence Dec 2 '10 at 14:51
1 is: you tell it, when you combine the physical volumes into a volume group.
3 is: it's effectively as if a huge chunk of your disk suddenly turned to bad blocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run fsck on any filesystems on logical volumes that spanned the bad drive, in the hope of salvaging something.
And to the overall, unasked question: yeah, you probably don't really want to do that.
Oct 03, 2019 | hardforum.com
RAID5 can survive a single drive failure. However, once you replace that drive, it has to be initialized. Depending on the controller and other things, this can take anywhere from 5-18 hours. During this time, all drives will be in constant use to re-create the failed drive. It is during this time that people worry that the rebuild would cause another drive near death to die, causing a complete array failure.
This isn't the only danger. The problem with 2TB disks, especially if they are not 4K-sector disks, is that they have a relatively high BER for their capacity, so the likelihood of a BER event actually occurring and translating into an unreadable sector is something to worry about.
If during a rebuild one of the remaining disks experiences BER, your rebuild stops and you may have headaches recovering from such a situation, depending on controller design and user interaction.
So i would say with modern high-BER drives you should say:
- RAID5: 0 complete disk failures, BER covered
- RAID6: 1 complete disk failure, BER covered
So essentially you'll lose one parity disk alone to the BER issue. Not everyone will agree with my analysis, but whether RAID5 with today's high-capacity drives is 'safe' is open for debate.
RAID5 + a GOOD backup is something to consider, though.
So you're saying BER is the error count that 'escapes' the ECC correction? I do not believe that is correct, but I'm open to good arguments or links. As I understand it, the BER is what prompts bad sectors: where the number of errors exceeds the ECC's correcting ability, you get an unrecoverable sector (Current Pending Sector in SMART output).
Also these links are interesting in this context:
http://blog.econtech.selfip.org/200...s-not-fully-readable-a-lawsuit-in-the-making/
Quote: "The short story first: Your consumer level 1TB SATA drive has a 44% chance that it can be completely read without any error. If you run a RAID setup, this is really bad news because it may prevent rebuilding an array in the case of disk failure, making your RAID not so Redundant." Not sure about the numbers the article comes up with, though. Also this one is interesting:
http://lefthandnetworks.typepad.com/virtual_view/2008/02/what-does-data.html
Quote: "BER simply means that while reading your data from the disk drive you will get an average of one non-recoverable error in so many bits read, as specified by the manufacturer." And: "Rebuilding the data on a replacement drive with most RAID algorithms requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed, and therefore the RAID rebuild fails and data is lost. The frequency of this disastrous occurrence is derived from the BER. Simple calculations will show that the chance of data loss due to BER is much greater than all other reasons combined." These links suggest that BER works to produce unrecoverable sectors, and does not let them 'escape' as undetected bad sectors, if I understood you correctly.
parityOCP said: "That guy's a bit of a scaremonger, to be honest. He may have a point with consumer drives, but the article is sensationalised to a certain degree. However, there are still a few outfits that won't go past 500GB/drive in an array (even with enterprise drives), simply to reduce the failure window during a rebuild." Why is he a scaremonger? He is correct. Have you read his article? In fact, he has copied his argument from Adam Leventhal(?), who was one of the ZFS developers, I believe. Adam's argument goes like this:
Disks are getting larger all the time; in fact, storage capacity increases exponentially. At the same time, bandwidth is increasing much more slowly - we are still at roughly 100MB/s even after decades. Bandwidth has increased maybe 20x over that period, while capacity has increased from 10MB to 3TB, i.e. 300,000 times. The trend is clear. In the future when we have 10TB drives, they will not be much faster than today's. This means that repairing a raid with 3TB disks today can take several days, maybe even a week. With 10TB drives, it could take several weeks, maybe a month.
Repairing a raid stresses the other disks heavily, which means they can break too. Experienced sysadmins report that this happens quite often during a repair, maybe because those disks come from the same batch and share the same weakness. Some sysadmins therefore mix disks from different vendors and batches.
Hence, I would not want to run a raid with 3TB disks using only raid-5. If just one more disk crashes during those days, you have lost all your data.
Hence, that article is correct, and he is not a scaremonger. Raid-5 is obsolete if you use large drives, such as 2TB or 3TB disks. You should instead use raid-6 (two disks can fail). That is the conclusion of the article: use raid-6 with large disks, forget raid-5. This is true, and not scaremongery.
In fact, ZFS therefore has something called raidz3, which means that three disks can fail without problems. To the OP: no, raid-5 is not safe. Neither is raid-6, because neither of them can always repair, or even detect, corrupted data. There are cases where they don't even notice that you got corrupted bits. See my other thread for more information about this. That is the reason people are switching to ZFS, which always CAN detect and repair those corrupted bits. I suggest you sell your hardware raid card and use ZFS, which requires no special hardware; ZFS just uses JBOD.
Here are research papers on raid-5, raid-6 and ZFS and corruption:
http://hardforum.com/showpost.php?p=1036404173&postcount=73 - Is RAID5 safe with Five 2TB Hard Drives? | [H]ard|Forum
brutalizer said: The trend is clear. In the future when we have 10TB drives, they will not be much faster than today. This means that repairing a RAID with 3TB disks today can take several days, maybe even a week. With 10TB drives, it could take several weeks, maybe a month.

While I agree with the general claim that the larger HDDs (1.5, 2, 3TB) are best used in RAID 6, your claim about rebuild times is way off. I think it is not unreasonable to assume that 10TB drives will be able to read and write at 200 MB/s or more. We already have 2TB drives with 150MB/s sequential speeds, so 200 MB/s is actually a conservative estimate.
10e12/200e6 = 50,000 secs = 13.9 hours. Even if there is 100% overhead (half the throughput), that is less than 28 hours to do the rebuild. It is a long time, but it is nowhere near a month! Try to back your claims with reality.
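The two rebuild-time estimates above are one line of arithmetic to reproduce; a quick sketch (the 200 MB/s figure is the poster's assumption, not a measurement):

```python
def rebuild_hours(capacity_bytes, throughput_bytes_per_s, overhead=1.0):
    """Best-case sequential rebuild time in hours, scaled by an overhead factor."""
    return capacity_bytes / throughput_bytes_per_s * overhead / 3600.0

best = rebuild_hours(10e12, 200e6)        # 10 TB drive at 200 MB/s
worst = rebuild_hours(10e12, 200e6, 2.0)  # same drive, 100% overhead
print(f"{best:.1f}h best case, {worst:.1f}h with 100% overhead")
```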
And you have again made the false claim that "ZFS - which always CAN detect and repair those corrupted bits". ZFS can usually detect corrupted bits, and can usually correct them if you have duplication or parity, but nothing can always detect and repair. ZFS is safer than many alternatives, but nothing is perfectly safe. Corruption can and has happened with ZFS, and it will happen again.
https://hardforum.com/threads/is-raid5-safe-with-five-2tb-hard-drives.1560198/
RAID 5 Data Recovery How to Rebuild a Failed RAID 5 - YouTube
RAID 5 vs RAID 10: Recommended RAID For Safety and Performance - https://www.cyberciti.biz/tips/raid5-vs-raid-10-safety-performance.html
RAID 6 offers more redundancy than RAID 5 (which is absolutely essential; RAID 5 is a walking disaster) at the cost of multiple parity writes per data write. This means performance will typically be worse (although not theoretically much worse, since the parity operations are done in parallel).
Oct 02, 2019 | serverfault.com
Can I recover a RAID 5 array if two drives have failed?

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and as far as I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact that I had six drives in the array might save my skin.
I bought 2 new drives and plugged them in, but no rebuild happened as I expected. Can anyone shed some light?
Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.
What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.
For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.
For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array. Once you get above a certain number of disks in an array, you reach the point where the chance that more than one of them fails before a replacement is installed and rebuilt becomes unacceptably high. – Rob Moir, Sep 21 '10
You can try to force one or both of the failed disks online from the BIOS interface of the controller. Then check that the data and the file system are consistent. – Mircea Vutcovici, Sep 21 '10
- 1 Thanks Robert, I will take this advice into consideration when I rebuild the server. Lucky for me, I have full backups that are less than 6 hours old. Regards – bonga86 Sep 21 '10 at 15:00
- If this is (somehow) likely to occur again in the future, you may consider RAID6. Same idea as RAID5 but with two Parity disks, so the array can survive any two disks failing. – gWaldo Sep 21 '10 at 15:04
- g man(mmm...), I have recreated the entire system from scratch with a RAID 10. So hopefully if 2 drives go out at the same time again, the system will still function? Otherwise, everything has been restored and is working. Thanks for the ideas and input – bonga86 Sep 23 '10 at 11:34
- Depends which two drives go... RAID 10 means, for example, 4 drives in two mirrored pairs (2 RAID 1 mirrors) striped together (RAID 0) yes? If you lose both disks in 1 of the mirrors then you've still got an outage. – Rob Moir Sep 23 '10 at 11:43
- 1 Remember, that RAID is not a backup. No more robust forms of RAID will save you from data corruption. – Kazimieras Aliulis May 31 '13 at 10:57
Direct answer: "No". Indirect: "It depends". Mainly it depends on whether the disks are partially broken or completely dead. If they're partially broken, you can give it a try - I would copy both failed disks (using a tool like ddrescue). Then I'd try to run the bunch of disks using Linux software RAID, retrying with the proper order of disks and stripe size in read-only mode and counting CRC mismatches. It's quite doable, I should say - this text in Russian mentions a 12-disk RAID50 recovery using Linux software RAID, for example. – poige, Jun 8 '12

It is possible if the raid had one spare drive and one of your failed disks died before the second one. In that case you just need to try to reconstruct the array virtually with 3rd-party software. I found a small article about this process on this page: http://www.angeldatarecovery.com/raid5-data-recovery/
- 2 Dell systems, especially, in my experience, those built on PERC3 or PERC4 cards, had a nasty tendency to have a hiccup on the SCSI bus which would knock two or more drives off-line. A drive being offline does NOT mean it failed. I've never had two drives fail at the same time, but probably a half dozen times I've had two or more drives go off-line. I suggest you try Mircea's suggestion first... it could save you a LOT of time. – Multiverse IT Sep 21 '10 at 16:32
- Hey guys, I tried the force option many times. Both "failed" drives would then come back online, but when I do a restart it says logical drive: degraded, and obviously because of that the system still could not boot. – bonga86 Sep 23 '10 at 11:27
And if you really need data from one of the dead drives, you can send it to a recovery shop. With those disk images you can reconstruct the raid properly, with good chances.
Sep 23, 2019 | linuxconfig.org
Contents
- Details
- Egidio Docile
- System Administration
- 15 September 2019
In this article we will talk about foremost, a very useful open-source forensic utility which is able to recover deleted files using a technique called data carving. The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user via the configuration file). The program can also work on partition images produced by dd or similar tools. Foremost recovers files using their headers, footers, and data structures, through a process known as file carving.

In this tutorial you will learn:
- How to install foremost
- How to use foremost to recover deleted files
- How to add support for a specific file type
Installation
Software Requirements and Linux Command Line Conventions:
- System: Distribution-independent
- Software: The foremost program
- Other: Familiarity with the command line interface
- Conventions: # - requires given Linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command; $ - requires given Linux commands to be executed as a regular non-privileged user

Since foremost is already present in all the major Linux distributions' repositories, installing it is a very easy task. All we have to do is use our favorite distribution's package manager. On Debian and Ubuntu, we can use apt:

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager to install packages (dnf is the successor of yum). The name of the package is the same:

$ sudo dnf install foremost

If we are using Arch Linux, we can use pacman to install foremost. The program can be found in the distribution's "community" repository:

$ sudo pacman -S foremost
SUBSCRIBE TO NEWSLETTER
Subscribe to Linux Career NEWSLETTER and receive latest Linux news, jobs, career advice and tutorials.
Basic usage

WARNING: No matter which file recovery tool or process you are going to use, before you begin it is recommended to perform a low-level hard drive or partition backup, to avoid accidentally overwriting data. That way you can retry recovering your files even after an unsuccessful recovery attempt. Check the dd command guide on how to perform a low-level hard drive or partition backup.
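The low-level backup the warning calls for boils down to a block-for-block dd copy plus a verification pass. A safe-to-run sketch, in which an ordinary scratch file stands in for the real partition; substitute your actual device (e.g. /dev/sdb1) and a destination with enough free space:

```shell
# Stand-in "partition" so this sketch never touches a real device.
src=./fake_sdb1.img
dst=./sdb1_backup.img

# Create 1 MiB of random data as the fake partition contents.
dd if=/dev/urandom of="$src" bs=1024 count=1024 2>/dev/null

# The backup itself: block-for-block image copy, continuing past read errors.
dd if="$src" of="$dst" bs=64K conv=noerror,sync 2>/dev/null

# Verify the image matches the source before any recovery attempt.
cmp -s "$src" "$dst" && echo "backup verified"
```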
The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata. This forensic technique is known as file carving. The program supports various types of files, for example:
- jpg
- gif
- png
- bmp
- avi
- exe
- mpg
- wav
- riff
- wmv
- mov
- ole
- doc
- zip
- rar
- htm
- cpp
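The file-carving technique itself - scanning raw bytes for known header/footer signatures while ignoring filesystem metadata - can be sketched in a few lines. The signature table below is a toy two-entry subset for illustration, not foremost's actual database:

```python
# Minimal file-carving sketch: locate files in a raw byte stream by signature.
SIGNATURES = {
    "jpg": (b"\xff\xd8\xff", b"\xff\xd9"),             # header, footer
    "png": (b"\x89PNG\r\n\x1a\n", b"IEND\xaeB`\x82"),
}

def carve(raw: bytes):
    """Yield (type, payload) for every header..footer match found in raw."""
    for ftype, (header, footer) in SIGNATURES.items():
        start = raw.find(header)
        while start != -1:
            end = raw.find(footer, start + len(header))
            if end == -1:
                break  # header without footer: truncated file, skip
            yield ftype, raw[start:end + len(footer)]
            start = raw.find(header, end + len(footer))

# A "deleted" JPEG is still carvable out of unallocated space:
blob = b"\x00" * 32 + b"\xff\xd8\xff" + b"imagedata" + b"\xff\xd9" + b"\x00" * 8
found = list(carve(blob))
```

This is also why carved files come back without their original names: the name lives in filesystem metadata, which carving deliberately ignores.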
The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, such as those generated with dd). Let's see an example. Imagine we want to scan the /dev/sdb1 partition. Before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting deleted files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as the destination. Inside this directory, a subdirectory is created for each supported file type we are attempting to retrieve. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this lets us immediately know what types of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, however, we can use the -t option and provide a comma-separated list of the file types we want to retrieve. In the example below, we restrict the search to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

(In the related video, the Foremost data recovery program is used to recover a single png file from a /dev/sdb1 partition formatted with the EXT4 filesystem: https://www.youtube.com/embed/58S2wlsJNvo)
Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd. What if we want to specify an alternative path? All we have to do is use the -o option and provide said path as an argument. If the specified directory doesn't exist, it is created; if it exists but is not empty, the program throws a complaint:

ERROR: /home/egdoc/data is not empty
Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019

The configuration file
The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG (used in web pages)
# (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
# png y 200000 \x50\x4e\x47? \xff\xfc\xfd\xfe

The information to provide in order to add support for a file type is, from left to right, separated by a tab character: the file extension (png in this case), whether the header and footer are case sensitive (y), the maximum file size in bytes (200000), the header (\x50\x4e\x47?) and the footer (\xff\xfc\xfd\xfe). Only the footer is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf is searched for and used, if present, in the current working directory. If it is not found, the default configuration file, /etc/foremost.conf, is used instead.

Adding support for a file type
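The tab-separated entry format described above can be illustrated with a tiny parser (a sketch for clarity only, not foremost's own parsing code):

```python
def parse_conf_line(line):
    """Parse one foremost.conf entry: ext, case flag, max size, header[, footer]."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # blank lines and comments carry no entry
    fields = line.split()
    ext, case_flag, max_size, header = fields[:4]
    footer = fields[4] if len(fields) > 4 else None  # the footer is optional
    return {
        "extension": ext,
        "case_sensitive": case_flag == "y",
        "max_size": int(max_size),
        "header": header,
        "footer": footer,
    }

entry = parse_conf_line("flac\ty\t30000000\t\\x66\\x4c\\x61\\x43\\x00\\x00\\x00\\x22")
```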
By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. FLAC (Free Lossless Audio Codec) is a non-proprietary lossless audio format which provides compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 (fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see, the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB, or 30000000 bytes. Let's add the entry to the file:
flac	y	30000000	\x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional, so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected, I previously placed, and then removed, a flac file from the /dev/sdb1 partition, then proceeded to run the command:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was the only file on the device, on purpose), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is stored in the filesystem, not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac
The audit.txt file contains information about the actions performed by the program, in this case:
Foremost version 1.5.7 by Jesse Kornblum, Kris Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num  Name (bs=512)  Size   File Offset  Comment
0:   00020482.flac  28 MB  10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED
flac:= 1
------------------------------------------------------------------
Foremost finished at Thu Sep 12 23:47:04 2019

Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called data carving and relies on file signatures to achieve its goal. We saw an example of the program's usage, and we also learned how to add support for a specific file type using the syntax illustrated in the configuration file. For more information about the program usage, please consult its manual page.
Aug 31, 2019 | www.zdnet.com
Before EFI, the standard boot process for virtually all PC systems was called "MBR", for Master Boot Record; today you are likely to hear it referred to as "Legacy Boot". This process depended on using the first physical block on a disk to hold some information needed to boot the computer (thus the name Master Boot Record); specifically, it held the disk address at which the actual bootloader could be found, and the partition table that defined the layout of the disk. Using this information, the PC firmware could find and execute the bootloader, which would then bring up the computer and run the operating system.This system had a number of rather obvious weaknesses and shortcomings. One of the biggest was that you could only have one bootable object on each physical disk drive (at least as far as the firmware boot was concerned). Another was that if that first sector on the disk became corrupted somehow, you were in deep trouble.
Over time, as part of the Extensible Firmware Interface, a new approach to boot configuration was developed. Rather than storing critical boot configuration information in a single "magic" location, EFI uses a dedicated "EFI boot partition" on the disk. This is a completely normal, standard disk partition, the same kind as may be used to hold the operating system or system recovery data.
The only requirement is that it be FAT formatted, and it should have the boot and esp partition flags set (esp stands for EFI System Partition). The specific data and programs necessary for booting is then kept in directories on this partition, typically in directories named to indicate what they are for. So if you have a Windows system, you would typically find directories called 'Boot' and 'Microsoft' , and perhaps one named for the manufacturer of the hardware, such as HP. If you have a Linux system, you would find directories called opensuse, debian, ubuntu, or any number of others depending on what particular Linux distribution you are using.
It should be obvious from the description so far that it is perfectly possible with the EFI boot configuration to have multiple boot objects on a single disk drive.
Before going any further, I should make it clear that if you install Linux as the only operating system on a PC, it is not necessary to know all of this configuration information in detail. The installer should take care of setting all of this up, including creating the EFI boot partition (or using an existing EFI boot partition), and further configuring the system boot list so that whatever system you install becomes the default boot target.
If you were to take a brand new computer with UEFI firmware, and load it from scratch with any of the current major Linux distributions, it would all be set up, configured, and working just as it is when you purchase a new computer preloaded with Windows (or when you load a computer from scratch with Windows). It is only when you want to have more than one bootable operating system – especially when you want to have both Linux and Windows on the same computer – that things may become more complicated.
The problems that arise with such "multiboot" systems are generally related to getting the boot priority list defined correctly.
When you buy a new computer with Windows, this list typically includes the Windows bootloader on the primary disk, and then perhaps some other peripheral devices such as USB, network interfaces and such. When you install Linux alongside Windows on such a computer, the installer will add the necessary information to the EFI boot partition, but if the boot priority list is not changed, then when the system is rebooted after installation it will simply boot Windows again, and you are likely to think that the installation didn't work.
There are several ways to modify this boot priority list, but exactly which ones are available and whether or how they work depends on the firmware of the system you are using, and this is where things can get really messy. There are just about as many different UEFI firmware implementations as there are PC manufacturers, and the manufacturers have shown a great deal of creativity in the details of this firmware.
First, in the simplest case, there is a software utility included with Linux called efibootmgr that can be used to modify, add or delete the boot priority list. If this utility works properly, and the changes it makes are permanent on the system, then you would have no other problems to deal with, and after installing it would boot Linux and you would be happy. Unfortunately, while this is sometimes the case it is frequently not. The most common reason for this is that changes made by software utilities are not actually permanently stored by the system BIOS, so when the computer is rebooted the boot priority list is restored to whatever it was before, which generally means that Windows gets booted again.
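In the simple case, the whole exchange with efibootmgr looks something like the following (the entry numbers and labels here are illustrative, not from a real machine; the commands must be run as root on a UEFI system):

```shell
# List the current entries and priority order.
efibootmgr
#   BootOrder: 0000,0001
#   Boot0000* Windows Boot Manager
#   Boot0001* ubuntu

# Put the Linux entry first in the boot order.
efibootmgr --bootorder 0001,0000

# Or boot a specific entry just once, on the next reboot only.
efibootmgr --bootnext 0001
```

Whether changes like these survive a reboot is exactly the firmware-dependent problem described above.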
The other common way of modifying the boot priority list is via the computer BIOS configuration program. The details of how to do this are different for every manufacturer, but the general procedure is approximately the same. First you have to press the BIOS configuration key (usually F2, but not always, unfortunately) during system power-on (POST). Then choose the Boot item from the BIOS configuration menu, which should get you to a list of boot targets presented in priority order. Then you need to modify that list; sometimes this can be done directly in that screen, via the usual F5/F6 up/down key process, and sometimes you need to proceed one level deeper to be able to do that. I wish I could give more specific and detailed information about this, but it really is different on every system (sometimes even on different systems produced by the same manufacturer), so you just need to proceed carefully and figure out the steps as you go.
I have seen a few rare cases of systems where neither of these methods works, or at least they don't seem to be permanent, and the system keeps reverting to booting Windows. Again, there are two ways to proceed in this case. The first is by simply pressing the "boot selection" key during POST (power-on). Exactly which key this is varies, I have seen it be F12, F9, Esc, and probably one or two others. Whichever key it turns out to be, when you hit it during POST you should get a list of bootable objects defined in the EFI boot priority list, so assuming your Linux installation worked you should see it listed there. I have known of people who were satisfied with this solution, and would just use the computer this way and have to press boot select each time they wanted to boot Linux.
The alternative is to actually modify the files in the EFI boot partition, so that the (unchangeable) Windows boot procedure would actually boot Linux. This involves overwriting the Windows file bootmgfw.efi with the Linux file grubx64.efi. I have done this, especially in the early days of EFI boot, and it works, but I strongly advise you to be extremely careful if you try it, and make sure that you keep a copy of the original bootmgfw.efi file. Finally, just as a final (depressing) warning, I have also seen systems where this seemed to work, at least for a while, but then at some unpredictable point the boot process seemed to notice that something had changed and it restored bootmgfw.efi to its original state – thus losing the Linux boot configuration again. Sigh.
So, that's the basics of EFI boot, and how it can be configured. But there are some important variations possible, and some caveats to be aware of.
Aug 03, 2019 | linuxize.com
There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.
Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.
Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:
- Connect the USB flash drive to your system and launch Etcher.
- Click on the Select image button and locate the distribution .iso file.
- If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct USB drive before flashing the image.
Mar 25, 2019 | linuxhint.com
Monitoring Specific Storage Devices or Partitions with iostat:
By default, iostat monitors all the storage devices of your computer. But, you can monitor specific storage devices (such as sda, sdb etc) or specific partitions (such as sda1, sda2, sdb4 etc) with iostat as well.
For example, to monitor the storage device sda only, run iostat as follows:
$ sudo iostat sda

Or:

$ sudo iostat -d 2 sda

As you can see, only the storage device sda is monitored.
You can also monitor multiple storage devices with iostat.
For example, to monitor the storage devices sda and sdb , run iostat as follows:
$ sudo iostat sda sdb

Or:

$ sudo iostat -d 2 sda sdb

If you want to monitor specific partitions, you can do that as well.
For example, let's say, you want to monitor the partitions sda1 and sda2 , then run iostat as follows:
$ sudo iostat sda1 sda2

Or:

$ sudo iostat -d 2 sda1 sda2

As you can see, only the partitions sda1 and sda2 are monitored.

Monitoring LVM Devices with iostat:

You can monitor the LVM devices of your computer with the -N option of iostat.
To monitor the LVM devices of your Linux machine as well, run iostat as follows:
$ sudo iostat -N -d 2

You can also monitor a specific LVM logical volume.
For example, to monitor the LVM logical volume centos-root (let's say), run iostat as follows:
$ sudo iostat -N -d 2 centos-root

Changing the Units of iostat:

By default, iostat generates reports in kilobytes (kB). But there are options you can use to change the unit.
For example, to change the unit to megabytes (MB), use the -m option of iostat.
You can also change the unit to human readable with the -h option of iostat. Human readable format will automatically pick the right unit depending on the available data.
To change the unit to megabytes, run iostat as follows:
$ sudo iostat -m -d 2 sda

To change the unit to human readable format, run iostat as follows:

$ sudo iostat -h -d 2 sda

I copied a file while monitoring, and as you can see, the unit is now in megabytes (MB). It changed back to kilobytes (kB) as soon as the file copy was over.
Extended Display of iostat:

If you want, you can display a lot more information about disk I/O with iostat. To do that, use the -x option of iostat.
For example, to display extended information about disk i/o, run iostat as follows:
$ sudo iostat -x -d 2 sda

You can find what each of these fields (rrqm/s, %wrqm etc.) means in the man page of iostat.
Getting Help:

If you need more information on each of the supported options of iostat and what each of the fields means, I recommend you take a look at the man page of iostat.
You can access the man page of iostat with the following command:
$ man iostat

So, that's how you use iostat in Linux. Thanks for reading this article.
Oct 14, 2018 | linux.slashdot.org
Reverend Green (4973045), Monday December 11, 2017 @04:48AM (#55714431)

Re: Does systemd make ... (Score: 5, Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!
Anonymous Coward, Monday December 11, 2017 @01:38AM (#55714015)

Re: It violates fundamental Unix principles (Score: 4, Funny)

The Emacs of the 2010s.
DontBeAMoran (4843879), Monday December 11, 2017 @01:57AM (#55714059)

Re: It violates fundamental Unix principles (Score: 5, Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor (664417), Monday December 11, 2017 @04:47AM (#55714427) Journal

Re: It violates fundamental Unix principles (Score: 4, Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs' functions its own.
Anonymous Coward, Monday December 11, 2017 @01:47AM (#55714035)

Don't go hating on systemd (Score: 5, Funny)

RHEL7 is a fine OS; the only thing it's missing is a really good init system.
Dec 23, 2018 | hexmode.com
A while back I mentioned Atul Gawande 's book The Checklist Manifesto . Today, I got another example of how to improve my checklists.
The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg .
So, the takeaway for me is this: any checklist should start off verifying that what you "know" to be true is true. (Thankfully, my errors could be backed out with very few long-term consequences, but I shouldn't use that as an excuse to forego checklists.)
Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped.
Today I was testing the thumbnailing of some MediaWiki code and trying to understand the $wgLocalFileRepo variable. I copied part of an /images/ directory over from another wiki to my test wiki. I verified that it thumbnailed correctly.

So far so good.
Then I changed the directory parameter and tested. No thumbnail. Later, I realized this is to be expected because I didn't copy over the original images. So that is one issue.
I erased (what I thought was) the thumbnail image and tried again on the main repo. It worked again–I got a thumbnail.
I tried copying the images directory over to the new directory, but the new thumbnailing directory structure didn't produce a thumbnail.
I tried over and over with the same thumbnail and was confused because it kept telling me the same thing.
I added debugging statements and still got nowhere.
Finally, I just did an ls on the directory to verify it was there. It was. And it had files in it. But not the file I was trying to produce a thumbnail of.
The system that "worked" had the thumbnail, but not the original file.
So, moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first.
Maybe your perception of reality is wrong. Mine was. I was sure that the thumbnails were being generated each time until I discovered that I hadn't deleted the thumbnails, I had deleted the original.
Dec 13, 2018 | www.linkedin.com
Oracle recommendations:

- ip_local_port_range: minimum 9000, maximum 65000 (/proc/sys/net/ipv4/ip_local_port_range)
- rmem_default: 262144 (/proc/sys/net/core/rmem_default)
- rmem_max: 4194304 (/proc/sys/net/core/rmem_max)
- wmem_default: 262144 (/proc/sys/net/core/wmem_default)
- wmem_max: 1048576 (/proc/sys/net/core/wmem_max)
- tcp_wmem: 262144 (/proc/sys/net/ipv4/tcp_wmem)
- tcp_rmem: 4194304 (/proc/sys/net/ipv4/tcp_rmem)

Minesh Patel, Site Reliability Engineer, Austin, Texas Area
Tuning the TCP I/O settings on Red Hat can reduce intermittent or random slowness and similar issues if you are running with the default settings.

For Red Hat Linux, 131071 is the default value.
Double the value from 131071 to 262144:

cat /proc/sys/net/core/rmem_max      131071 → 262144
cat /proc/sys/net/core/rmem_default  129024 → 262144
cat /proc/sys/net/core/wmem_default  129024 → 262144
cat /proc/sys/net/core/wmem_max      131071 → 262144

To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:

net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is read during the boot process:
net.ipv4.ip_local_port_range = 1024 65000

The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port.
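Collected into a single /etc/sysctl.conf fragment, the recommendations above would look like the sketch below. The values are the ones quoted in this article; tcp_rmem and tcp_wmem are left out because those parameters actually take three values (min, default, max), so verify the full triplets against your Oracle documentation before applying:

```
# /etc/sysctl.conf fragment -- network settings quoted in this article
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

Apply the fragment without a reboot with `sysctl -p`.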
Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins, so everyone should be aware of it and practice using it efficiently. In Linux, whenever we install a package that contains services or daemons, the corresponding "init & systemd" scripts are added by default, but the services are not enabled.

Hence, we need to enable or disable the services manually when required. There are three major init systems in Linux which are well known and still in use.
What is an init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.

It holds process ID (PID) 1 and runs continuously in the background until the system is shut down.

Init reads the /etc/inittab file to decide the Linux run level, then starts all other processes and applications in the background according to that run level. The BIOS, MBR, GRUB and kernel stages run before the init process as part of the Linux boot process.
Below are the available run levels for Linux (there are seven runlevels, from zero to six):

- 0: halt
- 1: single user mode
- 2: multiuser, without NFS
- 3: full multiuser mode
- 4: unused
- 5: X11 (GUI, Graphical User Interface)
- 6: reboot

The three init systems below are widely used in Linux:
- System V (Sys V)
- Upstart
- systemd
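The runlevel table above can be expressed as a small shell helper (an illustrative sketch only, not part of any standard tool; on a real SysV system the current runlevel comes from the `runlevel` or `who -r` commands):

```shell
#!/bin/sh
# Map a SysV runlevel number to its conventional meaning.
describe_runlevel() {
    case "$1" in
        0) echo "halt" ;;
        1) echo "single user mode" ;;
        2) echo "multiuser, without NFS" ;;
        3) echo "full multiuser mode" ;;
        4) echo "unused" ;;
        5) echo "X11 (graphical user interface)" ;;
        6) echo "reboot" ;;
        *) echo "unknown runlevel: $1" >&2; return 1 ;;
    esac
}

# Demonstrate the mapping for the default server runlevel:
describe_runlevel 3
```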
What is System V (Sys V)?

System V (Sys V) is the first and traditional init system for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process of everything.

Most Linux distributions initially used the traditional init system, System V (Sys V). Over the years, several replacement init systems were released to address its design limitations, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions in place of the traditional SysV init system.
What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.
It was used in Ubuntu from 9.10 to 14.10 and in RHEL 6 based systems, after which it was replaced with systemd.
What is systemd?

systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init system.

systemd is compatible with SysV and LSB init scripts and can work as a drop-in replacement for sysvinit. systemd is the first process started by the kernel and holds PID 1.

It is the parent process of everything; Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command-line utility and primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload & status).
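To make the contrast with SysV init scripts concrete, here is a minimal, hypothetical unit file; the service name myservice and the script path are invented for illustration:

```
# /etc/systemd/system/myservice.service -- hypothetical example unit
[Unit]
Description=Example custom service
After=network.target

[Service]
ExecStart=/usr/local/sbin/myservice.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Such a unit would then be managed entirely through systemctl, e.g. `systemctl enable myservice` and `systemctl start myservice`.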
systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the system hierarchy by exploring /sys/fs/cgroup/systemd.

How to Enable or Disable Services on Boot Using the chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current settings. It also allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.
All the service scripts are located in /etc/rc.d/init.d.

How to List All Services by Run Level

The --list parameter displays all the services along with their current status (in which run levels each service is enabled or disabled):

# chkconfig --list
NetworkManager  0:off 1:off 2:on 3:on 4:on 5:on 6:off
abrt-ccpp       0:off 1:off 2:off 3:on 4:off 5:on 6:off
abrtd           0:off 1:off 2:off 3:on 4:off 5:on 6:off
acpid           0:off 1:off 2:on 3:on 4:on 5:on 6:off
atd             0:off 1:off 2:off 3:on 4:on 5:on 6:off
auditd          0:off 1:off 2:on 3:on 4:on 5:on 6:off
.
.

How to Check the Status of a Specific Service

If you would like to see a particular service's status by run level, use the following format and grep for the required service.
In this case, we are going to check the auditd service status by run level.
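The check the article describes amounts to inspecting one line of chkconfig --list output. A small portable sketch of that check (the sample line is taken from the listing above; on a real SysV system you would feed it the output of `chkconfig --list | grep auditd`):

```shell
#!/bin/sh
# Return 0 if the given "chkconfig --list" line shows the service "on"
# in the given runlevel, 1 otherwise.
enabled_in_runlevel() {
    line="$1"
    level="$2"
    case "$line" in
        *"${level}:on"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Sample line from the chkconfig --list output above:
line="auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off"
if enabled_in_runlevel "$line" 3; then
    echo "auditd is enabled in runlevel 3"
fi
```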
Jul 03, 2018 | itsfoss.com
British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. rm 3 months ago
Novell acquired SUSE in 2003 for $210 million.

asoc 4 months ago
"It has over 1400 employees all over the globe "
They should be updating their CVs.
Sep 10, 2018 | www.mysysad.com
To check which Ubuntu release you are running:

lsb_release -a

Here is the syntax to determine which kernel version you are currently running:

uname -a
uname -r
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.
- 1. Objective
- 2. Operating System and Software Versions
- 3. Requirements
- 4. Difficulty
- 5. Conventions
- 6. Introduction
- 7. Distributions, major and minor versions
- 8. Setting up building environment
- 9. Building the first version of the package
- 10. Building another version of the package
- 11. Conclusion
Operating System and Software Versions

- Operating system: Red Hat Enterprise Linux 7.5
- Software: rpm-build 4.11.3+

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Conventions

- # - requires given linux commands to be executed with root privileges either directly as a root user or by use of sudo command
- $ - requires given linux commands to be executed as a regular non-privileged user

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than once - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typos, or just saving some keystrokes) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the result of another script, maybe controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature, and complete the task it was written for as well. Now you have two versions of the script: the first on the first two systems, the second on the third system.
You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded at some version, but which? And on which systems are they?
On RPM based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing but the tools the admin wrote for convenience.
In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh, to ensure that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.
Distributions, major and minor versions

In general, the minor and major version of the build machine should be the same as on the systems the package is to be deployed to, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can set up a build environment for each distribution and each major version, and keep it on the lowest minor version existing in your environment for that major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent". We also won't specify the distribution the package is built for. This way we can install and upgrade the packages on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the building environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build

From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:
$ mkdir -p rpmbuild/SPECS

Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi. The previously installed rpm-build package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec:
Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version of the admin scripts used by the IT dept.

%prep
%setup -q

%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe - release 1.0
- initial release

Place the specfile in the rpmbuild/SPECS
directory we created earlier.

We need the sources referenced in the specfile - in this case, the two shell scripts. Let's create the directory for the sources (named after the package name appended with the main version):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts

And copy/move the scripts into it:

$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh
As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:

#!/bin/bash
echo "news pulled"
exit 0

Do not forget to add the appropriate rights to the files in the source - in our case, execution rights:

chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh

Now we create a tar.gz archive from the source in the same directory:

cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1

We are ready to build the package:

rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec

We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, a missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):

$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm

We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:

$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug. 1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version of the admin scripts used by the IT dept.

And of course we can install it (with root privileges):

Installing custom scripts with rpm
As we installed the scripts into a directory that is on every user's $PATH, you can run them as any user in the system, from any directory:

$ pullnews.sh
news pulled

The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package, as we don't want to install another script, but a new version of it with the same name and path, since the sysadmins in our organization already rely on it heavily.
pullnews.sh
in the SOURCES to something even more complex:#!/bin/bash echo "news pulled" echo "another line printed" exit 0We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change version, only release (and so theSource0
reference will be still valid). Note that we delete the previous archive first:cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1Now we create another specfile with a higher release number:cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.specWe don't change much on the package itself, so we simply administrate the new version as shown below:Name: admin-scripts Version: 1 Release: 1 Summary: FooBar Inc. IT dept. admin scripts Packager: John Doe Group: Application/Other License: GPL URL: www.foobar.com/admin-scripts Source0: %{name}-%{version}.tar.gz BuildArch: noarch %description Package installing latest version the admin scripts used by the IT dept. %prep %setup -q %build %install rm -rf $RPM_BUILD_ROOT mkdir -p $RPM_BUILD_ROOT/usr/local/sbin cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/ %clean rm -rf $RPM_BUILD_ROOT %files %defattr(-,root,root,-) %dir /usr/local/sbin /usr/local/sbin/parselogs.sh /usr/local/sbin/pullnews.sh %doc %changelog * Wed Aug 22 2018 John Doe - release 1.1 - pullnews.sh v1.1 prints another line * Wed Aug 1 2018 John Doe - release 1.0 - initial release
All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher version as the source of the build:

rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec

If the build is successful, we now have two versions of the package under our RPMS directory:

$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm

And now we can install the "advanced" script, or upgrade it if it is already installed.

Upgrading custom scripts with rpm

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe - release 1.1
- pullnews.sh v1.1 prints another line
* sze aug 01 2018 John Doe - release 1.0
- initial release

Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old stuff needed only in previous versions, and can add custom dependencies or provide some tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease, but with consistency.
Jan 14, 2018 | kerneltalks.com
Most of the time, on newly created file systems or NFS filesystems, we see an error like the one below:

root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system

This is because the file system is mounted read-only. In such a scenario you have to mount it in read-write mode. Before that, we will see how to check whether a file system is mounted read-only, and then how to remount it as a read-write filesystem.
How to check if a file system is read-only

To confirm the file system is mounted read-only, use the command below:

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in cat /proc/mounts and observe the third column, which shows all options used for the mounted file system. Here ro denotes the file system is mounted read-only.

You can also get these details using the mount -v command:

root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, the file system options are listed in parentheses in the last column.
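The manual check above can also be scripted. The sketch below parses a /proc/mounts-style line (the sample line is the one shown above) and reports whether the ro flag is present in the options field:

```shell
#!/bin/sh
# Report whether the 4th field (mount options) of a /proc/mounts line
# contains the "ro" flag.
is_readonly() {
    opts=$(echo "$1" | awk '{print $4}')
    case ",$opts," in
        *,ro,*) return 0 ;;
        *) return 1 ;;
    esac
}

# Sample line from /proc/mounts shown above:
line="/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0"
if is_readonly "$line"; then
    echo "/datastore is mounted read-only"
fi
```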
Re-mount the file system in read-write mode

To remount the file system in read-write mode, use the command below:

root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after re-mounting, the option ro changed to rw. The file system is now mounted read-write and you can write files to it.

Note: it is recommended to fsck the file system before remounting it. You can check the file system by running fsck on its volume.
root@kerneltalks # df -h /datastore
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  881M  9.2G   9% /

root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to the file system, which requires a reboot to make sure no processes are accessing the file system.
Jan 13, 2015 | cyberciti.biz
As my journey continues with the Linux and Unix shell, I made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
- September 8, 2011
- By Henry Newman
The storage industry continues to make the same mistakes over and over again, and enterprises continue to take vendors' bold statements as facts. Previously, we introduced our two-part series, "The Evolution of Stupidity," explaining how issues seemingly resolved more than 20 years ago are again rearing their heads. Clearly, the more things change, the more they stay the same.
This time I ask, why do we continue to believe that the current evolutionary file system path will meet our needs today and in the future and cost nothing? Let's go back and review a bit of history for free and non-free systems file systems.
Time Machine -- Back to the Early 1980s
My experiences go back only to the early 1980s, but we have repeated history a few times since then. Why can we not seem to remember history, learn from it or even learn about it? It never ceases to amaze me. I talk to younger people, and more often than not, they say that they do not want to hear about history, just about the present, and how they are going to make the future better. I coined a saying (at least I think I coined it) in the late 1990s: There are no new engineering problems, just new engineers solving old problems. I said this when I was helping someone develop a new file system using technology and ideas whose design I had helped optimize around 10 years earlier.
In the mid-1980s, most of the open system file systems came as part of a standard Unix release from USL. A few vendors, such as Cray and Amdahl, wrote their own file systems. These vendors generally did so because the standard UNIX file system did not meet the requirements of the day. UFS on Solaris came from another operating system, written in the 1960s, called Multics. That brings us to the late 1980s, and by this time we had a number of high-performance file systems from companies such as Convex, MultiFlow and Thinking Machines. Everyone who had larger systems had their own file system, and everyone was trying to address many, if not all, of the same issues. These were, in my opinion, the scalability of:
- Metadata performance
- Recovery performance
- Small block performance
- Large block performance
- Storage management
The keyword here is scalability. Remember, during this time disk drive density was growing very rapidly and performance was scaling far better than it is today. Some of the vendors began the process of looking at parallel systems and some began charging for file systems that were once free. Does any of this sound like what I said in a recent blog post, "It's like deja-vu, all over again" (Yogi Berra)? But since this article is about stupidity, let's also remember the quote from another Yogi, Yogi Bear the cartoon character, "I'm smarter than the average bear!" and ask the question: Is the industry any smarter?
Around 1990, Veritas released VxFS, the first commercial UNIX file system. This file system tried to address all of the bullets points above except storage management, and Veritas added that later with VxVM. VxFS was revolutionary for commercial UNIX implementations at the time. Most of the major vendors used this product in some fashion, either supporting it or OEMing. Soon Veritas added things like the DB edition, which removed some of the POSIX-required write lock restrictions.
While Veritas was taking over the commercial world in the 1990s and making money on the file system, Silicon Graphics (SGI) decided to write its own file system, called XFS. It was released in the mid-1990s. It was later open sourced and had some similar characteristics to VxFS (imagine that), given that some of the developers were the same people. By the late 1990s and early 2000s, a number of vendors had shared file systems, but you had to pay for most of them in the HPC community. Most were implemented with a single metadata server and clients. Meanwhile, a smaller number of vendors were trying to solve large shared data problems with a shared name space and an implementation of distributed allocation of space.
Guess what? None of these file systems were free, and all of them were trying to resolve the list of the five areas noted above. From about 2004 until Sun Microsystems purchased CFS in 2007, there was one exception -- the free parallel file system Lustre. But "free" is relative, because for much of that time significant funding was coming from the U.S. government. It was not long after the funding ran out that Sun Microsystems purchased the company that developed the Lustre file system and hoped to recoup the purchase cost by developing hardware around the Lustre file system.
Related Articles
- The Evolution of Stupidity: Research (Don't Repeat) the Storage Past
- The Evolution of Stupidity: File Systems
At the same time, on the commercial front, the move to Linux was in full swing. Enter the XFS file system, which came with many standard Linux distributions and met many requirements. Appliance-based storage from the NAS vendors also met many of the requirements for performance and was far easier to manage than provisioning file systems from the crop of vendors selling file systems.
Now you have everyone moving to free file systems, not from vendors like in the 1980s but from the Linux distribution or from the NAS appliances vendors. Storage is purchased with a built-in file system.
This is all well and good, but now I am seeing the beginnings of change back to the early 1990s. Remember the saying that railroad executives in the 1920s and 1930s did not realize they were in the transportation business? Rather, they saw themselves as being only in the railroad business and thus did not embrace the airline industry. Similarly, NAS vendors do not seem to realize they are in the scalable storage business, and large shared file system vendors are now building appliances to better address many of the five bullets above.
Why Are We Going Around in Circles?
It seems to me that we are going around in circles. The 1980s are much like the early 2000s in the file system world, and the early 1990s are like the mid-2000s. The mid-1990s are similar to what we are going into again. The same is likely true for other areas of computing, as I have shown for storage in the previous article. If we all thought about it, the same could be said for computational design with scalar processors, vector processors, GPUs and FPGAs, today and yesteryear.
So everything is new every 20 years or so, and the problem solutions are not really that different. Why? Is it because no one remembers the past? Is it because everyone thinks they are smarter than their manager was when he or she was doing the work 20 years ago? Or is it something far different, like the market mimics other cycles in life like fashion, food preparation and myriad other things.
Almost 20 years ago, some friends of mine at Cray Research had the idea to separate file system metadata and data on different storage technologies, as data and metadata had different access patterns. Off and on file systems over the past 20 years have done this, but the concept has never really caught on as a must-have for file systems. I am now hearing rumblings that lots of people are talking about doing this with xyz file system. Was this NIH? I think in some cases, yes. The more I think about it, there is not a single answer to explain what happens, and if I did figure it out, I will be playing the futures market rather than doing what I am doing. We all need to learn from the past if we are going to break this cycle and make dramatic changes to technology.
POSIX is now about 20 years old, counting from the last real changes that were made. I am now hearing from a number of circles that POSIX limitations are constraining the five factors. If we change POSIX to support parallel I/O, I hope we look beyond today and think to the future.
Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
Guide to Linux Filesystem Mastery
Journaling File Systems Linux Magazine
Ext3: http://www.zipworld.com.au/~akpm/linux/ext3
JFS for Linux: http://oss.software.ibm.com/jfs
ReiserFS: http://www.namesys.com
Linux XFS: http://oss.sgi.com/projects/xfs
Extended Attributes & Access Controls Lists: http://acl.bestbits.at
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: January 03, 2021