Ubuntu Notes
Ubuntu is a dumbed-down derivative of Debian, which looks to me like an oversimplified distribution. Nothing interesting.
I am surprised at the amount of lemming-like praise of Ubuntu. In reality
the Ubuntu desktop is much weaker than OpenSuse.
The strongest part of Ubuntu is its package repositories, an advantage
that it shares with Debian. But that's about it as far as strong points go.
Ubuntu's designers just try to hide the complexity, and the default installation
"kind of works" for a complete novice. But if you try to change something or
install new software, you need to know as much as, if not more than, on regular
distributions like OpenSuse. BTW, by default,
sshd is not installed. That means it is weak even as a client.
The GUI is actually pretty weak, especially in the configuration area. Much weaker
than in OpenSuse (it might be even weaker than in Fedora, although I am not
sure). There is nothing comparable to YaST on Ubuntu, and that fact
alone makes it a weaker distribution by any Linux distribution comparison
standard.
BTW, Oracle ported YaST to Red Hat, and I do not understand why Ubuntu
can't do the same.
Also, installed applications usually are not added to the GUI menu.
And Linux is about using open source applications, not just browsing the
Web.
Patches sometimes break the system. For example, I experienced a patch
that prevented the system from reaching runlevel 2. The machine was toast after
the patch, and this is just one example.
Cleaning Ubuntu
- apt-get purge evolution
- apt-get purge cups
Runlevels in Ubuntu are a mess (a trait they share with Debian)
- 20210315 : CentOS - RHEL 7 - How to Install GUI The Geek Diary ( Mar 15, 2021 , www.thegeekdiary.com )
- 20210315 : Install-Convert A Minimal Installation Into GUI on CentOS-RHEL 6-7 ( Mar 15, 2021 , kapendra.com )
- 20190306 : Can not install RHEL 7 on disk with existing partitions ( May 12, 2014 , access.redhat.com )
- 20190204 : Red Hat Enterprise Linux 7 8.14. Installation Destination ( Jan 30, 2019 , access.redhat.com )
- 20181205 : How To Find The Package That Provides A File (Installed Or Not) On Ubuntu, Debian Or Linux Mint - Linux Uprising Blog ( Nov 30, 2018 , www.linuxuprising.com )
- 20170806 : Some basics of MBR vs GPT and BIOS vs UEFI - Manjaro Linux ( Aug 06, 2017 , wiki.manjaro.org )
- 20170806 : uefi - CentOS Kickstart Installation - Error populating transaction ( Aug 06, 2017 , superuser.com )
- 20170628 : AptGet-Howto - Community Help Wiki ( Jun 28, 2017 , help.ubuntu.com )
- 20130719 : Why I left Ubuntu ( Everyday Linux User )
- 20130719 : 504957
- 20100110 : by default sshd is not installed on Ubuntu. sshd can be installed using Synaptic ( Jan 10, 2010 )
- 20100110 : SSHD can be used as remote instead of VNC ( Jan 10, 2010 )
- 20091213 : VSFTPD has the option force_dot_files=YES ( Dec 13, 2009 )
- 20091213 : Synaptic is an OK tool for adding and removing packages. ( Dec 13, 2009 )
- 20091213 : Apparix augmenting the command-line with directory bookmarks ( Dec 13, 2009 )
- 20091212 : FTE editor package is available on Ubuntu ( Dec 12, 2009 )
- 20091212 : MC on Ubuntu comes with a wrong setting -- external editor (nano). To change it, go to Options and set the internal editor ( Dec 12, 2009 )
- 20091212 : Strange RC settings -- no K files ( Dec 12, 2009 )
- 20091212 : Hardening the Linux server
Installing the environment group "Server with GUI"
1. Check the available environment groups :
# yum grouplist
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
Minimal Install
Infrastructure Server
File and Print Server
Basic Web Server
Virtualization Host
Server with GUI
Available Groups:
Compatibility Libraries
Console Internet Tools
Development Tools
Graphical Administration Tools
Legacy UNIX Compatibility
Scientific Support
Security Tools
Smart Card Support
System Administration Tools
System Management
Done
2. Execute the following to install the environments for GUI.
# yum groupinstall "Server with GUI"
.......
Transaction Summary
====================================================
Install 199 Packages (+464 Dependent packages)
Upgrade ( 8 Dependent packages)
Total download size: 523 M
Is this ok [y/d/N]:
The above will install the GUI in RHEL 7, which by default gets installed in text mode.
3. Enable the GUI on system start-up. In RHEL 7, systemd uses 'targets' instead of runlevels.
The file /etc/inittab is no longer used to change runlevels. Issue the following commands to
enable the GUI on system start.
To set a default target :
# systemctl set-default graphical.target
To change the current target to graphical without a reboot :
# systemctl isolate graphical.target
Verify the default target :
# systemctl get-default
graphical.target
4. Reboot the machine to verify that it boots into GUI directly.
# systemctl reboot
Installing core GNOME packages
"Server with GUI" installs the default GUI, which is GNOME. If you want to install
only the core GNOME packages, use:
# yum groupinstall 'X Window System' 'GNOME'
....
Transaction Summary
===========================================================
Install 104 Packages (+427 Dependent packages)
Upgrade ( 8 Dependent packages)
Total download size: 318 M
Is this ok [y/d/N]:
Step 1: Install Gnome GUI
Run the following command to install GUI
For CentOS 7:
# yum group install "GNOME Desktop" "Graphical Administration Tools"
For RHEL 7:
# yum groupinstall "Server with GUI"
Step 2: Make the GUI the Default Mode for Every Reboot
With the upgrade from CentOS/RHEL 6 to CentOS/RHEL 7, the concept of runlevels has been
replaced by systemd targets, so run the following command
For RHEL/CentOS 7:
ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
... ... ...
Step 3: Reboot the Server
# reboot
A Few Shortcut Commands
GUI to CLI : Ctrl + Alt + F6
CLI to GUI : Ctrl + Alt + F1
Kapendra
http://kapendra.com Love to write technical stuff with personal experience as I am working
as a Sr. Linux Admin. and every day is a learning day and Trust me being tech geek is really
cool.
Notable quotes:
"... In the case of the partition tool, it is too complicated for a common user to use so don't assume that the person using it is an idiot. ..."
Can not install RHEL 7 on disk with existing partitions
Latest response
May 23 2014 at 8:23 AM
Can not install RHEL 7 on disk with existing partitions. Menus for old partitions are grayed out. Can not
select mount points for existing partitions. Can not delete them and create new ones. Menus say I can, but
the installer says there is a partition error. It shows me a complicated menu that warns me about what is about to
happen and asks me to accept the changes. It does not seem to accept "accept" as an answer. Booting to
the recovery mode on the USB drive and manually deleting all partitions is not a practical solution.
You
have to assume that someone who is doing manual partitioning knows what he is doing. Provide a simple
tool during install so that the drive can be fully partitioned and formatted. Do not try so hard to make
tools that protect users from their own mistakes. In the case of the partition tool, it is too complicated for
a common user to use so don't assume that the person using it is an idiot.
The common user does not know
hardware and doesn't want to know it. He expects security questions during the install and everything
else should be defaults. A competent operator will backup anything before doing a new OS install. He
needs the details and doesn't need an installer that tells him he can't do what he has done for 20 years.
13 May 2014 8:40 PM
PixelDrift.NET Support
Community Leader
Can you give more details?
Are the partitions on the disk
Linux partitions you want to re-use? e.g. /home? Or are they from another OS?
Is the goal to delete the partitions and re-use the space, or to mount the partitions
in the new install?
Are you running RHEL 7 Beta or RHEL 7 RC?
22 May 2014 5:24 PM
RogerOdle
I did manage to get it to work. It needed a BIOS partition.
It seemed that I had to free it and create a new one. I do not know; I tried many things,
so I am not clear what ultimately made it work.
My history with installers goes back to
the 90s:
- text based, tedious, and you need to be an expert
- GUIs make it easier; on reinstall, partition labels show where they were mounted
before (improvement)
- partitions get UUIDs; the GUI shows drive numbers instead of mount labels (worse)
- (since Fedora 18?) shows the previous Fedora install. Wants to do the new install on other
partitions. Unclear how to tell the new install to use the old partitions. The new requirement
for a BIOS partition is not clearly presented. When the failure to proceed is because there is
no BIOS partition, no explanation is given, just a whine about there being some error
but no help for how to resolve it (much worse).
Wish list:
1. The default configuration should put /var and /home on separate partitions. If the installer
sees these on separate partitions then it should provide an option to use them.
2. It should not be necessary to manually move partitions from an old installation to a
new one. I do not know if I am typical, but I never put more than one OS on a hard drive (except
for VMs, and those don't count). Hard drives are cheap, so I put one and only one OS per
drive. When I upgrade, I reformat and replace the root partition, which cleans /etc and
/usr. I also reformat and clean /var. I never keep anything that I want long term in /var;
that includes web sites (/var/www) and databases, so that I can always discard /var on an
update. This prevents propagating problems from one release to the next. On my systems,
each release stands on its own. I would like the partition tool to recognize the existing
partition scheme and provide a simple choice to reuse it. The only questions I want to
answer are whether a particular partition should be reformatted.
3. I do not have much experience with the live ISOs, so maybe they already do this. The
installer is way too intimidating for newbies. I am an engineer, so I am used to
complexity. The average user, and the users that Fedora needs to connect with long term, are
overwhelmed by the install process. They need an installer that does not ask any technical
questions at all. They just want to plug it in and turn it on, and it should just work like
their TV just works. Maybe it should be an Entertainment release, since these people
typically do email, web surfing, write letters, play games, and not much else.
Notable quotes:
"... Base Environments ..."
"... Kickstart Installations ..."
8.13. SOFTWARE SELECTION
To specify which packages will be installed, select Software Selection at the Installation Summary screen. The package groups
are organized into Base Environments . These environments are pre-defined sets of packages with a specific purpose; for
example, the Virtualization Host environment contains a set of software packages needed for running virtual machines on the system.
Only one software environment can be selected at installation time. For each environment, there are additional packages available
in the form of Add-ons . Add-ons are presented in the right part of the screen and the list of them is refreshed when a
new environment is selected. You can select multiple add-ons for your installation environment. A horizontal line separates the list
of add-ons into two areas:
- Add-ons listed above the horizontal line are specific to the environment you selected. If you select any add-ons
in this part of the list and then select a different environment, your selection will be lost.
- Add-ons listed below the horizontal line are available for all environments. Selecting a different environment will
not impact the selections made in this part of the list.
Figure 8.16. Example of a Software Selection for a Server Installation
The availability of base environments and add-ons depends
on the variant of the installation ISO image which you are using as the installation source. For example, the server
variant provides environments designed for servers, while the workstation
variant has several choices for deployment
as a developer workstation, and so on. The installation program does not show which packages are contained in the available environments.
To see which packages are contained in a specific environment or add-on, see the
repodata/*-comps-variant.architecture.xml
file on the Red Hat Enterprise Linux Installation DVD which you are using as the installation source. This file contains
a structure describing available environments (marked by the <environment>
tag) and add-ons (the <group>
tag).
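As a rough sketch of reading that structure, the environment ids can be pulled out of a comps-style file with standard tools. The XML below is a tiny illustrative stand-in written to a temp path, not the real repodata file:

```shell
# Create a miniature comps-style sample (illustrative only; the real file
# lives under repodata/ on the installation media and is much larger).
cat > /tmp/comps-sample.xml <<'EOF'
<comps>
  <environment><id>minimal</id></environment>
  <environment><id>graphical-server-environment</id></environment>
  <group><id>base</id></group>
</comps>
EOF
# Print the <id> of each <environment> entry (crude line-oriented parse).
grep '<environment>' /tmp/comps-sample.xml | sed 's:.*<id>\([^<]*\)</id>.*:\1:'
```

A real comps file spreads each environment over many lines, so a proper XML tool (e.g. xmllint) is the safer choice there; the one-liner only shows the idea.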
Important: The pre-defined environments and add-ons allow you to customize your system, but in a manual installation, there is
no way to select individual packages to install. If you are not sure what package should be installed, Red Hat recommends you to
select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal
amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the
system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software
you need. For more details on Minimal install , see the
Installing the Minimum Amount of Packages Required section of the Red Hat Enterprise Linux 7 Security Guide. Alternatively, automating
the installation with a Kickstart file allows for a much higher degree of control over installed packages. You can specify environments,
groups and individual packages in the %packages
section of the Kickstart file. See
Section 26.3.2, "Package Selection" for instructions on selecting packages to install in a Kickstart file, and
Chapter 26, Kickstart Installations for general information about automating the installation with Kickstart. Once you
have selected an environment and add-ons to be installed, click Done to return to the Installation Summary screen.
8.13.1. Core Network Services
All Red Hat Enterprise Linux installations include the following network services:
- centralized logging through the
rsyslog
service
- email through SMTP (Simple Mail Transfer Protocol)
- network file sharing through NFS (Network File System)
- remote access through SSH (Secure SHell)
- resource advertising through mDNS (multicast DNS)
Some automated processes on your Red Hat Enterprise Linux system use the email service to send reports and messages to the system
administrator. By default, the email, logging, and printing services do not accept connections from other systems. You can configure
your Red Hat Enterprise Linux system after installation to offer email, file sharing, logging, printing, and remote desktop access
services. The SSH service is enabled by default. You can also use NFS to access files on other systems without enabling the NFS sharing
service.
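A quick way to check which of those default services are actually enabled on a given install is to ask systemd directly. The unit names below are assumptions (they vary slightly between releases; e.g. Ubuntu calls the SSH unit "ssh"):

```shell
# Query systemd for the enablement state of a few default services.
# If a unit (or systemctl itself) is absent, report "unknown" instead.
for svc in rsyslog sshd nfs-server; do
    state=$(systemctl is-enabled "$svc" 2>/dev/null)
    [ -n "$state" ] && echo "$svc: $state" || echo "$svc: unknown"
done
```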
There are multiple ways of finding out to which package a particular file belongs, on
Ubuntu, Debian or Linux Mint. This article presents two ways of achieving this, both from the
command line.
1. Using apt-file to find the package that provides a file (for repository packages, either
installed or not installed)
apt-file indexes the contents of all packages available in your repositories, and allows you to
search for files in all these packages.
That means you can use apt-file to search for files inside DEB packages that are installed
on your system, as well as packages that are not installed on your Debian (and Debian-based
Linux distributions, like Ubuntu) machine, but are available to install from the repositories.
This is useful in case you want to find what package contains a file that you need to compile
some program, etc.
apt-file cannot find the package that provides a file in case you downloaded a DEB package
and installed it, without using a repository. The package needs to be available in the
repositories for apt-file to be able to find it.
apt-file may not be installed on your system. To install it in Debian, Ubuntu, Linux Mint
and other Debian-based or Ubuntu-based Linux distributions, use this command:
sudo apt install apt-file
This tool finds the files belonging to a package by using a database, which needs to be updated
before use. To update the apt-file database, use:
sudo apt-file update
Now you can use apt-file to find the DEB package that provides a file, be it a package
you've installed from the repositories, or a package available in the repositories, but not
installed on your Debian / Ubuntu / Linux Mint system. To do this, run:
apt-file search filename
Replacing filename with the name of the file you want to find.
This command will list all occurrences of filename found in various packages.
If you know the exact file path and filename, you can get the search results to only list the
package that includes that exact file, like this:
apt-file search /path/to/filename
For example, running just apt-file search cairo.h
will return a long list of search results:
$ apt-file search cairo.h
fltk1.3-doc: /usr/share/doc/fltk1.3-doc/HTML/group__group__cairo.html
ggobi: /usr/include/ggobi/ggobi-renderer-cairo.h
glabels-dev: /usr/include/libglbarcode-3.0/libglbarcode/lgl-barcode-render-to-cairo.h
glabels-dev: /usr/share/gtk-doc/html/libglbarcode-3.0/libglbarcode-3.0-lgl-barcode-render-to-cairo.html
gstreamer1.0-plugins-good-doc: /usr/share/gtk-doc/html/gst-plugins-good-plugins-1.0/gst-plugins-good-plugins-plugin-cairo.html
guile-cairo-dev: /usr/include/guile-cairo/guile-cairo.h
guitarix-doc: /usr/share/doc/guitarix-doc/namespacegx__cairo.html
ipe: /usr/share/ipe/7.2.7/doc/group__cairo.html
libcairo-ocaml-dev: /usr/share/doc/libcairo-ocaml-dev/html/Pango_cairo.html
libcairo-ocaml-dev: /usr/share/doc/libcairo-ocaml-dev/html/type_Pango_cairo.html
libcairo2-dev: /usr/include/cairo/cairo.h
...
However, if you know the file path, e.g. you want to find out to which package the file
/usr/include/cairo/cairo.h
belongs, run:
apt-file search /usr/include/cairo/cairo.h
This only lists the package that contains this file:
$ apt-file search /usr/include/cairo/cairo.h
libcairo2-dev: /usr/include/cairo/cairo.h
In this example, the package that includes the file I searched for
( /usr/include/cairo/cairo.h ) is libcairo2-dev .
apt-file may also be used to list all the files included in a package ( apt-file list packagename ),
perform regex searches, and more. Consult its man page ( man apt-file )
and help output ( apt-file --help ) for more information.
2. Using dpkg to find the package that provides a file (only for installed DEB packages -
from any source)
dpkg can also be used to find out to which package a file belongs. It can be faster to use
than apt-file, because you don't need to install anything, and there's no database to
update.
However, dpkg can only search for files belonging to installed packages, so if you're
searching for a file in a package that's not installed on your system, use apt-file. On the
other hand, dpkg can be used to find files belonging to packages that were installed without
using a repository, a feature that's not available for apt-file.
To use dpkg to find the installed DEB package that provides a file, run it with the
-S (or --search ) flag, followed by the filename (or pattern) whose
owning package you want to see, like this:
dpkg -S filename
For example, to find out to which package the cairo.h
file belongs, use
dpkg -S cairo.h :
$ dpkg -S cairo.h
libgtk2.0-dev:amd64: /usr/include/gtk-2.0/gdk/gdkcairo.h
libcairo2-dev:amd64: /usr/include/cairo/cairo.h
libpango1.0-dev: /usr/include/pango-1.0/pango/pangocairo.h
libgtk-3-dev:amd64: /usr/include/gtk-3.0/gdk/gdkcairo.h
Just like for apt-file, this may show multiple packages that have files containing the filename
you're looking for. You can enter the full path of the file to get only the package that
contains that specific file. Example:
$ dpkg -S /usr/include/cairo/cairo.h
libcairo2-dev:amd64: /usr/include/cairo/cairo.h
In this example, the Debian package that includes the file I searched for
( /usr/include/cairo/cairo.h ) is libcairo2-dev .
Another notable way of finding the package a file belongs to is using the online package search
provided by Ubuntu and Debian:
For both, you'll also find options to find the packages that contain files named exactly like
your input keyword, packages ending with the keyword, or packages that contain files whose
names contain the keyword.
The Linux Mint package search
website doesn't include an option to search for files inside packages, but you can use the
Ubuntu or Debian online package search for packages that Linux Mint imports from Debian /
Ubuntu.
Some basics of MBR vs. GPT and BIOS vs. UEFI (from the Manjaro Linux wiki)
MBR
A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned
computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible
systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.
The MBR holds the information on how the logical partitions, containing file systems, are organized
on that medium. Besides that, the MBR also contains executable code to function as a loader for the
installed operating system, usually by passing control over to the loader's second stage, or in conjunction
with each partition's volume boot record (VBR). This MBR code is usually referred to as a boot loader.
The organization of the partition table in the MBR limits the maximum addressable storage space
of a disk to 2 TB (2^32 × 512 bytes). Therefore, the MBR-based partitioning scheme is in the process
of being superseded by the GUID Partition Table (GPT) scheme in new computers. A GPT can coexist
with an MBR in order to provide some limited form of a backwards compatibility for older systems.
[1]
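That 2 TB figure follows directly from the 32-bit sector fields; a quick shell arithmetic check (assuming the conventional 512-byte sector size):

```shell
# MBR stores partition sizes as 32-bit sector counts; with 512-byte
# sectors the addressable limit is 2^32 * 512 bytes.
sectors=$((1 << 32))        # largest sector count a 32-bit field can hold
bytes=$((sectors * 512))    # total addressable bytes
tib=$((bytes >> 40))        # divide by 2^40 to convert to TiB
echo "MBR limit: $bytes bytes ($tib TiB)"
```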
GPT
GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard
disk, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible
Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is
also used on some BIOS systems because of the limitations of master boot record (MBR) partition tables,
which use 32 bits for storing logical block addresses (LBA) and size information.
MBR-based partition table schemes insert the partitioning information for (usually) four "primary"
partitions in the master boot record (MBR) (which on a BIOS system is also the container for code
that begins the process of booting the system). In a GPT, the first sector of the disk is reserved
for a "protective MBR" such that booting a BIOS-based computer from a GPT disk is supported, but
the boot loader and O/S must both be GPT-aware. Regardless of the sector size, the GPT header begins
on the second logical block of the device.
[2]
GPT uses modern logical block addressing (LBA) in place of the cylinder-head-sector (CHS) addressing
used with MBR. Legacy MBR information is contained in LBA 0, the GPT header is in LBA 1, and the
partition table itself follows. In 64-bit Windows operating systems, 16,384 bytes, or 32 sectors,
are reserved for the GPT, leaving LBA 34 as the first usable sector on the disk.
[3]
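The 16,384-byte / LBA 34 figures above can be reproduced with a little arithmetic; 128 entries of 128 bytes each is the conventional default table size:

```shell
entries=128        # default number of GPT partition entries
entry_size=128     # bytes per entry
table=$((entries * entry_size))       # partition table size in bytes
sectors=$((table / 512))              # ...expressed as 512-byte sectors
first_usable=$((1 + 1 + sectors))     # protective MBR + GPT header + table
echo "table: $table bytes = $sectors sectors; first usable LBA: $first_usable"
```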
MBR vs. GPT
Compared with an MBR disk, a GPT disk can support volumes larger than 2 TB, which MBR cannot. A GPT
disk can be basic or dynamic, just like an MBR disk can be basic or dynamic. GPT disks also support
up to 128 partitions, rather than the 4 primary partitions to which MBR is limited. Also, GPT keeps a backup
of the partition table at the end of the disk. Furthermore, a GPT disk provides greater reliability
due to replication and cyclic redundancy check (CRC) protection of the partition table.
[4]
The GUID partition table (GPT) disk partitioning style supports volumes up to 18 exabytes in size
and up to 128 partitions per disk, compared to the master boot record (MBR) disk partitioning style,
which supports volumes up to 2 terabytes in size and up to 4 primary partitions per disk (or three
primary partitions, one extended partition, and unlimited logical drives). Unlike MBR partitioned
disks, data critical to platform operation is located in partitions instead of unpartitioned or hidden
sectors. In addition, GPT partitioned disks have redundant primary and backup partition tables for
improved partition data structure integrity.
[5]
BIOS
In IBM PC compatible computers, the Basic Input/Output System (BIOS), also known as System BIOS,
ROM BIOS or PC BIOS, is a de facto standard defining a firmware interface. The name originated from
the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS software is built
into the PC, and is the first software run by a PC when powered on.
The fundamental purposes of the BIOS are to initialize and test the system hardware components,
and to load a bootloader or an operating system from a mass memory device. The BIOS additionally
provides an abstraction layer for the hardware, i.e. a consistent way for application programs and operating
systems to interact with the keyboard, display, and other input/output devices. Variations in the
system hardware are hidden by the BIOS from programs that use BIOS services instead of directly accessing
the hardware. Modern operating systems ignore the abstraction layer provided by the BIOS and access
the hardware components directly. [6]
UEFI
The Unified Extensible Firmware Interface (UEFI) (pronounced as an initialism U-E-F-I or like
"unify" without the n) is a specification that defines a software interface between an operating
system and platform firmware. UEFI is meant to replace the Basic Input/Output System (BIOS) firmware
interface, present in all IBM PC-compatible personal computers. In practice, most UEFI images provide
legacy support for BIOS services. UEFI can support remote diagnostics and repair of computers, even
without another operating system.
The original EFI (Extensible Firmware Interface) specification was developed by Intel. Some of
its practices and data formats mirror ones from Windows. In 2005, UEFI deprecated EFI 1.10 (the final
release of EFI). The UEFI specification is managed by the Unified EFI Forum.
BIOS vs. UEFI
UEFI enables better use of bigger hard drives. Though UEFI supports the traditional master boot
record (MBR) method of hard drive partitioning, it doesn't stop there. It's also capable of working
with the GUID Partition Table (GPT), which is free of the limitations the MBR places on the number
and size of partitions. GPT ups the maximum partition size from 2.19TB to 9.4 zettabytes.
UEFI may be faster than the BIOS. Various tweaks and optimizations in the UEFI may help your system
boot more quickly than it could before. For example: with UEFI you may not have to endure messages asking
you to set up hardware functions (such as a RAID controller) unless your immediate input is required,
and UEFI can choose to initialize only certain components. The degree to which the boot is sped up
will depend on your system configuration and hardware, so you may see a significant or a minor speed
increase.
Technical changes abound in UEFI. UEFI has room for more useful and usable features than could
ever be crammed into the BIOS. Among these are cryptography, network authentication, support for
extensions stored on non-volatile media, an integrated boot manager, and even a shell environment
for running other EFI applications such as diagnostic utilities or flash updates. In addition, both
the architecture and the drivers are CPU-independent, which opens the door to a wider variety of
processors (including those using the ARM architecture, for example).
However, UEFI is still not widespread. Though major hardware companies have switched over almost
exclusively to UEFI use, you still won't find the new firmware in use on all motherboards, or in quite
the same way across the spectrum. Many older and less expensive motherboards also still use the BIOS
system. [7]
MBR vs. GPT and BIOS vs. UEFI
Usually, MBR and BIOS (MBR + BIOS), and GPT and UEFI (GPT + UEFI), go hand in hand. This is compulsory
for some systems (e.g. Windows), while optional for others (e.g. Linux).
http://en.wikipedia.org/wiki/GUID_Partition_Table#Operating_systems_support
http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface#DISKDEVCOMPAT
Converting from MBR to GPT
From http://www.rodsbooks.com/gdisk/mbr2gpt.html
One of the more unusual features of gdisk is its ability to read an MBR partition table or BSD
disklabel and convert it to GPT format without damaging the contents of the partitions on the disk.
This feature exists to enable upgrading to GPT in case the limitations of MBRs or BSD disklabels
become too onerous; for instance, if you want to add more OSes to a multi-boot configuration, but
the OSes you want to add require too many primary partitions to fit on an MBR disk.
Conversion from MBR to GPT works because of inefficiencies in the MBR partitioning scheme. On
an MBR disk, the bulk of the first cylinder of the disk goes unused; only the first sector (which
holds the MBR itself) is used. Depending on the disk's CHS geometry, this first cylinder is likely
to be sufficient space to store the GPT header and partition table. Likewise, space is likely to
go unused at the end of the disk, because the last cylinder (as seen by the BIOS and whatever tool originally
partitioned the disk) will be incomplete, so the last few sectors will go unused. This leaves space
for the backup GPT header and partition table. (Disks partitioned with 1 MiB alignment sometimes
leave no gaps at the end of the disk, which can prevent conversion to GPT format, at least unless
you delete or resize the final partition.)
The task of converting MBR to GPT therefore becomes one of extracting the MBR data and stuffing
the data into the appropriate GPT locations. Partition start and end points are straightforward to
manage, with one important caveat: GPT fdisk ignores the CHS values and uses the LBA values exclusively.
This means that the conversion will fail on disks that were partitioned with very old software. If
the disk is over 8 GiB in size, though, GPT fdisk should find the data it needs.
Once the conversion is complete, there will be a series of gaps between partitions. Gaps at the
start and end of the partition set will be related to the inefficiencies mentioned earlier that permit
the conversion to work. Additional gaps before each partition that used to be a logical partition
exist because of inefficiencies in the way logical partitions are allocated. These gaps are likely
to be quite small (a few kilobytes), so you're unlikely to be able to put useful partitions in those
spaces. You could resize your partitions with GNU Parted to remove the gaps, but the risks of such
an operation outweigh the very small benefits of recovering a few kilobytes of disk space.
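The space argument above can be checked with a bit of arithmetic. This is a minimal sketch using hypothetical values for the classic CHS layout (63 sectors per track); the primary GPT structures occupy LBA 0-33 (protective MBR, GPT header, and a 32-sector partition table):

```shell
# Hypothetical classic CHS geometry: old MBR tools started partition 1 at sector 63.
sectors_per_track=63
first_partition_start=$sectors_per_track
gpt_primary_end=33   # LBA 0-33: protective MBR + GPT header + 32 table sectors
if [ "$first_partition_start" -gt "$gpt_primary_end" ]; then
  echo "primary GPT structures fit before the first partition"
fi
```

With 1 MiB alignment (first partition at sector 2048) the fit at the start of the disk is even easier; as the text notes, it is the end of the disk that may lack the 33 sectors needed for the backup structures.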
Switching from BIOS to UEFI
See:
UEFI_-_Install_Guide#Switching_from_BIOS_to_UEFI
Note
Switching from [MBR + BIOS] to [GPT + UEFI]
Switching from BIOS to UEFI consists of two parts:
i. Converting the disk from MBR to GPT. Side effects: possible data loss; other OSes installed on
the same disk (e.g. Windows) may or may not boot.
ii. Changing from BIOS to UEFI and installing GRUB in UEFI mode. Side effects: other OSes (both
Linux and Windows) may or may not boot; with systemd you need to comment out the swap partition
in /etc/fstab on a GPT partition table (if you use a swap partition).
After converting from MBR to GPT, your installed Manjaro will probably not boot, so you should
prepare beforehand for such a case (e.g. chroot from a live disk and install GRUB
the UEFI way).
And Windows 8, if installed the MBR way, would need to be repaired or reinstalled for UEFI
booting.
I am trying to perform a network unattended installation for my servers. They are all UEFI systems
and I have gotten them to successfully boot over the network, load grub2, and start the kickstart
script for installation.
It seems to reach the point where it runs yum update, although I am not entirely sure. It
downloads the CentOS image from the mirror fine and then tells me "error populating transaction"
10 times before quitting.
I've run through this multiple times with different mirrors, so I don't think this is a bad image
problem.
Here is an image of the error.
Here is the compiled code for my kickstart script.
install
url --url http://mirror.umd.edu/centos/7/os/x86_64/
lang en_US.UTF-8
selinux --enforcing
keyboard us
skipx
network --bootproto dhcp --hostname r2s2.REDACTED.com --device=REDACTED
rootpw --iscrypted REDACTED
firewall --service=ssh
authconfig --useshadow --passalgo=SHA256 --kickstart
timezone --utc UTC
services --disabled gpm,sendmail,cups,pcmcia,isdn,rawdevices,hpoj,bluetooth,openibd,avahi-daemon,avahi-dnsconfd,hidd,hplip,pcscd
bootloader --location=mbr --append="nofb quiet splash=quiet"
zerombr
clearpart --all --initlabel
autopart
text
reboot
%packages
yum
dhclient
ntp
wget
@Core
redhat-lsb-core
%end
%post --nochroot
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see what's going on....
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
%end
%post
logger "Starting anaconda r2s2.REDACTED.com postinstall"
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see what's going on....
/usr/bin/chvt 3
(
# eno1 interface
real=`ip -o link | awk '/REDACTED/ {print $2;}' | sed s/:$//`
sanitized_real=`echo $real | sed s/:/_/`
cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="dhcp"
DEVICE=$real
HWADDR="REDACTED"
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
DEFROUTE=yes
EOF
#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub 0.fedora.pool.ntp.org
/usr/sbin/hwclock --systohc
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# update all the base packages from the updates repository
if [ -f /usr/bin/dnf ]; then
dnf -y update
else
yum -t -y update
fi
# SSH keys setup snippet for Remote Execution plugin
#
# Parameters:
#
# remote_execution_ssh_keys: public keys to be put in ~/.ssh/authorized_keys
#
# remote_execution_ssh_user: user for which remote_execution_ssh_keys will be
# authorized
#
# remote_execution_create_user: create the user if it does not already exist
#
# remote_execution_effective_user_method: method to switch from ssh user to
# effective user
#
# This template sets up SSH keys in any host so that as long as your public
# SSH key is in remote_execution_ssh_keys, you can SSH into a host. This only
# works in combination with Remote Execution plugin.
# The Remote Execution plugin queries smart proxies to build the
# remote_execution_ssh_keys array which is then made available to this template
# via the host's parameters. There is currently no way of supplying this
# parameter manually.
# See http://projects.theforeman.org/issues/16107 for details.
if [ -f /usr/bin/dnf ]; then
dnf -y install puppet
else
yum -t -y install puppet
fi
cat > /etc/puppet/puppet.conf << EOF
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl
[agent]
pluginsync = true
report = true
ignoreschedules = true
ca_server = foreman.REDACTED.com
certname = r2s2.lab.REDACTED.com
environment = production
server = foreman.REDACTED.com
EOF
puppet_unit=puppet
/usr/bin/systemctl list-unit-files | grep -q puppetagent && puppet_unit=puppetagent
/usr/bin/systemctl enable ${puppet_unit}
/sbin/chkconfig --level 345 puppet on
# export a custom fact called 'is_installer' to allow detection of the installer environment in Puppet modules
export FACTER_is_installer=true
# passing a non-existent tag like "no_such_tag" to the puppet agent only initializes the node
/usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag --server foreman.REDACTED.com --no-daemonize
sync
# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate http://foreman.REDACTED.com/unattended/built?token=REDACTED
) 2>&1 | tee /root/install.post.log
exit 0
%end
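The interface-name handling in the %post section above relies on sed to build a filename-safe variant. A small check of that substitution, using a hypothetical alias name 'eth0:1':

```shell
real='eth0:1'                                 # hypothetical interface name containing ':'
sanitized_real=$(echo "$real" | sed 's/:/_/')
echo "$sanitized_real"
# prints: eth0_1
```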
The apt tool on Ubuntu 14.04 and above makes listing installed packages very easy:
apt list --installed
All these commands except the search commands must be run as root or with superuser privileges;
see sudo for more information.
Installation commands
-
apt-get install <package_name>
This command installs a new package.
-
apt-get build-dep <package_name>
This command searches the repositories and installs the build dependencies for <package_name>.
If the package is not in the repositories it will return an error.
-
aptitude install <package_name>
Aptitude is an Ncurses viewer
of packages installed or available. Aptitude can be used from the command line in a similar way
to apt-get. Enter man aptitude for more information.
- APT and aptitude will accept multiple package names as a space delimited list. For example:
apt-get install <package1_name> <package2_name> <package3_name>
Use the -s flag to simulate an action. For example: "apt-get -s install <package_name>"
will simulate installing the package, showing you what packages will be installed and configured.
auto-apt
-
auto-apt run <command_string>
This command runs <command_string> under the control of auto-apt. If a program tries to access
a file known to belong in an uninstalled package, auto-apt will install that package using apt-get.
This feature requires apt and sudo to work.
- Auto-apt keeps databases which need to be kept up-to-date in order for it to be effective.
This is achieved by calling the commands auto-apt update, auto-apt updatedb and auto-apt update-local.
- Usage example
-
You're compiling a program and, all of a sudden, there's an error because it needs a file you
don't have. The program auto-apt asks you to install packages if they're needed, stopping the
relevant process and continuing once the package is installed.
# auto-apt run ./configure
It will then ask to install the needed packages and call apt-get automatically. If you're running
X, a graphical interface will replace the default text interface.
Maintenance commands
-
apt-get update
Run this command after changing /etc/apt/sources.list or /etc/apt/preferences. For
information regarding /etc/apt/preferences, see PinningHowto. Run
this command periodically to make sure your source list is up-to-date. This is the equivalent
of "Reload" in Synaptic or "Fetch updates" in Adept.
-
apt-get upgrade
This command upgrades all installed packages. This is the equivalent of "Mark all upgrades" in
Synaptic.
-
apt-get dist-upgrade
The same as the above, with "smart upgrade" behavior added. It tells APT to use a "smart" conflict
resolution system, and it will attempt to upgrade the most important packages at the expense of
less important ones if necessary.
-
apt-get check
This command is a diagnostic tool. It does an update of the package lists and checks for broken
dependencies.
-
apt-get -f install
This command does the same thing as Edit->Fix Broken Packages in Synaptic. Do this if you get
complaints about packages with "unmet dependencies".
-
apt-get autoclean
This command removes .deb files for packages that are no longer installed on your system. Depending
on your installation habits, removing these files from /var/cache/apt/archives may regain
a significant amount of diskspace.
-
apt-get clean
The same as above, except it removes all packages from the package cache. This may
not be desirable if you have a slow Internet connection, since it will cause you to redownload
any packages you need to install a program.
-
dpkg-reconfigure <package_name>
Reconfigure the named package. With many packages, you'll be prompted with some configuration
questions you may not have known were there.
-
For example, running dpkg-reconfigure on a fonts package will present you with a "wizard" for configuring fonts in Ubuntu.
-
echo "<package_name> hold" | dpkg --set-selections
This command places the desired package on hold. This is the same as Synaptic's
Package->Lock Version.
-
This command may have the unintended side effect of preventing upgrades to packages that
depend on updated versions of the pinned package. apt-get dist-upgrade will
override this, but will warn you first. If you want to use this command with sudo, you
need to use echo "<package_name> hold" | sudo dpkg --set-selections not
sudo echo "<package_name> hold" | dpkg --set-selections .
-
echo "<package_name> install" | dpkg --set-selections
This command removes the "hold" or "locked package" state set by the above command. The note above
about sudo usage applies to this command.
Removal commands
-
apt-get remove <package_name>
This command removes an installed package, leaving configuration files intact.
-
apt-get purge <package_name>
This command completely removes a package and the associated configuration files. Configuration
files residing in ~ are not usually affected by this command.
-
apt-get autoremove
This command removes packages that were installed by other packages and are no longer needed.
-
While there is no built in way to remove all of your configuration information from your removed
packages you can remove all configuration data from every removed package with the following command.
dpkg -l | grep '^rc' | awk '{print $2}' | xargs dpkg --purge
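To see what that pipeline does, here is a sketch on two simulated lines of dpkg -l output (the package lines are made up); the 'rc' status means "removed, configuration files remain", and the second field is the package name:

```shell
# Simulated `dpkg -l` output lines
sample='ii  bash   5.0-6  amd64  GNU Bourne Again SHell
rc  torcs  1.3.7  amd64  3D racing cars simulator'
leftovers=$(echo "$sample" | grep '^rc' | awk '{print $2}')
echo "$leftovers"
# prints: torcs
```

Only the 'rc' line survives the grep, so xargs dpkg --purge is handed just the removed-but-not-purged packages.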
Search commands
-
apt-cache search <search_term>
Each package has a name and a description. This command lists packages whose name or description
contains <search_term>.
-
dpkg -l *<search_term>*
This command is similar to apt-cache search, but also shows whether a package is installed on
your system by marking it with ii (installed) and un (not installed).
-
apt-cache show <package_name>
This command shows the description of package <package_name> and other relevant information
including version, size, dependencies and conflicts.
-
dpkg --print-avail <package_name>
This command is similar to "apt-cache show".
-
dpkg -L <package_name>
This command will list files in package <package_name>.
-
dpkg -c foo.deb
This command lists the files in the package "foo.deb". Note that foo.deb is a pathname.
Use this command on .deb packages that you have manually downloaded.
-
dlocate <package_name>
This command determines which installed package owns <package_name>. It shows files from installed
packages that match <package_name>, with the name of the package they came from. Consider this
to be a "reverse lookup" utility.
In order to use this command, the package dlocate must be installed on your system.
-
dpkg -S <filename_search_pattern>
This command does the same as dlocate, but does not require the installation of any
additional packages. It is slower than dlocate but has the advantage of being installed
by default on all Debian and Ubuntu systems.
-
apt-file search <filename_search_pattern>
This command acts like dlocate and dpkg -S , but searches all available packages.
It answers the question, "what package provides this file?".
In order to use this command, the package apt-file must be installed on your system.
-
apt-cache pkgnames
This command provides a listing of every package in the system.
-
A general note on searching: If searching generates a list that is too long, you can filter
your results by piping them through the command grep . Examples:
-
apt-cache search <filename> | grep -w <filename>
will show only the files that contain <filename> as a whole word
-
dpkg -L package | grep /usr/bin
will list files located in the directory /usr/bin, useful if you're looking for a particular
executable.
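The -w behavior is worth seeing on canned input: it anchors the match at word boundaries, and a hyphen counts as a boundary. A minimal demonstration (the names are made up):

```shell
# "foo" and "foo-dev" match as whole words; "foobar" does not
printf 'foobar\nfoo\nfoo-dev\n' | grep -w foo
```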
For more information on apt-get, apt-cache and dpkg consult their manual pages by using the
man command. These manuals will provide a wider scope of information in addition to all
of the options that you can use with each program.
-
Example:
man apt-get
.
Typical usage example
I want to feel the wind in my hair, I want the adrenaline of speed. So let's install a racing
game. But what racing games are available?
apt-cache search racing game
It gives me a lot of answers. I see a game named "torcs". Let's get some more information on this
game.
apt-cache show torcs
Hmmm... it seems interesting. But is this game not already installed on my computer? And what
is the available version? Which repository is it from (Universe or Main)?
apt-cache policy torcs
Ok, so now, let's install it!
apt-get install torcs
What is the command I must type in the console to launch this game? In this example, it's straightforward
("torcs"), but that's not always the case. One way of finding the name of the binary is to look at
what files the package has installed in "/usr/bin". For games, the binary will be in "/usr/games".
For administrative programs, it's in "/usr/sbin".
dpkg -L torcs | grep /usr/games/
The first part of the command displays all files installed by the package "torcs" (try it). With
the second part, we ask grep to display only the lines containing "/usr/games/".
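On simulated output the filtering step looks like this (the file list is hypothetical); note that the string /usr/games/ does not match /usr/share/games/ paths:

```shell
# Simulated `dpkg -L torcs` output
files='/usr/share/doc/torcs/copyright
/usr/games/torcs
/usr/share/games/torcs/data'
echo "$files" | grep /usr/games/
# prints only: /usr/games/torcs
```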
Hmmm, that game is cool. Maybe there are some extra tracks?
apt-cache search torcs
But I'm running out of space. I will delete the apt cache!
apt-get clean
Oh no, my mother asked me to remove all games from this computer. But I want to keep the configuration
files so I can simply re-install it later.
apt-get remove torcs
If I also want to remove the configuration files:
apt-get purge torcs
Setting up apt-get to use a http-proxy
These are three methods of using apt-get with an HTTP proxy.
Temporary proxy session
This is a temporary method that you can manually use each time you want to use apt-get through
an HTTP proxy. This method is useful if you only want to use the proxy temporarily.
Enter this line in the terminal prior to using apt-get (substitute your details for yourproxyaddress
and proxyport).
export http_proxy=http://yourproxyaddress:proxyport
If you normally use sudo to run apt-get, you will need to log in as root first for this to work,
unless you also add some explicit environment settings to /etc/sudoers, e.g.
Defaults env_keep = "http_proxy https_proxy ftp_proxy"
APT configuration file method
This method uses the apt.conf file, which is found in your /etc/apt/ directory. This method is
useful if you only want apt-get (and not other applications) to use an HTTP proxy permanently.
On some installations there will be no apt.conf file set up. This procedure will either edit
an existing apt.conf file or create a new one.
gksudo gedit /etc/apt/apt.conf
Add this line to your /etc/apt/apt.conf file (substitute your details for yourproxyaddress
and proxyport).
Acquire::http::Proxy "http://yourproxyaddress:proxyport";
Save the apt.conf file.
BASH rc method
This method adds two lines to the .bashrc file in your $HOME directory. This method is useful
if you would like apt-get and other applications (for instance wget) to use an HTTP proxy.
gedit ~/.bashrc
Add these lines to the bottom of your ~/.bashrc file (substitute your details for yourproxyaddress
and proxyport)
http_proxy=http://yourproxyaddress:proxyport
export http_proxy
Save the file. Close your terminal window and then open another terminal window or source the
~/.bashrc file:
source ~/.bashrc
Test your proxy with sudo apt-get update and whatever networking tool you desire. You can use
firestarter or conky to see active connections.
If you make a mistake and go back to edit the file again, you can close the terminal and reopen
it or you can source ~/.bashrc as shown above.
source ~/.bashrc
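The effect of sourcing can be sketched with a scratch file instead of ~/.bashrc (the proxy address is hypothetical):

```shell
cat > /tmp/proxy_rc <<'EOF'
http_proxy=http://proxy.example.com:3128
export http_proxy
EOF
. /tmp/proxy_rc          # same effect as `source /tmp/proxy_rc` in bash
echo "$http_proxy"
# prints: http://proxy.example.com:3128
```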
How to log in to a proxy
If you need to log in to the proxy server, this can be achieved in most cases by using the
following layout when specifying the proxy address in http_proxy (substitute your details for
username, password, yourproxyaddress and proxyport):
http_proxy=http://username:password@yourproxyaddress:proxyport
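One caveat worth adding: if the password itself contains characters such as '@' or ':', they must be percent-encoded or the URL will be mis-parsed. A minimal sketch with made-up credentials:

```shell
user='alice'
pass='p@ss:1'                      # hypothetical password with reserved characters
# encode '%' first, then '@' and ':'
enc=$(printf '%s' "$pass" | sed -e 's/%/%25/g' -e 's/@/%40/g' -e 's/:/%3A/g')
echo "http_proxy=http://${user}:${enc}@proxy.example.com:3128"
# prints: http_proxy=http://alice:p%40ss%3A1@proxy.example.com:3128
```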
Introduction
As some of you may know I have recently started a new blog called "My Ubuntu
Blog" (www.myubuntublog.com).
Some of you may therefore be alarmed by the title of this post as leaving Ubuntu
would be a strange decision to make having committed to a whole blog on the
subject.
This article is actually a guest post from Paul Smith who left a well thought
out and well written comment at the bottom of the article "Is Unity Bashing a
hobby?".
Having read the comment I made the decision that it was too good to languish at
the bottom of the post and so I asked for Paul's permission to publish his
comment as a full article on this site, which is about Linux in general.
So without further ado here is Paul Smith's article "Why I Left Ubuntu".
Why I left Ubuntu
I was a great fan of Ubuntu and Canonical. I loved the pre-Unity versions of
Ubuntu. I found the last Gnome 2 version to be especially functional and
polished.
When Canonical switched to Unity on 11.04, I tried it and mostly liked it.
Admittedly, there were some issues but I really liked the fact that Unity did a
better job of maximizing the screen real-estate available to applications than
any other desktop environment I have used previously. I was hopeful that the
wrinkles in Unity would be worked out in the next version and was just about
ready to pay for support from Canonical for all the systems in my home, mostly
as a thank you, when Ubuntu 11.10 came out.
Ubuntu 11.10 seemed to be a lot buggier overall. Unity would do weird things to
my applications and sometimes make the desktop unusable, forcing me to drop down
to the shell to restart X. Pulse audio on this version was a dog and would
simply not work with a sound card I'd been using successfully on Linux for about
5 years. I also discovered a number of newly introduced library compatibility
issues that broke some of the commercial software I needed for my job.
The final straws for me were Canonical's decision not to include snd-pcm-oss as a
kernel module (which I discovered with Ubuntu 12.04), breaking ALSA's OSS
emulation, and the inclusion of Amazon search.
I now use Scientific Linux with the Trinity desktop since I really liked KDE 3.
I find that I can easily get everything to work with that distribution and it is
extremely stable. At this point the only thing I miss is the old Synaptic
package manager and some features of the Debian package file format.
Why Mir concerns me
I do a significant amount of technical computing. I couldn't care less whether the
same OS runs on both my desktop and my phone or tablet. I need a desktop that
provides a good environment for code development, modeling, as well as a limited
amount of CAD. I do this work on machines at work and, to a lesser extent, on my
home systems. Until my phone or tablet can support a large amount of DRAM, many
cores, and can plug into a keyboard and several large monitors, I don't see
myself migrating away from a desktop. Canonical's direction appears to be to
water down the desktop experience in order to make it more like the phone, the
same bad mistake Microsoft made with Windows 8. Developing Mir is a result of
this direction. I expect to use my desktop and phone for very different tasks
and couldn't care less whether they use the same OS. As a user, Mir does not appear to
offer me anything of real value.
Given the direction Canonical is taking, I am very concerned that NVIDIA
and/or AMD will make X and Wayland second class citizens in favor of Mir. I
would love to use Nouveau and similar open source drivers; however, they're not
functional enough yet, either for the software I use for my job or for
recreational use with games, some of which I purchased from Loki Games back in the late 1990s.
I am thrilled that Steam and other game developers are beginning to fully
embrace Linux and would like to spend some money on these games. Assuming these
games are coded to work exclusively with Mir, then buying these Linux games is
not an option unless other distributions such as Scientific Linux, Fedora, or
Debian also migrate to Mir. For legacy games such as the ones produced by Loki
Games, I am concerned that Mir may not emulate X well enough. Full support for
legacy applications that depend on X will be more of an issue if the Linux
community's effort to develop X emulation is split between Wayland and Mir.
Given the direction of the rest of the Linux ecosystem to standardize on
Wayland as well as the concern over OpenGL support from NVIDIA and AMD, I really
wish Canonical would have worked with the Wayland team to reach their goals
rather than going their own direction. In my opinion, trying to make the same OS
and applications work on both big iron and small phones or tablets is just silly
at this point. Given this, fragmenting the Linux ecosystem right now to save a
little power on low-end devices is just plain stupid. I understand Canonical's
argument for the other issues, such as the desire for a more extensible input
system; however, Canonical should have been able to work through these issues
with the Wayland team.
About the author
Paul Smith is an electrical engineer with 23 years of post-college experience.
He wrote his first program on a MOS Technology KIM-1 in 1979. Paul has been
using Linux since 1998.
Summary
I would like to thank Paul for allowing me to post this article and I hope you
enjoyed it as much as I did.
If you think that you have an article worth posting on this site please feel
free to get in touch at [email protected].
Bug #504957: sysv initscripts not started on boot
Binary package hint: upstart
On boot, upstart isn't starting the legacy sysv initscripts.
The `runlevel` command returns 'unknown'.
I have a workaround:
axa@artemis:~$ runlevel
unknown
axa@artemis:~$ sudo telinit 2
axa@artemis:~$ runlevel
N 2
The debug information collected with ubuntu-bug is from before switching
runlevels.
ProblemType: Bug
Architecture: i386
Date: Fri Jan 8 21:21:57 2010
DistroRelease: Ubuntu 9.10
ExecutablePath: /sbin/init
Package: upstart 0.6.3-11
ProcEnviron: PATH=(custom, no user)
ProcVersionSignature: Ubuntu 2.6.31-16.53-generic
SourcePackage: upstart
Uname: Linux 2.6.31-16-generic i686
[Jan 10, 2010] By default sshd is not installed on Ubuntu. sshd can
be installed using Synaptic
[Jan 10, 2010] sshd can be used for remote access instead of VNC
[Dec 13, 2009] vsftpd has the option force_dot_files=YES
[Dec 13, 2009] Synaptic is an OK tool for adding and removing packages.
[Dec 12, 2009] The FTE editor package is available on Ubuntu
[Dec 12, 2009] MC on Ubuntu comes with a wrong default setting -- an external editor
(nano). To change it, go to Options and set the internal editor
[Dec 12, 2009] Strange RC settings -- no K files
:/home/serg# cd /etc/init.d
:/etc/init.d# ls
acpid dns-clean procps stop-bootlogd
acpi-support gdm pulseaudio stop-bootlogd-single
alsa-utils grub-common rc udev
anacron hal rc.local udev-finish
apparmor halt rcS udevmonitor
apport hwclock README udevtrigger
atd hwclock-save reboot ufw
avahi-daemon kerneloops rsync umountfs
binfmt-support keyboard-setup rsyslog umountnfs.sh
bluetooth killprocs rsyslog-kmsg umountroot
bootlogd laptop-mode saned unattended-upgrades
brltty module-init-tools screen-cleanup urandom
console-setup networking sendsigs usplash
cron network-manager single wpa-ifupdown
cups ondemand skeleton x11-common
dbus pcmciautils speech-dispatcher
dmesg pppd-dns sreadahead
:/etc/init.d# cd /etc/rc1.d
:/etc/rc1.d# ls
K15pulseaudio K20saned K99laptop-mode S70pppd-dns
K20acpi-support K20speech-dispatcher README S90single
K20kerneloops K74bluetooth S30killprocs
K20rsync K80cups S70dns-clean
:/etc/rc1.d# cd /etc/rc2*
:/etc/rc2.d# ls
README S50cups S70dns-clean S99grub-common
S20kerneloops S50pulseaudio S70pppd-dns S99laptop-mode
S20speech-dispatcher S50rsync S90binfmt-support S99ondemand
S25bluetooth S50saned S99acpi-support S99rc.local
:/etc/rc2.d# cd /etc/rc3*
:/etc/rc3.d# ls
README S50cups S70dns-clean S99grub-common
S20kerneloops S50pulseaudio S70pppd-dns S99laptop-mode
S20speech-dispatcher S50rsync S90binfmt-support S99ondemand
S25bluetooth S50saned S99acpi-support S99rc.local
:/etc/rc3.d# cd /etc/rc5*
:/etc/rc5.d# ls
README S50cups S70dns-clean S99grub-common
S20kerneloops S50pulseaudio S70pppd-dns S99laptop-mode
S20speech-dispatcher S50rsync S90binfmt-support S99ondemand
S25bluetooth S50saned S99acpi-support S99rc.local
Servers -- whether used for testing or production -- are primary
targets for attackers. By taking the proper steps, you can turn a vulnerable
box into a hardened server and help thwart outside attackers. Learn
how to secure SSH sessions, configure firewall rules, and set up intrusion
detection to alert you to any possible attacks on your GNU/Linux server.
Once you've gained a solid foundation in the basics of securing your
server, you can build on this knowledge to further harden your systems.
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org
was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP)
without any remuneration. This document is an industrial compilation designed and created exclusively
for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong
to respective owners. Quotes are made for educational purposes only
in compliance with the fair use doctrine.
Last modified:
March 12, 2019