The general concept of a logical volume manager (LVM) goes back to the long-gone era of 40MB drives (not 40GB, just forty megabytes), which created a legitimate desire for filesystems that span several physical disks. Another driver was the desire to change the size of existing partitions on the fly. Yet another important "desirable" feature is the ability to create snapshots. See Snapshots.
With the current size of hard drives, when even laptops usually have at least 320GB drives, using LVM on the desktop is just an additional danger to your data, as recovery is more complex (especially if you use striping). Also, with a second drive you can copy and resize ext3/ext4 partitions almost as easily as with LVM, using imaging programs such as Acronis. The only important and useful thing LVM provides on the desktop is snapshots. If you know how to use them, or want to learn, LVM can be a valuable asset. Otherwise avoid it.
But LVM still has an important place in the server environment, where the ability to do things on the fly that are impossible with regular partitions represents great value. The ability to resize partitions on the fly is still very important on a server, especially if you have a highly dynamic setup. LVM is also a must if you use multipath (a situation typical when you use a SAN). Otherwise the naming of partitions with the Linux device-mapper is plain vanilla bizarre and you would curse Linus Torvalds way too often ;-).
But there is no free lunch, and because of this you generally should limit LVM to cases where it provides additional value, first of all to those partitions whose size changes often and unpredictably. Static partitions (especially the root partition) are much better off without LVM.
Excessive zeal in using LVM when you don't need it can badly burn you, as recovery of data with LVM after corruption is more (in the case of striping, much more) difficult and time consuming.
The Linux LVM implementation is similar to the HP-UX LVM implementation, although the actual code probably has more in common with the AIX implementation. Both of them are derivatives of Veritas technology (VxFS, the Veritas filesystem, and the accompanying Veritas Volume Manager), originally developed by VERITAS Software as a proprietary, closed-source product. Dan Koren is cited as one of the original developers of VxFS. This product was first released in 1991, the same year Linux was born. The current version (Veritas was acquired by Symantec, the company famous for destroying most of its acquired products :-) supports built-in deduplication, compression in primary storage, and migration across operating systems without downtime.
The Veritas filesystem became almost a standard part of Solaris (since at least 1993): it was so widely used as a primary filesystem that most large servers had it installed, and knowledge of VxFS was a requirement for any Solaris sysadmin job. Despite this tremendous popularity it was not licensed by Sun, and organizations had to pay a license fee to Veritas, a huge blunder by Sun brass. Later it was licensed and integrated into HP-UX (via an OEM agreement, as a primary filesystem; one of the few things HP-UX did right ;-), AIX and SCO UNIX. Unlike Solaris users, users of those OSes can use it for free.
Veritas Volume Manager code has also been used (in extensively modified form and without command-line utilities) in Windows.
Linux LVM was not an original project, but a plain vanilla reimplementation. It was originally written (adapted from IBM code?) in 1998 by Heinz Mauelshagen. Some code was donated by IBM [IBM pitches its open source side]; it is unclear whether it is still used. See Enterprise Volume Management System - Wikipedia:
IBM has donated technology, code and skills to the Linux community, Kloeckner said, citing the company's donation of the Logical Volume Manager and its Journaling File System.
Matthew O'Keefe, who from 1990 to May 2000 taught and performed research in storage systems and parallel simulation software as a professor of electrical and computer engineering at the University of Minnesota, founded Sistina Software in May 2000 to develop storage infrastructure software for Linux, including the Linux Logical Volume Manager (LVM). Sistina created LVM2. The company was acquired by Red Hat in December 2003 and the code was released under the GPL. LVM2 is still the version used today in both RHEL and SLES.
The quality and architectural integrity of the current LVM implementation is low and reflects the low quality of the Linux I/O and filesystem layer in general. Recovery utilities for cases of severe malfunction are limited. Linux LVM is the source of many difficult-to-resolve problems, including, but not limited to, a production server becoming unbootable after regular patches (yes, that did happen in SLES). One example of excessive zeal is putting the root partition on LVM.
For a regular sysadmin who does not have much LVM experience, the sense of desperation and the chill down the spine when an LVM-based partition goes south on an important production server dampens all the advantages that LVM provides. You can find pretty interesting and opinionated tidbits about such situations on the Net, for example this emotional statement in the discussion thread dev-dm-0?:
I only use those for mounting flash drives, and mapping encrypted partitions. Sorry, I don't do LVM anymore, after a small problem lost me 300GB of data. Its much easier to backup.
It is important to understand that LVM adds an additional layer of complexity and greatly complicates recovery of corrupted data. As such it makes the quality and regularity of backups paramount. In other words, daily backups are a must.
So my recommendation is to avoid using it just because it is fashionable, especially on OS partitions where you do not need the flexibility it provides. You should have real reasons to install it. The RHEL installer now suggests it as the default option for reasons unknown to me -- probably because Red Hat is now a member of the S&P 500, a privileged club that includes a lot of financial companies, and a bad example is more easily emulated than a good one. Red Hat now probably does not care one bit about its customers, like most financial companies ;-).
So use it only for data partitions, unless you really need some of the functions it provides. OS partitions are pretty static, and with the current size of hard drives it is easy to oversize them so that even significant changes do not affect the initial setup. And use it on part of the drive, not indiscriminately for all partitions. Just add 4-8GB to each partition based on the sizes of an already installed similar server and you generally will be fine. In particular, /var should not be oversized, as older logs can always be moved to another volume with a cron job.
One of the most serious problems with LVM is that recovery of LVM-controlled partitions is more complex and time consuming. It helps if the installation DVD rescue mode automatically recognizes the LVM volume group, which is the case for SUSE.
I never understood the rationale for using LVM on home PCs, home servers, or any servers for which a reboot and some downtime are not a problem. You voluntarily introduce a complex and not very reliable subsystem that threatens your data, a threat that can be mitigated only by religiously making backups in the best enterprise style (and buying corresponding equipment such as RAID 1 enclosures and controllers, which are not cheap). The question arises: for the sake of what?
So the first rule is that LVM makes sense mostly in an enterprise environment, where the ability to extend a partition on the fly without shutting the server down (and other similar features provided by LVM) is essential, and where the reliability of components is usually higher than in home hardware or "make IT cheap" startups. Among the cases where LVM is essential are multipath setups and partitions whose size changes often and unpredictably.
All in all, the current LVM is a pretty convoluted implementation of a three-tier storage hierarchy (physical volumes, volume groups, logical volumes). Such an implementation, from both architectural and efficiency standpoints, is somewhat inferior to integrated solutions like ZFS.
Putting the root partition under LVM is a risky decision and you may pay the price unless you are an LVM expert. In a large enterprise environment, if the partition is not on SAN or NAS, extending a partition usually means adding a new pair of hard drives. In such cases creating a cpio archive of the partition, recreating the partitions, and restoring the content is a better deal, as such cases happen once in several years. Another path to avoiding LVM is to start with it, optimize the sizes of the partitions based on actual usage, then move all content to a second pair of drives without LVM, modify /etc/fstab, and replace the original drives with the new pair.
Putting the root filesystem under LVM often happens if the first partition is a service partition (for example, the Dell service partition). In this case the swap partition and boot partition take another two primary partitions, the extended partition is the last one, and it is usually put completely under LVM. In this case it is better to allocate the swap partition on a different volume or as a file (with the current size of RAM it is rarely used on servers anyway), so that the root partition can be a primary partition.
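For reference, a minimal sketch of replacing a swap partition with a swap file (the path and size below are illustrative, not from any particular setup):

dd if=/dev/zero of=/swapfile bs=1M count=4096    # create a 4GB file for swap
chmod 600 /swapfile                              # restrict access to root
mkswap /swapfile                                 # format it as swap space
swapon /swapfile                                 # enable it immediately
echo "/swapfile none swap defaults 0 0" >> /etc/fstab   # make it persistent across reboots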
If you have the root filesystem on an LVM volume you need to train yourself to use a recovery disk and mount those partitions. It also helps to have a separate backup of /etc/lvm on CD or other media. Among other things it contains the file describing the structure of your LVM volume group, for example /etc/lvm/backup/vg01.
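A minimal sketch of what this looks like from a rescue environment (the volume group name vg01, the logical volume name root, and the mount point are illustrative):

vgscan                                  # locate volume groups on the attached disks
vgchange -a y vg01                      # activate the volume group
lvscan                                  # list the logical volumes and confirm they are active
mount /dev/vg01/root /mnt               # mount the root logical volume for repair
# if the LVM metadata itself is damaged, it can be restored from the backup copy:
# vgcfgrestore -f /etc/lvm/backup/vg01 vg01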
Don't put operating system partitions on the same logical volume groups as data
Some LVM operations cannot be done on logical volumes while they are active. That means you increase the flexibility of your environment if you put the partitions that cannot be unmounted on a running system in a different volume group than your data. The root and /var partitions generally cannot be unmounted; /tmp is close to that too. A sketch of such a split layout is shown below.
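A sketch of that layout (device and group names are illustrative): the OS partitions go into one volume group and the data into another, so the data group can be manipulated while the OS volumes stay untouched.

pvcreate /dev/sda2 /dev/sdb1
vgcreate vg_os   /dev/sda2     # root, /var, swap live here
vgcreate vg_data /dev/sdb1     # application and user data live here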
LVM2 is identical in Red Hat and SUSE, although each has a different GUI for managing volumes. The installers for both Red Hat and SUSE are LVM-aware.
Although the Linux volume manager works OK and is pretty reliable, the documentation sucks badly for a commercial product. The most readable documentation that I have found is the article by Klaus Heinrich Kiwi, Logical volume management, published at IBM DeveloperWorks on September 11, 2007; unfortunately, it is now somewhat outdated. A good cheatsheet is available from Red Hat. I slightly reformatted and adapted it; see the modified version, LVM Cheatsheet.
Moreover, in RHEL the GUI interface is almost unusable, as the left pane cannot be enlarged. YaST in SUSE 10 and 11 was a much better deal.
The LVM hierarchy includes the Physical Volume (PV) (typically a hard disk or partition, though it may well just be a device that 'looks' like a hard disk, e.g. a RAID device), the Volume Group (VG) (the new virtual disk that can contain several physical disks), and Logical Volumes (LV), the equivalent of disk partitions in a non-LVM system. The Volume Group is the highest-level abstraction used within LVM, for example:
hda1   hdc1          (PVs: partitions or whole disks)
   \   /
    \ /
  diskvg             (VG)
  / |  \
 /  |   \
usrlv syslv varlv    (LVs)
  |     |     |
ext3  ext3   xfs     (filesystems)
The lowest level in the LVM storage hierarchy is the Physical Volume (PV). A PV is a single device or partition and is created with the command pvcreate device. This step initializes a partition for later use. During this step each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents of the volume group.
Multiple Physical Volumes (initialized partitions) are merged into a Volume Group (VG). This is done with the command vgcreate volume_name device [device ...]. This step also registers volume_name with the LVM kernel module, making it accessible to the kernel I/O layer.
First you need to create a physical volume with the pvcreate command. Then you can create a volume group:
vgcreate test-volume /dev/hda2
A Volume Group is a pool from which Logical Volumes (LV) can be allocated. An LV is the equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such, the LV can contain a file system (e.g. /home). Creating an LV is done with the lvcreate command.
The terminology is summarized above. For example, to initialize a whole disk as a physical volume:
pvcreate /dev/hdb
This creates a volume group descriptor at the start of the second IDE disk. You can initialize several disks and/or partitions at once; just list on the command line all the disks and partitions you wish to format as PVs.
The system internally numbers the extents for both logical and physical volumes. These are called logical extents (or LEs) and physical extents (or PEs), respectively. When a logical volume is created a mapping is defined between logical extents (which are logically numbered sequentially starting at zero) and physical extents (which are also numbered sequentially).
To provide acceptable performance the extent size must be a multiple of the actual disk cluster size (i.e., the size of the smallest chunk of data that can be accessed in a single disk I/O operation). In addition some applications (such as Oracle database) have performance that is very sensitive to the extent size. So setting this correctly also depends on what the storage will be used for, and is considered part of the system administrator's job of tuning the system.
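If the default 4MB extent size is not appropriate, it can be set at volume group creation time with the -s option of vgcreate (the size and device below are illustrative):

vgcreate -s 32M vg01 /dev/sdb1    # use 32MB physical extents instead of the 4MB default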
LVM exposes its functionality via a set of command-line utilities or a GUI interface. We will discuss the command-line utilities; they provide a much wider spectrum of operations than the GUI. They can be classified into several categories:
pvscan -- Show properties of existing physical volumes
When you first get to a server that is new to you, or a server you have not worked with for a while, the first thing to do is create a map of the LVM environment. This might help prevent errors or blunders in subsequent work.
There are three commands that can help you in this task (a combined sketch appears after the pvscan options below):
pvscan [-d|--debug] [-e|--exported] [-h|--help] [--ignorelockingfailure] [-n|--novolumegroup] [-s|--short] [-u|--uuid] [-v[v]|--verbose [--verbose]]
pvscan scans all supported LVM block devices in the system for physical volumes.
See lvm for common options.
- -e, --exported
- Only show physical volumes belonging to exported volume groups.
- -n, --novolumegroup
- Only show physical volumes not belonging to any volume group.
- -s, --short
- Short listing format.
- -v[v], --verbose [--verbose]
- Verbose output; repeat for more detail.
- -u, --uuid
- Show UUIDs (Uniform Unique Identifiers) in addition to device special names.
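As a combined sketch, running the scan commands in sequence gives a quick map of the whole LVM environment (output will of course differ per server):

pvscan      # physical volumes and the volume group each belongs to
vgscan      # volume groups found on the system
lvscan      # logical volumes and whether they are active

The newer pvs, vgs and lvs commands produce a more compact, one-line-per-object report and are often more convenient for this purpose.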
vgdisplay vg0 | grep "Total PE"
More space may be added to a VG by adding new devices with the vgextend command. The following is adapted from A Walkthrough of the LVM for Linux:
To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command.
Here is how to create volume group vg0 out of a single partition /dev/mapper/mparts_part7 (which, for example, could previously have been a /home partition or a little-used swap partition that you decided to replace with a swap file):
pvcreate /dev/mapper/mparts_part7
pvscan
vgcreate vg0 /dev/mapper/mparts_part7
vgdisplay
Use the "vgcreate
" program to group selected PVs into VGs, and to optionally set the
extent size (the default is 4MB). You can use command vgextend instead of vgcreate:
vgextend vg0 /dev/mapper/mparts_part7
You can also specify the extent size with vgcreate using the "-s size" option, if the 4MB default is not what you want. The size is a value followed by one of k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). In addition, you can put limits on the number of physical or logical volumes the volume group can have. You may want to change the extent size for performance, administrative convenience, or to support very large logical volumes. (Note that there may be kernel and/or application limits on the size of LVs and files on your system; for example, the Linux 2.4 kernel has a maximum size of 2TB.)
The "vgcreate
" command adds some information to the headers of the included PVs.
However the kernel modules needed to use the VGs as disks aren't loaded yet, and thus the kernel doesn't
"see" the VGs you created. To make the VGs visible you must activate them. Only
active volume groups are subject to changes and allow access to their logical volumes.
To activate a single volume group vg0
, use the command:
vgchange -a y /dev/vg0
("-a
" is the same as "--available
".) To active all volume groups
on the system use:
vgchange -a y
If you did something wrong you can remove the volume group with the vgremove command. After the volume group is created you can create individual logical volumes (virtual partitions) with the lvcreate command (see also The Linux Logical Volume Manager, Red Hat).
For further reading see also the outdated but still useful A Beginner's Guide To LVM - Page 2 and RHEL 4.3. Volume Group Administration.
vgextend vg01 /dev/hda6

You can check the results with the command:
vgdisplay -v vg01
lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6
Specifying the PVs on the command line tells LVM which PEs to use, while the -i2 option tells it to stripe across the two. You now have an LV striped across two PVs!
In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use!
This will impact performance, but it has proved to be useful if you need to expand partitions, and in several other cases. For example, it allows you to expand the /var partition, which is pretty difficult to expand by other means since you generally can't unmount it. You first move all the logical volumes to a new physical volume (which can even be created on a USB drive) and then move them back one by one, expanding those which need to be expanded and shrinking those which can be shrunk. For example, let's move lv01 from hda5 to hda6.
The pvmove command can be used in several ways to move any LV to another physical volume on a different set of physical disks. Unlike GParted, LVM can move a partition while it is in use, and will not corrupt your data if it is interrupted.
For example:
pvmove -n /dev/vg01/lv01 /dev/hda5 /dev/hda6
will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it's done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!
Look at the volume group and notice that the PEs on /dev/hda5 are now unused. This operation can be classified simultaneously as an operation on a logical volume and on a volume group. It is described in more detail in How to remove LVM logical volume (virtual partition).
vgdisplay shows volume groups one by one and provides information about free disk space in each:
vgdisplay volume_group_one | grep "Total PE"
vgcreate vg01 /dev/hda2 /dev/hda10
  Volume group "vg01" successfully created
One of the most important advantages of LVM is that it allows a sysadmin to perform pretty powerful "partition acrobatics". Moreover, this can be done on the fly if the server is lightly loaded (which is the typical situation for most servers at night). Even though an LV is bound to particular PVs (physical partitions or disks), you can move an entire LV from one PV to another, even while the disk is mounted and in use!
lvcreate -L 5G -n data vg02
  Logical volume "data" created
mkfs -t ext3 /dev/vg02/data
mkdir /data
mount /dev/vg02/data /data/
df -h /data
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vg02-data  5.0G   33M  5.0G   1% /data
You can create a shell function to simplify this task if you need to create many similar partitions, as is often the case with Oracle databases. For example:
# Create Oracle archive filesystem
# Parameters:
#   1 - name of the database (archive directory)
#   2 - size in gigabytes
#   3 - name of the volume group (default vg0)
function make_archive
{
   mkdir -p /oracle/$1/archive
   chown oracle:dba /oracle/$1/archive
   lvcreate -L ${2}G -n archive ${3:-vg0}
   mkfs -t ext3 /dev/${3:-vg0}/archive
   echo "/dev/${3:-vg0}/archive /oracle/$1/archive ext3 defaults 1 2" >> /etc/fstab
   mount /oracle/$1/archive    # this also verifies the fstab entry and the mount point
   df -k
}

See also LVM Cheatsheet.
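A hypothetical invocation of the function above (database name, size, and volume group are illustrative) would create a 20GB ext3 archive filesystem mounted at /oracle/PROD01/archive:

make_archive PROD01 20 vg0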
A file system on an LVM partition may be extended. For example:
lvextend -L +80G /dev/mapper/vg00-sge
  Extending logical volume sge to 90.00 GB
  Logical volume sge successfully resized
Here is a typical df map of a server with the volume manager installed. As you can see, the LVM-based partitions are referred to via paths of the form /dev/mapper/<volume_group>-<logical_volume>, for example /dev/mapper/vg00-var:
# df -l
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p3     31738420   4497848  25602344  15% /
/dev/mapper/vg00-var  15870920    351348  14700372   3% /var
/dev/cciss/c0d0p2     99188500   3149924  90918664   4% /home
/dev/mapper/vg00-tmp   7935392    320524   7205268   5% /tmp
/dev/cciss/c0d0p1       497829     31672    440455   7% /boot
tmpfs                  3051112         0   3051112   0% /dev/shm
/dev/mapper/vg00-sge  10321208   8225504   1571416  84% /sge
After extending the volume group and the logical volume, it is possible to resize the file system on the fly. This is done using resize2fs.
Let's get a DF map of the server first:
# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p3     31738420   4497848  25602344  15% /
/dev/mapper/vg00-var  15870920    351364  14700356   3% /var
/dev/cciss/c0d0p2     99188500   3149924  90918664   4% /home
/dev/mapper/vg00-tmp   7935392    320524   7205268   5% /tmp
/dev/cciss/c0d0p1       497829     31672    440455   7% /boot
tmpfs                  3051112         0   3051112   0% /dev/shm
/dev/mapper/vg00-sge  10321208   7272668   2524252  75% /sge
Now we can perform the resizing (it's better to back up the data first, just in case, but this step is omitted for brevity):
# resize2fs /dev/mapper/vg00-sge
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/vg00-sge is mounted on /sge; on-line resizing required
Performing an on-line resize of /dev/mapper/vg00-sge to 23592960 (4k) blocks.
The filesystem on /dev/mapper/vg00-sge is now 23592960 blocks long.
Now let's check results via DF map:
# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p3     31738420   4497848  25602344  15% /
/dev/mapper/vg00-var  15870920    351368  14700352   3% /var
/dev/cciss/c0d0p2     99188500   3149924  90918664   4% /home
/dev/mapper/vg00-tmp   7935392    320524   7205268   5% /tmp
/dev/cciss/c0d0p1       497829     31672    440455   7% /boot
tmpfs                  3051112         0   3051112   0% /dev/shm
/dev/mapper/vg00-sge  92891128   7285756  80887804   9% /sge
For more information see Resizing the file system
I have the following situation:
My current Ubuntu installation is running from an external HDD (250 GB) because I was to lazy to buy an new internal hdd. Now i've got a new internal (120GB) and i want to move everything to the internal. Installing Ubuntu new is out of disscussion because its to peronalized.
Luckily (i hope so) the root partition is partitioned with LVM, so i hope i can move the partition to the smaller internal HDD.
Is this possible? And where do i find help?
As you suspect, this is extremely elegant to do using LVM:
- pvcreate it
- use vgextend to add it to the same vg as your root partition
- pvmove to transparently move all data over
- vgreduce to remove your external hd from your vg
- update-grub and grub-install to make your new root disk bootable

Done.
First, if you used the whole 250GB disk for your current installation, you'll need to shrink it to fit the 120GB disk. You can only shrink an ext4 filesystem while it's unmounted, so you'll need to boot off an Ubuntu live system (CD or USB), or a specialized maintenance live system such as GParted live. You can use resize2fs or GParted to resize the existing filesystem.

Once you've shrunk the filesystem(s) of your existing installation to fit on the new disk, you can do the rest of the move with the filesystem mounted if you like. If the existing filesystem fits on the new disk, you can do the transfer without unmounting anything or rebooting.
In the following description, I'll show how to move from the physical volume /dev/sdb1 to the physical volume /dev/sda1, with an existing volume group called oldvg. Be sure to adjust the disk letters and partition numbers to match your system.

To do a live transfer:
- Partition the new disk, using the partitioning tool of your choice (cfdisk, fdisk, parted, …). See e.g. How do I add an additional hard drive?
- Create a physical volume on the new disk:
pvcreate /dev/sda1
- Add this physical volume to the existing volume group containing the logical volume(s) you want to move:
vgextend oldvg /dev/sda1
- Move the logical volumes from one physical volume to another:
pvmove /dev/sdb1 /dev/sda1
- Split the existing volume group in two:
vgsplit oldvg newvg /dev/sda1
Another method is to make the existing logical volume(s) a mirror volume with lvconvert --mirror, set up a mirror on the new disk, then split the mirrors with lvconvert --splitmirrors. This way, you end up with two copies of your data, and after the split each copy leads its own life.

After you've done the copy, you'll need to make the new disk bootable. Mount the filesystem for this. Assuming it's mounted on /mnt, run these commands as root:

chroot /mnt
# if the name of the volume group has changed, edit /etc/fstab
update-grub
grub-install /dev/sda
Alternatively, you might be able to use Clonezilla. This is a powerful disk manipulation and cloning tool, and I think it covers your situation, but I have no experience with it.
Here is a quote from the RHEL/CentOS documentation:
4.4.6. Removing Logical Volumes
To remove an inactive logical volume, use the lvremove command. You must close a logical volume with the umount command before it can be removed. In addition, in a clustered environment you must deactivate a logical volume before it can be removed.
If the logical volume is currently mounted, unmount the volume before removing it.
The following command removes the logical volume /dev/testvg/testlv from the volume group testvg. Note that in this case the logical volume has not been deactivated.
[root@tng3-1 lvm]# lvremove /dev/testvg/testlv
Do you really want to remove active logical volume "testlv"? [y/n]: y
  Logical volume "testlv" successfully removed

You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume.
Use the lvremove command to remove a logical volume from a volume group, after unmounting it:
lvremove [-A/--autobackup y/n] [-d/--debug] [-f/--force] [-h/-?/--help] [-t/--test] [-v/--verbose] LogicalVolumePath [LogicalVolumePath...]
lvremove removes one or more logical volumes. Confirmation will be requested before deactivating any active logical volume prior to removal. Logical volumes cannot be deactivated or removed while they are open (e.g. if they contain a mounted filesystem).
Options.
-f, --force Remove active logical volumes without confirmation.
For example:
lvremove -f vg00/lvol1
lvremove vg00
When you create a snapshot, you create a new Logical Volume to act as a clone of the original Logical Volume. The snapshot volume initially does not use any space, but as changes are made to the original volume, the changed blocks are copied to the snapshot volume before they are changed, in order to preserve them. This means that the more changes you make to the origin, the more space the snapshot needs. If the snapshot volume uses all of the space allocated to it, then the snapshot is broken and can not be used any more, leaving you only with the modified origin. The lvs command will tell you how much space has been used in a snapshot Logical Volume. If it starts to get full, you might want to extend it with the lvextend command. To create a snapshot of the bar Logical Volume and name it snap, run:
lvcreate -s -n snap -L 5g foo/bar
This will create a snapshot named snap of the original Logical Volume bar and allocate 5 GB of space for it. Since the snapshot volume only stores the areas of the disk that have changed since it was created, it can be much smaller than the original volume.
While you have the snapshot, you can mount it if you wish and will see the original filesystem as it appeared when you made the snapshot. In the above example you would mount the /dev/foo/snap device. You can modify the snapshot without affecting the original, and the original without affecting the snapshot.
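A minimal sketch of that, using the foo/snap volume from the example above (the mount point is illustrative):

mkdir -p /mnt/snap
mount /dev/foo/snap /mnt/snap     # browse the filesystem as it was at snapshot time
# ... copy files or run a backup against /mnt/snap ...
umount /mnt/snap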
If you take a snapshot of your root Logical Volume, then upgrade some packages, or upgrade to the next whole distribution release, and then decide it isn't working out, you can merge the snapshot back into the origin volume, effectively reverting to the state at the time you made the snapshot:
sudo lvconvert --merge foo/snap
If the origin volume of foo/snap is in use, it will inform you that the merge will take place the next time the volumes are activated. If this is the root volume, then you will need to reboot for this to happen. At the next boot, the volume will be activated and the merge will begin in the background, so your system will boot up as if you had never made the changes since the snapshot was created, and the actual data movement will take place in the background while you work.
Consult the man pages for more details on these and other useful LVM operations.
LVM identifies PVs by UUID, not by device name. Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system.
vgscan identifies this after a new disk is added that changes your drive numbering. Most Linux distributions run vgscan in the LVM startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand. On the other hand, if your VG is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst-case scenario is that the activation will fail without a vgscan, with a complaint about a missing PV.
The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.
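Two sketches related to the points above, assuming a volume group named vg01 (the name is illustrative). After hot-adding a disk, re-scan and re-activate; after a disk has been permanently lost, the volume group can be brought back to a consistent state, at the cost of any logical volumes that used the missing PV:

vgscan                           # re-read all devices and find PVs by UUID
vgchange -a y vg01               # activate the volume group

vgreduce --removemissing vg01    # drop references to the lost physical volume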
|
Switchboard | ||||
Latest | |||||
Past week | |||||
Past month |
To extend an LVM partition, e.g. /usr, follow the steps below:

boot the system using a live CD
umount /mnt/lvm/localvg-usrlv
lvextend --size +2G /dev/localvg/usrlv
e2fsck -f /dev/localvg/usrlv
resize2fs /dev/localvg/usrlv
e2fsck -f /dev/localvg/usrlv

sem007:
Hi rconan,
If your volume group has free space then you can simply run lvextend. If your volume group has no space and you have free space on your HDD then you can follow the steps suggested by cmisip.

http://www.howtoforge.com/logical-vo...a-volume-group

The link describes how to extend a VG; after that you can extend the LV.
HTH
Now that multipath is configured, you need to perform disk management to make the disks available for use. If your original install was on LVM, you may want to add the new disks to the existing volume group and create some new logical volumes. If your original install was on regular disk partitions, you may want to create new volume groups and logical volumes. In both cases, you might want to partition the volume groups and automate the mounting of these new partitions at certain mount points.

About this task
The following example illustrates how the above can be achieved. For detailed information about LVM administration, please consult: Red Hat LVM Administrator's Guide at http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/ or SLES10 SP2 Storage Administration Guide at http://www.novell.com/documentation/sles10/stor_evms/index.html?page=/documentation/sles10/stor_evms/data/mpiousing.html
Starting with an existing Linux environment on a blade, and a multipath zone configuration that will allow the blade to access some storage, here are a set of generic steps to make use of the new storage:
Procedure
- Determine which disks are not multipathed disks. From step 1 of the previous section, With existing configuration, the output of df and fdisk -l should indicate which disks were already in use before multipath was set up. In this example, sda is the only disk that existed before multipath was set up.
- Create and/or open /etc/multipath.conf and blacklist the local disk.
- For a SLES10 machine:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

The /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file can be used as a reference to further determine how to configure your multipathing environment.
- For a RHEL5 machine:
Edit the /etc/multipath.conf that has already been created by default. Related documentation can be found in the /usr/share/doc/device-mapper-multipath-0.4.7/ directory.
- Open the /etc/multipath.conf file, and edit the file to blacklist disks that are not meant to be multipathed. In this example, sda is blacklisted.
blacklist {
    devnode "^sda"
}

- Enable and activate the multipath daemon(s).
On both RHEL and SLES, the commands are:
chkconfig multipathd on

Additionally, on a RHEL system, this command is required:
chkconfig mdmpd on

- Reboot the blade.
Note: if the machine is not rebooted, the latest configuration may not be detected.
- Check if the multipathd daemon is running by issuing:

service multipathd status

Additionally, if you are running a RHEL system, check if the mdmpd daemon is running:

service mdmpd status

- Run the command multipath -ll to verify that the disk(s) are now properly recognized as multipath devices.
multipath -ll
mpath2 (350010b900004b868) dm-3 IBM-ESXS,GNA073C3ESTT0Z
[size=68G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 1:0:1:0 sdc 8:32 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:3:0 sde 8:64 [active][ready]
mpath1 (35000cca0071acd29) dm-2 IBM-ESXS,VPA073C3-ETS10
[size=68G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 1:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:2:0 sdd 8:48 [active][ready]

As expected, two sets of paths are detected, two paths in each set. From examining the above output, notice that sdc and sde are actually the same physical disk, accessible from the blade via two different devices; similarly for sdb and sdd. Note that the device names are dm-2 and dm-3.
- If your disks are new, skip this step. Optionally, if you have previous data or partition table, use the following command to erase the partition table.
dd if=/dev/zero of=/dev/dm-X bs=8k count=100

X is the device number as shown in step 9. Be very careful when doing this step as it is destructive: your data will not be recoverable if you erase the partition table. In the test environment, both disks will be used in the existing volume group.
dd if=/dev/zero of=/dev/dm-2 bs=8k count=100
dd if=/dev/zero of=/dev/dm-3 bs=8k count=100

Note: This step will erase your disks.

- Create a new physical volume on each disk by entering the following command:
pvcreate /dev/dm-X

In our environment:
pvcreate /dev/dm-2
pvcreate /dev/dm-3

- Run lvm pvdisplay to see if the physical volumes are displayed correctly. If at any time in this LVM management process you would like to view the status of existing related entities such as physical volumes (pv), volume groups (vg), and logical volumes (lv), issue the corresponding command:
lvm pvdisplay
lvm vgdisplay
lvm lvdisplay

- If the new entity you just created or changed cannot be found, you may want to issue the corresponding command to scan for the device:
pvscan
vgscan
lvscan

- Run the vgscan command to show any existing volume groups. On a RHEL system, the installer creates VolGroup00 by default if another partitioning scheme is not chosen. On a SLES system, no volume groups exist. The following shows the output for an existing volume group VolGroup00:
vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2

- Add the physical volume(s) to an existing volume group using the vgextend command. In our environment, add /dev/dm-2 and /dev/dm-3 created in step 11 to the existing volume group VolGroup00 found in step 14, using the command:
vgextend VolGroup00 /dev/dm-2 /dev/dm-3
  Volume Group "VolGroup00" successfully extended

- If there is no existing volume group, create a new volume group using the vgcreate command. For example, to create a new volume group VolGroup00 with the physical volumes /dev/dm-2 and /dev/dm-3, run this command:
vgcreate VolGroup00 /dev/dm-2 /dev/dm-3
  Volume group "VolGroup00" successfully created
- Creating a new logical volume and setting up automounting
Now that more storage is available in the VolGroup00 volume group, you can use the extra storage to save your data.
The Linux and Unix Menagerie
Physical Volumes:
The two commands we'll be using here are pvscan and pvdisplay.
pvscan, as with all of the following commands, pretty much does what the name implies. It scans your system for LVM physical volumes. When used straight-up, it will list out all the physical volumes it can find on the system, including those "not" associated with volume groups (output truncated to save on space):
host # pvscan
pvscan -- reading all physical volumes (this may take a while...)
...
pvscan -- ACTIVE PV "/dev/hda1" is in no VG [512 MB]
...
pvscan -- ACTIVE PV "/dev/hdd1" of VG "vg01"[512 MB / 266 MB free]
...

Next, we'll use pvdisplay to display our only physical volume:

host # pvdisplay /dev/hdd1   <-- Note that you can leave the /dev/hdd1, or any specification, off of the command line if you want to display all of your physical volumes. We just happen to know we only have one and are being particular ;)
...
PV Name /dev/hdd1
VG Name vg01
PV Size 512 MB
...

Other output should include whether or not the physical volume is allocatable (or "can be used" ;), total physical extents (see our post on getting started with LVM for a little more information on PEs), free physical extents, allocated physical extents, and the physical volume's UUID (identifier).
Volume Groups:
The two commands we'll be using here are vgscan and vgdisplay.
vgscan will report on all existing volume groups, as well as create a file (generally) called /etc/lvmtab (Some versions will create an /etc/lvmtab.d directory as well):
host # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "vg01"
...

vgdisplay can be used to check on the state and condition of our volume group(s). Again, we're specifying our volume group on the command line, but this is not necessary:
host # vgdisplay vg01
...
VG Name vg01
...
VG Size 246 MB
...

This command gives even more effusive output: everything from the maximum number of logical volumes the volume group can contain (including how many it currently does contain and how many of those are open), separate (yet similar) information with regard to the physical volumes it can encompass, all of the information you've come to expect about the physical extents and, of course, each volume's UUID.
redhat.com
Storage technology plays a critical role in increasing the performance, availability, and manageability of Linux servers. One of the most important new developments in the Linux 2.6 kernel (on which the Red Hat® Enterprise Linux® 4 kernel is based) is the Linux Logical Volume Manager, version 2 (or LVM 2). It combines a more consistent and robust internal design with important new features including volume mirroring and clustering, yet it is upwardly compatible with the original Logical Volume Manager 1 (LVM 1) commands and metadata. This article summarizes the basic principles behind the LVM and provides examples of basic operations to be performed with it.
Introduction
Logical volume management is a widely used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span across physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs, as shown in Figure 1 (LVM internal organization). Notice that VGs can be an aggregate of PVs from multiple physical disks.
Figure 2 (Mapping logical extents to physical extents) shows how logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.
For example, multiple PVs can be connected together to create a single large logical volume as shown in Figure 3 (LVM linear mapping). This approach, known as linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4 (LVM striped mapping). Striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers.
Figure 4. LVM striped mapping (4 physical extents per stripe)

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:
- Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server
- Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible
- Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)
- Logical volume snapshots can be created to represent the exact state of the volume at a certain point-in-time, allowing accurate backups to proceed simultaneously with regular system operation
Initializing disks or disk partitions
To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:
pvcreate /dev/hda
pvcreate /dev/hdb

If a Linux partition is to be converted, make sure that it is given partition type 0x8E using fdisk, then use pvcreate:
pvcreate /dev/hda1

Creating a volume group
Once you have one or more physical volumes created, you can create a volume group from these PVs using the vgcreate command. The following command:
vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.
Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then adds that PV to volume_group_one:
pvcreate /dev/hdc
vgextend volume_group_one /dev/hdc

This same PV can be removed from volume_group_one by the vgreduce command:
vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.
Creating a logical volume
We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the volume group to a single linear LV called logical_volume_one with the following LVM command:
lvcreate -n logical_volume_one --size 255G volume_group_one

Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in volume_group_one:
vgdisplay volume_group_one | grep "Total PE"

which returns
Total PE 65536

Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:
lvcreate -n logical_volume_one -l 65536 volume_group_one

To create a 1500MB linear LV named logical_volume_one and its block device special file /dev/volume_group_one/logical_volume_one, use the following command:
lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.
Striped mappings can also be created with lvcreate. For example, to create a 255 GB large logical volume with two stripes and stripe size of 4 KB the following command can be used:
lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:
lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a striped LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with stripe size 4 KB and 128 GB in size.
An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:
umount /dev/volume_group_one/logical_volume_one
lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:
/dev/<volume group name>/<logical volume name>, so that if we had two volume groups, myvg0 and myvg2, each containing three logical volumes named lv01, lv02, and lv03, six device special files would be created:
/dev/myvg0/lv01
/dev/myvg0/lv02
/dev/myvg0/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03

Extending a logical volume
An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:
lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while
lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3fs and GFS file systems online, without bringing the system down. (The ext3 file system can be shrunk or expanded offline using the ext2resize command.) To resize ext3fs, the following command
ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.
The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and it will be enlarged to fill the device, by default. If an optional size parameter is specified, then this size will be used instead.
Simple partition resizing operations, such as those described in Part 1 of this series, usually conclude successfully. Sometimes, though, you need to do something different or troubleshoot problems. This article covers some of these situations. The first topic is LVM configuration and how it interacts with partition resizing. The second topic is troubleshooting techniques. Although a complete description of all the problems that can occur when resizing partitions might fill a book, a few basic principles can help you work through many common problems. Finally, this article describes some alternatives to partition resizing, should the problems you encounter prove insurmountable.
LVM is a disk allocation technique that supplements or replaces traditional partitions. In an LVM configuration, one or more partitions, or occasionally entire disks, are assigned as physical volumes in a volume group, which in turn is broken down into logical volumes. File systems are then created on logical volumes, which are treated much like partitions in a conventional configuration. This approach to disk allocation adds complexity, but the benefit is flexibility. An LVM configuration makes it possible to combine disk space from several small disks into one big logical volume. More important for the topic of partition resizing, logical volumes can be created, deleted, and resized much like files on a file system; you needn't be concerned with partition start points, only with their absolute size.
Note: I don't attempt to describe how to set up an LVM in this article. If you don't already use an LVM configuration, you can convert your system to use one, but you should consult other documentation, such as the Linux LVM HOWTO (see Resources), to learn how to do so.
If you've resized non-LVM partitions, as described in Part 1 of this series, and want to add the space to your LVM configuration, you have two choices:
- You can create a new partition in the empty space and add the new partition to your LVM.
- You can resize an existing LVM partition, if it's contiguous with the new space.
Unfortunately, the GParted (also known as Gnome Partition Editor) tool described in Part 1 of this series does not support resizing LVM partitions. Therefore, the easiest way to add space to your volume group is to create a new partition in the free space and add it as a new physical volume to your existing volume group.
Although GParted can't directly create an LVM partition, you can do so with one of the following tools:
- parted (text-mode GNU Parted)
- fdisk for Master Boot Record (MBR) disks
- gdisk for globally unique identifier (GUID) Partition Table (GPT) disks

If you use parted, you can use the set command to turn on the lvm flag, as in set 1 lvm on to flag partition 1 as an LVM partition. Using fdisk, you should use the t command to set the partition's type code to 8e. You do the same with gdisk, except that its type code for LVM partitions is 8e00.

In any of these cases, you must use the pvcreate command to set up the basic LVM data structures on the partition and then vgextend to add the partition to the volume group. For instance, to add /dev/sda1 to the existing MyGroup volume group, you type the following commands:

pvcreate /dev/sda1
vgextend MyGroup /dev/sda1
With these changes finished, you should be able to extend the logical volumes in your volume group, as described shortly.

For file systems, resizing logical volumes can be simpler than resizing partitions because LVM obviates the need to set aside contiguous sets of numbered sectors in the form of partitions. Resizing the logical volume itself is accomplished by means of the lvresize command. This command takes a number of options (consult its man page for details), but the most important is -L, which takes a new size or a change in size, a change being denoted by a leading plus (+) or minus (-) sign. You must also offer a path to the logical volume. For instance, suppose you want to add 5 gibibytes (GiB) to the size of the usr logical volume in the MyGroup group. You could do so as follows:
lvresize -L +5G /dev/mapper/MyGroup-usr
This command adjusts the size of the specified logical volume. Keep in mind, however, that this change is much like a change to a partition alone. That is, the size of the file system contained in the logical volume is not altered. To adjust the file system, you must use a file system-specific tool, such as resize2fs, resize_reiserfs, xfs_growfs, or the resize mount option when mounting Journaled File System (JFS). When used without size options, these tools all resize the file system to fill the new logical volume size, which is convenient when growing a logical volume.

If you want to shrink a logical volume, the task is a bit more complex. You must first resize the file system (using resize2fs or similar tools) and then shrink the logical volume to match the new size. Because of the potential for a damaging error should you accidentally set the logical volume size too small, I recommend first shrinking the file system to something significantly smaller than your target size, then resizing the logical volume to the correct new size, and then resizing the file system again to increase its size, relying on the auto-sizing feature to have the file system exactly fill the new logical volume size. A sketch of that sequence follows.

Remember also that, although you can shrink most Linux-native file systems, you can't shrink XFS or JFS. If you need to shrink a logical volume containing one of these file systems, you may have to create a new smaller logical volume, copy the first one's contents to the new volume, juggle your mount points, and then delete the original. If you lack sufficient free space to do this, you may be forced to use a backup as an intermediate step.
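A sketch of that shrink sequence for an ext3/ext4 logical volume, reusing the MyGroup-usr volume from the earlier example with illustrative sizes (20G target): shrink the filesystem well below the target, shrink the LV to the target, then let resize2fs grow the filesystem to fill it exactly. The filesystem must be unmounted for the shrink.

umount /dev/mapper/MyGroup-usr
e2fsck -f /dev/mapper/MyGroup-usr              # required before an offline resize
resize2fs /dev/mapper/MyGroup-usr 18G          # shrink the filesystem below the 20G target
lvresize -L 20G /dev/mapper/MyGroup-usr        # shrink the logical volume to the target size
resize2fs /dev/mapper/MyGroup-usr              # grow the filesystem to exactly fill the LV
mount /dev/mapper/MyGroup-usr /usr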
Although the text-mode tools just described get the job done, they can be intimidating. If you prefer to work with graphical user interface (GUI) tools, at least two are available for LVM operations:
- kvpm: a tool that integrates with the K Desktop Environment (KDE) and provides access to common LVM operations, including logical volume resizing options.
- system-config-lvm: this program originated with Red Hat, but is available in some other distributions. It's similar to kvpm in that it provides point-and-click access to LVM management, including resizing operations.
Of the two, system-config-lvm provides a somewhat simpler and friendlier user interface; however, either will get the job done. Figure 1 shows system-config-lvm in action. To resize a logical volume, you click its name in the left panel, then click the Edit Properties button that appears in the middle panel. You can then use a slider to adjust the volume's size.
Figure 1. GUI tools make resizing logical volumes relatively easy
Troubleshooting problems

Unfortunately, partition resizing operations sometimes don't work as expected. Most commonly, the resizing software reports an error, frequently with a cryptic message. Although there are numerous possible causes of such problems, you can overcome a great many of them by applying a few simple workarounds, such as fixing file system problems and breaking a complex resizing operation down into several parts.
One common cause of resizing failures is a damaged file system. All production file systems include file system recovery tools that enable you to fix such problems, so running them on a file system prior to resizing it can often make for a smoother resizing operation.
In Linux, the file system check tool is called fsck, and you call it by passing it the device filename associated with the file system you want to check, as in fsck /dev/sda1 to check /dev/sda1. The fsck utility, however, is mainly a front-end to file system-specific tools, such as e2fsck (for ext2fs, ext3fs, and ext4fs). You can often gain access to more advanced options by calling the file system-specific tool directly. The -f option to e2fsck, for instance, forces it to check the device even if the file system appears to be clean. This option may be necessary to uncover corruption that's not obvious in a cursory examination. Check the documentation for your file system-specific fsck helper program to learn about its options.

In most cases, it's necessary to run fsck or its helper program on an unmounted file system. Thus, you may need to do this from an emergency boot disc, as described in Part 1 of this series.
fsck
to check it; however, you may also need to boot into the file system's native operating system to do the job properly. In particular, Microsoft® Windows® New Technology File System (NTFS) has only rudimentary maintenance tools in Linux. You must use the WindowsCHKDSK
utility to do a proper job of checking NTFS. You may need to run this utility several times, until it reports no more problems with the disk. The Linuxntfsfix
utility performs what few checks are possible in Linux and then flags the file system for automatic checking the next time Windows boots.Although not a file system integrity issue per se, disk fragmentation is another issue that might need attention. You can sometimes eliminate problems by performing a disk defragmenting operation prior to a resizing operation. This task is seldom necessary (and is usually not possible) with Linux native file systems; however, it may help with File Allocation Table (FAT) or NTFS partitions.
Breaking the operation into parts
If you enter a number of resizing and moving operations into GParted and the operation fails, you can try entering just one operation at a time and then immediately clicking the Apply button. You might still run into problems, but you may at least be able to perform other operations that aren't dependent on the one that causes problems. Depending on the details, you may be able to achieve at least some of your overall goals or find some other way to work around the problem.
In some cases, you may be able to split the resizing operation across multiple utilities. For instance, you may be able to use a Windows or Mac OS X utility to resize FAT, NTFS, or Hierarchical File System Plus (HFS+) partitions. Although GParted is the most user-friendly way to resize partitions in Linux, if just one operation is causing problems, using an underlying text-mode utility, such as resize2fs, may provide you with better diagnostic output or even succeed where GParted fails. Keep in mind, however, that most text-mode tools resize either partitions or file systems, but not both; you must combine both types of tools to resize a partition and its file system. The GNU Parted utility is an exception to this rule; like its GUI cousin, GParted, Parted resizes partitions and their contained file systems simultaneously.
Sometimes an attempt to resize your partitions just doesn't work. Perhaps a file system has errors that can't be easily resolved, or maybe you need to shrink a file system (such as XFS or JFS) that can't be shrunk. In these cases, you must move on to an alternative, such as relocating directories in your existing partition structure, performing a backup-repartition-restore operation, or adding more disk space.
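To illustrate the point about combining partition tools with file system tools, a hedged sketch of growing an ext3/ext4 file system with text-mode utilities might look like this (the device, partition number, and 20GB target are assumptions; the resizepart command is available in recent versions of GNU Parted):
umount /dev/sda1
parted /dev/sda resizepart 1 20GB
e2fsck -f /dev/sda1
resize2fs /dev/sda1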
Relocating directories without repartitioning
Sometimes you can relocate directories without actually repartitioning the disk. The trick is to use symbolic links to point from one location to another, even across partitions. For instance, suppose you're using a Gentoo system, which can consume vast quantities of disk space in the /usr/portage and /var/tmp/portage directories. If you didn't consider these needs when setting up your system, you might run out of space. You might, however, have space available on a separate /home partition. To use this space for Portage, you can create one or more directories in /home, copy the contents of /usr/portage or /var/tmp/portage to the new directories, delete the original directories, and create symbolic links in place of the originals that point to the new subdirectories of /home.
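A minimal sketch of that Gentoo example (paths as in the text; treat it as illustrative only, and stop any Portage activity before moving the directory):
mkdir /home/portage
cp -a /usr/portage/. /home/portage/
rm -rf /usr/portage
ln -s /home/portage /usr/portage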
This approach can be effective and is convenient on a small scale; however, it does create a somewhat non-standard system, and it removes many of the advantages of using separate partitions. Thus, I recommend using this approach sparingly and preferably only on a short-term basis - for instance, as a stop-gap measure while you wait for a new hard disk to arrive or on a system you plan to retire in a month or two.
Backing up, repartitioning, and restoring
Prior to the development of file system resizing tools, the only practical way to repartition a disk was to back up its contents, repartition (creating new empty file systems), and restore the backup to the repartitioned disk. This approach continues to work, but of course it's less convenient than using GParted to repartition nondestructively. On the other hand, for safety it's best to create a backup before resizing partitions. So to be safe, you have to do half of this job anyway.
In today's world, an external hard drive is often used as a backup medium. You can buy terabyte external disks for under $100, and after your partition juggling you can use them to back up your important files, to transfer large files between systems, or in other ways. Alternatively, you can use recordable DVDs, tape units, or network servers as backup systems.
Backup software can include old standbys such as tar or newer tools such as Clonezilla. Operational details vary depending on the software and the backup medium, so you should consult the backup software's documentation for details.
If you need to modify your Linux boot partition or any partition that's required for basic root (superuser) access, you need to perform these operations from an emergency boot system. Part 1 of this series described such systems.
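As a simple illustration, a tar backup of /home to an external drive might look like this (the mount point /mnt/backup and the choice of /home are assumptions, not part of the original article):
tar -czf /mnt/backup/home-backup.tar.gz /home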
Adding a disk can be a viable alternative to repartitioning, and in some cases, adding disk space may be preferable. Disk capacities continue to increase, and a newer disk is likely to be more reliable than one that's several years old.
If you choose to replace an existing disk with a newer one, you should be able to transfer your existing system to the new disk with a tool such as Clonezilla or by using older tools, such as fdisk and tar. You may need to reinstall your boot loader, and, for this task, a boot using a tool such as the Super Grub Disk may be helpful. You can boot your system using this CD-based boot loader, then use grub-install or a similar tool to reinstall the GRand Unified Bootloader (GRUB) to your new hard disk.
If you buy a new disk to supplement, rather than replace, your existing disk, you need to decide what, if any, data to transfer to the new disk. You should partition the new disk using fdisk, GParted, or some other tool, transfer files to the new partitions, and then permanently mount the new disk's partitions in your existing directory tree by editing /etc/fstab appropriately. Remember to delete any files you transfer to the new disk from the old disk. If you don't, they'll continue to consume disk space on the old disk, even if you mount the new disk to take over the original files' directories.
However you do it, altering a working system's disk allocation can be an anxiety-inducing task, and for good reason: Many things can go wrong. If such changes are necessary, though, you'll find that your system is more usable after you make your changes. With a reduced risk of disk-full errors, you can get on with actually using your system for its intended task. The process of resizing your partitions can also help familiarize you with GParted and other disk utilities, as well as with the optimum sizes for various partitions. All of this can be useful knowledge the next time you install a new Linux system.
Knowledge Base
You need to back up logical volumes of LVM and ordinary (non-LVM) partitions. There is no need to back up LVM physical volumes: they would be backed up sector-by-sector, and there is no guarantee that such a backup will work after the restore.
The listed Acronis products recognize logical LVM volumes as Dynamic or GPT volumes.
Logical LVM volumes can be restored as non-LVM (regular) partitions in Acronis Rescue Mode. Logical LVM volumes can be restored on top of existing LVM volumes. See LVM Volumes Acronis True Image 9.1 Server for Linux Supports or LVM Volumes Supported by Acronis True Image Echo.
Solution
Restoring LVM volumes as non-LVMs
- Restore the partitions one by one with Acronis backup software.
- Do not forget to make the boot partition Active (/ or /boot if available).
- Make the system bootable
- Boot from Linux Distribution Rescue CD.
- Enter rescue mode.
- Mount the restored root(/) partition. If the rescue CD mounted partitions automatically, skip to the next step.
Most distributions will try to mount the system partitions as designated in /etc/fstab of the restored system. Since there are no LVMs available, this process is likely to fail. This is why you might need to mount the restored partitions manually:
Enter the following command:
#cat /proc/partitions
You will get the list of recognized partitions:
major minor  #blocks  name
   8     0   8388608  sda
   8     1    104391  sda1
   8     2   8281507  sda2
Mount the root (/) partition:
#mount -t [fs_type] [device] [system_mount_point]
In the example below /dev/sda2 is root, because it was restored as the second primary partition on a SATA disk:
#mount -t ext3 /dev/sda2 /mnt/sysimage
- Mount /boot if it was not mounted automatically:
#mount -t [fs_type] /dev/[device] /[system_mount_point]/boot
Example:
#mount -t ext3 /dev/sda1 /mnt/sysimage/boot
- chroot to the mounted / of the restored partition:
#chroot [mount_point]
- Mount /proc in chroot
#mount -t proc proc /proc
- Create hard disk devices in /dev if they were not created automatically.
Check existing partitions with cat /proc/partitions and create appropriate devices for them:
#/sbin/MAKEDEV [device]
- Edit /etc/fstab on the restored partition:
Replace all entries of /dev/VolGroupXX/LogVolXX with appropriate /dev/[device]. You can find which device you need to mount in cat /proc/partitions.
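Example (hypothetical entries; use the devices actually shown by cat /proc/partitions):
#/dev/VolGroup00/LogVol00   /    ext3    defaults    1 1
/dev/sda2                   /    ext3    defaults    1 1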
- Edit grub.conf
Open /boot/grub/grub.conf and edit it to replace /dev/VolGroupXX/LogVolXX with appropriate /dev/[device]
- Reactivate GRUB
Run the following command to re-activate GRUB automatically:
#grub-install /dev/[device]
- Make sure the system boots fine.
Restoring LVM volumes on prepared LVMs
- Prepare the LVM volumes
- Boot from Acronis Bootable Media;
- Press F11 after the Starting Acronis Loader... message appears and you get to the selection screen of the program;
- After you get the Linux Kernel Settings prompt, remove the word quiet and click OK;
- Select the Full version menu item to boot. Wait for # prompt to appear;
- List the partitions you have on the hard disk:
#fdisk -l
This will give not only the list of partitions on the hard drive, but also the name of the device associated with the hard disk.
- Start creating partitions using fdisk:
#fdisk [device]
where [device] is the name of the device associated with the hard disk
- Create physical volumes for LVMs:
#lvm pvcreate [partition]
for example, #lvm pvcreate /dev/sda2
- Create LVM group
#lvm vgcreate [name] [device]
where [name] is a name of the Volume Group you create; and [device] is the name of the device associated with the partition you want to add to the Volume Group
for example, #lvm vgcreate VolGroup00 /dev/sda2
- Create LVM volumes inside the group:
#lvm lvcreate -L[size] -n[name] [VolumeGroup]
where [size] is the size of the Volume being created (e.g. 4G); [name] is the name of the Volume being created; [VolumeGroup] is the name of the Volume Group where we want to place the volume
For example, #lvm lvcreate -L6G -nLogVol00 VolGroup00
- Activate the created LVM:
#lvm vgchange -ay
- Start Acronis product:
#/bin/product
- Restore partitions
- Restore partitions from your backup archive to the created LVM volumes
January 31, 2008 | www.bgevolution.com
This concept works just as it does for an internal hard drive. However, USB drives seem not to remain part of the array after a reboot, so to use a USB device in a RAID1 setup you will have to leave the drive connected and the computer running. Another tactic is to occasionally sync your USB drive to the array and shut down the USB drive after synchronization. Either tactic is effective.
You can create a quick script to add the USB partitions to the RAID1.
The first thing to do when synchronizing is to add the partition:
sudo mdadm --add /dev/md0 /dev/sdb1
I have 4 partitions therefore my script contains 4 add commands.
Then grow the arrays to fit the number of devices:
sudo mdadm --grow /dev/md0 --raid-devices=3
After growing the array your USB drive will magically sync. USB is substantially slower than SATA or PATA, so anything over 100 Gigabytes will take some time. My 149 Gigabyte /home partition takes about an hour and a half to synchronize. Once it's synced I do not experience any apparent difference in system performance.
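The post does not show the tear-down step; a hedged sketch of detaching the USB member after it has synced (device names follow the example above) might be:
sudo mdadm /dev/md0 --fail /dev/sdb1
sudo mdadm /dev/md0 --remove /dev/sdb1
sudo mdadm --grow /dev/md0 --raid-devices=2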
15.12.2008 | Linuxconfig.org
This article describes the basic logic behind the Linux logical volume manager, with real examples of configuration and usage. Although Debian Linux is used for this tutorial, the same command-line syntax applies to other Linux distributions such as Red Hat, Mandriva, SuSe Linux and others.
LinuxQuestions.org
This has been very helpful to me. I found this thread via Google on dm-0 because I also got the no partition table error message.
Here is what I think:
When the programs fdisk and sfdisk are run with the option -l and no argument, e.g. # /sbin/fdisk -l
they look for all devices that can have cylinders, heads, sectors, etc. If they find such a device, they print that information and the partition table to standard output. If there is no partition table, they print an error message (also to standard output).
One can see this by piping to 'less', e.g.
# /sbin/fdisk -l | less
/dev/dm-0 ... /dev/dm3 on my fedora C5 system seem to be device mappers associated with LVM. RAID might also require device mappers.
2008-08-25 | www.jejik.com
I went with SystemRescueCD which comes with both mdadm and LVM out-of-the-box.
The system layout is quite simple. /dev/sda1 and /dev/sdb1 make up a 500 GB mdadm RAID1 volume. This RAID volume contains an LVM volume group called "3ware", named so because in my old server it was connected to my 3ware RAID card. It contains a single logical volume called "media". The original 80 GB disk is on /dev/sdc1 which contains an LVM volume group called "linuxvg". Inside that volume group are three volumes: "boot", "root" and "swap". Goal: Move linuxvg-root and linuxvg-boot to the 3ware volume group. Additional goal: Rename 3ware to linuxvg. The latter is more for aesthetic reasons but as a bonus it also means that there is no need to fiddle with grub or fstab settings after the move.
Before starting SystemRescueCD and moving things around, there are a few things that need to be done first. Start by making a copy of /etc/mdadm/mdadm.conf because you will need it later. Also, because the machine will be booting from the RAID array, I need to install grub on those two disks.
# grub-install /dev/sda
# grub-install /dev/sdb
Now it's time to boot into SystemRescueCD. I start off by copying /etc/mdadm/mdadm.conf back and starting the RAID1 array. This command scans for all the arrays defined in mdadm.conf and tries to start them.
# mdadm --assemble --scan
Next I need to make a couple of changes to /etc/lvm/lvm.conf. If I were to scan for LVM volume groups at this point, it would find the 3ware group three times: once in /dev/md0, /dev/sda1 and /dev/sdb1. So I adjust the filter setting in lvm.conf so it will not scan /dev/sda1 and /dev/sdb1.
filter = [ "r|/dev/cdrom|", "r|/dev/sd[ab]1|" ]
LVM can now scan the hard drives and find all the volume groups.
# vgscan
I disable the volume groups so that I can rename them. linuxvg becomes linuxold and 3ware becomes the new linuxvg. Then I re-enable the volume groups.
# vgchange -a n
# vgrename linuxvg linuxold
# vgrename 3ware linuxvg
# vgchange -a y
Now I can create a new logical volume in the 500 Gb volume group for my boot partition and create an ext3 filesystem in it.
# lvcreate --name boot --size 512MB linuxvg
# mkfs.ext3 /dev/mapper/linuxvg-boot
I create mount points to mount the original boot partition and the new boot partition and then use rsync to copy all the data. Don't use cp for this! Rsync with the -ah option will preserve all soft links, hard links and file permissions while cp does not. If you do not want to use rsync you could also use the dd command to transfer the data directly from block device to block device.
# mkdir /mnt/src /mnt/dst
# mount -t ext3 /dev/mapper/linuxold-boot /mnt/src
# mount -t ext3 /dev/mapper/linuxvg-boot /mnt/dst
# rsync -avh /mnt/src/ /mnt/dst/
# umount /mnt/src /mnt/dst
Rinse and repeat to copy over the root filesystem.
# lvcreate --name root --size 40960MB linuxvg
# mkfs.ext3 /dev/mapper/linuxvg-root
# mount -t ext3 /dev/mapper/linuxold-root /mnt/src
# mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst
# rsync -avh /mnt/src/ /mnt/dst/
# umount /mnt/src /mnt/dst
There's no sense in copying the swap volume. Simply create a new one.
# lvcreate --name swap --size 1024MB linuxvg
# mkswap /dev/mapper/linuxvg-swap
And that's it. I rebooted into Debian Lenny to make sure that everything worked and I removed the 80 GB disk from my server. While this wasn't particularly hard, I do hope that the maintainers of LVM create an lvmove command to make this even easier.
LinuxPlanet
Creating RAID 10
No Linux installer that I know of supports RAID 10, so we have to jump through some extra hoops to set it up in a fresh installation. This is my favorite layout for RAID systems:
- /dev/md0 is a RAID 1 array containing the root filesystem.
- /dev/md1 is a RAID 10 array containing a single LVM group divided into logical volumes for /home, /var, and /tmp, and anything else I feel like stuffing in there.
- Each disk has its own identical swap partition that is not part of RAID or LVM, just plain old ordinary swap.
One way is to use your Linux installer to create the RAID 1 array and the swap partitions, then boot into the new filesystem and create the RAID 10 array. This works, but then you have to move /home, /var, /tmp, and whatever else you want there, which means copying files and editing /etc/fstab. I get tired thinking about it.
Another way is to prepare your arrays and logical volumes in advance and then install your new system over them, and that is what we are going to do. You need a bootable live Linux that includes mdadm, LVM2 and GParted, unless you're a crusty old command-line commando who doesn't need any sissy GUIs and is happy with fdisk. Two that I know have all of these are Knoppix and SystemRescueCD; I used SystemRescueCD.
Step one is to partition all of your drives identically. The partition sizes in my example system are small for faster testing; on a production system the 2nd primary partition would be as large as possible:
- 1st primary partition, 5GB
- 2nd primary partition, 7GB
- swap partition, 1GB
The first partition on each drive must be marked as bootable, and the first two partitions must be marked as "fd Linux raid auto" in fdisk. In GParted, use Partition -> Manage Flags.
Now you can create your RAID arrays with the mdadm command. This command creates the RAID1 array for the root filesystem:
# mdadm -v --create /dev/md0 --level=raid1 --raid-devices=2 /dev/hda1 /dev/sda1
mdadm: layout defaults to n1
mdadm: chunk size defaults to 64K
mdadm: size set to 3076352K
mdadm: array /dev/md0 started.
This will take some time, which cat /proc/mdstat will tell you:
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] [raid10]
md0 : active raid10 sda1[1] hda1[0]
      3076352 blocks 2 near-copies [2/2] [UU]
      [====>................]  resync = 21.8% (673152/3076352) finish=3.2min speed=12471K/sec
This command creates the RAID 10 array:
# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2
Naturally you want to be very careful with your drive names, and give mdadm time to finish. It will tell you when it's done:
RAID10 conf printout:
 --- wd:2 rd:2
 disk 0, wo:0, o:1, dev:hda2
 disk 1, wo:0, o:1, dev:sda2
mdadm --detail /dev/md0 displays detailed information on your arrays.
Create LVM Group and Volumes
Now we'll put an LVM group and volumes on /dev/md1. I use vg- for volume group names and lv- for the logical volumes in the volume groups. Using descriptive names, like lv-home, will save your sanity later when you're creating filesystems and mountpoints. The -L option specifies the size of the volume:
# pvcreate /dev/md1
# vgcreate vg-server1 /dev/md1
# lvcreate -L4g -nlv-home vg-server1
# lvcreate -L2g -nlv-var vg-server1
# lvcreate -L1g -nlv-tmp vg-server1
You'll get confirmations for every command, and you can use vgdisplay and lvdisplay to see the fruits of your labors. Use vgdisplay to see how much space is left.
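The excerpt stops here; the usual next step (not shown in the original) would be to put filesystems on the new logical volumes, for example with ext3 as used elsewhere on this page:
# mkfs.ext3 /dev/vg-server1/lv-home
# mkfs.ext3 /dev/vg-server1/lv-var
# mkfs.ext3 /dev/vg-server1/lv-tmp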
I use the MD (multiple device) driver to mirror the boot devices on the Linux servers I support. When I first started using MD, the mdadm utility was not available to manage and monitor MD devices. Since disk failures are relatively common in large shops, I used the shell script from my SysAdmin article Monitoring and Managing Linux Software RAID to send E-mail when a device entered the failed state. While reading through the mdadm(8) manual page, I came across the "--monitor" and "--mail" options. These options can be used to monitor the operational state of the MD devices in a server, and generate E-mail notifications if a problem is detected. E-mail notification support can be enabled by running mdadm with the "--monitor" option to monitor devices, the "--daemonise" option to create a daemon process, and the "--mail" option to generate E-mail:
$ /sbin/mdadm --monitor --scan --daemonise --mail=root@localhost
Once mdadm is daemonized, an E-mail similar to the following will be sent each time a failure is detected:
From: mdadm monitoring
To: [email protected]
Subject: Fail event on /dev/md1:biscuit
This is an automatically generated mail message from mdadm running on biscuit
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
I digs me some mdadm!
While attempting to create a 2-way LVM mirror this weekend on my Fedora Core 5 workstation, I received the following error:
$ lvcreate -L1024 -m 1 vgdata
Not enough PVs with free space available for parallel allocation. Consider --alloc anywhere if desperate.
Since the two devices were initialized specifically for this purpose and contained no other data, I was confused by this error message. After scouring Google for answers, I found a post that indicated that I needed a log LV for this to work, and the log LV had to be on its own disk. I am not sure about most people, but who on earth orders a box with three disks? Ugh!
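For readers hitting the same error: one commonly suggested workaround (an assumption here, not something the original post confirms) is to keep the mirror log in memory instead of on a third disk, at the cost of a full resync after every activation:
$ lvcreate -L1024 -m 1 --mirrorlog core vgdata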
Posted by matty, filed under Linux LVM. Date: May 3, 2006, 9:50 pm | 2 Comments
- From: "Wayne Pascoe" <lists-june2004 penguinpowered org>
- To: linux-lvm redhat com
- Subject: [linux-lvm] Raid 0+1
- Date: Wed, 21 Jul 2004 13:22:53 +0100 (BST)
Hi all,
I am working on a project to evaluate LVM2 against Veritas Volume Manager for a new Linux deployment. I am trying to get a Raid 0+1 solution working and I'm struggling. So far, this is where I am:
1. I created 8GB partitions on 4 disks, sdb, sdc, sdd and sde, and set their partition types to 8e with fdisk.
2. I then ran vgscan, followed by pvcreate /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1.
3. Next, I created 2 volume groups as follows: vgcreate StripedData1 /dev/sdb1 /dev/sdc1 and vgcreate StripedData2 /dev/sdd1 /dev/sde1
4. Next, I created 2 volumes, one in each group as follows: lvcreate -i 2 -I 64 -n Data1 -L 6G StripedData1 and lvcreate -i 2 -I 64 -n Data2 -L 6G StripedData2
Now I have 2 striped volumes, but no redundancy. This is where I think things start to go wrong.
5. I now create a raid device, /dev/md0, consisting of these two volumes. I run mkraid on this, create a file system, and mount it on /Data1. This all works fine, and I have a 6GB filesystem on /Data1.
Now I need to be able to resize this whole solution, and I'm not sure if the way I've built it caters for what I need to do... I unmount /Data1 and use lvextend to extend the 2 volumes from 6GB to 7.5GB. This succeeds. Now even though both of the volumes that make up /dev/md0 are extended, I cannot resize /dev/md0 using resize2fs /dev/md0.
Can anyone advise me how I can achieve what I'm looking for here? I'm guessing maybe I did things the wrong way around, but I can't find a solution that will give me both striping and mirroring :(
Thanks in advance,
-- Wayne Pascoe
- Introduction
- 1. Latest Version
- 2. Disclaimer
- 3. Contributors
- 1. What is LVM?
- 2. What is Logical Volume Management?
- 2.1. Why would I want it?
- 2.2. Benefits of Logical Volume Management on a Small System
- 2.3. Benefits of Logical Volume Management on a Large System
- 3. Anatomy of LVM
- 3.1. volume group (VG)
- 3.2. physical volume (PV)
- 3.3. logical volume (LV)
- 3.4. physical extent (PE)
- 3.7. mapping modes (linear/striped)
- 3.8. Snapshots
- 4. Frequently Asked Questions
- 4.1. LVM 2 FAQ
- 4.2. LVM 1 FAQ
- 5. Acquiring LVM
- 5.1. Download the source
- 5.2. Download the development source via CVS
- 5.3. Before You Begin
- 5.4. Initial Setup
- 5.5. Checking Out Source Code
- 5.6. Code Updates
- 5.7. Starting a Project
- 5.8. Hacking the Code
- 5.9. Conflicts
- 6. Building the kernel modules
- 6.1. Building the device-mapper module
- 6.2. Build the LVM 1 kernel module
- 7. LVM 1 Boot time scripts
- 7.1. Caldera
- 7.2. Debian
- 7.3. Mandrake
- 7.4. Redhat
- 7.5. Slackware
- 7.6. SuSE
- 8. LVM 2 Boot Time Scripts
- 9. Building LVM from the Source
- 9.1. Make LVM library and tools
- 9.2. Install LVM library and tools
- 9.3. Removing LVM library and tools
- 10. Transitioning from previous versions of LVM to LVM 1.0.8
- 10.1. Upgrading to LVM 1.0.8 with a non-LVM root partition
- 10.2. Upgrading to LVM 1.0.8 with an LVM root partition and initrd
- 11. Common Tasks
- 11.1. Initializing disks or disk partitions
- 11.2. Creating a volume group
- 11.3. Activating a volume group
- 11.4. Removing a volume group
- 11.5. Adding physical volumes to a volume group
- 11.6. Removing physical volumes from a volume group
- 11.7. Creating a logical volume
- 11.8. Removing a logical volume
- 11.9. Extending a logical volume
- 11.10. Reducing a logical volume
- 11.11. Migrating data off of a physical volume
- 12. Disk partitioning
- 12.1. Multiple partitions on the same disk
- 12.2. Sun disk labels
- 13. Recipes
- 13.1. Setting up LVM on three SCSI disks
- 13.2. Setting up LVM on three SCSI disks with striping
- 13.3. Add a new disk to a multi-disk SCSI system
- 13.4. Taking a Backup Using Snapshots
- 13.5. Removing an Old Disk
- 13.6. Moving a volume group to another system
- 13.7. Splitting a volume group
- 13.8. Converting a root filesystem to LVM 1
- 13.9. Recover physical volume metadata
- A. Dangerous Operations
- A.1. Restoring the VG UUIDs using uuid_fixer
- A.2. Sharing LVM volumes
- B. Reporting Errors and Bugs
- C. Contact and Links
- C.1. Mail lists
- C.2. Links
- D. GNU Free Documentation License
- D.1. PREAMBLE
- D.2. APPLICABILITY AND DEFINITIONS
- D.3. VERBATIM COPYING
- D.4. COPYING IN QUANTITY
- D.5. MODIFICATIONS
- D.6. COMBINING DOCUMENTS
- D.7. COLLECTIONS OF DOCUMENTS
- D.8. AGGREGATION WITH INDEPENDENT WORKS
- D.9. TRANSLATION
- D.10. TERMINATION
- D.11. FUTURE REVISIONS OF THIS LICENSE
- D.12. ADDENDUM: How to use this License for your documents
1. LVM Basic relationships. A quick run-down on how the different parts are related
Physical volume - This consists of one, or many, partitions (or physical extent groups) on a physical drive.
Volume group - This is composed of one or more physical volumes and contains one or more logical volumes.
Logical volume - This is contained within a volume group.
2. LVM creation commands (These commands are used to initialize, or create, new logical objects) - Note that we have yet to explore these fully, as they can be used to do much more than we've demonstrated so far in our simple setup.
pvcreate - Used to create physical volumes.
vgcreate - Used to create volume groups.
lvcreate - Used to create logical volumes.
3. LVM monitoring and display commands (These commands are used to discover, and display the properties of, existing logical objects). Note that some of these commands include cross-referenced information. For instance, pvdisplay includes information about volume groups associated with the physical volume.
pvscan - Used to scan the OS for physical volumes.
vgscan - Used to scan the OS for volume groups.
lvscan - Used to scan the OS for logical volumes.
pvdisplay - Used to display information about physical volumes.
vgdisplay - Used to display information about volume groups.
lvdisplay - Used to display information about logical volumes.
4. LVM destruction or removal commands (These commands are used to ensure that logical objects are not allocable anymore and/or remove them entirely). Note, again, that we haven't fully explored the possibilities with these commands either. The "change" commands in particular are good for a lot more than just prepping a logical object for destruction.
pvchange - Used to change the status of a physical volume.
vgchange - Used to change the status of a volume group.
lvchange - Used to change the status of a logical volume.
pvremove - Used to wipe the disk label of a physical drive so that LVM does not recognize it as a physical volume.
vgremove - Used to remove a volume group.
lvremove - Used to remove a logical volume.
5. Manipulation commands (These commands allow you to play around with your existing logical objects. We haven't posted on "any" of these commands yet - Some of them can be extremely dangerous to goof with for no reason)
pvextend - Used to add physical devices (or partition(s) of same) to a physical volume.
pvreduce - Used to remove physical devices (or partition(s) of same) from a physical volume.
vgextend - Used to add new physical disk (or partition(s) of same) to a volume group.
vgreduce - Used to remove physical disk (or partition(s) of same) from a volume group.
lvextend - Used to increase the size of a logical volume.
lvreduce - Used to decrease the size of a logical volume.
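To tie the command families together, here is a minimal end-to-end sketch (device, volume-group, and mount-point names are assumptions; the same pattern is walked through in detail in the recipes later on this page):
pvcreate /dev/sdb1
vgcreate vg-example /dev/sdb1
lvcreate -L 10G -n lv-data vg-example
mkfs.ext3 /dev/vg-example/lv-data
mkdir /mnt/data
mount /dev/vg-example/lv-data /mnt/data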
Linux Home Networking
Determine The Partition Types
Each partition used for LVM has to be of type 8e (Linux LVM). You can verify this with the fdisk -l command. Here is an example using /dev/hde that shows your target partitions are of the incorrect type.
sh-2.05b# fdisk -l /dev/hde

Disk /dev/hde: 4311 MB, 4311982080 bytes
16 heads, 63 sectors/track, 8355 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hde1             1      4088   2060320+  fd  Linux raid autodetect
/dev/hde2          4089      5713    819000   83  Linux
/dev/hde3          5714      6607    450576   83  Linux
/dev/hde4          6608      8355    880992    5  Extended
/dev/hde5          6608      7500    450040+  83  Linux
sh-2.05b#

Start FDISK
You can change the partition type using fdisk with the disk name as its argument. Use it to modify both partitions /dev/hde5 and /dev/hdf1. The fdisk examples that follow are for /dev/hde5; repeat them for /dev/hdf1.
sh-2.05b# fdisk /dev/hde

The number of cylinders for this disk is set to 8355.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):

Set The ID Type To 8e
You now need to set the partition types to the LVM value of 8e. Partitions /dev/hde5 and /dev/hdf1 are the fifth and sixth partitions on disk /dev/hde. Modify their type using the t command, and then specify the partition number and type code. You can also use the L command to get a full listing of ID types in case you forget.
Command (m for help): t
Partition number (1-6): 5
Hex code (type L to list codes): 8e
Changed system type of partition 5 to 8e (Linux LVM)

Command (m for help): t
Partition number (1-6): 6
Hex code (type L to list codes): 8e
Changed system type of partition 6 to 8e (Linux LVM)

Command (m for help):

Make Sure The Change Occurred
Use the p command to get the new proposed partition table.
Command (m for help): p

Disk /dev/hde: 4311 MB, 4311982080 bytes
16 heads, 63 sectors/track, 8355 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hde1             1      4088   2060320+  fd  Linux raid autodetect
/dev/hde2          4089      5713    819000   83  Linux
/dev/hde3          5714      6607    450576   83  Linux
/dev/hde4          6608      8355    880992    5  Extended
/dev/hde5          6608      7500    450040+  8e  Linux LVM

Command (m for help):

Save The Partition Changes
Use the w command to permanently save the changes to disk /dev/hde.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
sh-2.05b#

The error above will occur if any of the other partitions on the disk is mounted. This shouldn't be grave, as you are already in single user mode, in which most of the system's processes that would be accessing the partition have been shut down.
Define Each Physical Volume
After modifying the partition tables of /dev/hde and /dev/hdf, initialize the target partitions with the pvcreate command. This wipes out all the data on them in preparation for the next step. If you haven't backed up your data yet, do it now!
sh-2.05b# pvcreate /dev/hde5 /dev/hdf1
pvcreate -- physical volume "/dev/hde5" successfully created
pvcreate -- physical volume "/dev/hdf1" successfully created
sh-2.05b#

Create A Volume Group For the PVs
Use the vgcreate command to combine the two physical volumes into a single unit called a volume group. The LVM software effectively tricks the operating system into thinking the volume group is a new hard disk. In the example, the volume group is called lvm-hde.
sh-2.05b# vgcreate lvm-hde /dev/hdf1 /dev/hde5
Volume group "lvm-hde" successfully created
sh-2.05b#

Therefore, the vgcreate syntax uses the name of the volume group as the first argument followed by the partitions that it will be comprised of as all subsequent arguments.
Run VGscan
The next step is to verify that Linux can find your new LVM disk partitions. To do this, use the vgscan command.
sh-2.05b# vgscan
vgscan -- reading all physical volumes (this may take a while...)
Found volume group "lvm-hde" using metadata type lvm2
sh-2.05b#

Create A Logical Volume From The Volume Group
Now you're ready to partition the volume group into logical volumes with the lvcreate command. Like hard disks, which are divided into blocks of data, logical volumes are divided into units called physical extents (PEs).
You'll have to know the number of available PEs before creating the logical volume. This is done with the vgdisplay command using the new lvm-hde volume group as the argument.
sh-2.05b# vgdisplay lvm-hde
--- Volume group ---
VG Name               lvm-hde
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                2
Act PV                2
VG Size               848 MB
PE Size               4 MB
Total PE              212
Alloc PE / Size       0 / 0
Free  PE / Size       212 / 848 MB
VG UUID               W7bgLB-lAFW-wtKi-wZET-jDJF-8VYD-snUaSZ
sh-2.05b#

As you can see, 212 PEs are available as free. You can now use all 212 of them to create a logical volume named lvm0 from volume group lvm-hde.
sh-2.05b# lvcreate -l 212 lvm-hde -n lvm0
Logical volume "lvm0" created
sh-2.05b#

Note: You can also define percentages of the volume group to be used. The first example defines the use of 100% of the volume group's free space and the second example specifies using 50% of the total volume group.

sh-2.05b# lvcreate -l 100%FREE -n lvm0 lvm-hde
sh-2.05b# lvcreate -l 50%VG -n lvm0 lvm-hde

Format The Logical Volume
After the logical volume is created, you can format it as if it were a regular partition. In this case, use the -t switch to specify to the mkfs formatting program that you want a type ext3 partition.
sh-2.05b# mkfs -t ext3 /dev/lvm-hde/lvm0
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
108640 inodes, 217088 blocks
10854 blocks (5.00%) reserved for the super user
First data block=0
7 block groups
32768 blocks per group, 32768 fragments per group
15520 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
sh-2.05b#

Create A Mount Point
When you formatted the /dev/hde5 partition, you lost the /home directory. Now you have to recreate /home on which you'll later mount your new logical volume.
sh-2.05b# mkdir /home

Update The /etc/fstab File
The /etc/fstab file lists all the partitions that need to be automatically mounted when the system boots. This snippet configures the newly labeled partition to be mounted on the /home mount point.
/dev/lvm-hde/lvm0    /home    ext3    defaults    1 2

The /dev/hde5 and /dev/hdf1 partitions are replaced by the combined lvm0 logical volume. You, therefore, don't want the old partitions to be mounted again. Make sure that any reference to them in this file has either been commented out with a # character at the beginning of each line or deleted entirely.
#/dev/hde5    /data1    ext3    defaults    1 2
#/dev/hdf1    /data2    ext3    defaults    1 2

Mount The Volume
The mount -a command reads the /etc/fstab file and mounts all the devices that haven't been mounted already. After mounting, test the volume by listing its directory contents. It should just contain the lost+found directory.

sh-2.05b# mount -a
sh-2.05b# ls /home
lost+found
sh-2.05b#

Restore Your Data
You can now restore your backed up data to /home.
Create New Snapshot
To create a snapshot of lvstuff use the lvcreate command like before but use the -s flag.
lvcreate -L512M -s -n lvstuffbackup /dev/vgpool/lvstuff
Here we created a logical volume with only 512 MB because the drive isn't being actively used. The 512 MB will store any new writes while we make our backup.
Mount New Snapshot
Just like before we need to create a mount point and mount the new snapshot so we can copy files from it.
mkdir /mnt/lvstuffbackup
mount /dev/vgpool/lvstuffbackup /mnt/lvstuffbackup
Copy Snapshot and Delete Logical Volume
All you have left to do is copy all of the files from /mnt/lvstuffbackup/ to an external hard drive or tar it up so it is all in one file.
Note: tar -c will create an archive and -f specifies the location and file name of the archive. For help with the tar command use man tar in the terminal.
tar -cf /home/rothgar/Backup/lvstuff-ss /mnt/lvstuffbackup/
Remember that while the backup is taking place all of the files that would be written to lvstuff are being tracked in the temporary logical volume we created earlier. Make sure you have enough free space while the backup is happening.
Once the backup finishes, unmount the volume and remove the temporary snapshot.
umount /mnt/lvstuffbackup
lvremove /dev/vgpool/lvstuffbackup

Deleting a Logical Volume
To delete a logical volume you need to first make sure the volume is unmounted, and then you can use lvremove to delete it. You can also remove a volume group once the logical volumes have been deleted and a physical volume after the volume group is deleted.
Here are all the commands using the volumes and groups we've created.
umount /mnt/lvstuff
lvremove /dev/vgpool/lvstuff
vgremove vgpool
pvremove /dev/sdb1 /dev/sdc1

That should cover most of what you need to know to use LVM. If you've got some experience on the topic, be sure to share your wisdom in the comments.
In this example hda1, hda2, and hda3 are all partitions on the same drive. We'll initialize hda3 as a physical volume:
root@lappy:~# pvcreate /dev/hda3

If you wanted to combine several disks, or partitions, you could do the same for those:

root@lappy:~# pvcreate /dev/hdb
root@lappy:~# pvcreate /dev/hdc

Once we've initialised the partitions, or drives, we will create a volume group which is built up of them:

root@lappy:~# vgcreate skx-vol /dev/hda3

Here "skx-vol" is the name of the volume group. (If you wanted to create a single volume spanning two disks you'd run "vgcreate skx-vol /dev/hdb /dev/hdc".)
If you've done this correctly you'll be able to see it included in the output of vgscan:
root@lappy:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "skx-vol" using metadata type lvm2

Now that we have a volume group (called skx-vol) we can actually start using it.
Working with logical volumes
What we really want to do is create logical volumes which we can mount and actually use. In the future if we run out of space on this volume we can resize it to gain more storage. Depending on the filesystem you've chosen you can even do this on the fly!
For test purposes we'll create a small volume with the name 'test':
root@lappy:~# lvcreate -n test --size 1g skx-vol
  Logical volume "test" created

This command creates a volume of size 1Gb with the name test hosted on the LVM volume group skx-vol.
The logical volume will now be accessible via /dev/skx-vol/test, and may be formatted and mounted just like any other partition:
root@lappy:~# mkfs.ext3 /dev/skx-vol/test
root@lappy:~# mkdir /home/test
root@lappy:~# mount /dev/skx-vol/test /home/test
Adapted from the LVM How-to page from The Linux Documentation Project website.

LVM (Logical Volume Management) Overview:
... ... ...
Extents:
When creating a volume group from one or more physical volumes, you must specify the size of the "extents" of each of the physical volumes that make up the VG. Each extent is a single contiguous chunk of disk space, typically 4M in size, but can range from 8K to 16G in powers of 2 only. (Extents are analogous to disk blocks or clusters.) The significance of this is that the sizes of logical volumes are specified as a number of extents. Logical volumes can thus grow and shrink in increments of the extent size. A volume group's extent size cannot be changed after it is set.
The system internally numbers the extents for both logical and physical volumes. These are called logical extents (or LEs) and physical extents (or PEs), respectively. When a logical volume is created a mapping is defined between logical extents (which are logically numbered sequentially starting at zero) and physical extents (which are also numbered sequentially).
To provide acceptable performance the extent size must be a multiple of the actual disk cluster size (i.e., the size of the smallest chunk of data that can be accessed in a single disk I/O operation). In addition some applications (such as Oracle database) have performance that is very sensitive to the extent size. So setting this correctly also depends on what the storage will be used for, and is considered part of the system administrator's job of tuning the system.
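For example, a hedged sketch of setting a non-default extent size at volume group creation time (the 16M value is only an illustration; device names follow the example used later in this text):
vgcreate -s 16M VG1 /dev/hda1 /dev/hdb1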
... ... ...
Linear and Striped Mapping:
Let's suppose we have a volume group called VG1, and this volume group has a physical extent size of 4M. Suppose too this volume group is composed of one disk partition /dev/hda1 and one whole disk /dev/hdb. These will become physical volumes PV1 and PV2 (more meaningful names for a particular scenario can be given if desired).

The PVs are different sizes and we get 99 (4M) extents in PV1 and 248 extents in PV2, for a total of 347 extents in VG1. Now any number of LVs of any size can be created from the VG, as long as the total number of extents of all LVs sums to no more than 347. To make the LVs appear the same as regular disk partitions to the filesystem software, the logical extents are numbered sequentially within the LV. However some of these LEs may be stored in the PEs on PV1 and others on PV2. For instance LE[1] of some LV in VG1 could map onto PE[51] of PV1, and thus data written to the first 4M of the LV is in fact written to the 51st extent of PV1.

When creating LVs an administrator can choose between two general strategies for mapping logical extents onto physical extents:
- Linear mapping will assign a range of PEs to an area of an LV in order (e.g., LE 1–99 map to PV1's PEs, and LE 100–347 map onto PV2's PEs).
- Striped mapping will interleave the disk blocks of the logical extents across a number of physical volumes. You can decide the number of PVs to stripe across (the stripe set size), as well as the size of each stripe.
When using striped mapping, all PVs in the same stripe set need to be the same size. So in our example the LV can be no more than 198 (99 + 99) extents in size. The remaining extents in PV2 can be used for some other LVs, using linear mapping.
The size of the stripes is independent of the extent size, but must be a power of 2 between 4K and 512K. (This value n is specified as a power of 2 in this formula: (2^n) × 1024 bytes, where 2 ≤ n ≤ 9.) The stripe size should also be a multiple of the disk sector size, and finally the extent size should be a multiple of this stripe size. If you don't do this, you will end up with fragmented extents (as the last bit of space in the extent will be unusable).

Tables 2 and 3 below illustrate the differences between linear and striped mapping. Suppose you use a stripe size of 4K, an extent size of 12K, and a stripe set of 3 PVs (PVa, PVb, and PVc), each of which is 100 extents. Then the mapping for an LV (whose extents we'll call LV1, LV2, ...) to PVs (whose extents we'll call PVa1, PVa2, ..., PVb1, PVb2, ..., PVc1, PVc2, ...) might look something like the following. (In these tables the notation means volume_name extent_number . stripe_number):
Example of Linear Mapping (Logical Extents → Physical Extents):
LV1 → PVa1
LV2 → PVa2
LV3 → PVa3
LV4 → PVa4
... → ...
LV99 → PVa99
LV100 → PVb1
LV101 → PVb2
... → ...
LV199 → PVb99
LV200 → PVc1
LV201 → PVc2
... → ...

Example of Striped Mapping (Logical Extents → Physical Extents):
LV1.1 → PVa1.1
LV1.2 → PVb1.1
LV1.3 → PVc1.1
LV2.1 → PVa1.2
LV2.2 → PVb1.2
LV2.3 → PVc1.2
LV3.1 → PVa1.3
LV3.2 → PVb1.3
LV3.3 → PVc1.3
LV4.1 → PVa2.1
LV4.2 → PVb2.1
LV4.3 → PVc2.1
... → ...

Tables 2 and 3: Linear versus Striped Mapping
In certain situations striping can improve the performance of the logical volume but it can be complex to manage. However note that striped mapping is useless and will in fact hurt performance, unless the PVs used in the stripe set are from different disks, preferably using different controllers.
(In version 1 of LVM, LVs created using striping cannot be extended past the PVs on which they were originally created. In the current version (LVM 2) striped LVs can be extended by concatenating another set of devices onto the end of the first set. However this could lead to a situation where (for example) a single LV ends up as a 2 stripe set, concatenated with a linear (non-striped) set, and further concatenated with a 4 stripe set!)
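As a concrete illustration, a striped LV is created with the -i (number of stripes) and -I (stripe size in KB) options to lvcreate; a hedged sketch reusing the VG1 example above (the 500M size and volume name are assumptions):
lvcreate -i 2 -I 64 -L 500M -n striped_lv VG1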
Snapshots:
A wonderful facility provided by LVM is a snapshot. This allows an administrator to create a new logical volume which is an exact copy of an existing logical volume (called the original), frozen at some point in time. This copy is read-only. Typically this would be used when (for instance) a backup needs to be performed on the logical volume but you don't want to halt a live system that is changing the data. When done with the snapshot the system administrator can just unmount it and then remove it. This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state, but the time the original LV must be off-line is much less than a normal backup would take to complete.
In addition the copy typically only needs about 20% or less of the disk space of the original. Essentially, when the snapshot is made nothing is copied. However as the original changes, the updated disk blocks are first copied to the snapshot disk area before being written with the changes. The more changes are made to the original, the more disk space the snapshot will need.
When creating logical volumes to be used for snapshots, you must specify the chunk size. This is the size of the data block copied from the original to the snapshot volume. For good performance this should be set to the size of the data blocks written by the applications using the original volume. While this chunk size is independent of both the extent size and the stripe size (if striping is used), it is likely that the disk block (or cluster or page) size, the stripe size, and the chunk size should all be the same. Note the chunk size must be a power of 2 (like the stripe size), between 4K and 1M. (The extent size should be a multiple of this size.)
You should remove snapshot volumes as soon as you are finished with them, because they take a copy of all data written to the original volume and this can hurt performance. In addition, if the snapshot volume fills up errors will occur.
LVM Administration - Commands and Procedures:
The lvm command permits the administrator to perform all LVM operations using this one interactive command, which includes built-in help and will remember command line arguments used from previous commands for the current command. However each LVM command is also available as a stand-alone command (that can be scripted). These are discussed briefly below, organized by task. See the man page for the commands (or use the built-in help of lvm) for complete details.

... ... ...
Format Physical Volumes (PVs)
To initialize a disk or disk partition as a physical volume you just run the "pvcreate" command on the whole disk. For example:

pvcreate /dev/hdb

This creates a volume group descriptor at the start of the second IDE disk. You can initialize several disks and/or partitions at once. Just list all the disks and partitions on the command line you wish to format as PVs.

Sometimes this procedure may not work correctly, depending on how the disk (or partition) was previously formatted. If you get an error that LVM can't initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one! Once you have confirmed that /dev/hdb is the disk you really want to reformat, run the following dd command to erase the old partition table:

# Warning DANGEROUS!
# The following commands will destroy the partition table on the
# disk being operated on. Be very sure it is the correct disk!
dd if=/dev/zero of=/dev/hdb bs=1k count=1
blockdev --rereadpt /dev/hdb

For partitions run "pvcreate" on the partition:

pvcreate /dev/hdb1

This creates a volume group descriptor at the start of the /dev/hdb1 partition. (Note that if using LVM version 1 on PCs with DOS partitions, you must first set the partition type to "0x8e" using fdisk or some other similar program.)

Create Volume Groups (VGs)
Use the "vgcreate" program to group selected PVs into VGs, and to optionally set the extent size (the default is 4MB). The following command creates a volume group named "VG1" from two disk partitions from different disks:

vgcreate VG1 /dev/hda1 /dev/hdb1

Modern systems may use "devfs" or some similar system, which creates symlinks in /dev for detected disks. With such systems names like "/dev/hda1" are actually the symlinks to the real names. You can use either the symlink or the real name in the LVM commands; however the older version of LVM demanded you use the real device names, such as /dev/ide/host0/bus0/target0/lun0/part1 and /dev/ide/host0/bus0/target1/lun0/part1.

You can also specify the extent size with this command using the "-s size" option, if the 4Mb default is not what you want. The size is a value followed by one of k (for kilobytes), m (megabytes), g (gigabytes), or t (terabytes). In addition you can put some limits on the number of physical or logical volumes the volume can have. You may want to change the extent size for performance, administrative convenience, or to support very large logical volumes. (Note there may be kernel limits and/or application limits on the size of LVs and files on your system. For example the Linux 2.4 kernel has a max size of 2TB.)

The "vgcreate" command adds some information to the headers of the included PVs. However the kernel modules needed to use the VGs as disks aren't loaded yet, and thus the kernel doesn't "see" the VGs you created. To make the VGs visible you must activate them. Only active volume groups are subject to changes and allow access to their logical volumes.

To activate a single volume group VG1, use the command:

vgchange -a y /dev/VG1

("-a" is the same as "--available".) To activate all volume groups on the system use:

vgchange -a y

... ... ...
Create and Use a Snapshot
To create a snapshot of some existing LV, a form of the lvcreate command is used:

root# lvcreate size option -s -n name existing_LV

where size is as discussed previously, "-s" (or "--snapshot") indicates a snapshot LV, and "-n name" (or "--name name") says to call the snapshot LV name. The only option allowed is "-c chunk_size" (or "--chunksize chunk_size"), where chunk_size is specified as a power of 2 in this formula: (2^chunk_size) × 1024 bytes, where 2 ≤ chunk_size ≤ 10.

Suppose you have a volume group VG1 with a logical volume LV1 you wish to back up using a snapshot. You can estimate the time the backup will take, and the amount of disk writes that will take place during that time (plus a generous fudge factor), say 300MB. Then you would run the command:

root# lvcreate -L 300M -s -n backup LV1

to create a snapshot logical volume named /dev/VG1/backup which has read-only access to the contents of the original logical volume named /dev/VG1/LV1 at the point in time the snapshot was created. Assuming the original logical volume contains a file system, you now mount the snapshot logical volume on some (empty) directory, then back up the mounted snapshot while the original filesystem continues to get updated. When finished, unmount the snapshot and delete it (or it will continue to grow as LV1 changes, and eventually run out of space).

Note: If the snapshot is of an XFS filesystem, the xfs_freeze command should be used to quiesce the filesystem before creating the snapshot (if the filesystem is mounted):

/root# xfs_freeze -f /mnt/point
/root# lvcreate -L 300M -s -n backup /dev/VG1/LV1
/root# xfs_freeze -u /mnt/point

Warning: Full snapshots are automatically disabled.

Now create a mount-point (an empty directory) and mount the volume:

/root# mkdir /mnt/dbbackup
/root# mount /dev/VG1/backup /mnt/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only

If you are using XFS as the filesystem you will need to add the "nouuid" option to the mount command as follows:

/root# mount /dev/VG1/backup /mnt/dbbackup -o nouuid,ro

Do the backup, say by using tar to some "DDS4" or "DAT" tape backup device:

/root# tar -cf /dev/rmt0 /mnt/dbbackup
tar: Removing leading `/' from member names

When the backup has finished you unmount the volume and remove it from the system:

root# umount /mnt/dbbackup
root# lvremove /dev/VG1/backup
lvremove -- do you really want to remove "/dev/VG1/backup"? [y/n]: y
lvremove -- doing automatic backup of volume group "VG1"
lvremove -- logical volume "/dev/VG1/backup" successfully removed

Examining LVM Information
To see information about some VG use:
vgdisplay some_volume_group
vgs some_volume_group

To see information about some PV use the command:

pvdisplay some_disk_or_partition   # e.g., /dev/hda1
pvs some_disk_or_partition

To see information about some LV use:

lvdisplay some-logical-volume
lvs some-logical-volume

The man pages for these commands provide further details.
Grow VGs, LVs, and Filesystems
To grow a filesystem, you must install a new hard disk (unless you have free space available), format it as a PV, add that PV to your VG, then add the space to your LV, and finally use the filesystem tools to grow it. (Not all filesystems allow, or come with tools for, growing and shrinking!)
VGs are resizable (spelled in Linux as "resizeable") by adding or removing PVs from them. However by default they are created as fixed in size. To mark a VG as resizable use the command:
root# vgchange -x y   # or --resizeable y

Once this is done add a PV (say "hdb2") to some VG (say "VG1") with the command:

root# vgextend VG1 /dev/hdb2

Next, extend an LV with the "lvextend" command. This command works almost the same as the "lvcreate" command, but with a few different options. When specifying how much to increase the size of the LV, you can either specify how much to grow the LV with "+size" or you can specify the new (absolute) size (by omitting the plus sign). So to extend the LV "LV1" on VG "VG1" by 2GB, use:

root# lvextend -L +2G /dev/VG1/LV1

You could also use:

root# lvresize -L +2G /dev/VG1/LV1

It would be a good idea to use the same mapping as the original LV, or you will have strange performance issues! Also note this command can be used to extend a snapshot volume if necessary.
After you have extended the logical volume the last step is to increase the file system size. How you do this depends on the file system you are using. Most filesystem types come with their own utilities to grow/shrink filesystems, if they allow that. These utilities usually grow to fill the entire partition or LV, so there is no need to specify the filesystem size.
Some common filesystem utilities are (assume we are expanding the /home filesystem in LV1 on VG1):

- EXT2/3 filesystems must be unmounted before they can be resized. The commands to use are:

root# umount /home                # /home is the mount point for /dev/VG1/LV1
root# fsck -f /home               # required!
root# resize2fs /dev/VG1/LV1      # grow FS to fill LV1
root# mount /home

- ... ... ...
Shrink VGs, LVs, and Filesystems
To shrink a filesystem, you perform the same steps for growing one but in reverse order. You first shrink the filesystem, then remove the space from the LV (and put it back into the VG). Other LVs in the same VG can now use that space. To use it in another VG, you must remove the corresponding PV from the one VG and add it to the other VG.
To shrink an LV you must first shrink the filesystem in that LV. This can be done with resize2fs for EXT2/3, or resize_reiserfs for ReiserFS (shrinking must be done off-line, with the filesystem unmounted). There are similar tools for other filesystem types. Here's an example of shrinking /home by 1 GB:

# df
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda1               145M   16M  122M  12% /boot
/dev/mapper/vg01-lv01    49G  3.7G   42G   9% /home
...
# umount /home
# fsck -f /home    # required!
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/home: 32503/6406144 files (0.3% non-contiguous), 1160448/12845056 blocks
# resize2fs -p /dev/vg01/lv01 48G
resize2fs 1.38 (30-Jun-2005)
Resizing the filesystem on /dev/vg01/lv01 to 12799788 (4k) blocks.
Begin pass 3 (max = 9)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/vg01/lv01 is now 12799788 blocks long.

Currently XFS and JFS filesystem types do not support shrinking. If a newer version of these filesystems adds support for it, the corresponding resize utilities may be updated as well (and if not, a new tool may be released). For such filesystems you can resize them the hard way: back up the data using some archive tool (e.g., cpio, tar, star, or copy the data to some other disk), delete the filesystem in the LV, shrink the LV, recreate the new (smaller) filesystem, and finally restore the data.

Once the filesystem has been shrunk it is time to shrink the logical volume. You can use either the lvreduce command or the lvresize command. Continuing from the example above:

# lvresize -L -1G /dev/vg01/lv01
Rounding up size to full physical extent 96.00 MB
WARNING: Reducing active logical volume to 48 GB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv01? [y/n]: y
Reducing logical volume lv01 to 48 GB
Logical volume lv01 successfully resized
# mount /home
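Newer LVM2 releases can do the filesystem shrink and the LV shrink in one step through the -r (--resizefs) option of lvreduce/lvresize, which calls the filesystem resize tools for you. A minimal sketch, assuming the same EXT3 /home on /dev/vg01/lv01 as above:

# umount /home
# lvreduce -r -L -1G /dev/vg01/lv01    # runs fsck and resize2fs first, then reduces the LV
# mount /home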
To shrink a VG (say "VG1"), a PV (say "hdc") can be removed from it if none of that PV's extents (the PEs) are in use by any LV. Run the command:

root# vgreduce VG1 /dev/hdc

You might want to do this to upgrade or replace a worn-out disk. If the PV is in use by some LV, you must first migrate the data to another available PV within the same VG. To move all the data from a PV (say "hdb2") to any unused, large enough PV within that VG, use the command:

root# pvmove /dev/hdb2
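A common reason to combine pvmove and vgreduce is replacing a worn-out disk. A minimal sketch, assuming /dev/hdb2 is the old PV in VG1 and /dev/sdd1 is a hypothetical replacement disk:

root# pvcreate /dev/sdd1           # prepare the new disk as a PV
root# vgextend VG1 /dev/sdd1       # add it to the volume group
root# pvmove /dev/hdb2 /dev/sdd1   # migrate all extents off the old PV
root# vgreduce VG1 /dev/hdb2       # drop the old PV from the VG
root# pvremove /dev/hdb2           # wipe the LVM label from the old disk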
Delete LVs and VGs

A logical volume (say "LV3" on the volume group "VG2") must be unmounted before it can be removed. The steps for this are simple:

root# umount /dev/VG2/LV3
root# lvremove /dev/VG2/LV3

Before a volume group (say "VG2") is removed you must first deactivate it. This is done with the command:

root# vgchange -a n VG2

Now the VG can be removed. This of course will destroy all LVs within it. The various PVs that made up that VG can then be re-assigned to some other VGs. Remove a (non-active) volume group with:

root# vgremove VG2

Summary and Examples
In the following examples assume that LVM2 is installed and up to date, and the boot scripts have been modified already if needed. The first example includes some commentary and some command output; the second is much shorter but uses the long option names just for fun.
Home Directory Example

In this example we will create a logical volume to hold the "/home" partition for a multi-media development system. The system will use a standard EXT3 filesystem of 60 GB, built using three 25 GB SCSI disks (and no hardware RAID). Since multi-media uses large files it makes sense to use stripe mapping and read-ahead. We will call the volume group "vg1" and the logical volume "home":
- Initialize the disks as PVs:
/root# pvcreate /dev/sda /dev/sdb /dev/sdc

- Create a Volume Group, then check its size:

/root# vgcreate vg1 /dev/sda /dev/sdb /dev/sdc
/root# vgdisplay
vgdisplay
--- Volume Group ---
VG Name               vg1
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                3
Act PV                3
VG Size               73.45 GB
PE Size               4 MB
Total PE              18803
Alloc PE / Size       0 / 0
Free  PE / Size       18803 / 73.45 GB
VG UUID               nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

- Create a 60 GB logical volume, with a stripe set of 3 PVs and a stripe size of 4 (which means 2^4 KB = 16 KB; note that with current LVM2 the -I option takes the stripe size in KB directly, so the equivalent would be -I 16):
/root# lvcreate -i 3 -I 4 -L 60G -n home vg1
lvcreate -- rounding 62614560 KB to stripe boundary size 62614560 KB / 18803 PE
lvcreate -- doing automatic backup of "vg1"
lvcreate -- logical volume "/dev/vg1/home" successfully created

- Create an EXT3 filesystem in the new LV:
/root# mkfs -t ext3 /dev/vg1/home

- Test the new FS:

/root# mount /dev/vg1/home /mnt
/root# df | grep /mnt
/root# umount /dev/vg1/home

- Update /etc/fstab with the revised entry for /home (see the sketch after this list).

- Finally, don't forget to update the system journal.
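A minimal sketch of what the revised /etc/fstab line might look like (the mount options and fsck ordering fields are illustrative only, not a recommendation):

/dev/vg1/home   /home   ext3   defaults   1 2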
Oracle Database Example
In this example we will create 2 LVs for an Oracle database. Oracle manages its own striping and read-ahead/caching, so we won't use these LVM features. However using hardware RAID is useful, so we will use two RAID 10 disks, hdb and hdc. The tables will use one logical volume called "tables" on one disk and the indexes and control files will be on a second LV called "indexes" on the other disk. Both LVs will exist in the VG called "db". Both filesystems will be XFS, for good performance with large database files:

/root# pvcreate /dev/hdb /dev/hdc
/root# vgcreate db /dev/hdb /dev/hdc
/root# lvcreate --size 200G --name tables db
/root# lvcreate --size 200G --name indexes db
/root# mkfs -t xfs /dev/db/tables
/root# mkfs -t xfs /dev/db/indexes
/root# vi /etc/fstab
/root# vi ~/system-journal
Send comments and questions to [email protected]
Last updated by Wayne Pollock on 02/19/2015 06:01:18.
10-21-2010 | linuxquestions.org
Slowfamily
Hello,
I am very new to LVM, as well as not especially experienced at linux, and have some questions that I'm hoping are rather simple, but please let me know if I'm misunderstanding anything about how lvm works or if there's any guidance you can give me.
A few months back I set up a server running FC10 and tried creating Logical Groups during the initial setup. We've realized that we are not using all the available space on the physical drive, and I realized that for some reason (I'm thinking this might have been the default?), we initially created two Logical Groups (VolGroup00 and VolGroup01) and, it appears, two Logical Volumes in each (LogVol00 and LogVol01). LogVol00 in VolGroup00 is mapped to /, and the other Group was actually unused.
I figure that it would be simplest to just use all this space mapped to /, so I thought the thing to do would be to simply merge VolGroup01 to VolGroup00. I tried this:
[root@office mapper]# vgmerge VolGroup00 VolGroup01
Logical volumes in "VolGroup01" must be inactive

So after a bit of research, I tried this:
[root@office mapper]# vgchange -a n VolGroup01
Can't deactivate volume group "VolGroup01" with 1 open logical volume(s)

So apparently there's an open volume, but I don't know how to go about closing it. I removed LogVol00 from that group, but LogVol01 won't budge.
[root@office mapper]# lvremove VolGroup01
Can't remove open logical volume "LogVol01"

So how do I go about closing this volume? At one point, there was some output that told me LogVol01 was being used as swap space. How do I handle that?
Thanks in advance!
valen_tino
You cannot delete LVs when they are active or make any VG changes with active LVs.
Here are the high-level steps of what you could do after taking a backup of your data:

1. Disable and remove swap.
2. Unmount and remove LV0 and LV1 from vg0 with umount/lvremove
3. Remove vg0 with vgremove
4. Unmount LV0 and LV1 from VG0 with umount
5. Extend VG0 with any available PVs if necessary
6. Mount LV0 and LV1 on VG0 with mount
7. Create and enable swap.
Note: You could also merge the VGs instead of step 3 above. Of course you will have to unmount volumes in VG0 prior to doing that.
Basically you are left with VG0 which contains LV0 and LV1 .......
....or you could simply backup your data, rebuild the server to your liking and restore the data!
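For reference, a minimal command-level sketch of the steps above, assuming the swap LV is /dev/VolGroup01/LogVol01, the root LV is /dev/VolGroup00/LogVol00 (EXT3), and the freed PV turns out to be /dev/sdb1 (all of these names should be verified with lvs/pvs first; the sizes are only illustrative):

# Stop using the swap LV, then delete its /etc/fstab entry by hand
swapoff /dev/VolGroup01/LogVol01

# Remove both LVs and the now-empty volume group
lvremove /dev/VolGroup01/LogVol00
lvremove /dev/VolGroup01/LogVol01
vgremove VolGroup01

# Hand the freed PV to VolGroup00, grow the root LV and its filesystem,
# then re-create swap as a new LV inside VolGroup00
vgextend VolGroup00 /dev/sdb1
lvextend -L +10G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00
lvcreate -L 2G -n swap VolGroup00
mkswap /dev/VolGroup00/swap
swapon /dev/VolGroup00/swap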
Expanding Linux Partitions with LVM (FedoraNEWS.ORG)

Sep 11, 2007 | IBM developerWorks

Volume management is not new in the -ix world (UNIX, AIX, and so forth). Logical volume management (LVM) has been around since Linux kernel 2.4 (LVM version 1) and 2.6.9 (LVM version 2). This article reveals the most useful features of LVM2, a relatively new userspace toolset that provides logical volume management facilities, and suggests several ways to simplify your system administration tasks.