Discussion:
[gentoo-user] grub and what happens exactly when booting.
Dale
2011-09-16 02:00:01 UTC
Permalink
OK. The Chief Idiot is going to experiment some. You ALL know what
this means right? Yep, I'm about to really make a mess of things so
here comes some questions. This is a result of the /usr and udev crap.
So, go to -dev and blame them, not me. ;-)

OK. I have three drives in my rig. One is for data files, mounted on
/data ironically, and has nothing to do with the OS. So, for that
reason I'm going to leave it out of this. So, I now have two drives in
my rig that are about to be OS related. sda is a 160Gb and sdb is a
250Gb. I'm going to leave the first one, sda, as is and will use sdb
for testing. Before I start, I want to sort of get my brain wrapped
back around this. It has been a LONG time since I dual booted Linux and
that was only for a month or so. I have grub installed on the MBR of
sda. My boot partition is on sda1 like most likely 99% of the rest of
you and it will stay there even after all this is done. I got /boot
from the old handbook days. When I put my new install on sdb, with this
new initramfs thingy and quite possibly LVM, do I leave grub on sda's
MBR and just point to sdb for the kernel and init thingy and all will be
well?

If this all works out, I will be moving everything from sdb to sda
anyway. My plan is to get them both bootable, then use one to copy to
another after I have learned a bit about this mess. I haven't got that
far yet but wanting to figure this init thingy out before I'm forced to
which will only make it taste even worse.

What I am reading so far:

http://en.gentoo-wiki.com/wiki/Initramfs

I did a google search and found some others boot this is more Gentoo
oriented. So, anything wrong with this as a guide? Pointers to others
if they are better would be great.

Here starts a learning process. It could get bumpy. lol

Dale

:-) :-)
Dale
2011-09-16 02:20:01 UTC
Permalink
<< SNIP >>
http://en.gentoo-wiki.com/wiki/Initramfs
I did a google search and found some others boot this is more Gentoo
oriented. So, anything wrong with this as a guide? Pointers to
others if they are better would be great.
Here starts a learning process. It could get bumpy. lol
Dale
:-) :-)
Oops, typo up there. Should read "I did a google search and found some
others *but* this is more Gentoo oriented." Hey, I got the first and
last right. lol

While I am at this. I just noticed something else. I think this
changed after the openrc upgrade.

***@fireball / # mount
rootfs on / type rootfs (rw)
/dev/root on / type reiserfs (rw,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
rc-svcdir on /lib64/rc/init.d type tmpfs
(rw,nosuid,nodev,noexec,relatime,size=1024k,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /boot type ext2 (rw)
/dev/sda8 on /var type ext3 (rw,commit=0)
/dev/sda6 on /usr/portage type ext3 (rw,commit=0)
/dev/sda7 on /home type reiserfs (rw)
/dev/sdc1 on /data type ext4 (rw,commit=0)
usbfs on /proc/bus/usb type usbfs (rw,noexec,nosuid,devmode=0664,devgid=85)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
(rw,noexec,nosuid,nodev)
tmpfs on /var/tmp/portage type tmpfs (rw,noatime)
***@fireball / #

Now usually when I boot into a dual OS, I go to a console and type in
mount and make certain of what drive / is mounted to. Example for
this, if mounted to sda* then it is my main OS and if mounted to sdb*
then it is my test install. How does one decipher that up there? Heck,
root could be mounted on anything right now. I'm not going to remove
any partitions right now. I could be running off sdb and not even know
it. o_O

Dale

:-) :-)
Pandu Poluan
2011-09-16 03:00:01 UTC
Permalink
Post by Dale
<< SNIP >>
http://en.gentoo-wiki.com/wiki/Initramfs
I did a google search and found some others boot this is more Gentoo
oriented. So, anything wrong with this as a guide? Pointers to others if
they are better would be great.
Post by Dale
Here starts a learning process. It could get bumpy. lol
Dale
:-) :-)
Oops, typo up there. Should read "I did a google search and found some
others *but* this is more Gentoo oriented." Hey, I got the first and last
right. lol
Post by Dale
While I am at this. I just noticed something else. I think this changed
after the openrc upgrade.
Post by Dale
rootfs on / type rootfs (rw)
/dev/root on / type reiserfs (rw,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
rc-svcdir on /lib64/rc/init.d type tmpfs
(rw,nosuid,nodev,noexec,relatime,size=1024k,mode=755)
Post by Dale
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs
(rw,nosuid,nodev,noexec,relatime)
Post by Dale
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /boot type ext2 (rw)
/dev/sda8 on /var type ext3 (rw,commit=0)
/dev/sda6 on /usr/portage type ext3 (rw,commit=0)
/dev/sda7 on /home type reiserfs (rw)
/dev/sdc1 on /data type ext4 (rw,commit=0)
usbfs on /proc/bus/usb type usbfs
(rw,noexec,nosuid,devmode=0664,devgid=85)
Post by Dale
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc
(rw,noexec,nosuid,nodev)
Post by Dale
tmpfs on /var/tmp/portage type tmpfs (rw,noatime)
Now usually when I boot into a dual OS, I go to a console and type in
mount and make certain of what drive / is mounted too. Example for this, if
mounted to sda* then it is my main OS and if mounted to sdb* then it is my
test install. How does one decipher that up there? Heck, root could be
mounted on anything right now. I'm not going to remove any partitions right
now. I could be running off sdb and not even know it. o_O
Try 'realname /dev/root'. Or realpath, I forgot which exactly.

Heck, just do 'ls -la /dev/root' :-)

It's a symlink to the actual dev

Rgds,
Dale
2011-09-16 03:20:01 UTC
Permalink
Post by Dale
Now usually when I boot into a dual OS, I go to a console and type
in mount and make certain of what drive / is mounted too. Example for
this, if mounted to sda* then it is my main OS and if mounted to sdb*
then it is my test install. How does one decipher that up there?
Heck, root could be mounted on anything right now. I'm not going to
remove any partitions right now. I could be running off sdb and not
even know it. o_O
Try 'realname /dev/root'. Or realpath, I forgot which exactly.
Heck, just do 'ls -la /dev/root' :-)
It's a symlink to the actual dev
Rgds,
Dead on the mark:

***@fireball / # realpath /dev/root
/dev/sda3
***@fireball / # ls -la /dev/root
lrwxrwxrwx 1 root root 4 Sep 2 09:11 /dev/root -> sda3
***@fireball / #

Thanks. It is sda. Whew !!

Dale

:-) :-)
Alan McKinnon
2011-09-16 08:20:02 UTC
Permalink
On Thu, 15 Sep 2011 20:49:02 -0500
Post by Dale
OK. The Chief Idiot is going to experiment some. You ALL know what
this means right? Yep, I'm about to really make a mess of things so
here comes some questions. This is a result of the /usr and udev
crap. So, go to -dev and blame them, not me. ;-)
Uh-oh! :-)
Post by Dale
OK. I have three drives in my rig. One is for data files, mounted
on /data ironically, and has nothing to do with the OS. So, for that
reason I'm going to leave it out of this. So, I now have two drives
in my rig that are about to be OS related. sda is a 160Gb and sdb is
a 250Gb. I'm going to leave the first one, sda, as is and will use
sdb for testing. Before I start, I want to sort of get my brain
wrapped back around this. It has been a LONG time since I dual
booted Linux and that was only for a month or so. I have grub
installed on the MBR of sda. My boot partition is on sda1 like most
likely 99% of the rest of you and it will stay there even after all
this is done. I got /boot from the old handbook days. When I put my
new install on sdb, with this new initramfs thingy and quite possibly
LVM, do I leave grub on sda's MBR and just point to sdb for the
kernel and init thingy and all will be well?
The basic idea is you set the boot drive in the bios, which runs grub
from that drive's mbr. When you installed that grub you hard-coded it
to know where to find its grub.conf.

You can use the existing grub and its config files just fine. Add a
new entry for your new stuff on sdb - grub will reference that drive as
(hd1) in grub.conf - and configure the root, kernel and initrd
settings appropriately.
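
For illustration, such an entry in the existing grub.conf might look
roughly like this (the kernel and initramfs file names, and the
assumption of a separate /boot on sdb1, are only examples):

# grub legacy counts drives from 0, so sdb is (hd1) and sdb1 is (hd1,0);
# the kernel/initrd paths below are relative to that /boot partition
title Gentoo test install (sdb)
root (hd1,0)
kernel /vmlinuz-test root=/dev/sdb3
initrd /initramfs-test.img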

If I were you I'd install grub to the mbr on sdb as well. If you happen
to switch sda and sdb around, you'll still have code to boot from on
the new first drive and not need to change the boot drive settings in
the bios. It's not a necessity, just a convenience.
Post by Dale
If this all works out, I will be moving everything from sdb to sda
anyway. My plan is to get them both bootable, then use one to copy
to another after I have learned a bit about this mess. I haven't got
that far yet but wanting to figure this init thingy out before I'm
forced to which will only make it taste even worse.
http://en.gentoo-wiki.com/wiki/Initramfs
I did a google search and found some others boot this is more Gentoo
oriented. So, anything wrong with this as a guide? Pointers to
others if they are better would be great.
Here starts a learning process. It could get bumpy. lol
Dale
:-) :-)
--
Alan McKinnnon
***@gmail.com
Dale
2011-09-16 10:40:02 UTC
Permalink
Post by Alan McKinnon
The basic idea is you set the boot drive in the bios and which runs grub
from that drive's mbr. When you installed that grub you hard-coded it
to know where to find it's grub.conf.
You can use the existing grub and it's config files just fine. Add a
new entry for your new stuff on sdb - grub will reference that drive as
(hd1) in grub.conf - and configure the root, kernel and initrd
settings appropriately.
If I were you I'd install grub to the mbr on sdb as well. If you happen
to switch sda and sdb around, you'll still have code to boot from on
the new first drive and not need to change the boot drive settings in
the bios. It's not a necessity, just a convenience.
That's what I was thinking. Now that I got that straight in my head.
Moooooving on.

I was wanting to play with reiserfs4. Where in the heck is the
command? I have this:

***@fireball / # mk << tab twice >>
mk_cmds mke2fs mkfs mkfs.ext3
mkfs.msdos mkhybrid mkmanifest mkswap
mkdir mkfifo mkfs.bfs mkfs.ext4
mkfs.reiserfs mk_isdnhwdb mknod mktap
mkdiskimage mkfontdir mkfs.cramfs mkfs.ext4dev
mkfs.vfat mkisofs mkpasswd mktap-2.7
mkdosfs mkfontscale mkfs.ext2 mkfs.minix
mkhomedir_helper mklost+found mkreiserfs mktemp
***@fireball / #

I have reiserfs3 but I can't find 4. I didn't see anything in the man
page either. I thought it may be an option like -j for ext2 or 3.

Thanks.

Dale

:-) :-)
Pandu Poluan
2011-09-16 11:00:01 UTC
Permalink
Post by Dale
Post by Alan McKinnon
The basic idea is you set the boot drive in the bios and which runs grub
from that drive's mbr. When you installed that grub you hard-coded it
to know where to find it's grub.conf.
You can use the existing grub and it's config files just fine. Add a
new entry for your new stuff on sdb - grub will reference that drive as
(hd1) in grub.conf - and configure the root, kernel and initrd
settings appropriately.
If I were you I'd install grub to the mbr on sdb as well. If you happen
to switch sda and sdb around, you'll still have code to boot from on
the new first drive and not need to change the boot drive settings in
the bios. It's not a necessity, just a convenience.
That's what I was thinking. Now that I got that straight in my head.
Moooooving on.
Post by Dale
I was wanting to play with reiserfs4. Where in the heck is the command?
mk_cmds mke2fs mkfs mkfs.ext3
mkfs.msdos mkhybrid mkmanifest mkswap
Post by Dale
mkdir mkfifo mkfs.bfs mkfs.ext4
mkfs.reiserfs mk_isdnhwdb mknod mktap
Post by Dale
mkdiskimage mkfontdir mkfs.cramfs mkfs.ext4dev
mkfs.vfat mkisofs mkpasswd mktap-2.7
Post by Dale
mkdosfs mkfontscale mkfs.ext2 mkfs.minix
mkhomedir_helper mklost+found mkreiserfs mktemp
Post by Dale
I have reiserfs3 but I can't find 4. I didn't see anything in the man
page either. I thought it may be a option like -j for ext2 or 3.
IIRC, you must emerge reiser4.

Try eix reiser

Rgds,
Mick
2011-09-16 15:30:02 UTC
Permalink
Post by Dale
Post by Dale
Post by Alan McKinnon
The basic idea is you set the boot drive in the bios and which runs grub
from that drive's mbr. When you installed that grub you hard-coded it
to know where to find it's grub.conf.
You can use the existing grub and it's config files just fine. Add a
new entry for your new stuff on sdb - grub will reference that drive as
(hd1) in grub.conf - and configure the root, kernel and initrd
settings appropriately.
If I were you I'd install grub to the mbr on sdb as well. If you happen
to switch sda and sdb around, you'll still have code to boot from on
the new first drive and not need to change the boot drive settings in
the bios. It's not a necessity, just a convenience.
That's what I was thinking. Now that I got that straight in my head.
Moooooving on.
Post by Dale
I was wanting to play with reiserfs4. Where in the heck is the command?
mk_cmds mke2fs mkfs mkfs.ext3
mkfs.msdos mkhybrid mkmanifest mkswap
Post by Dale
mkdir mkfifo mkfs.bfs mkfs.ext4
mkfs.reiserfs mk_isdnhwdb mknod mktap
Post by Dale
mkdiskimage mkfontdir mkfs.cramfs mkfs.ext4dev
mkfs.vfat mkisofs mkpasswd mktap-2.7
Post by Dale
mkdosfs mkfontscale mkfs.ext2 mkfs.minix
mkhomedir_helper mklost+found mkreiserfs mktemp
Post by Dale
I have reiserfs3 but I can't find 4. I didn't see anything in the man
page either. I thought it may be a option like -j for ext2 or 3.
IIRC, you must emerge reiser4.
Try eix reiser
You will need to patch your kernel (in your sdb test OS) and then you will
also need to make a reiser4 fs on your sdb partition(s) (for that you'll need
to emerge sys-fs/reiser4progs).

If you want to be able to mount reiser4 from within your sda OS, you will of
course need to patch your current kernel as well; alternatively, use a
LiveCD like sysrescue which comes already patched. For patches look in here:

http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
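
Roughly, that workflow might look like this (the patch file name and the
partition are placeholders for whatever matches your kernel and layout):

cd /usr/src/linux
patch -p1 < /path/to/reiser4-for-2.6.XX.patch   # apply the patch matching your kernel version
make menuconfig                                 # enable Reiser4 under "File systems", then rebuild
emerge sys-fs/reiser4progs                      # userspace tools
mkfs.reiser4 /dev/sdbN                          # create the filesystem on a test partition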


The way I do what you are trying to do is start with the existing OS on sda,
partition sdb, tar contents of sda partitions into corresponding sdb
partitions and then modify fstab.
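
As a rough sketch of that copy step (the mount point and device names are
only examples):

mount /dev/sdb3 /mnt/gentoo
tar -cpf - --one-file-system -C / . | tar -xpf - -C /mnt/gentoo
# repeat per partition, then edit /mnt/gentoo/etc/fstab to point at the sdb devices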

Depending on what you want to test you may not need grub installed into sdb's
MBR and you may not need a /boot in sdb. As long as you are not going to
remove sda from the machine you should be able to add a couple of lines in the
original grub.conf to select to boot /dev/sdb, while using sda's MBR and /boot
partition.

HTH.
--
Regards,
Mick
Dale
2011-09-16 15:50:01 UTC
Permalink
Post by Mick
You will need to patch your kernel (in your sdb test OS) and then you
will also need to make a reiser4 fs on your sdb partition(s) (for that
you'll need to emerge sys-fs/reiser4progs). If you want to be able to
mount reiser4 from within your sda OS, you will need of course to
patch your current kernel to start with, alternatively use a LiveCD
http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
The way I do what you are trying to do is start with the existing OS
on sda, partition sdb, tar contents of sda partitions into
corresponding sdb partitions and then modify fstab. Depending on what
you want to test you may not need grub installed into sdb's MBR and
you may not need a /boot in sdb. As long as you are not going to
remove sda from the machine you should be able to add a couple of
lines in the original grub.conf to select to boot /dev/sdb, while
using sda's MBR and /boot partition. HTH.
I could have swore reiserfs4 was in the kernel. Sure enough, it ain't.
I'll wait then. I don't want to take the chance that something goes
belly up then not have a bootable way to fix things.

Now to go unmerge the other since I can't use it anyway. lol

Thanks.

Dale

:-) :-)

P. S. I got new glasses today. They have bifocals too. I can actually
read what I type. Here comes that turbo charger again. lol
Alan McKinnon
2011-09-16 16:20:02 UTC
Permalink
On Fri, 16 Sep 2011 10:47:01 -0500
Post by Dale
Post by Mick
You will need to patch your kernel (in your sdb test OS) and then
you will also need to make a reiser4 fs on your sdb partition(s)
(for that you'll need to emerge sys-fs/reiser4progs). If you want
to be able to mount reiser4 from within your sda OS, you will need
of course to patch your current kernel to start with, alternatively
use a LiveCD like sysrescue which comes already patched. For
http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
The way I do what you are trying to do is start with the existing
OS on sda, partition sdb, tar contents of sda partitions into
corresponding sdb partitions and then modify fstab. Depending on
what you want to test you may not need grub installed into sdb's
MBR and you may not need a /boot in sdb. As long as you are not
going to remove sda from the machine you should be able to add a
couple of lines in the original grub.conf to select to
boot /dev/sdb, while using sda's MBR and /boot partition. HTH.
I could have swore reiserfs4 was in the kernel. Sure enough, it
ain't. I'll wait then. I don't want to take the chance that
something goes belly up then not have a bootable way to fix things.
reiser4 was never in the kernel and the odds of it ever making it there
were about zero (coding style issues and many other things that pissed
Linus off). And that was in the days when Hans was physically located
in a place where he was allowed to code.

For all practical purposes Reiser4 is dead. I haven't heard a peep out
of anyone claiming to maintain it for a few years now.
--
Alan McKinnnon
***@gmail.com
Dale
2011-09-16 17:10:02 UTC
Permalink
Post by Alan McKinnon
On Fri, 16 Sep 2011 10:47:01 -0500
Post by Dale
Post by Mick
You will need to patch your kernel (in your sdb test OS) and then
you will also need to make a reiser4 fs on your sdb partition(s)
(for that you'll need to emerge sys-fs/reiser4progs). If you want
to be able to mount reiser4 from within your sda OS, you will need
of course to patch your current kernel to start with, alternatively
use a LiveCD like sysrescue which comes already patched. For
http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
The way I do what you are trying to do is start with the existing
OS on sda, partition sdb, tar contents of sda partitions into
corresponding sdb partitions and then modify fstab. Depending on
what you want to test you may not need grub installed into sdb's
MBR and you may not need a /boot in sdb. As long as you are not
going to remove sda from the machine you should be able to add a
couple of lines in the original grub.conf to select to
boot /dev/sdb, while using sda's MBR and /boot partition. HTH.
I could have swore reiserfs4 was in the kernel. Sure enough, it
ain't. I'll wait then. I don't want to take the chance that
something goes belly up then not have a bootable way to fix things.
reiser4 was never in the kernel and the odds of it ever making it there
were about zero (coding style issues and many other things that pissed
Linux off). And that was in the days when Hans was physically located
in a place where he was allowed to code.
For all practical purposes Reiser4 is dead. I haven't heard a peep out
of anyone claiming to maintain it for a few years now.
New question. I'm playing with LVM. What is the best file system to
use that with? I know LVM can shrink and grow so a file system should
be able to do the same, online would be great but not required. That
would be good for a / partition but not needed for the rest. I can
always go to single user and resize things.

I don't want XFS tho. I used it before and it was a total disaster. I
have a UPS but I also recall having to pull the plug when hal showed up
too. No need for a repeat.

Hmm, maybe I am thinking of ext4? Life's confusing. :/

Dale

:-) :-)
Joost Roeleveld
2011-09-16 18:10:02 UTC
Permalink
Post by Dale
Post by Alan McKinnon
On Fri, 16 Sep 2011 10:47:01 -0500
Post by Dale
Post by Mick
You will need to patch your kernel (in your sdb test OS) and then
you will also need to make a reiser4 fs on your sdb partition(s)
(for that you'll need to emerge sys-fs/reiser4progs). If you want
to be able to mount reiser4 from within your sda OS, you will need
of course to patch your current kernel to start with, alternatively
use a LiveCD like sysrescue which comes already patched. For
http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
The way I do what you are trying to do is start with the
existing OS on sda, partition sdb, tar contents of sda partitions
into
corresponding sdb partitions and then modify fstab. Depending on
what you want to test you may not need grub installed into sdb's
MBR and you may not need a /boot in sdb. As long as you are not
going to remove sda from the machine you should be able to add a
couple of lines in the original grub.conf to select to
boot /dev/sdb, while using sda's MBR and /boot partition. HTH.
I could have swore reiserfs4 was in the kernel. Sure enough, it
ain't. I'll wait then. I don't want to take the chance that
something goes belly up then not have a bootable way to fix things.
reiser4 was never in the kernel and the odds of it ever making it there
were about zero (coding style issues and many other things that pissed
Linux off). And that was in the days when Hans was physically located
in a place where he was allowed to code.
For all practical purposes Reiser4 is dead. I haven't heard a peep out
of anyone claiming to maintain it for a few years now.
New question. I'm playing with LVM. What is the best file system to
use that with? I know LVM can shrink and grow so a file system should
be able to do the same, online would be great but not required. That
would be good for a / partition but not needed for the rest. I can
always go to single user and resize things.
LVM is great: when installing a large package, downloading large files or
finding out I need a lot more diskspace for the VMs I am running to do some
testing, I can simply increase the LV (LVM-partition) and then increase the
filesystem to match. All this while the filesystem is being written to.
Post by Dale
I don't want XFS tho. I used it before and it was a total disaster. I
have a UPS but I also recall having to pull the plug when hal showed up
too. No need for a repeat.
I know from personal experience that the following support online resizing:
ext2/3, reiserfs (v3), XFS, JFS.
I would expect ext4 to also support that.

One thing to remember, the online resizing only allows growing of the
filesystem. For shrinking, you still need to umount it first.
Also, XFS and JFS don't support shrinking at all.

For testing, I would suggest starting with ext3 and/or reiserfs. Both work.
I haven't tried ext4 yet, maybe someone who runs that on top of LVM can
comment?
Post by Dale
Hmm, maybe I am thinking of ext4? Life's confusing. :/
I think you might be thinking of ext4.

Btw, a brief description on how resizing would work.
When growing the filesystem:
1) lvextend .....
2) "resizefs" (different filesystems, different commands)
This will work for all filesystems supporting online resizing. (I know of one
that actually only allows growing when it IS mounted)

When shrinking a filesystem:
1) umount
2) "resizefs" to less then what you want to shrink it to
3) lvreduce ....
4) "resizefs"

The "resizefs" will default to growing to the full extend of the partition/LV
it resides on.
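
As a concrete sketch of both sequences, assuming an ext3/ext4 LV in a VG
named "test" (the names, sizes and the resize2fs command are examples;
other filesystems use their own resize tools):

# growing, while mounted:
lvextend -L +5G /dev/test/usr
resize2fs /dev/test/usr              # grows to fill the LV by default

# shrinking, unmounted:
umount /usr
resize2fs /dev/test/usr 8G           # shrink the fs below the target size
lvreduce -L 10G /dev/test/usr        # shrink the LV to the target
resize2fs /dev/test/usr              # grow the fs back to fill the LV exactly
mount /usr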

--
Joost

PS. With LVM, I find it easier to make the partitions smaller to start with
and leave un-assigned space in the VG for the LVs to grow.
Alan McKinnon
2011-09-16 18:50:02 UTC
Permalink
On Fri, 16 Sep 2011 11:58:11 -0500
Post by Dale
Post by Alan McKinnon
On Fri, 16 Sep 2011 10:47:01 -0500
Post by Dale
Post by Mick
You will need to patch your kernel (in your sdb test OS) and then
you will also need to make a reiser4 fs on your sdb partition(s)
(for that you'll need to emerge sys-fs/reiser4progs). If you want
to be able to mount reiser4 from within your sda OS, you will need
of course to patch your current kernel to start with,
alternatively use a LiveCD like sysrescue which comes already
http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/
The way I do what you are trying to do is start with the existing
OS on sda, partition sdb, tar contents of sda partitions into
corresponding sdb partitions and then modify fstab. Depending on
what you want to test you may not need grub installed into sdb's
MBR and you may not need a /boot in sdb. As long as you are not
going to remove sda from the machine you should be able to add a
couple of lines in the original grub.conf to select to
boot /dev/sdb, while using sda's MBR and /boot partition. HTH.
I could have swore reiserfs4 was in the kernel. Sure enough, it
ain't. I'll wait then. I don't want to take the chance that
something goes belly up then not have a bootable way to fix things.
reiser4 was never in the kernel and the odds of it ever making it
there were about zero (coding style issues and many other things
that pissed Linux off). And that was in the days when Hans was
physically located in a place where he was allowed to code.
For all practical purposes Reiser4 is dead. I haven't heard a peep
out of anyone claiming to maintain it for a few years now.
New question. I'm playing with LVM. What is the best file system to
use that with?
The best one to use is the one you want to use.

LVM has nothing to do with the type of filesystem; there is no such thing
as this one works and that one doesn't. So pick the one that suits your
needs.
Post by Dale
I know LVM can shrink and grow so a file system
should be able to do the same, online would be great but not
required. That would be good for a / partition but not needed for
the rest. I can always go to single user and resize things.
Wrong question. See above.
Post by Dale
I don't want XFS tho. I used it before and it was a total disaster.
I have a UPS but I also recall having to pull the plug when hal
showed up too. No need for a repeat.
Hmm, maybe I am thinking of ext4? Life's confusing. :/
ext4 is fine for your needs. I will be mighty surprised if your usage
ever hits ext4's limits.
--
Alan McKinnnon
***@gmail.com
Peter Humphrey
2011-09-16 21:10:01 UTC
Permalink
Post by Dale
Hmm, maybe I am thinking of ext4? Life's confusing. :/
In case it helps, here's the relevant part of my fstab:

/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime 1 2
/dev/vg1/common /home/prh/common ext4 noatime 1 3
/dev/vg1/boinc /home/prh/boinc ext4 noatime 1 3
/dev/vg1/virt /home/prh/.VirtualBox ext4 noatime 1 3
/dev/vg1/portage /usr/portage ext4 noatime 1 2
/dev/vg1/packages /usr/portage/packages ext4 noatime 1 3
/dev/vg1/distfiles /usr/portage/distfiles ext4 noatime 1 3
/dev/vg1/local /usr/local ext4 noatime 1 2
/dev/vg1/opt /opt ext4 noatime 1 2
/dev/vg1/srv /srv ext4 noatime 1 2
/dev/vg1/chroot /mnt/atom ext4 noatime 1 2

The common partition is where I keep my user stuff that is common among
distros. Boinc is where boinc runs, and virt is where VirtualBox runs. I
don't know why I still have a srv there, as Gentoo doesn't use it (maybe I
should reallocate it to /var or /var/tmp). Chroot is where I mount my Atom
box's portage directory so that I can use the workstation to build packages
for binary installation on the Atom box - saves oodles of time and heat.

I have a /dev/vg2 as well, for experimental installation of other distros;
those that can be installed into virtual partitions, that is.

The following commands would re-create those partitions and file systems,
having created the physical volume and the volume group vg1:

lvcreate -L 10G -n opt vg1
lvcreate -L 12G -n distfiles vg1
lvcreate -L 12G -n srv vg1
lvcreate -L 15G -n home vg1
lvcreate -L 15G -n virt vg1
lvcreate -L 20G -n boinc vg1
lvcreate -L 20G -n chroot vg1
lvcreate -L 20G -n packages vg1
lvcreate -L 2G -n local vg1
lvcreate -L 50G -n common vg1
lvcreate -L 8G -n portage vg1
mkfs.ext4 -j -O dir_index /dev/vg1/boinc
mkfs.ext4 -j -O dir_index /dev/vg1/chroot
mkfs.ext4 -j -O dir_index /dev/vg1/common
mkfs.ext4 -j -O dir_index /dev/vg1/distfiles
mkfs.ext4 -j -O dir_index /dev/vg1/home
mkfs.ext4 -j -O dir_index /dev/vg1/local
mkfs.ext4 -j -O dir_index /dev/vg1/opt
mkfs.ext4 -j -O dir_index /dev/vg1/packages
mkfs.ext4 -j -O dir_index /dev/vg1/portage
mkfs.ext4 -j -O dir_index /dev/vg1/srv
mkfs.ext4 -j -O dir_index /dev/vg1/virt

That list was created by David Noon's zsh script, which he posted here
recently. In fact I have file-system labels written by mkfs.ext4 as well, but
David's script doesn't notice those.

Sda and sdb are 1TB SATA Samsung devices.

HTH.
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Neil Bothwick
2011-09-16 22:20:01 UTC
Permalink
Post by Peter Humphrey
/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime
1 2
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).

I prefer to give my VGs names related to the hostname, so it's perfectly
clear where they came from and no risk of name collisions if I have to
attach the drive to another computer.
--
Neil Bothwick

Happiness is merely the remission of pain.
Peter Humphrey
2011-09-17 00:00:02 UTC
Permalink
Post by Neil Bothwick
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).
That would be nasty, yes, but here at home I don't expect to be switching
disks around between machines.

I did give a bit of thought to a VG naming scheme, but although several
ideas came up, nothing was clearly the best so I left them at vg1 and vg2.
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Alan McKinnon
2011-09-17 12:50:02 UTC
Permalink
On Sat, 17 Sep 2011 00:49:21 +0100
Post by Peter Humphrey
Post by Neil Bothwick
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs
called vg01. It ain't nice (only one is seen, usually not the one
you want).
That would be nasty, yes, but here at home I don't expect to be
switching disks around between machines.
I did give a bit of thought to a VG naming scheme, but although
several ideas came up, nothing was clearly the best so I left them at
vg1 and vg2.
This is what GUIDs are for.

They are not the best thing to work with admittedly, but they are
guaranteed to be unique for all reasonable human needs. In a world
where we unplug things from anything and plug them back into anything,
a guaranteed unique ID is a necessity.
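
For example (the UUID below is a placeholder), blkid shows the filesystem
UUID and fstab can reference it instead of a device name:

blkid /dev/sda3
# /dev/sda3: UUID="<filesystem-uuid>" TYPE="reiserfs"   (example output shape)
# then in /etc/fstab, instead of /dev/sda3:
UUID=<filesystem-uuid>   /   reiserfs   noatime   0 1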
--
Alan McKinnnon
***@gmail.com
Peter Humphrey
2011-09-18 22:10:03 UTC
Permalink
[GUIDs] are not the best thing to work with admittedly, but they are
guaranteed to be unique for all reasonable human needs. In a world
when we plug things out of anything and plug them back into anything,
a guaranteed unique ID is a necessaity.
As I said, I do not expect to move hard disks around willy-nilly in my
boxes, so it certainly isn't a necessity - I don't have an armada of
hundreds of boxes here. And I still haven't seen a compelling reason not to
quote, e.g., /dev/sda3 in fstab. I know where my partitions are and I want
to continue to know that. Call it control-freakery if you like, but it's at
the core of sys-admin (if I may say that to you, Alan).
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Alan McKinnon
2011-09-18 23:50:01 UTC
Permalink
On Sun, 18 Sep 2011 23:02:45 +0100
Post by Peter Humphrey
[GUIDs] are not the best thing to work with admittedly, but they are
guaranteed to be unique for all reasonable human needs. In a world
when we plug things out of anything and plug them back into
anything, a guaranteed unique ID is a necessaity.
As I said, I do not expect to move hard disks around willy-nilly in
my boxes, so it certainly isn't a necessity - I don't have an armada
of hundreds of boxes here. And I still haven't seen a compelling
reason not to quote, e.g., /dev/sda3 in fstab. I know where my
partitions are and I want to continue to know that. Call it
control-freakery if you like, but it's at the core of sys-admin (if I
may say that to you, Alan).
Well, if you are completely confident you can deal with anything that
comes up, you should just continue doing what you've always done.
That's part of good sysadmining.

I express my own paranoia in a different way :-)
--
Alan McKinnnon
***@gmail.com
Neil Bothwick
2011-09-18 20:00:03 UTC
Permalink
Post by Peter Humphrey
Post by Neil Bothwick
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).
That would be nasty, yes, but here at home I don't expect to be
switching disks around between machines.
A motherboard fails so you connect the disk to your new computer to
retrieve the data and it disappears. It happens too often :(
--
Neil Bothwick

"Daddy, what does formatting drive 'C' mean?
Dale
2011-09-17 00:10:01 UTC
Permalink
Post by Neil Bothwick
Post by Peter Humphrey
/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime
1 2
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).
I prefer to give my VGs names related to the hostname, so it's perfectly
clear where they came from and no risk of name collisions if I have to
attach the drive to another computer.
I did name it pretty well. It is called "test" right now. lol Right
now, I'm just having fun. The biggest difference so far is that I can
see with my new glasses. I just wish I didn't have arthritis in my neck
and could move my head better. It's hard to switch between normal and
the bifocal thingys.

I'm getting this LVM thing down pat tho.

cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.

I still get them confused as to what comes first but I got some pictures
to look at now. That helps to picture what I am doing, sort of.
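
Spelled out as commands, the sequence looks roughly like this (the device,
VG and LV names are just examples):

pvcreate /dev/sdb2                    # mark the partition as an LVM physical volume
vgcreate test /dev/sdb2               # build a volume group from one or more PVs
lvcreate -L 20G -n usr test           # carve a logical volume out of the VG
mkfs.ext4 /dev/test/usr               # put a filesystem on the LV
mount /dev/test/usr /mnt/gentoo/usr   # and mount it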

Thanks to all for the advice tho. It's helping. Still nervous about /
on LVM tho. :/

Dale

:-) :-)
Peter Humphrey
2011-09-17 01:00:02 UTC
Permalink
Still nervous about / on LVM tho. :/
Me too. That's why my / is on /dev/md3, which combines /dev/sd[ab]3 in
RAID-1.
--
Rgds
Peter Linux Counter 5290, 1994-04-23
William Kenworthy
2011-09-17 02:10:02 UTC
Permalink
Post by Dale
Post by Neil Bothwick
Post by Peter Humphrey
/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime
1 2
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).
I prefer to give my VGs names related to the hostname, so it's perfectly
clear where they came from and no risk of name collisions if I have to
attach the drive to another computer.
I did name it pretty well. It is called "test" right now. lol Right
now, I'm just having fun. The biggest difference so far is that I can
see with my new glasses. I just wish I didn't have arthritis in my neck
and could move my head better. It's hard to switch between normal and
the bifocal thingys.
try multifocal - makes stairs ... fun.
Post by Dale
I'm getting this LVM thing down pat tho.
cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.
I still get them confused as to what comes first but I got some pictures
to look at now. That helps to picture what I am doing, sort of.
Thanks to all for the advice tho. It's helping. Still nervous about /
on LVM tho. :/
Dale
:-) :-)
I'll second the recommendation about naming ... I use separate
partitions and lvm on everything except root (for recovery reasons) and
small single drive systems ... been a real saver when partitions fill up
or when a system is re-purposed and you have to change the storage
profile. Make sure every lvm you use is uniquely named ... I am just
going through retrieving drives from two older (both lvm) systems and
pushing them into a single storage (and everything else) server, also
lvm.

Make sure if you remove a drive and don't intend using it immediately,
delete the lvm data to prevent future grief ... it can happen at home as
well as in data centres :)
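
One sketch of that cleanup, assuming a VG named "oldvg" on /dev/sdb2 (this
destroys the LVM metadata, so only on a drive that really is being retired):

vgchange -a n oldvg      # deactivate the volume group
lvremove oldvg           # remove its logical volumes
vgremove oldvg           # remove the volume group itself
pvremove /dev/sdb2       # wipe the PV label from the partition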

BillK
Dale
2011-09-17 11:50:01 UTC
Permalink
Post by William Kenworthy
I did name it pretty well. It is called "test" right now. lol Right
now, I'm just having fun. The biggest difference so far is that I can
see with my new glasses. I just wish I didn't have arthritis in my
neck and could move my head better. It's hard to switch between
normal and the bifocal thingys.
try multifocal - makes stairs ... fun.
It was night time and I was walking. I decided to walk through the
garden which means crossing the ditch. Let's just say that first step
was fun. When I looked at the ground, I was looking through the reading
part. I found out it was deeper than my eyes said.
Post by William Kenworthy
I'm getting this LVM thing down pat tho.
cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.
I still get them confused as to what comes first but I got some pictures
to look at now. That helps to picture what I am doing, sort of.
Thanks to all for the advice tho. It's helping. Still nervous about /
on LVM tho. :/
Dale
:-) :-)
I'll second the recommendation about naming ... I use separate
partitions and lvm on everything except root (for recovery reasons) and
small single drive systems ... been a real saver when partitions fill up
or when a system is re-purposed and you have to change the storage
profile . Make sure every lvm you use is uniquely named ... I am just
going through retrieving drives from two older (both lvm) systems and
pushing them into a single storage (and everything else) sever, also
lvm.
Make sure if you remove a drive and dont intend using it immediately,
delete the lvm data to prevent future grief ... it can happen at home as
well as in data centres :)
BillK
Should I include the drive itself? Like sda, sdb etc. I could use my
system name too. I'm on fireball and my older rig is named smoker. See
a trend here? lol Anyway, this could work:

fireball-sda
fireball-sdb
fireball-sdc

That would keep things straight I would think. Thoughts?

If I do move one tho, I would likely erase the LVM stuff anyway. This
rig is amd64 and my old rig is x86. It's not like I can move one to the
other. Plus, smoker is IDE and fireball is SATA. I do have one IDE
connector in fireball and a SATA card in smoker tho. It could happen.

Jeepers, this could turn into a full blown meeting of the minds. O_O

Dale

:-) :-)
Dale
2011-09-17 12:00:02 UTC
Permalink
Post by Dale
Post by William Kenworthy
I did name it pretty well. It is called "test" right now. lol Right
now, I'm just having fun. The biggest difference so far is that I
can see with my new glasses. I just wish I didn't have arthritis in
my neck and could move my head better. It's hard to switch between
normal and the bifocal thingys.
try multifocal - makes stairs ... fun.
It was night time and I was walking. I decided to walk through the
garden which means crossing the ditch. Let's just say that first step
was fun. When I looked at the ground, I was looking through the
reading part. I found out it was deeper than my eyes said.
Post by William Kenworthy
I'm getting this LVM thing down pat tho.
cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.
I still get them confused as to what comes first but I got some pictures
to look at now. That helps to picture what I am doing, sort of.
Thanks to all for the advice tho. It's helping. Still nervous about /
on LVM tho. :/
Dale
:-) :-)
I'll second the recommendation about naming ... I use separate
partitions and lvm on everything except root (for recovery reasons) and
small single drive systems ... been a real saver when partitions fill up
or when a system is re-purposed and you have to change the storage
profile . Make sure every lvm you use is uniquely named ... I am just
going through retrieving drives from two older (both lvm) systems and
pushing them into a single storage (and everything else) sever, also
lvm.
Make sure if you remove a drive and dont intend using it immediately,
delete the lvm data to prevent future grief ... it can happen at home as
well as in data centres :)
BillK
Should I include the drive itself? Like sda, sdb etc. I could use my
system name too. I'm on fireball and my older rig is named smoker.
fireball-sda
fireball-sdb
fireball-sdc
That would keep things straight I would think. Thoughts?
If I do move one tho, I would likely erase the LVM stuff anyway. This
rig is amd64 and my old rig is x86. It's not like I can move one to
the other. Plus, smoker is IDE and fireball is SATA. I do have one
IDE connector in fireball and a SATA card in smoker tho. It could
happen.
Jeepers, this could turn into a full blown meeting of the minds. O_O
Dale
:-) :-)
Well, actually the drive name won't work because the VG could end up on
any drive. Looks like hostname/system name is the best way. Thoughts?

Dale

:-) :-)
Alan McKinnon
2011-09-17 13:10:02 UTC
Permalink
On Sat, 17 Sep 2011 06:44:47 -0500
Post by Dale
Should I include the drive itself? Like sda, sdb etc. I could use
my system name too. I'm on fireball and my older rig is named
fireball-sda
fireball-sdb
fireball-sdc
That's not a good naming scheme; the names depend on what a specific
machine will do with them. Rather, come up with a naming scheme that
depends on what the drive IS:

dale-seagate-old-320G
dale-seagate-new-660G
dale-quantum-73G

or what you intend to DO with it

dale-root-oldrig
dale-photos-small
dale-photos-big
dale-music
--
Alan McKinnnon
***@gmail.com
Alex Schuster
2011-09-17 10:50:01 UTC
Permalink
Post by Dale
Post by Neil Bothwick
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs called
vg01. It ain't nice (only one is seen, usually not the one you want).
I prefer to give my VGs names related to the hostname, so it's
perfectly clear where they came from and no risk of name collisions
if I have to attach the drive to another computer.
I had such a collision once, but I was able to change the name of one
volume group by using vgrename with the UUID that vgdisplay shows. Dale,
unless you know about them already, check out {pv,vg,lv}{scan,display} for
information about your LVM.
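
For the record, that rename looks roughly like this (names are examples;
the UUID form is what helps when two VGs share a name):

vgdisplay                         # note the "VG UUID" of the group to rename
vgrename <VG-UUID> fireball_vg    # rename by UUID when the name is ambiguous
vgrename test fireball_vg         # or by name when it isn't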
Post by Dale
I did name it pretty well. It is called "test" right now. lol Right
now, I'm just having fun. The biggest difference so far is that I can
see with my new glasses. I just wish I didn't have arthritis in my
neck and could move my head better. It's hard to switch between normal
and the bifocal thingys.
I'm getting this LVM thing down pat tho.
Finally!
Post by Dale
cfdisk to create partitions, if not using the whole drive.
I like to create several partitions, not a single one. And use pvcreate
for every partition then. This way, I can free space later just in case I
somehow need to have a conventional partition for installing Windows or
something. pvmove can transfer stuff from one physical volume to another.

I also used to have two volume groups. One small one for the system at
the start of the drive, where it is supposed to be a little faster, and
another big one for the rest. But I stopped doing so, as I lose a little
flexibility, and the effect is probably negligible, as all stuff is
encrypted here.

Wonko
Alan McKinnon
2011-09-17 13:00:01 UTC
Permalink
On Fri, 16 Sep 2011 19:06:40 -0500
Post by Dale
Post by Neil Bothwick
Post by Peter Humphrey
/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime
1 2
A word of advice when starting from scratch, give your VG(s) unique
names. I've seen what happens when someone takes a drive from
one Fedora system and puts it in another, so there are two VGs
called vg01. It ain't nice (only one is seen, usually not the one
you want).
I prefer to give my VGs names related to the hostname, so it's
perfectly clear where they came from and no risk of name collisions
if I have to attach the drive to another computer.
I did name it pretty well. It is called "test" right now. lol
Right now, I'm just having fun. The biggest difference so far is
that I can see with my new glasses. I just wish I didn't have
arthritis in my neck and could move my head better. It's hard to
switch between normal and the bifocal thingys.
I'm getting this LVM thing down pat tho.
cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.
I still get them confused as to what comes first but I got some
pictures to look at now. That helps to picture what I am doing, sort
of.
Thanks to all for the advice tho. It's helping. Still nervous
about / on LVM tho. :/
Your list is upside down :-) Turn it the other way in your head - fs at
the top and pv at the bottom - and the order makes sense. A useful mental
trick is to remember that each thing in the list
can't be bigger than the one below it (you can't put a 200G fs on a
100G block device for example). Take the list:

fs
lv
vg
pv
disk partition
raw disk

To make a bigger fs, you need a bigger lv first. You have free space
in the vg, so you can just extend the lv into it, then grow the fs
(this is conceptually identical to making a disk partition bigger then
growing the fs if you don't use LVM).

To make a smaller fs, reduce the fs first then reduce the lv to match.

The one slight oddity is making a vg bigger and smaller - a vg isn't
like a volume that you can make bigger, it's a *group* of things,
specifically pvs. To make a vg bigger, you add pvs to it. To make a vg
smaller, you take pvs away (much like enlarging and reducing RAID
arrays - you add and remove disks).
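
In command form, that might look like this (the devices and the VG name are
examples):

# grow the VG: add another PV
pvcreate /dev/sdc1
vgextend test /dev/sdc1

# shrink the VG: empty a PV, then take it out
pvmove /dev/sdb2          # migrate its extents onto the remaining PVs
vgreduce test /dev/sdb2
pvremove /dev/sdb2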

Once you've worked through it in your head once or twice it all
makes sense. Users just gotta spend the 30 minutes doing that first.
--
Alan McKinnnon
***@gmail.com
Dale
2011-09-17 11:40:02 UTC
Permalink
Hmm, maybe I am thinking of ext4? Life's confusing. :/
/dev/sda1 /boot ext2 noatime,noauto 1 2
/dev/md3 / ext4 noatime 1 1
/dev/vg1/home /home ext4 noatime 1 2
/dev/vg1/common /home/prh/common ext4 noatime 1 3
/dev/vg1/boinc /home/prh/boinc ext4 noatime 1 3
/dev/vg1/virt /home/prh/.VirtualBox ext4 noatime 1 3
/dev/vg1/portage /usr/portage ext4 noatime 1 2
/dev/vg1/packages /usr/portage/packages ext4 noatime 1 3
/dev/vg1/distfiles /usr/portage/distfiles ext4 noatime 1 3
/dev/vg1/local /usr/local ext4 noatime 1 2
/dev/vg1/opt /opt ext4 noatime 1 2
/dev/vg1/srv /srv ext4 noatime 1 2
/dev/vg1/chroot /mnt/atom ext4 noatime 1 2
The common partition is where I keep my user stuff that is common
among distros. Boinc is where boinc runs, and virt is where VirtualBox
runs. I don't know why I still have a srv there, as Gentoo doesn't use
it (maybe I should reallocate it to /var or /var/tmp). Chroot is where
I mount my Atom box's portage directory so that I can use the
workstation to build packages for binary installation on the Atom box
- saves oodles of time and heat.
I have a /dev/vg2 as well, for experimental installation of other
distros; those that can be installed into virtual partitions, that is.
The following commands would re-create those partitions and file
lvcreate -L 10G -n opt vg1
lvcreate -L 12G -n distfiles vg1
lvcreate -L 12G -n srv vg1
lvcreate -L 15G -n home vg1
lvcreate -L 15G -n virt vg1
lvcreate -L 20G -n boinc vg1
lvcreate -L 20G -n chroot vg1
lvcreate -L 20G -n packages vg1
lvcreate -L 2G -n local vg1
lvcreate -L 50G -n common vg1
lvcreate -L 8G -n portage vg1
mkfs.ext4 -j -O dir_index /dev/vg1/boinc
mkfs.ext4 -j -O dir_index /dev/vg1/chroot
mkfs.ext4 -j -O dir_index /dev/vg1/common
mkfs.ext4 -j -O dir_index /dev/vg1/distfiles
mkfs.ext4 -j -O dir_index /dev/vg1/home
mkfs.ext4 -j -O dir_index /dev/vg1/local
mkfs.ext4 -j -O dir_index /dev/vg1/opt
mkfs.ext4 -j -O dir_index /dev/vg1/packages
mkfs.ext4 -j -O dir_index /dev/vg1/portage
mkfs.ext4 -j -O dir_index /dev/vg1/srv
mkfs.ext4 -j -O dir_index /dev/vg1/virt
That list was created by David Noon's zsh script, which he posted here
recently. In fact I have file-systm labels written by mkfs.ext4 as
well, but David's script doesn't notice those.
Sda and sdb are 1TB SATA Samsung devices.
HTH.
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Interesting read and it helps. I sort of understand LVM and realize its
benefits but am just concerned about something breaking and me sitting here
with no clue how to fix it. Of course, Knoppix and gmail's web mail may
help tho. ;-) Thanks for posting fstab too. That gives me some clues
on how to do mine. Clues are good.

I have copied over with the following:

/boot on its partition
/ on its partition
/home on LVM
/usr on LVM
/var on LVM

Then I ran into this:

/dev/sdb3 9614148 1526468 7599304 17% /mnt/gentoo
/dev/sdb1 280003 11568 253979 5% /mnt/gentoo/boot
/dev/mapper/test-home
51606140 10289244 38695456 22% /mnt/gentoo/home
/dev/mapper/test-usr 14449712 10841540 2874172 80% /mnt/gentoo/usr
/dev/mapper/test-var 12385456 6405360 5350952 55% /mnt/gentoo/var


Ooops, /usr is almost full. Well that ain't good at all is it? Well
looky here:

/dev/sdb3 9614148 1526468 7599304 17% /mnt/gentoo
/dev/sdb1 280003 11568 253979 5% /mnt/gentoo/boot
/dev/mapper/test-home
51606140 10289244 38695456 22% /mnt/gentoo/home
/dev/mapper/test-usr 20642428 10845076 8748856 56% /mnt/gentoo/usr
/dev/mapper/test-var 12385456 6405360 5350952 55% /mnt/gentoo/var

That was like 5 minutes later and it was mounted the whole time too.
Yep, it's pretty neat. Now to work on this init crap. I read a couple
howtos but it is still murky. It'll sort out as I get started I guess.

I'm using ext4 by the way. It's been out a while and sounds like it is
stable.

Does LVM make the heads move around more or anything like that? I'm
just thinking it would, depending on what LVs are on what drives. I
dunno, just curious.

Thanks.

Dale

:-) :-)
Peter Humphrey
2011-09-18 22:20:02 UTC
Permalink
Post by Dale
Does LVM make the heads move around more or anything like that? I'm
just thinking it would depending on what lv are on what drives. I
dunno, just curious.
I haven't thought about that, but my first impression is that LVM won't make
any great difference. The data get stored where the data get stored, if you
see what I mean. How they're organised is in the implementation layers. (Am
I making sense? It's getting late here.)
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Dale
2011-09-19 04:20:01 UTC
Permalink
Post by Peter Humphrey
Does LVM make the heads move around more or anything like that? I'm
just thinking it would depending on what lv are on what drives. I
dunno, just curious.
I haven't thought about that, but my first impression is that LVM
won't make any great difference. The data get stored where the data
get stored, if you see what I mean. How they're organised is in the
implementation layers. (Am I making sense? It's getting late here.)
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Yea, I see the point. I was even thinking that if LVM is on multiple
drives and an lv was spanned across two or more drives, then it could
even be faster. Data spanned across two or more drives could result in
it reading more data faster since both drives are collecting data at
about the same time.

But then again, it depends on how the data is spread out too. I guess
it is six of one and half a dozen of the other.

Dale

:-) :-)
Pandu Poluan
2011-09-19 07:00:01 UTC
Permalink
Post by Dale
Post by Peter Humphrey
Does LVM make the heads move around more or anything like that? I'm
just thinking it would depending on what lv are on what drives. I
dunno, just curious.
I haven't thought about that, but my first impression is that LVM won't
make any great difference. The data get stored where the data get stored, if
you see what I mean. How they're organised is in the implementation layers.
(Am I making sense? It's getting late here.)
Post by Dale
Post by Peter Humphrey
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Yea, I see the point. I was even thinking that if LVM is on multiple
drives and the a lv was spanned across two or more drives, then it could
even be faster. Data spanned across two or more drives could result in it
reading more data faster since both drives are collecting data at about the
same time.
Post by Dale
But then again, it depends on how the data is spread out too. I guess it
is six of one and half a dozen of the other.
I'm not sure if LVM by itself implements striping. Most likely not because
LVM usually starts with 1 HD then gets additional PVs added. Plus there's
the possibility that the second PV has a different size.

I might be wrong, though, since all my experience with LVM involves only one
drive.

Rgds,
Alan McKinnon
2011-09-19 07:30:02 UTC
Permalink
On Mon, 19 Sep 2011 13:51:03 +0700
Post by Pandu Poluan
Post by Dale
Post by Peter Humphrey
Does LVM make the heads move around more or anything like that? I'm
just thinking it would depending on what lv are on what drives. I
dunno, just curious.
I haven't thought about that, but my first impression is that LVM won't
make any great difference. The data get stored where the data get
stored, if you see what I mean. How they're organised is in the
implementation layers. (Am I making sense? It's getting late here.)
Post by Dale
Post by Peter Humphrey
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Yea, I see the point. I was even thinking that if LVM is on
multiple
drives and the a lv was spanned across two or more drives, then it
could even be faster. Data spanned across two or more drives could
result in it reading more data faster since both drives are
collecting data at about the same time.
Post by Dale
But then again, it depends on how the data is spread out too. I guess it
is six of one and half a dozen of the other.
I'm not sure if LVM by itself implement striping. Most likely not
because LVM usually starts with 1 HD then gets additional PVs added.
Plus there's the possibility that the second PV has a different size.
I might be wrong, though, since all my experience with LVM involves
only one drive.
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.

Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
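
For reference, the striped form is roughly (the size and names are examples,
not a recommendation):

lvcreate -L 100G -i 2 -I 64 -n striped_lv test   # stripe across 2 PVs, 64k stripe size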
--
Alan McKinnnon
***@gmail.com
Dale
2011-09-19 08:10:02 UTC
Permalink
Post by Alan McKinnon
On Mon, 19 Sep 2011 13:51:03 +0700
Post by Pandu Poluan
Post by Dale
Post by Peter Humphrey
Does LVM make the heads move around more or anything like that? I'm
just thinking it would depending on what lv are on what drives. I
dunno, just curious.
I haven't thought about that, but my first impression is that LVM won't
make any great difference. The data get stored where the data get
stored, if you see what I mean. How they're organised is in the
implementation layers. (Am I making sense? It's getting late here.)
Post by Dale
Post by Peter Humphrey
--
Rgds
Peter Linux Counter 5290, 1994-04-23
Yea, I see the point. I was even thinking that if LVM is on
multiple
drives and the a lv was spanned across two or more drives, then it
could even be faster. Data spanned across two or more drives could
result in it reading more data faster since both drives are
collecting data at about the same time.
Post by Dale
But then again, it depends on how the data is spread out too. I guess it
is six of one and half a dozen of the other.
I'm not sure if LVM by itself implements striping. Most likely not
because LVM usually starts with 1 HD then gets additional PVs added.
Plus there's the possibility that the second PV has a different size.
I might be wrong, though, since all my experience with LVM involves
only one drive.
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
What I was thinking about is this. You have two drives that are one lv.
There has to be data stored on both drives at some point. Example: you
have a database that is 500GB. You have two drives that are 300GB
each that are in the same lv. Well, obviously 200GB has to be on a
different drive. Isn't that striping, which would result in a
speed increase?

Now if it is like me and is only one drive, then that won't happen.

Dale

:-) :-)
Alan McKinnon
2011-09-19 08:50:01 UTC
Permalink
On Mon, 19 Sep 2011 03:01:32 -0500
Post by Dale
Post by Alan McKinnon
Post by Pandu Poluan
I'm not sure if LVM by itself implements striping. Most likely not
because LVM usually starts with 1 HD then gets additional PVs
added. Plus there's the possibility that the second PV has a
different size.
I might be wrong, though, since all my experience with LVM involves
only one drive.
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
What I was thinking about is this. You have two drives that are one
lv. There has to be data stored on both drives at some point. Example:
you have a database that is 500GB. You have two drives that are
300GB each that are in the same lv. Well, obviously 200GB has to be
on a different drive. Isn't that striping, which would result
in a speed increase?
Now if it is like me and is only one drive, then that won't happen.
Think about this from a viewpoint of design.

You took two drives and put them in one big VG then assigned an LV to
the entire VG.

Now, what can you reasonably expect LVM to do with this? The obvious
answer is that PVs can be any old size and speed so LVM should just go
and do what it thinks is best. You only have one volume, there is zero
information available to the software to help it decide which PV is
better for which use, it can't look at your files on the LV and use
that to decide (LVM is clueless about fs structure and files), it
can't look at the connection type and decide to give higher priority to
Fibre connected drives in preference to USB connected drives. So the
only thing it could possibly do is maybe perhaps notice that the PVs
are the same size and maybe perhaps decide to do striping. Maybe. What
it will probably do is fill the first drive then start on the second.

Your case of two identical drives for a big database is not the usual
case for LVM, it is built to deal with VGs consisting of just about
anything. Any support it has for striping and mirroring would be
necessarily highly limited.

There is a MUCH better way to do this: RAID, which was designed to deal
with exactly this kind of thing. You know how you want those two drives
to behave, so put them in a RAID array first, set up the way you want
them. That will give you a block device that you turn into a PV, add
this single PV to a VG and make an LV from that.
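In commands, that layering might look roughly like this (sdb1, sdc1 and the
vg0/data names are made up, and the RAID level is whatever behaviour you
actually want from the drives):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0              # the whole array becomes a single PV
vgcreate vg0 /dev/md0          # one VG built from that one PV
lvcreate -L 400G -n data vg0   # carve LVs out of the VG as usual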
--
Alan McKinnnon
***@gmail.com
Thanasis
2011-09-19 08:50:01 UTC
Permalink
Post by Dale
What I was thinking about is this. You have two drives that are one lv.
There has to be data stored on both drives at some point. Example: you
have a database that is 500GB. You have two drives that are 300GB
each that are in the same lv. Well, obviously 200GB has to be on a
different drive. Isn't that striping
I think, for striping you have to use
lvcreate with option "-I StripeSize"
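That is, something roughly like this (vg0 and dbvol are made-up names; -i
sets the number of stripes and -I the stripe size in kilobytes):

lvcreate -i 2 -I 64 -L 500G -n dbvol vg0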
Joost Roeleveld
2011-09-19 09:00:01 UTC
Permalink
Post by Dale
Post by Alan McKinnon
On Mon, 19 Sep 2011 13:51:03 +0700
Post by Pandu Poluan
I'm not sure if LVM by itself implements striping. Most likely not
because LVM usually starts with 1 HD then gets additional PVs added.
Plus there's the possibility that the second PV has a different size.
I might be wrong, though, since all my experience with LVM involves
only one drive.
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
What I was thinking about is this. You have two drives that are one lv.
There has to be data stored on both drives at some point. Example: you
have a database that is 500GB. You have two drives that are 300GB
each that are in the same lv. Well, obviously 200GB has to be on a
different drive. Isn't that striping, which would result in a
speed increase?
I think you're using the wrong names for the different pieces.
The 2 300GB drives will be Physical Volumes (PV)
These 2 PVs are in the Volume Group (VG)
This VG has 1 LV for database.

This LV, as it's at least 500GB, will have parts on both PVs (hard drives)

This will have increased performance when the data being read happens to be on
2 physically different drives. If you just extend over 2 (or more) drives,
this is not very likely as the first 300GB of data will still, physically, be
on the same drive.

To spread the data more equally (eg. sequential parts will be on alternating
drives) you have 2 options:
1) Merge the 2 drives in a single RAID0-device (for striping)
2) Tell LVM to use striping for the LV when creating it.

My personal preference would be option 1 as I agree with Alan that LVM should
stick to managing LVs and leave striping and other options to RAID-
devices/software.
Post by Dale
Now if it is like me and is only one drive, then that won't happen.
With only one drive, definitely not :)

--
Joost
Alan McKinnon
2011-09-19 14:30:02 UTC
Permalink
On Mon, 19 Sep 2011 10:55:19 +0200
Post by Joost Roeleveld
My personal preference would be option 1 as I agree with Alan that
LVM should stick to managing LVs and leave striping and other options
to RAID- devices/software.
My preference is to get rid of this whole artificial
disk/block-device/partition/pv/vg/lv nonsense and have one layer that
does it all. I really don't see the point in persisting with keeping
knowledge of distinct disks all the way through the stack, all of that
should just be abstracted into "storage" that can have rules attached
where you specify how you want stuff to behave (like mirroring and
striping).

As it is, each layer is a very thin wrapper around a physical object
and we only lose that distinction when we create lvs. Even though I
understand how the whole stack works, it's still hard to visualize (5
layers!) and consumes way too much time explaining it to people. All
rather unnecessary.

I had high hopes that ZFS would take us to a new place where all that
would be possible.
--
Alan McKinnnon
***@gmail.com
Pandu Poluan
2011-09-19 15:50:02 UTC
Permalink
Post by Alan McKinnon
My preference is to get rid of this whole artificial
disk/block-device/partition/pv/vg/lv nonsense and have one layer that
does it all. I really don't see the point in persisting with keeping
knowledge of distinct disks all the way through the stack, all of that
should just be abstracted into "storage" that can have rules attached
where you specify how you want stuff to behave (like mirroring and
striping).
As it is, each layer is a very thin wrapper around a physical object
and we only lose that distinction when we create lvs. Even though I
understand how the whole stack works, it's still hard to visualize (5
layers!) and consumes way too much time explaining it to people. All
rather unnecessary.
I had high hopes that ZFS would take us to a new place where all that
would be possible.
Sounds like you need a SAN Storage solution like NetApp or HDS :-)

Heck, OpenFiler is a surprisingly good solution; it's now being used in
production in my company's subsidiary.

Rgds,
Alan McKinnon
2011-09-19 19:20:02 UTC
Permalink
On Mon, 19 Sep 2011 22:40:36 +0700
Post by Pandu Poluan
Post by Alan McKinnon
I had high hopes that ZFS would take us to a new place where all
that would be possible.
Sounds like you need a SAN Storage solution like NetApp or HDS :-)
I have those options too. But the team that runs the SAN charges for it, and
the rates are not cheap.

So far the best external storage our team has ever had is the Dell
M9000 on the ftp server. 15TB raw space, 11TB usable, connected as a
simple DAS, and it's run for years without any issues at all.
Post by Pandu Poluan
Heck, OpenFiler is a surprisingly good solution; it's now being used
in production in my company's subsidiary.
A colleague is thrilled with OpenFiler, I haven't checked it out myself
yet. I might just try it out on a media server.
--
Alan McKinnnon
***@gmail.com
Pandu Poluan
2011-09-20 00:30:02 UTC
Permalink
Post by Alan McKinnon
On Mon, 19 Sep 2011 22:40:36 +0700
Post by Pandu Poluan
Post by Alan McKinnon
I had high hopes that ZFS would take us to a new place where all
that would be possible.
Sounds like you need a SAN Storage solution like NetApp or HDS :-)
I have those options too. But the team that runs the SAN charges for it, and
the rates are not cheap.
Yeah, tell me about it.

The day after we got our first NetApp array, the Finance Director demanded
to know how we were going to recoup the cost >.<

Then the next week, the company got its bacon saved when the production
database got corrupted. We quickly mounted the previous hour's snapshot and
averted a day of downtime. Suddenly, the NetApp array became the company's most
valuable asset.

(Reading BOFH indeed opened up my team's unlimited resource of creativity
;-) )
Post by Alan McKinnon
So far the best external storage our team has ever had is the Dell
M9000 on the ftp server. 15TB raw space, 11TB usable, connected as a
simple DAS, and it's run for years without any issues at all.
Post by Pandu Poluan
Heck, OpenFiler is a surprisingly good solution; it's now being used
in production in my company's subsidiary.
A colleague is thrilled with OpenFiler, I haven't checked it out myself
yet. I might just try it out on a media server.
You should! If you want to play around first, install it as a VM.

There are some gotchas, not necessarily data-corrupting bugs, but annoying
ones like: if the conditions are right, a reboot will fail to remount LVs
(a simple edit to one of the scripts fixes that). And configuring it to
provide iSCSI LUNs is very... involved.

But once you get the hang of it, everything will be a breeze, just like the
... involved way of installing Gentoo :-)

As for stability, I have no complaint whatsoever.

Besides OpenFiler, you might also want to take a look at Nexenta. Similar
solution to OpenFiler (i.e., convert a server-full-of-hard-disks into a SAN
Storage), but based on OpenSolaris.

And if you do not need the utmost performance, e.g., just a never-ending
NAS, I have heard that QNAP enclosures are the act to follow.

Rgds,
Alan McKinnon
2011-09-20 09:30:01 UTC
Permalink
On Tue, 20 Sep 2011 07:19:18 +0700
Pandu Poluan <***@poluan.info> wrote:

[snip]
Post by Pandu Poluan
Post by Alan McKinnon
Post by Pandu Poluan
Sounds like you need a SAN Storage solution like NetApp or HDS :-)
I have those options too. But the team that runs the SAN charges for it, and
the rates are not cheap.
Yeah, tell me about it.
The day after we got our first NetApp array, the Finance Director
demanded to know how we were going to recoup the cost >.<
Then the next week, the company got its bacon saved when the
production database got corrupted. We quickly mounted the previous
hour's snapshot and averted a day of downtime. Suddenly, the NetApp array
became the company's most valuable asset.
(Reading BOFH indeed opened up my team's unlimited resource of
creativity ;-) )
Ah, bean counters. Those fellows have an annoying habit of asking:

"Please tell me why I must spend a trifling amount of money (in
comparison to my own wages) on something that protects our major asset?"

I have found that cron and at provide the way to demonstrate exactly
why one must spend that money.
--
Alan McKinnnon
***@gmail.com
Pandu Poluan
2011-09-19 09:00:01 UTC
Permalink
Post by Alan McKinnon
On Mon, 19 Sep 2011 13:51:03 +0700
Post by Pandu Poluan
I'm not sure if LVM by itself implements striping. Most likely not
because LVM usually starts with 1 HD then gets additional PVs added.
Plus there's the possibility that the second PV has a different size.
I might be wrong, though, since all my experience with LVM involves
only one drive.
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
Ah, thanks for the correction. Anyways, I agree with your last sentence.

Soft-RAID is always a catastrophe waiting to happen.

Rgds,
Alan McKinnon
2011-09-19 14:20:02 UTC
Permalink
On Mon, 19 Sep 2011 15:54:52 +0700
Post by Pandu Poluan
Post by Alan McKinnon
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
Ah, thanks for the correction. Anyways, I agree with your last
sentence.
Soft-RAID is always a catastrophe waiting to happen.
My experience has always been that Linux software raid gets the job
done perfectly, every time, no issues and has never failed me.

Hardware RAID is another story, especially those cheap nasty
onboard pseudo-RAID thingies. Doubly so if the name Adaptec appears
anywhere.
--
Alan McKinnnon
***@gmail.com
Michael Mol
2011-09-19 14:30:03 UTC
Permalink
Post by Alan McKinnon
On Mon, 19 Sep 2011 15:54:52 +0700
Post by Pandu Poluan
Post by Alan McKinnon
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
Ah, thanks for the correction. Anyways, I agree with your last sentence.
Soft-RAID is always a catastrophe waiting to happen.
My experience has always been that Linux software raid gets the job
done perfectly, every time, no issues and has never failed me.
Hardware RAID is another story, especially those cheap nasty
onboard pseudo-RAID thingies. Doubly so if the name Adaptec appears
anywhere.
Seconding software raid. Looking forward to btrfs being full-featured,
though, as integrating lvm and raid functionality sanely into the
filesystem a la ZFS is going to be very, very nice.
--
:wq
Pandu Poluan
2011-09-19 15:50:02 UTC
Permalink
Post by Alan McKinnon
On Mon, 19 Sep 2011 15:54:52 +0700
Post by Pandu Poluan
Post by Alan McKinnon
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
Ah, thanks for the correction. Anyways, I agree with your last sentence.
Soft-RAID is always a catastrophe waiting to happen.
My experience has always been that Linux software raid gets the job
done perfectly, every time, no issues and has never failed me.
Hardware RAID is another story, especially those cheap nasty
onboard pseudo-RAID thingies. Doubly so if the name Adaptec appears
anywhere.
Ah yes. If we're talking about entry-level RAID, then I agree; Linux's RAID
is head and shoulders above cheapo Adaptec pseudo-RAID.

But in an Enterprise setting, I trust the server box's battery-backed RAID
controller better than any software solution.

At least, with the hardware controller we have a 3rd party to sue ;-)

Rgds,
David W Noon
2011-09-19 11:40:02 UTC
Permalink
On Mon, 19 Sep 2011 09:26:16 +0200, Alan McKinnon wrote about "Re:
[gentoo-user] grub and what happens exactly when booting.":

[snip]
Post by Alan McKinnon
LVM does do striping according to the man page. I've never tried it,
mostly because LVM is the wrong place to do that IMHO.
Use RAID for that instead and leave LVM to do what it's good at -
managing storage volumes
But LVM and software RAID do that in the same place: the "dm" driver,
inside the kernel. Using LVM to create stripes, when that is the only
RAID-like feature you need, eliminates the complexity of having the RAID
software installed.
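For example, dmsetup shows which device-mapper target backs an LV; a striped
LV reports a "striped" target while an ordinary one reports "linear"
(vg0-dbvol being the made-up LV name from earlier in the thread):

dmsetup table vg0-dbvol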
--
Regards,

Dave [RLU #314465]
======================================================================
***@ntlworld.com (David W Noon)
======================================================================
Gregory Shearman
2011-09-17 01:20:02 UTC
Permalink
Post by Dale
I'm getting this LVM thing down pat tho.
cfdisk to create partitions, if not using the whole drive.
pvcreate
vgcreate
lvcreate
then put on a file system and mount.
Sounds good.
Post by Dale
I still get them confused as to what comes first but I got some pictures
to look at now. That helps to picture what I am doing, sort of.
pv = physical volume. Physical comes first.

vg = volume group. LVM volumes MUST belong to a group.

lv = logical volume. Logical obviously comes last.
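Spelled out as commands, with a made-up /dev/sdb1 partition and vg0/root
names (any filesystem will do):

pvcreate /dev/sdb1               # physical volume first
vgcreate vg0 /dev/sdb1           # volume group on top of the PV(s)
lvcreate -L 20G -n root vg0      # logical volume inside the group
mkfs.ext4 /dev/vg0/root          # then the filesystem...
mount /dev/vg0/root /mnt/gentoo  # ...and mount as usual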
Post by Dale
Thanks to all for the advice tho. It's helping. Still nervous about /
on LVM tho. :/
If you're nervous then don't do it. I've had my root filesystem on LVM
for years on a number of different machines and never had a problem.
I've used Genkernel's initramfs generator to create my initramfs, but
I've just unmasked dracut and I've begun testing it on my system. It
looks good so far, except that dracut mounts something on my nonexistent
/run directory. This caused a warning when displaying mounted
filesystems using "df" about /run not existing. I'm wondering whether I
can just add the directory or do I need to do something special such as
add a ".keep" file to it.

If you want to go ahead with root on LVM then keep a duplicate root
filesystem in a normal Linux partition until you're satisfied, or back up
your root partition to another machine or disk drive. You should be
backing up your system anyway. My root filesystem on my laptop is only
161MB. Of course I've got separate /usr and /var filesystems, which is
the reason for my testing of dracut. I want to be ahead of the game when
these forced udev changes become mandatory.

I'm not happy about the decision to make /usr necessary for udev to
populate /dev, but I've used an initramfs for years so it's not such a
wrench for me as it is for other gentoo users.
--
Regards,
Gregory.