Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
On Thu, Dec 21, 2006 at 05:57:31PM -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
AFAIK, the only way to achieve > 15 "partitions" per disk with /dev/sd* is to use device mapper. I'm not sure what the upper limit of logical volumes per PV is, but I'm fairly sure it's higher than 15.
Dave
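For the archives, a minimal sketch of that device-mapper/LVM approach, assuming one large partition is handed to LVM (the partition, VG and LV names here are only examples, not anything the installer creates):

# pvcreate /dev/sda3                        (turn the big partition into an LVM physical volume)
# vgcreate researchvg /dev/sda3             (create a volume group on it)
# lvcreate -L 500M -n slice01 researchvg    (repeat for as many small "partitions" as needed)
# lvcreate -L 500M -n slice02 researchvg
# mkfs.ext3 /dev/researchvg/slice01
# mount /dev/researchvg/slice01 /mnt/slice01

Each LV shows up as its own block device under /dev/<vgname>/, so nothing runs into the 15-partition limit of the sd driver.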
On Thursday 21 December 2006 10:31pm, Dave Jones wrote:
On Thu, Dec 21, 2006 at 05:57:31PM -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
afaik, The only way to achieve > 15 "partitions" per disk with /dev/sd* is to use device mapper. I'm not sure the upper limit of logical volumes per PV, but I'm fairly sure it's higher than 15.
Under LVM1 (2.4 kernel and earlier) it was 256 LVs per *system* (only one major number is used for all of LVM). The number of VGs (limited to 99, BTW) was not a factor, nor the number of PVs.
Because the 32-bit device number space is split as 12 bits for the major and 20 bits for the minor number (the minor got the larger share), I would guess that LVM2 (kernel 2.6 & later) supports on the order of a million (2^20) LVs per major number, far more than anyone is likely to hit in practice.
On Fri, Dec 22, 2006 at 12:31:21AM -0500, Dave Jones wrote:
On Thu, Dec 21, 2006 at 05:57:31PM -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
afaik, The only way to achieve > 15 "partitions" per disk with /dev/sd* is to use device mapper. I'm not sure the upper limit of logical volumes per PV, but I'm fairly sure it's higher than 15.
You can also use GPT (GUID Partition Table) for non-boot disks (or for all disks on ia64/s390). If I remember correctly, the number of partitions is not limited by the GPT specification (the limit depends on the implementation; usually 128 partitions). 'parted' supports GPT.
http://en.wikipedia.org/wiki/GUID_Partition_Table
Karel
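For completeness, a minimal sketch of what Karel describes, assuming a spare non-boot disk (the device name and sizes are examples only; mklabel destroys the existing partition table):

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary ext3 0 500      (first partition, 0-500 MB)
# parted /dev/sdb mkpart primary ext3 500 1000   (and so on, well past 15)
# parted /dev/sdb print

Whether the running kernel actually exposes more than 15 /dev/sdb* nodes still depends on the sd driver limits discussed elsewhere in this thread, so this mainly helps where that limit doesn't apply or where the partitions are consumed by LVM/device mapper.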
On Thu, 2006-12-21 at 17:57 -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
--
Use LVM. Trust me. You won't be sorry.
- Gilboa
Gilboa Davara wrote:
On Thu, 2006-12-21 at 17:57 -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
--
Use LVM. Trust me. You won't be sorry.
"Use LVM" is not the answer. If this was possible before, the system should upgrade to FC-7 flawlessly: if /dev/hdx accommodated more than 15 partitions, then so should the new /dev/sdx when it becomes the new device for this disk.
Regards,
Hans
On Fri, 2006-12-22 at 08:32 +0100, Hans de Goede wrote:
Gilboa Davara wrote:
On Thu, 2006-12-21 at 17:57 -0500, Clyde E. Kunkel wrote:
Hello,
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
--
Use LVM. Trust me. You won't be sorry.
"Use LVM" is not the answer. If this was possible before, the system should upgrade to FC-7 flawlessly: if /dev/hdx accommodated more than 15 partitions, then so should the new /dev/sdx when it becomes the new device for this disk.
Regards,
Hans
Short answer: Use LVM! Long answer: Asking the kernel hackers to risk breaking compatibility inside the SCSI layer [1] and break God-knows-how-many-different-user-land-applications just to handle a 0.001% end case that can be handled differently (disable libata, use LVM) is pure madness. You can argue that it should be possible to disable libata in special end-cases such as the OP's case (and I'll second your motion), but that's another matter altogether.
- Gilboa
[1] I'm not very familiar with the partition/fs code, but something tells me the 16x minor->physical ID translation is hard coded everywhere. Plus, increasing the number of partitions to 63 would reduce the number of possible SCSI drives to 8 (?? I'm not sure; should be 512/64) and there are many people (including myself) running software RAID5/6 configurations with more than 8 SCSI/SATA drives.
On Fri, Dec 22, 2006 at 08:32:53AM +0100, Hans de Goede wrote:
upgrade to FC-7 flawlessly: if /dev/hdx accommodated more than 15 partitions, then so should the new /dev/sdx when it becomes the new device for this disk.
That's something you need to take upstream. I would also like to see it happen, but previous patches to allow sparse minor numbers have always been vetoed by some of the fs maintainers.
On Saturday 30 December 2006 04:55pm, Alan Cox wrote:
On Fri, Dec 22, 2006 at 08:32:53AM +0100, Hans de Goede wrote:
upgrade to FC-7 flawlessly: if /dev/hdx accommodated more than 15 partitions, then so should the new /dev/sdx when it becomes the new device for this disk.
Thats something you need to take upstream. I would also like to see it happen but previous patches to allow sparse minor numbers have always been vetoed by some of the fs maintainers.
Since 2.6 and the 32-bit device ID space?
On Sat, 2006-12-30 at 18:55 -0500, Alan Cox wrote:
On Fri, Dec 22, 2006 at 08:32:53AM +0100, Hans de Goede wrote:
upgrade to FC-7 flawlessly: if /dev/hdx accommodated more than 15 partitions, then so should the new /dev/sdx when it becomes the new device for this disk.
Thats something you need to take upstream. I would also like to see it happen but previous patches to allow sparse minor numbers have always been vetoed by some of the fs maintainers.
when libata moves away from scsi this is easily solvable ;)
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
LVM does not inter-operate with anything else. Grub does not work under LVM. Parted does not grok LVM: you cannot create a hard partition from LVM free space. Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically (you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
LVM probably kills all low-level backup and recovery. Do not use LVM unless you are 100.000000% certain that you will never be faced with a hardware disaster.
On Friday 22 December 2006 10:50, John Reiser wrote:
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically
I call BS. I test the rescueCD quite a lot during the development process, and it _always_ finds the LVM and sets it up automatically.
Jesse Keating wrote:
On Friday 22 December 2006 10:50, John Reiser wrote:
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically
I call BS. I test the rescueCD quite a lot during the development process, and it _always_ finds the LVM and sets it up automatically.
Didn't work for me in March 2006 on x86_64 using a then-reasonably-current rescue CD (probably a couple months old).
On Fri, 2006-12-22 at 08:57 -0800, John Reiser wrote:
Jesse Keating wrote:
On Friday 22 December 2006 10:50, John Reiser wrote:
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically
I call BS. I test the rescueCD quite a lot during the development process, and it _always_ finds the LVM and sets it up automatically.
Didn't work for me in March 2006 on x86_64 using a then-reasonably-current rescue CD (probably a couple months old).
Bugzilla #?
- Gilboa
On Fri, 2006-12-22 at 08:57 -0800, John Reiser wrote:
I call BS. I test the rescueCD quite a lot during the development process, and it _always_ finds the LVM and sets it up automatically.
Didn't work for me in March 2006 on x86_64 using a then-reasonably-current rescue CD (probably a couple months old).
Even when it doesn't:
lvm vgchange -a y
And all your VGs should appear.
Now if only enabling !@#$ RAID were as easy.
Also, renaming a VG is a total bitch if you're booting from it. The name seems to get hardwired into the initrd. And... changing that is such an arcane ritual I'm not going to go into it.
... Unless I've missed something. Naming your VGs after the hostname maybe isn't such a great idea.
On Sat, Dec 23, 2006 at 04:26:04AM -0600, Callum Lerwick wrote:
Also, renaming a VG is a total bitch if you're booting from it. The name seems to get hardwired into the initrd. And... changing that is such an arcane ritual I'm not going to go into it.
... Unless I've missed something. Naming your VGs after the hostname maybe isn't such a great idea.
After being bitten with the duplicate VG name thing, I think the installer should assign a random UUID-like name to VGs.
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
LVM does not inter-operate with anything else. Grub does not work under LVM.
Why should it?
Parted does not grok LVM:
LVM doesn't require parted in order to resize partitions. It's called a logical volume manager for a reason, you know?
you cannot create a hard partition from LVM free space.
HUH? Why-on-earth-would-you-want-to-do-that?
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically (you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
I can agree that Fedora documentation on the subject is... missing. A. TLDP has excellent on-line documentation. [1] B. system-config-lvm is improving constantly. C. Documentation missing? Join the documentation team and help them fix it.
Nevertheless, I never had any problem mounting LVM under rescue CDs.
LVM probably kills all low-level backup and recovery.
At also "kills" when you meed to dynamically set and modify large number of partitions on multiple drives.
Do not use LVM unless you are 100.000000% certain that you will never be faced with a hardware disaster.
... If LVM is stable enough to be the default option under RHEL, it's stable enough for me.
Either way, nobody is forcing LVM down your throat. Don't like it? Don't use it.
- Gilboa
[1] http://tldp.org/HOWTO/LVM-HOWTO/
At 08:11 AM 12/22/2006, Gilboa Davara wrote:
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply.
I used LVM for a few years to double up hard drives (in stripes) to increase performance. I think it worked well that way, even though I don't exactly need that kind of performance most of the time. It was just a casual way of trying out new technologies.
I did stop using it after one of my hard drives started to fail and I plugged both hard drives into another Fedora Core machine also configured with LVM, and I couldn't find a way to boot up that machine with both sets of LVM. IIRC, it complained about 2 logical partitions of the same name (a collision), or something like that. With the extra LVM volume removed, I could boot; but with them plugged in, I couldn't boot (the otherwise healthy LVM volumes) at all.
Is there a solution for such situation? I admit that I never emailed this mailing list for help and perhaps didn't read up enough documentation, even though I read up a lot years before that.
I gave up using LVM, partly because after reading it up so much, I was still having troubles rescuing data on my failing hard drives. I think with improved tools, it need not be so difficult.
I'm having a lot of problems remembering just which partitions belong to which LVM volumes and which I can format to free some partitions out. Partition labelling support should be added to be practical. Or maybe I should not choose a potentially hard-to-remember partitioning scheme, but should use only one LVM per disk and not striped.
Hope I'm not choosing any side here. I'm just hoping the experience with LVM, especially when working at the partition level can be improved.
On Friday 22 December 2006 09:55am, Daniel Yek wrote:
At 08:11 AM 12/22/2006, Gilboa Davara wrote:
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply.
I used LVM for a few years to double up hard drives (in stripes) to increase performance. I think it worked well that way, even though I don't exactly need that kind of performance most of the time. It was just a casual way of trying out new technologies.
Software RAID would have been a little better for that. Personally, I subscribe to the "use the right tool for the job" theory. RAID for RAID things, LVM on top of RAID for manageability.
Still, it does work well for that situation.
I did stop using it after one of my hard drive started to fail and I plug both hard drives into another Fedora Core machine also configured with LVM, and I couldn't find a way to boot up that machine with both sets of LVM. IIRC, it complained about 2 logical partitions of the same names (collision), or something like that. With the extra LVM volume removed, I could boot; but with them plugged in, I couldn't boot (the otherwise healthy LVM volumes) at all.
Been there. The problem is that you had two volume groups (VG) with the same name.
Is there a solution for such situation? I admit that I never emailed this mailing list for help and perhaps didn't read up enough documentation, even though I read up a lot years before that.
Yup. Really simple fix. Rename one of the VGs with "vgrename". Some ways of doing this:
1. Give each machine's VG the same name as the box to begin with at installation. That way, when you get into the situation of wanting to mount up the disks from one box in another, there will not be a "conflict" to begin with.
2. Rename the VG on the hosting system (i.e., the box you're putting the disks into). This requires a number of simple steps to complete successfully.
To use vgrename, the entire VG must be offline. So, boot a rescue environment (via CD, PXE, however), skip trying to mount up the disk (or use the "nomount" option on the "rescue" boot: line), and run (assuming the old name is "vg0" and the new name should be "herold"):
# lvm
lvm> vgscan
  . . . output omitted . . .
lvm> vgchange -a n vg0
  . . . output omitted . . .
lvm> vgrename vg0 herold
  . . . output omitted . . .
lvm> vgchange -a y herold
  . . . output omitted . . .
lvm> exit
Then, mount up the root LV, the /boot/ partition, and things like /usr/ and /dev/:
# mkdir /mnt/sysimage
# mount /dev/herold/root /mnt/sysimage
# mount /dev/sda1 /mnt/sysimage/boot
# mount /dev/herold/usr /mnt/sysimage/usr
Of course, change the names of the devices appropriately.
You can then "chroot" and run "mkinitrd" to fix up the name of the root device (because the VG name changed). Also, don't forget to change the "root=" value(s) in grub.conf (menu.lst on any other distribution). I usually just snag the mkinitrd command out of the "/sbin/new-kernel-pkg" script (use "rpm -q --scripts kernel-`uname -r`" to see that's what the kernel RPMs run).
3. Rename the VG of the system that you are moving the drive(s) from. Just use a rescue environment, like I just showed. However, in this case, if you are not planning on returning the disks to the other machine (i.e., you're replacing them with new ones or rebuilding the system after getting some data off the disks, etc.), then you don't need to run mkinitrd or edit the grub.conf (or menu.lst for those using these instructions on other distros) file.
4. Disconnect the host system's drive(s), boot with a rescue CD (or PXE, etc.) and do what you need to do to the disks.
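Here's the promised sketch of the chroot/mkinitrd step from option 2. The VG name "herold" and the kernel version are placeholders only; check /lib/modules/ (or "rpm -q kernel") inside the chroot for the real version on that disk:

# chroot /mnt/sysimage
# mount -t proc proc /proc     (mkinitrd may need /proc inside the chroot)
# mkinitrd -f /boot/initrd-2.6.18-1.2798.fc6.img 2.6.18-1.2798.fc6
# vi /boot/grub/grub.conf      (point the root= entries at /dev/herold/...)
# exit

After that, the renamed VG should boot normally.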
I gave up using LVM, partly because after reading it up so much, I was still having troubles rescuing data on my failing hard drives. I think with improved tools, it need not be so difficult.
I'm having a lot of problems remembering just which partitions belong to which LVM volumes and which I can format to free some partitions out. Partition labelling support should be added to be practical. Or maybe I should not choose a potentially hard-to-remember partitioning scheme, but should use only one LVM per disk and not striped.
The default LV names that anaconda suggests/uses are stupid. I say that because the LV names are meaningless and lead to just this problem. I have *always* named my LVs so as to indicate which part of the filesystem is on them. For example, when I create an LV for /var/log/, I run (use whatever size you like, it's just a placeholder in this example):
# lvcreate -L 256M -n varlog vg_name
Entries in /etc/fstab are extremely easy to read like this.
Hope I'm not choosing any side here. I'm just hoping the experience with LVM, especially when working at the partition level can be improved.
I don't think you are.
LVM is one of the coolest things there is. Many people don't understand the basics of how and why to use LVM, simply because they don't know where to get the education about it. There is good documentation about how to use LVM's commands to get things done, but not much about how to really benefit from it, organize it, and make decisions about how to use LVM at the system level.
There are only a small handful of the available LVM commands that are needed for regular stuff. They all have common consistent switches for their command lines and are very easy to use.
Lamont Peterson wrote:
LVM is one of the coolest things there is. Many people don't understand the basics of how & why LVM is, simply because they don't know where to get the
When LVM fails, though, there are no recovery tools. You can recover the filesystem inside an LVM volume (I speak from experience) but your only friends are dd and a hex editor. There is no redundancy, unlike genuine filesystems like ext2/3: if the LVM chunk before the actual filesystem is corrupted, the volume won't mount as LVM and that's your lot from the One True Way.
LVM binding raided storage together makes sense and buys you something. LVM being the default -- even for a laptop that cannot increase its permanent storage -- only has the capacity to make a crisis into a disaster.
-Andy
On Fri, 2006-12-22 at 21:48 +0000, Andy Green wrote:
When LVM fails though, there are no recovery tools. You can recover the filesystem inside an LVM (I speak from experience) but your only friends are dd and a hex editor. There is no redundancy unlike genuine filesystems like ext2/3, if the LVM chunk before the actual filesystem is corrupted, the volume won't mount as LVM and that's your lot from the One True Way.
And you could have set your laptop on top of a tape degausser. You have been making regular backups haven't you?
Jeff
Jeffrey C. Ollie wrote:
On Fri, 2006-12-22 at 21:48 +0000, Andy Green wrote:
When LVM fails though, there are no recovery tools. You can recover the filesystem inside an LVM (I speak from experience) but your only friends are dd and a hex editor. There is no redundancy unlike genuine filesystems like ext2/3, if the LVM chunk before the actual filesystem is corrupted, the volume won't mount as LVM and that's your lot from the One True Way.
And you could have set your laptop on top of a tape degausser. You have been making regular backups haven't you?
I had a backup a week old, but that doesn't excuse LVM from *escalating a lost sector into a lost filesystem*. Why ask me about my personal disaster recovery when we talk about taking action to minimize the chance of disaster for the whole class of storage?
When LVM is inflicted on to situations that cannot benefit from it, the end result is you made something more fragile for no gain: that can't be right.
-Andy
On Sat, 2006-12-23 at 10:35 +0000, Andy Green wrote:
Jeffrey C. Ollie wrote:
On Fri, 2006-12-22 at 21:48 +0000, Andy Green wrote:
When LVM fails though, there are no recovery tools. You can recover the filesystem inside an LVM (I speak from experience) but your only friends are dd and a hex editor. There is no redundancy unlike genuine filesystems like ext2/3, if the LVM chunk before the actual filesystem is corrupted, the volume won't mount as LVM and that's your lot from the One True Way.
And you could have set your laptop on top of a tape degausser. You have been making regular backups haven't you?
I had a backup a week old, but that doesn't excuse LVM from *escalating a lost sector into a lost filesystem*. Why ask me about my personal disaster recovery when we talk about taking action to minimize the chance of disaster for the whole class of storage?
When LVM is inflicted on to situations that cannot benefit from it, the end result is you made something more fragile for no gain: that can't be right.
Every PV keeps duplicate copies of the metadata by default. You can optionally make it store three. Two at the start, one at the end. And every PV in a VG has a copy of the metadata as well. So with two PVs that's four copies of the metadata by default, and optionally 6.
... And backing up your LVM metadata to your /boot partition isn't a bad idea either. As well as an offline backup.
So I have 11 copies of my LVM metadata. That seems rather redundant to me.
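On the "backing up your LVM metadata" point: LVM2 already drops text backups of the metadata under /etc/lvm/backup and /etc/lvm/archive, and you can write an extra copy wherever you like. A minimal sketch, with the VG name and paths as examples only:

# vgcfgbackup -f /boot/VolGroup00.vgcfg VolGroup00     (plain-text dump of the VG metadata)
# vgcfgrestore -f /boot/VolGroup00.vgcfg VolGroup00    (restore it later if the on-disk copies are damaged)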
Callum Lerwick wrote:
When LVM is inflicted on to situations that cannot benefit from it, the end result is you made something more fragile for no gain: that can't be right.
Every PV keeps duplicate copies of the metadata by default. You can optionally make it store three. Two at the start, one at the end. And every PV in a VG has a copy of the metadata as well. So with two PVs that's four copies of the metadata by default, and optionally 6.
... And backing up your LVM metadata to your /boot partition isn't a bad idea either. As well as an offline backup.
So I have 11 copies of my LVM metadata. That seems rather redundant to me.
Sounds impressive, but the LVM that was damaged here was not visible nor mountable in Fedora, although the filesystem behind it was mostly intact and mounted okay when I had a copy of it without the LVM stuff in front of it. Googling at the time didn't bring up anything about how to use these proposed copies of "metadata", nor was anything done about them or mentioned about them by the LVM stuff in Fedora, nor had I heard about LVM metadata before today. All the damaged LVM header did for me was hide my mostly intact filesystem from being fsck'd or mounted.
Further, you take great care to have 11 copies of something you don't even need in, say, a laptop usage case, in case it breaks (because if it does break, you experience the truth of what I previously related about not being able to get at your filesystem). This looks like a pointless and dangerous burden to place on someone who is getting nothing from having LVM there in the first place. LVM on RAID can make sense; in other common usage cases it is only a net risk.
-Andy
Personally I agree with Andy in what he's said before on the part of LVM.
First of all, I have experienced the same "no help hell" with LVM when it crashes (not if, when). I had FC3 installed on my Compaq R3000 Series notebook, default settings except for package options. Ran that for about three weeks. I reinstall to test out new distros quite a lot, so I don't worry about backing up the entire system, and then something happened with the system: the kernel panicked when it couldn't find a root FS. Same problem as Andy.
Basically, all I'm trying to say is that yes, it fails sometimes, and yes, it is a cool thing to have, but either give us some tools on the default install/rescue disc to combat the issue, or take it out unless the system detects a RAID array.
Thanks guys,
Chris
On Dec 23, 2006, at 8:24 AM, Andy Green wrote:
Callum Lerwick wrote:
When LVM is inflicted on to situations that cannot benefit from it, the end result is you made something more fragile for no gain: that can't be right.
Every PV keeps duplicate copies of the metadata by default. You can optionally make it store three. Two at the start, one at the end. And every PV in a VG has a copy of the metadata as well. So with two PVs that's four copies of the metadata by default, and optionally 6. ... And backing up your LVM metadata to your /boot partition isn't a bad idea either. As well as an offline backup. So I have 11 copies of my LVM metadata. That seems rather redundant to me.
Sounds impressive, but the LVM that was damaged here was not visible nor mountable in Fedora, although the filesystem behind it was mostly intact and mounted okay when I had a copy of it without the LVM stuff in front of it. Googling at the time didn't bring up anything about how to use these proposed copies of "metadata", nor was anything done about them or mentioned about them by the LVM stuff in Fedora, nor had I heard about LVM metadata before today. All the damaged LVM header did for me was hide my mostly intact filesystem from being fsck'd or mounted.
Further, you are full of care to have 11 copies of something you don't even need in, say, a laptop usage case, in case it breaks (because if it does break, you experience the truth of what I previously related about not being able to get at your filesystem). This looks like a pointless and dangerous burden to place on someone who is getting nothing from having LVM there in the first place. LVM on raid can make sense, in other common usage cases it is only a net risk.
-Andy
First of all I have experienced the same "no help hell" with lvm when it crashes. (not if, when) I had FC3 installed on my Compaq R3000 Series notebook, default settings, except for package options. Ran that for about three weeks. I reinstall testing out new distros quite a lot, so I don't worry about backing up the entire system, and something happened with the system, the kernel panicked when it couldn't find a root FS. Same problem as andy.
Basically All I'm trying to say is that yes it fails sometime, no it is a cool thing to have, but either give us some tools on default install/rescue disc to combat the issue, or take it out unless the system detects a RAID array.
Thanks guys,
Chris
Come on, "FC3" got released *three*years* ago, so it's not appropriate to make an argument out of this for discussing today's maturity of "LVM". Btw, I have been using "LVM" exclusively for 3 years now [exact, since I first installed "FC3"], and I have never ever had any trouble with it at all. Maybe you encountered a hardware problem?
Yeah, one of the few, the proud. I don't really see the need for desktop users, and would like to see some sort of intelligent feature in Anaconda to keep it off unless there would be an advantage, like in a server, and give the end users of such a server the tools to adequately recover their system if/when it does fall through.
I'm not trying to make some sort of wave in the community or anything; I'm just making a reasonable request that I'm sure some of the top users of Fedora Core would enjoy having in the ISOs. It may not be a feature that will get much critical acclaim, but it is a feature that is desperately needed, and if I'm shooting my mouth off and there's already something that does the trick, let me know.
thanks, Chris
On 12/24/06, Joachim Frieben jfrieben@gmx.de wrote:
First of all I have experienced the same "no help hell" with lvm when it crashes. (not if, when) I had FC3 installed on my Compaq R3000 Series notebook, default settings, except for package options. Ran that for about three weeks. I reinstall testing out new distros quite a lot, so I don't worry about backing up the entire system, and something happened with the system, the kernel panicked when it couldn't find a root FS. Same problem as andy.
Basically All I'm trying to say is that yes it fails sometime, no it is a cool thing to have, but either give us some tools on default install/rescue disc to combat the issue, or take it out unless the system detects a RAID array.
Thanks guys,
Chris
Come on, "FC3" got released *three*years* ago, so it's not appropriate to make an argument out of this for discussing today's maturity of "LVM". Btw, I have been using "LVM" exclusively for 3 years now [exact, since I first installed "FC3"], and I have never ever had any trouble with it at all. Maybe you encountered a hardware problem? -- Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer
On Sun, Dec 24, 2006 at 12:32:05PM -0500, leomon wrote:
yeah one of the few, the proud. I don't really see the need for desktop users, and would like to see some sort of intelligent feature in Anaconda to keep it of unless there would be an advantage, like in a server, and give the end users of such server the tools to adequately recover his system if/when it does fall through.
There already is the "Allow me to set up the partitioning" feature in Anaconda. Heck, you can even use fdisk on Ctrl-Alt-F2 if you don't like disk druid.
I disagree that LVM isn't useful for desktop users. I've already used it many times to expand my /home when I run out of space, by using free space I left on the disk, or by shrinking another logical volume, or by adding a disk.
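For what it's worth, a minimal sketch of that grow-/home case, using hypothetical VG/LV names (VolGroup00/home); adjust to your own layout, and on older kernels you may need to unmount before resizing ext3:

# lvextend -L +5G /dev/VolGroup00/home     (hand the LV another 5GB of free space from the VG)
# resize2fs /dev/VolGroup00/home           (grow the ext3 filesystem to fill the enlarged LV)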
Oh, and Joachim,
I don't think it was a hardware issue; I just think I got some sort of data corruption that messed up the partition table, and since LVM handles the management of logical volumes on real partitions, there were no partitions that I could e2fsck.....
later,
Chris
On 12/24/06, Joachim Frieben jfrieben@gmx.de wrote:
First of all I have experienced the same "no help hell" with lvm when it crashes. (not if, when) I had FC3 installed on my Compaq R3000 Series notebook, default settings, except for package options. Ran that for about three weeks. I reinstall testing out new distros quite a lot, so I don't worry about backing up the entire system, and something happened with the system, the kernel panicked when it couldn't find a root FS. Same problem as andy.
Basically All I'm trying to say is that yes it fails sometime, no it is a cool thing to have, but either give us some tools on default install/rescue disc to combat the issue, or take it out unless the system detects a RAID array.
Thanks guys,
Chris
Come on, "FC3" got released *three*years* ago, so it's not appropriate to make an argument out of this for discussing today's maturity of "LVM". Btw, I have been using "LVM" exclusively for 3 years now [exact, since I first installed "FC3"], and I have never ever had any trouble with it at all. Maybe you encountered a hardware problem? -- Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer
On Sunday 24 December 2006 04:22am, Joachim Frieben wrote:
First of all I have experienced the same "no help hell" with lvm when it crashes. (not if, when) I had FC3 installed on my Compaq R3000 Series notebook, default settings, except for package options. Ran that for about three weeks. I reinstall testing out new distros quite a lot, so I don't worry about backing up the entire system, and something happened with the system, the kernel panicked when it couldn't find a root FS. Same problem as andy.
Basically All I'm trying to say is that yes it fails sometime, no it is a cool thing to have, but either give us some tools on default install/rescue disc to combat the issue, or take it out unless the system detects a RAID array.
Thanks guys,
Chris
Come on, "FC3" got released *three*years* ago,
FC1 was released in November of 2003, just after RHEL3 (October 2003). FC3 was released in November 2004. Last I checked 2006 - 2004 = 2 years ago.
so it's not appropriate to make an argument out of this for discussing today's maturity of "LVM". Btw, I have been using "LVM" exclusively for 3 years now [exact, since I first installed "FC3"], and I have never ever had any trouble with it at all. Maybe you encountered a hardware problem?
Likely in my mind.
BTW, I've been using LVM on all my notebooks and workstations since FC1/RHEL3 and have been using it on all my servers for nearly 7 years.
I've never encountered a problem that couldn't be fixed with the LVM commands other than outright hardware (i.e. storage) failure.
Err, sorry for the wrong figure: indeed, "FC3" was released in late 2004. Thanks for correcting me! However, this still means more than 2 years of flawless operation of "LVM" on my various "GNU/Linux" systems running "Fedora Core". ;o) And as stated elsewhere, "LVM" was a proven and mature technology a long time ago; e.g., I used to set up some "HP/UX" boxes back in 2001 and was immediately convinced by the benefits of this technology.
Come on, "FC3" got released *three*years* ago,
FC1 was released in November of 2003, just after RHEL3 (October 2003). FC3 was released in November 2004. Last I checked 2006 - 2004 = 2 years ago.
On Friday 22 December 2006 02:48pm, Andy Green wrote:
Lamont Peterson wrote:
LVM is one of the coolest things there is. Many people don't understand the basics of how & why LVM is, simply because they don't know where to get the
When LVM fails though, there are no recovery tools. You can recover the filesystem inside an LVM (I speak from experience) but your only friends are dd and a hex editor. There is no redundancy unlike genuine filesystems like ext2/3, if the LVM chunk before the actual filesystem is corrupted, the volume won't mount as LVM and that's your lot from the One True Way.
LVM binding raided storage together makes sense and buys you something. LVM being the default -- even for a laptop that cannot increase its permanent storage -- only has the capacity to make a crisis into a disaster.
On my laptops, the hard drive is quite a bit bigger than I "knew what to do with" at the time I installed.
For example, this notebook has an 80GB drive. The last 20GB is WinXP, the first 100MB is /boot/, the next 1GB is swap and the rest is LVM (4 partitions total). When I installed FC on here, I created a 512MB /, 3GB /usr/, 256MB /var/, 128MB /var/log/, 128MB /tmp/ and 5GB /home/.
Later, I created a 2GB /var/lib/mysql/, 2GB /var/lib/pgsql/ and a 10GB /download/. At one point, I was playing around with a 4GB /opt/oracle/ and a 4GB /opt/db2/. I once expanded /home/ to 15GB and /download/ to 10GB while downloading some DVD .iso files.
Even on a notebook, without RAID, I've found that being able to reallocate space to/from LVs to grow/shrink the filesystems has been incredibly handy.
Of course, the alternatives are to use one big / partition (that sucks) or to keep moving partitions around in order to grow/shrink filesystems (that sucks, too). I think I'll stick with LVM.
BTW, I'm thinking that my next notebook just might be one of these models that can house 2 2.5-inch hard drives internally (not in a docking bay) and use LVM on RAID1. As much traveling as I do, I have lost a couple of hard drives over the years (I have another that started exhibiting bad sectors just the other day).
At 11:15 AM 12/22/2006, Lamont Peterson wrote:
On Friday 22 December 2006 09:55am, Daniel Yek wrote: ...
I did stop using it after one of my hard drive started to fail and I plug both hard drives into another Fedora Core machine also configured with LVM, and I couldn't find a way to boot up that machine with both sets of LVM. IIRC, it complained about 2 logical partitions of the same names (collision), or something like that. With the extra LVM volume removed, I could boot; but with them plugged in, I couldn't boot (the otherwise healthy LVM volumes) at all.
...
Is there a solution for such situation? ...
Yup. Really simple fix. Rename one of the VGs with "vgrename". Some ways of doing this: ... 2. Rename the VG on the hosting system (i.e., the box you're putting the disks into). This requires a number of simple steps to complete successfully.
To use vgrename, the entire VG must be offline. So, boot a rescue environment (via CD, PXE, however), skip trying to mount up the disk (or use the "nomount" option on the "rescue" boot: line), and run (assuming the old name is "vg0" and the new name should be "herold"):
# lvm
lvm> vgscan
  . . . output omitted . . .
lvm> vgchange -a n vg0
  . . . output omitted . . .
lvm> vgrename vg0 herold
  . . . output omitted . . .
lvm> vgchange -a y herold
  . . . output omitted . . .
lvm> exit
Then, mount up the root LV, the /boot/ partition, and things like /usr/ and /dev/:
# mkdir /mnt/sysimage
# mount /dev/herold/root /mnt/sysimage
# mount /dev/sda1 /mnt/sysimage/boot
# mount /dev/herold/usr /mnt/sysimage/usr
Of course, change the names of the devices appropriately.
You can then "chroot" and run "mkinitrd" to fix up the name of the root device (because the VG name changed). Also, don't forget to change the "root=" value(s) in grub.conf (menu.lst on any other distribution). I usually just snag the mkinitrd command out of the "/sbin/new-kernel-pkg" script (use "rpm -q --scripts kernel-`uname -r`" to see that's what the kernel RPMs run).
A bit complicated perhaps and haven't tried it yet, but that is great information for me to try out later. Thanks much.
...
Lamont Peterson lamont@gurulabs.com Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ]
Gilboa Davara wrote:
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
LVM does not inter-operate with anything else. Grub does not work under LVM.
Why should it?
Why shouldn't it? LVM is touted as "the solution" to restrictions on disk partitioning. Except that LVM doesn't work for the first logical access to disk partitions: booting.
Parted does not grok LVM:
LVM doesn't require parted in order to resize partitions. It's called a logical volume manager for a reason, you know?
you cannot create a hard partition from LVM free space.
HUH? Why-on-earth-would-you-want-to-do-that?
To interoperate with the rest of the world, such as other operating systems, even other distributions of Linux, that understand DOS partition tables but not LVM. If you don't have infinitely many boxes then it is useful to be able to "compromise".
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically (you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
I can agree that Fedora documentation on the subject is... missing. A. TLDP has excellent on-line documentation. [1] B. system-config-lvm is improving constantly. C. Documentation missing? Join the documentation team and help them fix it.
"Get off the bus one stop before I do." I didn't recognize that LVM documentation was missing from the rescue CD until I needed it but it wasn't there.
Nevertheless, I never had any problem mounting LVM under rescue CDs.
LVM probably kills all low-level backup and recovery.
At also "kills" when you meed to dynamically set and modify large number of partitions on multiple drives.
Do not use LVM unless you are 100.000000% certain that you will never be faced with a hardware disaster.
... If LVM is stable enough to be the default option under RHEL, its stable enough for me.
Data longevity is not the strong suit of RHEL/Fedora Core/Linux. http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=137068
Either way, no-body is forcing LVM down your throat. Don't like it? Don't use it. Nobody is forcing it down your throat.
_You_ strongly advocated LVM and suggested "You won't be sorry" without any disclaimers. I'm supplying some of the omissions.
On Fri, 2006-12-22 at 09:32 -0800, John Reiser wrote:
Gilboa Davara wrote:
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
LVM does not inter-operate with anything else. Grub does not work under LVM.
Why should it?
Why shouldn't it? LVM is touted as "the solution" to restrictions on disk partitioning. Except that LVM doesn't work for the first logical access to disk partitions: booting.
No you are wrong. -grub- doesn't support LVM. (Same goes for xfs, reiserfs and at least 95 other file systems)
HUH? Why-on-earth-would-you-want-to-do-that?
To interoperate with the rest of the world, such as other operating systems, even other distributions of Linux, that understand DOS partition tables but not LVM. If you don't have infinitely many boxes then it is useful to be able to "compromise".
If you remove Linux from the equation: does Windows support ZFS or LVM? Does Solaris support dynamic disks?
What OSs are you talking about, exactly?
As for other distributions... well, on my old workstation I had Slackware, Debian, Fedora and CentOS all sharing the same LVM. If your favorite distro doesn't support LVM, you have a very, very old distro.
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically (you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
I can agree that Fedora documentation on the subject is... missing. A. TLDP has excellent on-line documentation. [1] B. system-config-lvm is improving constantly. C. Documentation missing? Join the documentation team and help them fix it.
"Get off the bus one stop before I do." I didn't recognize that LVM documentation was missing from the rescue CD until I needed it but it wasn't there.
As I said, TLDP has very comprehensive LVM documentation. Last time I checked, it was the first Google result for "LVM howto".
Data longevity is not the strong suit of RHEL/Fedora Core/Linux. http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=137068
Now you're just trolling.
_You_ strongly advocated LVM and suggested "You won't be sorry" without any disclaimers. I'm supplying some of the omissions.
No you're not. Out of your initial post, only two items stand: A. Documentation is missing. (You'll have to open google - oh my God!) B. Hardware failure will require manual intervention. (Though in 3 years and countless hardware crashes, I only had to manually mount lvm once, and even this was caused by a broken software RAID)
The rest of your post(s) were pure senseless ranting. (Especially the BZ#)
BTW, if you consider Fedora/RHEL to be unstable junk, -why- are you here?
- Gilboa
On Friday 22 December 2006 03:52pm, Gilboa Davara wrote:
On Fri, 2006-12-22 at 09:32 -0800, John Reiser wrote:
[snip]
-grub- doesn't support LVM. (Same goes for xfs, reiserfs and at least 95 other file systems)
On FC5 (this notebook):
# ls /boot/grub/
default         fat_stage1_5      jfs_stage1_5     reiserfs_stage1_5  stage2.old
device.map      ffs_stage1_5      menu.lst         splash.xpm.gz      ufs2_stage1_5
device.map.old  grub.conf         menu.lst.old     stage1             vstafs_stage1_5
e2fs_stage1_5   iso9660_stage1_5  minix_stage1_5   stage2             xfs_stage1_5
I see xfs and reiserfs there. IIRC, both have been present in GRUB at least as far back as RHL8.0, though xfs might not have appeared until FC2 (I don't remember for sure, but I think it was there earlier).
[snip]
On Sun, 2006-12-24 at 22:29 -0700, Lamont Peterson wrote:
On Friday 22 December 2006 03:52pm, Gilboa Davara wrote:
On Fri, 2006-12-22 at 09:32 -0800, John Reiser wrote:
[snip]
-grub- doesn't support LVM. (Same goes for xfs, reiserfs and at least 95 other file systems)
On FC5 (this notebook):
# ls /boot/grub/
default         fat_stage1_5      jfs_stage1_5     reiserfs_stage1_5  stage2.old
device.map      ffs_stage1_5      menu.lst         splash.xpm.gz      ufs2_stage1_5
device.map.old  grub.conf         menu.lst.old     stage1             vstafs_stage1_5
e2fs_stage1_5   iso9660_stage1_5  minix_stage1_5   stage2             xfs_stage1_5
I see xfs and reiserfs there. IIRC, both have been present in GRUB at least as far back as RHL8.0, though xfs might not have appeared until FC2 (I don't remember for sure, but I think it was there earlier).
I stand corrected then. Last time I tried putting /boot on xfs (back in FC3) it failed miserably ("root (hdx,x)" failed to detect the stage 1.5). Nevertheless, it does not change what I said: GRUB doesn't support software RAID5/6 - should I stop using them?
- Gilboa
On Monday 25 December 2006 03:25am, Gilboa Davara wrote:
On Sun, 2006-12-24 at 22:29 -0700, Lamont Peterson wrote:
On Friday 22 December 2006 03:52pm, Gilboa Davara wrote:
On Fri, 2006-12-22 at 09:32 -0800, John Reiser wrote:
[snip]
-grub- doesn't support LVM. (Same goes for xfs, reiserfs and at least 95 other file systems)
On FC5 (this notebook):
# ls /boot/grub/
default         fat_stage1_5      jfs_stage1_5     reiserfs_stage1_5  stage2.old
device.map      ffs_stage1_5      menu.lst         splash.xpm.gz      ufs2_stage1_5
device.map.old  grub.conf         menu.lst.old     stage1             vstafs_stage1_5
e2fs_stage1_5   iso9660_stage1_5  minix_stage1_5   stage2             xfs_stage1_5
I see xfs and reiserfs there. IIRC, both have been present in GRUB at least as far back as RHL8.0, though xfs might not have appeared until FC2 (I don't remember for sure, but I think it was there earlier).
I stand corrected then. Last time I tried using boot on xfs (back in FC3) it failed miserably. (root (hdx,x) failed to detect the stage 1.5) Never the less, it does not change what I said. Grub doesn't support software RAID5/6 - should I stop using them?
You are correct there, and I agree with you. What GRUB supports has nothing to do with what the kernel supports. GRUB reads the kernel (vmlinuz) and initrd into memory and punts the ball. From that point on, GRUB support doesn't matter.
FYI, I use reiserfs, xfs and (sometimes) ext3 for my systems' partitions, but I always use ext3 for /boot/ (100-250MB, depending on how many distros are to be installed).
BTW, just for the benefit of those reading this in the archives in the future, GRUB will work when /boot/ is on top of software RAID1, but it is rarely done, as one might have to alter /boot/grub/grub.conf (/boot/grub/menu.lst on all other distros) to read from the correct disk if the first one goes out. For example, changing (hd0,0) to (hd1,0) throughout the file.
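A minimal sketch of making the second half of a /boot RAID1 mirror bootable on its own (the device name /dev/sdb is just an example for the second disk):

# grub
grub> device (hd0) /dev/sdb     (tell GRUB to treat the second disk as the first BIOS disk)
grub> root (hd0,0)              (the /boot partition on that disk)
grub> setup (hd0)               (install stage1/stage2 into its MBR)
grub> quit

With GRUB installed on both disks, whichever one the BIOS ends up booting from should find a usable /boot.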
On Tuesday, 26 December 2006 at 11:16 -0700, Lamont Peterson wrote:
BTW, just for the benefit of those reading this in the archives in the future, GRUB will work when /boot/ is on top of software RAID1, but it is rarely done, as one might have to alter /boot/grub/grub.conf (/boot/grub/menu.lst on all other distros) to read from the correct disk if the first one goes out. For example, changing (hd0,0) to (hd1,0) throughout the file.
It's done here and it works well.
In case of hardware failure, the chances that hd1 gets renumbered as hd0 are rather high (either because the faulty hd0 was removed, or it didn't answer the BIOS, which dropped it, or you just changed the BIOS boot order manually).
If you have / on LVM and your small /boot partition RAID-1 mirrored on every disk you use, suddenly you can plug your disks in in any random order and things will just work (Linux-side, that is).
Very magical, considering that other OSes will fail at the slightest change (and there, GRUB remapping tricks may help).
On 12/22/06, John Reiser jreiser@bitwagon.com wrote:
Gilboa Davara wrote:
On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
LVM does not inter-operate with anything else. Grub does not work under LVM.
Why should it?
Why shouldn't it? LVM is touted as "the solution" to restrictions on disk partitioning. Except that LVM doesn't work for the first logical access to disk partitions: booting.
OK, I am going to call "cranky old man not wanting to learn new tricks". It's different, it's new, and it makes me think in ways that I haven't before... therefore it's bad. The only people I have heard touting LVM as "THE" solution are people who don't like LVM. People who tout it as "THE" solution for everything should be ignored; getting worked up over them is a waste of energy and time.
On Sun, Dec 24, 2006 at 11:01:29AM -0700, Stephen John Smoogen wrote:
Ok I am going to call cranky old man not wanting to learn new tricks. Its different, its new, and it makes me think about ways that I havent ever before... therefore its bad. The only people I have heard LVM touted as "THE" solution.. are people who don't like LVM. If there are people who tout it as "THE" solution for everything should be ignored.. getting them in your gander is a waste of energy and time.
I'm having difficulty determining if there is any sarcasm here. I don't understand your last sentence above at all. What are you saying?
If you don't like new tricks, then you shouldn't be using Fedora. Fedora is all about bleeding edge, pushing the envelope, trying new technologies.
All the tools to rescue a system are there on the rescue disk, either with or without LVM. Is there a lack of documentation about LVM on the rescue disk? Perhaps. Isn't the same true about fdisk, parted, fsck, etc?
You still have the ability to install a system without LVM.
On Friday 22 December 2006 08:50am, John Reiser wrote:
Gilboa Davara wrote:
Use LVM. Trust me. You won't be sorry.
I've been there, done that, and regretted it deeply. I got rid of LVM the first chance I could.
The very first thought in my mind is that you never understood what role LVM plays or how it actually works. By that, I don't mean the internal workings of LVM itself, I mean what LVM does for you and how to take advantage of it.
LVM does not inter-operate with anything else.
Bull.
LVM takes one or more block devices, aggregates them together into a single volume group (VG) and then manages the allocation of available storage in the VG by creating block devices called logical volumes (LVs).
Notice: block devices in, block devices out.
It's part of the sheer genius of the design of Linux that the RAID, LVM, multipath-I/O, filesystem "drivers" and other related code don't have to know *anything* about any of the other systems. They just work on top of block devices.
LVM is extremely inter-operable with everything else.
Grub does not work under LVM.
There are patches available that let GRUB boot kernel images stored on LVM. But why would you want to?
Also, IIRC, GRUB 2 will have this support out-of-the-box.
Parted does not grok LVM:
parted doesn't grok a lot of things. In this case, it doesn't need to and shouldn't. parted is a [part]ition [ed]itor. It would be considered inappropriate by many for it to manipulate LVM. That's what the LVM commands are for. But if you want a frontend, there is a new one in Fedora/RHEL called "system-config-lvm". SUSE has had extensive LVM support in YaST's partitioning tools for several years. There are frontends available.
you cannot create a hard partition from LVM free space.
Nope. But I have no idea what you are thinking here. Not to sound condescending or anything, but that's a completely idiotic thing to suggest. LVM creates block devices that look just like a plain old partition (POP? :) ) to everything that uses them (block devices, partitions, etc.); there is no difference. You can put LVM on RAID on LVM on RAID on iSCSI if you wanted to. It's all just block devices.
Using the rescue CDs is a nightmare under LVM:
The individual LVM commands are not found in the rescue environment. You have to run "lvm" and then, at the "lvm>" prompt, you can run LVM commands just like on a normal, running system. I've wondered why the rescue environment doesn't just have all the LVM commands as symlinks to "lvm", like on installed systems. That would make it a little easier to use.
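For the archives, a minimal sketch of doing it by hand from the rescue shell, with hypothetical VG/LV names (use "lvs" to see what yours are actually called):

# lvm
lvm> vgscan
lvm> vgchange -a y
lvm> lvs
lvm> exit
# mkdir -p /mnt/sysimage
# mount /dev/VolGroup00/LogVol00 /mnt/sysimage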
the LVM setup is not recognized automatically
Yes, it is. It's always worked just fine as far back as I can remember Red Hat having LVM support at all. I have no idea what you did that prevented it from picking things up. For it to fail, the root filesystem would have to be unavailable or the /etc/fstab file on it would have to be wrong or corrupted (I think that's the whole list).
(you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
Sounds like you've only used the stupid names for VGs and LVs that anaconda defaults to, like LogVol00, which are really useless names.
LVM probably kills all low-level backup and recovery.
Nope, you're wrong there. In fact, it has features like snapshotting that make low-level backup and recovery significantly easier and even make it possible to get consistent backups without taking anything offline. Let that DB keep running while you back up its files (or use its own backup tool) against an unchanging, read-only copy.
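A minimal sketch of that snapshot-then-backup idea, with hypothetical VG/LV names and sizes (the snapshot only needs enough room to hold blocks that change while the backup runs):

# lvcreate -s -L 1G -n mysqlsnap /dev/vg0/mysql    (point-in-time copy of the mysql LV)
# mkdir -p /mnt/snap
# mount -o ro /dev/vg0/mysqlsnap /mnt/snap
# tar czf /backup/mysql-backup.tar.gz -C /mnt/snap .
# umount /mnt/snap
# lvremove -f /dev/vg0/mysqlsnap                   (drop the snapshot when the backup is done)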
Do not use LVM unless you are 100.000000% certain that you will never be faced with a hardware disaster.
Use RAID (and not linear or RAID0) for redundancy to deal with/weather hardware failures.
LVM has nothing to do with improving redundancy. Yes, it does have some striping abilities, very similar to RAID, but that's not what it's for. It's Logical Volume *Management*. Use LVM to make all that space manageable.
Don't want to decide right now which "partition" to allocate all that space to? Then don't. With LVM, you can allocate space, *as needed*.
Besides, on the few occasions that I have had to deal with hardware failures (i.e., disks and disk controllers), having LVM has been very helpful for me.
On 22-Dec-2006 15:50.59 (GMT), John Reiser wrote:
LVM does not inter-operate with anything else. Grub does not work under LVM.
That's why you have a boot partition to load the kernel and initrd from. The installer outlines this when you install. And the kernels always get put into the /boot partition so they are accessible from grub.
Parted does not grok LVM: you cannot create a hard partition from LVM free space.
No, of course you can't. But you seem to be speaking from an interoperability perspective: If you dedicate an area of disk to a Linux logical volume manager, you don't expect to have the space available for whatever other operating systems you have on your computer.
And remember, not everyone dual boots.
Using the rescue CDs is a nightmare under LVM: the LVM setup is not recognized automatically (you must remember what it is) and the rescue environment contains no help or documentation on LVM (such as: the _syntax_ for naming the pieces!)
It's really simple:
lvm vgchange -a y
And that's all.
LVM probably kills all low-level backup and recovery.
"Probably" does not make for a good argument.
On 21-Dec-2006 22:57.31 (GMT), Clyde E. Kunkel wrote:
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
devices.txt says that hdN device numbering supports up to 63 partitions per disk. It also says that sdN device numbering follows the same trend as hdN numbering (unified partition table handling, and all).
I didn't think that PC BIOS partition tables supported > 15 partitions. To the best of my knowledge, the 63-partition numbering scheme was there for partition table formats that *did* support it (Amiga RDB, BSD disk label, etc).
To quote the fdisk(8) manual page:
The partition is a device name followed by a partition number. For example, /dev/hda1 is the first partition on the first IDE hard disk in the system. Disks can have up to 15 partitions. See also /usr/src/linux/Documentation/devices.txt.
To reiterate what another writer has said, lvm is a smashing way of working. And it has a much more flexible ("meaningful") volume naming scheme.
Rob Andrews wrote:
On 21-Dec-2006 22:57.31 (GMT), Clyde E. Kunkel wrote:
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
devices.txt says that hdN device numbering supports up to 63 partitions per disk. It also says that sdN device numbering follows the same trend as hdN numbering (unified partition table handling, and all).
I didn't think that PC BIOS partition tables supported > 15 partitions. To the best of my knowledge, the 63-partition numbering scheme was there for partition table formats that *did* support it (Amiga RDB, BSD disk label, etc).
To quote the fdisk(8) manual page:
The partition is a device name followed by a partition number. For example, /dev/hda1 is the first partition on the first IDE hard disk in the system. Disks can have up to 15 partitions. See also /usr/src/linux/Documentation/devices.txt.
To reiterate what another writer has said, lvm is a smashing way of working. And it has a much more flexible ("meaningful") volume naming scheme.
I do use LVM extensively and appreciate its flexibility. The use of > 15 partitions on one spindle was a deliberate design to accommodate a specific research method. AFAIK, grub will not boot from an LV, but beyond that, not sure if there is any other software that would prohibit converting everything to LVM.
(BTW, will grub be modified to boot from an LV?)
On Friday 22 December 2006 10:04, Clyde E. Kunkel wrote:
(BTW, will grub be modified to boot from an LV?)
Grub2 has this, and other improvements. Too bad about it being utterly broken though and not suitable for even rawhide at this point.
On Friday 22 December 2006 01:28am, Rob Andrews wrote:
On 21-Dec-2006 22:57.31 (GMT), Clyde E. Kunkel wrote:
With the change from traditional /dev/hd* type device nodes to /dev/sd* type nodes, I see that the highest partition number that can be used is 15. Will this change so that higher numbers will be available? I do have a system that has a drive with almost 30 small partitions used in a research effort and the number of partitions has not been an issue thru fc6.
devices.txt says that hdN device numbering supports up to 63 partitions per disk. It also says that sdN device numbering follows the same trend as hdN numbering (unified partition table handling, and all).
I didn't think that PC BIOS partition tables supported > 15 partitions. To the best of my knowledge, the 63-partition numbering scheme was there for partition table formats that *did* support it (Amiga RDB, BSD disk label, etc).
To quote the fdisk(8) manual page:
The partition is a device name followed by a partition number. For example, /dev/hda1 is the first partition on the first IDE hard disk in the system. Disks can have up to 15 partitions. See also /usr/src/linux/Documentation/devices.txt.
That's an old holdover. Older versions of fdisk took the lazy way out and didn't try to figure out if you had a SCSI or an IDE drive. It just went with the lowest common denominator (i.e., 15 partitions) that worked on all disks.
The kernel has supported up to 63 partitions on a single IDE drive for a long time. It was the partitioning tools (most notably, fdisk) that didn't.
BTW, the numbers 63 & 15 are not arbitrary. It comes from the 256 major numbers, 256 minor numbers for device nodes that kernels prior to 2.6 used. So, since you can have up to 4 drives on a single (typical) IDE controller (2 channels, one master one slave drive on each), take the 256 minor numbers for that controller and divide by 4. You get 64 minor numbers to use for partitions on that drive, however, one of them is used for the whole drive (i.e. /dev/hda) leaving 63 for partitions (i.e. /dev/hda1 -> /dev/hda63).
For SCSI (and SATA, etc.), there can be a maximum of 16 devices on any SCSI bus (or channel, if you prefer that term), so 256/16 = 16. One for the whole drive (/dev/sda) and 15 for the partitions (/dev/sda1 -> /dev/sda15).
Now, if you're wondering about the 16 devices on a SCSI bus, one of them will be the controller, usually at SCSI ID 7. That leaves 15 SCSI IDs to assign to drives. However, faster bus speeds and longer cables both lower the total number of devices a SCSI bus can support, so the newer, better, shinier SCSI stuff doesn't support 15 drives on a single SCSI channel.
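To make the arithmetic concrete, this is how the sd numbering shows up on a running system (major 8 is the sd driver; each disk owns a block of 16 minors):

# ls -l /dev/sda /dev/sda1 /dev/sda15 /dev/sdb
  (these are major 8, minors 0, 1, 15 and 16 respectively; /dev/sdb starts
   the next block of 16 minors, which is why partition numbers stop at 15)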
To reiterate what another writer has said, lvm is a smashing way of working. And it has a much more flexible ("meaningful") volume naming scheme.
Only if you give LVs meaningful names. anaconda (as I've already pointed out in this thread) uses fairly useless names for LVs.
For years, I always named the VGs something like vg0, since I didn't much care what that name was. Today, I name the VG the same as the machine name (this notebook is "corsair" so that's what the VG is named).