Hi All,
While doing what should have been a simple test of an iscsi-related patch, I ran into the following issue:
Take a system with a single disk, sda, which has /boot on sda1 and a PV on sda2. This PV is the only PV of the VG VolGroup, which contains the LVs lv_swap, lv_root and lv_home.
"Attach" an iscsi disk to this system, which becomes sdb, with /boot on sdb1 and a PV on sdb2. This PV is the only PV of another VG also named VolGroup, which contains the LVs lv_swap and lv_root.
Notice that:
1) The two VGs have the same name.
2) Only sda's VG has an lv_home LV.
Now, in the filter UI, select only disk sdb to install to; then the following may happen (depending on scanning order):
Assume sdb gets scanned first by devicetree.py:
- when scanning sdb2, handleUdevLVMPVFormat() will call "lvm lvchange -ay" for all LVs in this VG (as seen by udev, more on that later).
- at this point, sda has not been scanned yet, so isIgnored() has not been called for sda2 yet, and thus lvm_cc_addFilterRejectRegexp("sda2") has not been called yet.
- thus lvm lvchange sees both sda2 and sdb2; it complains that there are two identically named VGs and picks the one using the sda2 PV.
- note that the same happens when the udev rules execute: as lvm picks the sda-based VG, udev sees three LVs, so lvm lvchange gets called for lv_root, lv_swap and lv_home. All is still well here, as the sda VG actually has all three of these.
- sda gets scanned, and sda2 gets added to the reject list via lvm_cc_addFilterRejectRegexp().
- devicetree.populate() finishes and calls teardownAll().
- teardownAll() calls "lvm lvchange -an" for all three LVs. However, at this point its --config argument tells it to ignore sda2, so it reads the VG metadata from sdb2, and when asked to stop lv_home it complains that lv_home is not part of the VG, and then anaconda backtraces.
Note we really have two issues here:
1) The udev rules don't know about our lvm_cc_addFilterRejectRegexp() list.
2) We are calling "lvm lvchange ..." with a changing (growing) device filter list, which sometimes confuses it.
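To illustrate issue 2, the reject list feeds into every lvm call roughly like this (simplified from memory, not the verbatim code in storage/devicelibs/lvm.py):

    import subprocess

    # Grows while devicetree.py scans devices (simplified; the real
    # list lives in storage/devicelibs/lvm.py).
    _reject_regexps = []

    def lvm_cc_addFilterRejectRegexp(regexp):
        # Ask all *future* lvm calls to ignore devices matching regexp.
        _reject_regexps.append(regexp)

    def _filter_string():
        # Reject entries first, then accept everything else.
        rejects = "".join('"r|%s|", ' % r for r in _reject_regexps)
        return 'devices { filter = [ %s"a|.*|" ] }' % rejects

    def lvm(args):
        # Each call snapshots the reject list as it is *right now*, so
        # two calls made at different points during the scan can
        # operate on different device sets -- issue 2 above.
        return subprocess.call(["lvm"] + args +
                               ["--config", _filter_string()])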
I've been thinking about how to fix this, and all I can come up with is a two-phase scan.
Notice that at first it seems tempting to simply call isIgnored() on all devices first, and only then scan them all.
This won't work, as new PV candidates may show up while scanning: with BIOS RAID the sets won't get activated until the members get scanned, and the set itself could be ignored, so it could end up on the lvm_cc_addFilterRejectRegexp() list, which won't happen until the set itself gets scanned (which happens in the next pass in populate).
So what I've come up with (not yet coded, I'm first soliciting feedback; a rough sketch follows below) is:
1) populate as one normally would, but don't call handleUdevLVMPVFormat() from handleUdevDeviceFormat()
2) When done populating, write out the lvm_cc_addFilterRejectRegexp() reject list to lvm.conf and retrigger udev (so that we get the correct info in the udev database in cases where there were PV / VG conflicts before the filtering).
3) Call handleUdevLVMPVFormat() for all devices in the devicetree with a PV format.
4) Go through the populate loop once more to handle the formats on all the newly brought-up LVs.
Yes, this means that something which ends up being a PV on an LV will not work (the nested PV will pretty much be ignored); I suggest we simply declare this unsupported.
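In rough pseudo-Python the plan looks like this (only populate() and handleUdevLVMPVFormat() are real names; _populate_loop(), write_lvm_conf_filter(), udev_trigger(), udev_settle() and udev_get_info() are placeholders):

    def populate(self):
        # Phase 1: build the tree as usual, but skip
        # handleUdevLVMPVFormat() so no LVs get activated while the
        # reject list is still growing.
        self._populate_loop(handle_lvm_pvs=False)

        # The reject list is complete now: persist it and re-run the
        # udev rules, so the udev database no longer reflects VG picks
        # made before the filtering was in place.
        write_lvm_conf_filter()
        udev_trigger(subsystem="block")
        udev_settle()

        # Phase 2: bring up the LVs with a stable, final device filter.
        for device in self.devices:
            if device.format.type == "lvmpv":
                self.handleUdevLVMPVFormat(udev_get_info(device), device)

        # One more pass through the loop to handle the formats on the
        # newly activated LVs.
        self._populate_loop(handle_lvm_pvs=False)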
Regards,
Hans
On 03/30/2010 02:30 PM, Hans de Goede wrote:
> [...]
> - thus lvm lvchange sees both sda2 and sdb2; it complains
>   that there are two identically named VGs and picks the one
>   using the sda2 PV.
Maybe we should stop the installation at this point and tell the user that he named two VGs the same and needs to address this before proceeding with the installation? Because otherwise we will need to make too many changes for a corner case that only occurs infrequently. And we still won't be completely happy with them.
Ales
Hi,
On 04/06/2010 11:54 AM, Ales Kozumplik wrote:
> Maybe we should stop the installation at this point and tell the user
> that he named two VGs the same and needs to address this before
> proceeding with the installation? Because otherwise we will need to
> make too many changes for a corner case that only occurs infrequently.
> And we still won't be completely happy with them.
That won't work, as there actually are no duplicate VGs when looking only at the devices the user selected in the filter UI. The problem is that lvm at this point does not honor what we selected in the filter UI, which is caused by the way we build the "ignore these devices" command-line argument for lvm.
Regards,
Hans
On 04/06/2010 06:06 AM, Hans de Goede wrote:
> That won't work, as there actually are no duplicate VGs when looking
> only at the devices the user selected in the filter UI. The problem is
> that lvm at this point does not honor what we selected in the filter
> UI, which is caused by the way we build the "ignore these devices"
> command-line argument for lvm.
Perhaps we should be generating an lvm.conf with a proper filter section for this instead? It's not really an ideal solution :/
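Something along these lines, completely untested (the path and the reject-then-accept-all policy are just for illustration):

    def write_lvm_conf(reject_devices, path="/etc/lvm/lvm.conf"):
        # reject_devices: device names like "sda2" that lvm (and the
        # udev rules that invoke it) must never scan.
        rejects = "".join('"r|^/dev/%s$|", ' % dev
                          for dev in reject_devices)
        conf = open(path, "w")
        conf.write("devices {\n")
        conf.write('    filter = [ %s"a|.*|" ]\n' % rejects)
        conf.write("}\n")
        conf.close()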
On Tue, 2010-04-06 at 10:58 -0400, Peter Jones wrote:
> Perhaps we should be generating an lvm.conf with a proper filter
> section for this instead? It's not really an ideal solution :/
It might be worth passing lvm a full list of the devices it is allowed to look at instead of telling it which devices to ignore. We will know the full list of PVs at activation time. I've considered this on several occasions since initially writing this stuff, but never tried it. Maybe that's what you meant anyway.
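Roughly, and completely untested (allow_only_filter() is a made-up name, but it reuses the --config mechanism we already have):

    def allow_only_filter(pv_devices):
        # pv_devices: the complete list of PV device names, e.g.
        # ["sdb2"]. The trailing "r|.*|" rejects everything that is
        # not explicitly accepted.
        accepts = "".join('"a|^/dev/%s$|", ' % dev for dev in pv_devices)
        return 'devices { filter = [ %s"r|.*|" ] }' % accepts

    # e.g.:
    # lvm(["lvchange", "-ay", "VolGroup/lv_home",
    #      "--config", allow_only_filter(["sdb2"])])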
Dave
Hi,
On 04/06/2010 06:29 PM, David Lehman wrote:
> It might be worth passing lvm a full list of the devices it is allowed
> to look at instead of telling it which devices to ignore.
I've been thinking in that direction too, and I like it, but ...
> We will know the full list of PVs at activation time.
Do we? Currently we activate LV's from handleUdevLVMPVFormat() when we find the first PV.
But if you have ideas to change this, I'm all ears. We could delay bringing up the LVs until the VG actually has pv_count parents, but this still won't save us from duplicate VG name issues.
Hmm, but if we were to use the VG's UUID as the name in the device tree, then this could work. This would probably require quite a bit of reworking of the code though, as I think there are assumptions that devicetree name == VG name in quite a few places.
Note that this problem is less obscure than it seems; it can be triggered by this PITA called software RAID. If we have a PV on a software RAID mirror, then lvm will see the PV 3 times and semi-randomly (or so it seems) pick one (*). So we really must make sure we've scanned all possible lower-level devices before activating LVs. I've been thinking about this, and a patch for this should not be all that invasive.
*) Although we could simply not care in this case, as at the end of populating the tree we tear down everything, and the next activation the ignore list will be ok and the right PV will get used.
Regards,
Hans
On Tue, 2010-04-06 at 19:25 +0200, Hans de Goede wrote:
> Do we? Currently we activate LV's from handleUdevLVMPVFormat() when we
> find the first PV.
Right, but lv.setup calls vg.setup, which raises an exception if len(self.parents) < self.pvCount, so that's taken care of.
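In code, that guard is roughly this (paraphrased from memory; the real check lives in storage/devices.py and may differ in detail):

    class DeviceError(Exception):
        pass

    class LVMVolumeGroupDevice(object):
        def setup(self, intf=None):
            # lv.setup() ends up here via vg.setup(): refuse to
            # activate until every PV the VG metadata promises
            # (pvCount) has been scanned and attached as a parent.
            if len(self.parents) < self.pvCount:
                raise DeviceError("vg has only %d of %d pvs" %
                                  (len(self.parents), self.pvCount))
            # ... proceed with activation ...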
> But if you have ideas to change this, I'm all ears. We could delay
> bringing up the LVs until the VG actually has pv_count parents, but
> this still won't save us from duplicate VG name issues.
>
> Hmm, but if we were to use the VG's UUID as the name in the device
> tree, then this could work. This would probably require quite a bit of
> reworking of the code though, as I think there are assumptions that
> devicetree name == VG name in quite a few places.
I'm not even sure what we do right now if we encounter two VGs with the same name. I do see that we try to look up the VG by name in handleUdevLVMPVFormat. I think it would only help to do all lookups on existing VGs by UUID instead of name, at least while populating the tree.
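Concretely, that could look something like this (a sketch only; getDeviceByUuid() and udev_device_get_vg_uuid() are placeholders here, not necessarily existing helpers):

    def getDeviceByUuid(self, uuid):
        # Linear scan of the tree; fine at populate-time scale.
        for device in self._devices:
            if device.uuid == uuid:
                return device
        return None

    # in handleUdevLVMPVFormat(), instead of looking the VG up by name:
    #     vg_uuid = udev_device_get_vg_uuid(info)
    #     vg_device = self.getDeviceByUuid(vg_uuid)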
> Note that this problem is less obscure than it seems; it can be
> triggered by this PITA called software RAID. If we have a PV on a
> software RAID mirror, then lvm will see the PV 3 times and
> semi-randomly (or so it seems) pick one. So we really must make sure
> we've scanned all possible lower-level devices before activating LVs.
> I've been thinking about this, and a patch for this should not be all
> that invasive.
Have you seen this happen, or are you speculating? It seems to me we would have hit this long ago if it were so simple to set up. I suppose it's possible that we have and just don't know it.
Hi,
On 04/06/2010 08:16 PM, David Lehman wrote:
On Tue, 2010-04-06 at 19:25 +0200, Hans de Goede wrote:
Hi,
On 04/06/2010 06:29 PM, David Lehman wrote:
On Tue, 2010-04-06 at 10:58 -0400, Peter Jones wrote:
On 04/06/2010 06:06 AM, Hans de Goede wrote:
Hi,
On 04/06/2010 11:54 AM, Ales Kozumplik wrote:
On 03/30/2010 02:30 PM, Hans de Goede wrote: > Hi All, > > While doing what should be testing a simple iscsi related patch, > I encountered the following issue: > > Take a system with a single disk, sda, which has a /boot on > sda1 and a PV on sda2. This PV is the PV for the 1 PV VG: > VolGroup, which contains LV's lv_swap, lv_root and lv_home. > > "Attach" an iscsi disk to this system, which becomes sdb, > which has a /boot on sdb1 and a PV on sdb2. This PV is the PV > for the 1 PV VG: VolGroup, which contains LV's lv_swap and > lv_root. > > Notice that: > 1) The 2 VG's have the same name > 2) Only sda has a lv_home LV. > > Now in the filter UI select only disk sdb to install to, then > the following may (depending on scanning order) happen: > > Assume sdb gets scanned first by devicetree.py: > - when scanning sdb2, handleUdevLVMPVFormat() will > call "lvm lvchange -ay" for all LV's in this VG > (as seen by udev, more on that later). > - at this point, sda has not been scanned yet, so > isIgnored has not been called for sda2 yet, and thus > lvm_cc_addFilterRejectRegexp("sda2") has not been called > yet. > - thus lvm lvchange sees both sda2 and sdb2, it complains > that there are 2 identically named VG's and picks the one > using the sda2 PV.
Maybe we should stop the installation at this point and tell the user that he named two VGs the same and needs to address this before proceeding with the installation? Because otherwise we will need to do too many changes for a corner case that only occurs infrequently. And we still won't be completely happy with them.
That won't work, as there actually are no duplicate VG's when looking only at the devices the user selected in the filter UI, the problem is that lvm at this point does not honor what we've selected in the filter UI and what not. Which is caused by the way we build the ignore these devices cmdline argument for lvm.
Perhaps we should be generating an lvm.conf with a proper filter section for this instead? It's not really an ideal solution :/
It might be worth passing lvm a full list of the devices it is allowed to look at instead of telling it which devices to ignore.
I've been thinking in that direction too, and I like it, but ...
We will know the full list of PVs at activation time.
Do we? Currently we activate LV's from handleUdevLVMPVFormat() when we find the first PV.
Right, but lv.setup calls vg.setup, which raises an exception if len(self.parents)< self.pvCount, so that's taken care of.
Ah, well that is hidden pretty well, but ack it is taken care of then :)
> I'm not even sure what we do right now if we encounter two VGs with
> the same name. I do see that we try to look up the VG by name in
> handleUdevLVMPVFormat. I think it would only help to do all lookups on
> existing VGs by UUID instead of name, at least while populating the
> tree.
That sounds like a solution, but what to do then when we encounter a PV of a VG which has the same name but a different UUID than a VG already in the list ...
Currently we have code to handle corner cases like this, but that does not run until populating is done.
> Have you seen this happen, or are you speculating? It seems to me we
> would have hit this long ago if it were so simple to set up. I suppose
> it's possible that we have and just don't know it.
Normally it does not happen, as in the first round through the loop in populate we scan all the raid set members (and ignore any PV formatting found on their partitions, as we ignore the partitions), and only in the next round does the set itself get scanned. But I have seen it happen in the conflicting-VG-names case (where I had 1 BIOS RAID mirror, 2 plain disks and 1 iscsi disk; the BIOS RAID mirror had a 1-PV VG, the iscsi disk had a 1-PV VG, and they were named identically).
I think we've got most normal cases covered already, but we still have problems with the filter UI + conflicting VG names. I think that looking up existing VGs by UUID in handleUdevLVMPVFormat(), plus some sort of handling for name conflicts, plus passing explicitly which PVs to use when activating things, might do the trick.
Anyway, this needs some more thinking / investigation, so I'm going to sleep a night on it (time to call it a day for today).
Regards,
Hans
On 04/06/2010 12:29 PM, David Lehman wrote:
> It might be worth passing lvm a full list of the devices it is allowed
> to look at instead of telling it which devices to ignore. We will know
> the full list of PVs at activation time. I've considered this on
> several occasions since initially writing this stuff, but never tried
> it. Maybe that's what you meant anyway.
Yeah, that's basically what I was getting at.
Hi,
On 04/06/2010 04:58 PM, Peter Jones wrote:
> Perhaps we should be generating an lvm.conf with a proper filter
> section for this instead? It's not really an ideal solution :/
The problem is not so much whether we use the command line or a config file, but that we build the list of ignored devices dynamically as we scan them (checking whether a disk is in exclusive_disks as it is scanned), while lvm does its own scanning; when we have not yet scanned a disk, it is not on our to-be-ignored list yet. This is why I was thinking of a two-pass scan, but what Dave suggests in the other mail would be better: we really don't want lvm to go and probe all disks anyway, only those we are interested in. See my reply to Dave's mail.
Regards,
Hans