Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=ad85b0c526b40a4e…
Commit: ad85b0c526b40a4e78ac83426f54c935c147c787
Parent: 756bcabbfe297688ba240a880bc2b55265ad33f0
Author: Peter Rajnoha <prajnoha(a)redhat.com>
AuthorDate: Fri Dec 21 10:54:31 2012 +0100
Committer: Peter Rajnoha <prajnoha(a)redhat.com>
CommitterDate: Fri Dec 21 11:15:46 2012 +0100
pvscan: synchronize with udev if pvscan --cache is used.
We need to call sync_local_dev_names directly, as pvscan uses the
VG_GLOBAL lock and that one *does not* cause the synchronization
(sync_dev_names) to be called on unlock (VG_GLOBAL is not a real VG):
#define unlock_vg(cmd, vol) \
	do { \
		if (is_real_vg(vol)) \
			sync_dev_names(cmd); \
		(void) lock_vol(cmd, vol, LCK_VG_UNLOCK); \
	} while (0)
Without this fix, we end up without udev synchronization for
pvscan --cache (mainly for -aay, which causes the VGs/LVs to be
autoactivated), and udev synchronization cookies are then left dangling
in the system since they are not managed properly (the code before sets
up udev sync cookies, but we have to call dm_udev_wait at least once
after that to do the wait and cleanup).
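The reason the unlock_vg() macro above skips the sync for VG_GLOBAL is that VG_GLOBAL is an internal lock name, not a real VG name. A minimal sketch of that check (an illustrative mimic of is_real_vg(), assuming its usual definition; not copied from the LVM source):

```c
#include <stdbool.h>

/* Internal lock names such as VG_GLOBAL start with '#', so unlock_vg()
 * never runs sync_dev_names() for them -- which is why pvscan --cache
 * has to call sync_local_dev_names() explicitly. */
static bool is_real_vg(const char *vol)
{
	return vol && vol[0] != '#';
}
```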
---
WHATS_NEW | 1 +
tools/pvscan.c | 1 +
2 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/WHATS_NEW b/WHATS_NEW
index 5ac3090..5b81d52 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,6 @@
Version 2.02.99 -
===================================
+ Synchronize with udev in pvscan --cache and fix dangling udev_sync cookies.
Fix autoactivation to not autoactivate VG/LV on each change of the PVs used.
Limit RAID device replacement to repair only if LV is not in-sync.
Disallow RAID device replacement or repair on inactive LVs.
diff --git a/tools/pvscan.c b/tools/pvscan.c
index c2e6f5c..1e844c5 100644
--- a/tools/pvscan.c
+++ b/tools/pvscan.c
@@ -241,6 +241,7 @@ static int _pvscan_lvmetad(struct cmd_context *cmd, int argc, char **argv)
}
out:
+ sync_local_dev_names(cmd);
unlock_vg(cmd, VG_GLOBAL);
return ret;
Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=756bcabbfe297688…
Commit: 756bcabbfe297688ba240a880bc2b55265ad33f0
Parent: 970dfbcd691addb7b69f4011c067eba3b35270b6
Author: Peter Rajnoha <prajnoha(a)redhat.com>
AuthorDate: Fri Dec 21 10:34:48 2012 +0100
Committer: Peter Rajnoha <prajnoha(a)redhat.com>
CommitterDate: Fri Dec 21 10:34:48 2012 +0100
activation: fix autoactivation to not trigger on each PV change
Before, the pvscan --cache -aay was called on each ADD and CHANGE
uevent (for a device that is not a device-mapper device) and each CHANGE
event (for a PV that is a device-mapper device).
This causes trouble with autoactivation in some cases, as a CHANGE event
may originate from the OPTIONS+="watch" udev rule that is defined
in 60-persistent-storage.rules (part of the rules provided directly by
udev) and is used for all block devices
(except fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md* devices). For example, the
following sequence incorrectly activates the rest of the LVs in a VG if one
of the LVs in the VG is being removed:
[root@rhel6-a ~]# pvcreate /dev/sda
Physical volume "/dev/sda" successfully created
[root@rhel6-a ~]# vgcreate vg /dev/sda
Volume group "vg" successfully created
[root@rhel6-a ~]# lvcreate -l1 vg
Logical volume "lvol0" created
[root@rhel6-a ~]# lvcreate -l1 vg
Logical volume "lvol1" created
[root@rhel6-a ~]# vgchange -an vg
0 logical volume(s) in volume group "vg" now active
[root@rhel6-a ~]# lvs
  LV    VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0 vg   -wi------ 4.00m
  lvol1 vg   -wi------ 4.00m
[root@rhel6-a ~]# lvremove -ff vg/lvol1
Logical volume "lvol1" successfully removed
[root@rhel6-a ~]# lvs
  LV    VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lvol0 vg   -wi-a---- 4.00m
...so the VG was deactivated and lvol1 removed; we end up with lvol1
gone (which is OK) BUT with lvol0 activated again (which is wrong)!
This is because after lvol1 removal, we need to write metadata to the
underlying device /dev/sda and that causes the CHANGE event to be
generated (because of the WATCH udev rule set on this device) and this
causes the pvscan --cache -aay to be reevaluated.
We have to limit this and call pvscan --cache -aay to autoactivate
VGs/LVs only in these cases:
--> if the *PV is not a dm device*, scan only after proper device
addition (ADD event) and not with any other changes (CHANGE event)
--> if the *PV is a dm device*, scan only after proper mapping
activation (CHANGE event + the underlying PV in a state "just
activated")
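The two cases above can be modeled as a small decision function (an illustrative sketch following the commit message, not LVM source; should_pvscan and the enum names are made up):

```c
typedef enum { ACT_ADD, ACT_CHANGE, ACT_REMOVE } uevent_action;

/* Returns 1 if pvscan --cache should run for this uevent, mirroring
 * the filtering in the updated 69-dm-lvm-metad.rules. */
static int should_pvscan(int is_pv, int is_dm, uevent_action act,
			 int dm_activation)
{
	if (!is_pv)
		return 0;			/* ID_FS_TYPE is not LVM*_member */
	if (act == ACT_REMOVE)
		return 1;			/* always tell lvmetad about removals */
	if (!is_dm)
		return act == ACT_ADD;		/* plain device: ADD event only */
	return act == ACT_CHANGE && dm_activation; /* dm device: activation CHANGE */
}
```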
---
WHATS_NEW | 1 +
udev/10-dm.rules.in | 2 +-
udev/69-dm-lvm-metad.rules.in | 16 ++++++++++++----
3 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/WHATS_NEW b/WHATS_NEW
index d114ccc..5ac3090 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,6 @@
Version 2.02.99 -
===================================
+ Fix autoactivation to not autoactivate VG/LV on each change of the PVs used.
Limit RAID device replacement to repair only if LV is not in-sync.
Disallow RAID device replacement or repair on inactive LVs.
Fix possible race while removing metadata from lvmetad.
diff --git a/udev/10-dm.rules.in b/udev/10-dm.rules.in
index 29af467..cfee145 100644
--- a/udev/10-dm.rules.in
+++ b/udev/10-dm.rules.in
@@ -45,7 +45,7 @@ ENV{DISK_RO}=="1", GOTO="dm_disable"
# in libdevmapper so we need to detect this and try to behave correctly.
# For such spurious events, regenerate all flags from current udev database content
# (this information would normally be inaccessible for spurious ADD and CHANGE events).
-ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", GOTO="dm_flags_done"
+ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}="1", GOTO="dm_flags_done"
IMPORT{db}="DM_UDEV_DISABLE_DM_RULES_FLAG"
IMPORT{db}="DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG"
IMPORT{db}="DM_UDEV_DISABLE_DISK_RULES_FLAG"
diff --git a/udev/69-dm-lvm-metad.rules.in b/udev/69-dm-lvm-metad.rules.in
index 706c03b..b16a27a 100644
--- a/udev/69-dm-lvm-metad.rules.in
+++ b/udev/69-dm-lvm-metad.rules.in
@@ -17,10 +17,18 @@
SUBSYSTEM!="block", GOTO="lvm_end"
(LVM_EXEC_RULE)
-# Device-mapper devices are processed only on change event or on supported synthesized event.
-KERNEL=="dm-[0-9]*", ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="lvm_end"
-
# Only process devices already marked as a PV - this requires blkid to be called before.
-ENV{ID_FS_TYPE}=="LVM2_member|LVM1_member", RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
+ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
+
+ACTION=="remove", GOTO="lvm_scan"
+
+# If the PV is not a dm device, scan only after device addition (ADD event)
+KERNEL!="dm-[0-9]*", ACTION!="add", GOTO="lvm_end"
+
+# If the PV is a dm device, scan only after proper mapping activation (CHANGE event + DM_ACTIVATION=1)
+KERNEL=="dm-[0-9]*", ENV{DM_ACTIVATION}!="1", GOTO="lvm_end"
+
+LABEL="lvm_scan"
+RUN+="(LVM_EXEC)/lvm pvscan --cache --activate ay --major $major --minor $minor"
LABEL="lvm_end"
Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=970dfbcd691addb7…
Commit: 970dfbcd691addb7b69f4011c067eba3b35270b6
Parent: 0379c480e09f0d6e8641bd4d8ca49a8bd3c32943
Author: Jonathan Brassow <jbrassow(a)redhat.com>
AuthorDate: Tue Dec 18 14:40:42 2012 -0600
Committer: Jonathan Brassow <jbrassow(a)redhat.com>
CommitterDate: Tue Dec 18 14:40:42 2012 -0600
RAID: Limit replacement of devices when array is not in-sync.
If a RAID array is not in-sync, replacing devices should not be allowed
as a general rule. This is because the contents used to populate the
incoming device may be undefined, since the devices being read were
not in-sync. The kernel enforces this rule (unless overridden) by not
allowing the creation of an array that is not in-sync and includes a
device that needs to be rebuilt.
Since we cannot know the sync state of an LV if it is inactive, we must
also enforce the rule that an array must be active to replace devices.
That leaves us with the following conditions:
1) never allow replacement or repair of devices if the LV is inactive
2) never allow replacement if the LV is not in-sync
3) allow repair if the LV is not in-sync, but warn that contents may
   not be recoverable.
In the case where a user is performing the repair on the command line via
'lvconvert --repair', the warning is printed before the user is prompted
if they would like to replace the device(s). If the repair is automated
(i.e. via dmeventd and policy is "allocate"), then the device is replaced
if possible and the warning is printed.
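The three conditions can be read compactly as one policy function (a toy sketch; raid_replace_allowed is a hypothetical name, the real checks live in lv_raid_replace() and lvconvert_raid() as shown in the diff below):

```c
/* Decide whether a RAID device replace/repair may proceed.
 * Sets *warn_unrecoverable when repair runs on an out-of-sync array. */
static int raid_replace_allowed(int lv_active, int in_sync, int is_repair,
				int *warn_unrecoverable)
{
	*warn_unrecoverable = 0;
	if (!lv_active)
		return 0;		/* rule 1: LV must be active */
	if (in_sync)
		return 1;		/* in-sync: replace and repair both OK */
	if (!is_repair)
		return 0;		/* rule 2: no replacement while not in-sync */
	*warn_unrecoverable = 1;	/* rule 3: repair allowed, with warning */
	return 1;
}
```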
---
WHATS_NEW | 2 ++
lib/metadata/raid_manip.c | 12 ++++++++++++
tools/lvconvert.c | 27 +++++++++++++++++++++++++++
3 files changed, 41 insertions(+), 0 deletions(-)
diff --git a/WHATS_NEW b/WHATS_NEW
index c18f3f4..d114ccc 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,7 @@
Version 2.02.99 -
===================================
+ Limit RAID device replacement to repair only if LV is not in-sync.
+ Disallow RAID device replacement or repair on inactive LVs.
Fix possible race while removing metadata from lvmetad.
Fix possible deadlock when querying and updating lvmetad at the same time.
Check lvmcache_info_from_pvid and recall only when needed in _pv_read.
diff --git a/lib/metadata/raid_manip.c b/lib/metadata/raid_manip.c
index 8be0abe..b902d3d 100644
--- a/lib/metadata/raid_manip.c
+++ b/lib/metadata/raid_manip.c
@@ -1617,6 +1617,18 @@ int lv_raid_replace(struct logical_volume *lv,
dm_list_init(&new_meta_lvs);
dm_list_init(&new_data_lvs);
+ if (!lv_is_active(lv)) {
+ log_error("%s/%s must be active to perform this operation.",
+ lv->vg->name, lv->name);
+ return 0;
+ }
+
+ if (!mirror_in_sync() && !_raid_in_sync(lv)) {
+ log_error("Unable to replace devices in %s/%s while it is"
+ " not in-sync.", lv->vg->name, lv->name);
+ return 0;
+ }
+
/*
* How many sub-LVs are being removed?
*/
diff --git a/tools/lvconvert.c b/tools/lvconvert.c
index fa3b1ec..5bda00f 100644
--- a/tools/lvconvert.c
+++ b/tools/lvconvert.c
@@ -1566,6 +1566,7 @@ static int lvconvert_raid(struct logical_volume *lv, struct lvconvert_params *lp
struct dm_list *failed_pvs;
struct cmd_context *cmd = lv->vg->cmd;
struct lv_segment *seg = first_seg(lv);
+ percent_t sync_percent;
if (!arg_count(cmd, type_ARG))
lp->segtype = seg->segtype;
@@ -1623,6 +1624,32 @@ static int lvconvert_raid(struct logical_volume *lv, struct lvconvert_params *lp
return lv_raid_replace(lv, lp->replace_pvh, lp->pvh);
if (arg_count(cmd, repair_ARG)) {
+ if (!lv_is_active(lv)) {
+ log_error("%s/%s must be active to perform "
+ "this operation.", lv->vg->name, lv->name);
+ return 0;
+ }
+
+ if (!lv_raid_percent(lv, &sync_percent)) {
+ log_error("Unable to determine sync status of %s/%s.",
+ lv->vg->name, lv->name);
+ return 0;
+ }
+
+ if (sync_percent != PERCENT_100) {
+ log_error("WARNING: %s/%s is not in-sync.",
+ lv->vg->name, lv->name);
+ log_error("WARNING: Portions of the array may"
+ " be unrecoverable.");
+
+ /*
+ * The kernel will not allow a device to be replaced
+ * in an array that is not in-sync unless we override
+ * by forcing the array to be considered "in-sync".
+ */
+ init_mirror_in_sync(1);
+ }
+
_lvconvert_raid_repair_ask(cmd, &replace);
if (replace) {
Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=0379c480e09f0d6e…
Commit: 0379c480e09f0d6e8641bd4d8ca49a8bd3c32943
Parent: 86e528c6677b4eff40df5c2424fdb588effdc73e
Author: Peter Rajnoha <prajnoha(a)redhat.com>
AuthorDate: Tue Dec 18 12:12:58 2012 +0100
Committer: Peter Rajnoha <prajnoha(a)redhat.com>
CommitterDate: Tue Dec 18 12:12:58 2012 +0100
WHATS_NEW: changelog for fae1a611d2 and 5294a6f77a
---
WHATS_NEW | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/WHATS_NEW b/WHATS_NEW
index 106bee4..c18f3f4 100644
--- a/WHATS_NEW
+++ b/WHATS_NEW
@@ -1,5 +1,7 @@
Version 2.02.99 -
===================================
+ Fix possible race while removing metadata from lvmetad.
+ Fix possible deadlock when querying and updating lvmetad at the same time.
Check lvmcache_info_from_pvid and recall only when needed in _pv_read.
Check for memory failure of dm_config_write_node() in lvmetad.
Fix socket leak on error path in lvmetad's handle_connect.
Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=5294a6f77a900493…
Commit: 5294a6f77a900493b3e81eb70c1698ec3c4814b8
Parent: fae1a611d2f907aa23c237b9f84df5089d30f728
Author: Petr Rockai <prockai(a)redhat.com>
AuthorDate: Mon Dec 17 00:43:18 2012 +0100
Committer: Petr Rockai <prockai(a)redhat.com>
CommitterDate: Mon Dec 17 00:47:55 2012 +0100
lvmetad: Fix a possible race in remove_metadata.
All operations on shared hash tables need to be protected by mutexes. Moreover,
lookup and subsequent key removal need to happen atomically, to avoid races (and
possible double-freeing) between multiple threads trying to manipulate the same
VG.
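The fixed pattern can be sketched with a toy single-slot table (plain pthread mutex instead of lvmetad's lock helpers and dm_hash_* tables; take_entry is a hypothetical name):

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static const char *slot_key;
static const char *slot_val;

/* Lookup and removal happen under a single mutex hold, so no other
 * thread can remove (and free) the same entry in between. */
static const char *take_entry(const char *key)
{
	const char *val = NULL;

	pthread_mutex_lock(&table_lock);
	if (slot_key && !strcmp(slot_key, key)) {	/* lookup ... */
		val = slot_val;
		slot_key = slot_val = NULL;		/* ... and removal, atomically */
	}
	pthread_mutex_unlock(&table_lock);
	return val;	/* at most one caller wins ownership of the entry */
}
```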
---
daemons/lvmetad/lvmetad-core.c | 14 +++++++++-----
1 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/daemons/lvmetad/lvmetad-core.c b/daemons/lvmetad/lvmetad-core.c
index 674ddea..a951b30 100644
--- a/daemons/lvmetad/lvmetad-core.c
+++ b/daemons/lvmetad/lvmetad-core.c
@@ -601,19 +601,23 @@ static int remove_metadata(lvmetad_state *s, const char *vgid, int update_pvids)
lock_vgid_to_metadata(s);
old = dm_hash_lookup(s->vgid_to_metadata, vgid);
oldname = dm_hash_lookup(s->vgid_to_vgname, vgid);
- unlock_vgid_to_metadata(s);
- if (!old)
+ if (!old) {
+ unlock_vgid_to_metadata(s);
return 0;
+ }
+
assert(oldname);
- if (update_pvids)
- /* FIXME: What should happen when update fails */
- update_pvid_to_vgid(s, old, "#orphan", 0);
/* need to update what we have since we found a newer version */
dm_hash_remove(s->vgid_to_metadata, vgid);
dm_hash_remove(s->vgid_to_vgname, vgid);
dm_hash_remove(s->vgname_to_vgid, oldname);
+ unlock_vgid_to_metadata(s);
+
+ if (update_pvids)
+ /* FIXME: What should happen when update fails */
+ update_pvid_to_vgid(s, old, "#orphan", 0);
dm_config_destroy(old);
return 1;
}
Gitweb: http://git.fedorahosted.org/git/?p=lvm2.git;a=commitdiff;h=fae1a611d2f907aa…
Commit: fae1a611d2f907aa23c237b9f84df5089d30f728
Parent: ed23da95b63308e11f8d680b189686a5d2d380d0
Author: Petr Rockai <prockai(a)redhat.com>
AuthorDate: Mon Dec 17 00:39:00 2012 +0100
Committer: Petr Rockai <prockai(a)redhat.com>
CommitterDate: Mon Dec 17 00:47:55 2012 +0100
lvmetad: Fix a possible deadlock.
If an update and a query were running in parallel, there was a slim but non-zero
chance of a deadlock due to (unnecessary) mutex nesting.
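The fix amounts to release-then-acquire instead of holding the two mutexes nested (a toy sketch with plain pthread mutexes; not the lvmetad code, which uses its own lock helpers):

```c
#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vg_lock  = PTHREAD_MUTEX_INITIALIZER;
static int metadata = 42;	/* stand-in for a vgid_to_metadata entry */

/* After the fix, update_metadata() drops map_lock before taking the
 * per-VG lock, so the mutexes are never held nested and no lock-order
 * cycle with a concurrent query is possible. */
static int lookup_then_lock_vg(void)
{
	int old;

	pthread_mutex_lock(&map_lock);
	old = metadata;				/* dm_hash_lookup() in the real code */
	pthread_mutex_unlock(&map_lock);	/* release first ... */
	pthread_mutex_lock(&vg_lock);		/* ... then acquire: no nesting */
	pthread_mutex_unlock(&vg_lock);
	return old;
}
```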
---
daemons/lvmetad/lvmetad-core.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/daemons/lvmetad/lvmetad-core.c b/daemons/lvmetad/lvmetad-core.c
index 4e02662..674ddea 100644
--- a/daemons/lvmetad/lvmetad-core.c
+++ b/daemons/lvmetad/lvmetad-core.c
@@ -671,8 +671,8 @@ static int update_metadata(lvmetad_state *s, const char *name, const char *_vgid
lock_vgid_to_metadata(s);
old = dm_hash_lookup(s->vgid_to_metadata, _vgid);
- lock_vg(s, _vgid);
unlock_vgid_to_metadata(s);
+ lock_vg(s, _vgid);
seq = dm_config_find_int(metadata, "metadata/seqno", -1);