I followed instructions to replace the bad drive, but they resulted in failure. The 'bad' drive still works, but the directions for adding a new drive did not work. I shut down the computer and unplugged the 'bad' drive.
On rebooting I ran: vgreduce --removemissing --force vg00
I was not able to mount the LV afterwards.
I have looked for instructions on deleting the LV and creating a new one, but I have not been able to recover the data.
I am running Fedora 31.
At this point I have not added /dev/sda1 to the volume group. pvs displays:

  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda1       lvm2 ---  931.51g  931.51g  # this is the new drive
  /dev/sdb1  vg00 lvm2 a--  <465.76g       0  # this is the bad drive
  /dev/sdc1  vg00 lvm2 a--  <465.76g       0
  /dev/sdd1  vg00 lvm2 a--  <3.64t         0
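For reference, the usual non-destructive way to swap out a PV is to migrate its extents to the new disk while both drives are still attached. A sketch, using the device names from the pvs output above (this only works before the old drive is unplugged and before `vgreduce --removemissing --force` discards its extents):

  # Add the new drive's PV to the volume group.
  vgextend vg00 /dev/sda1

  # Move all allocated extents off the failing PV onto the new one.
  # The new ~931G PV has room for the old ~465G PV's extents.
  pvmove /dev/sdb1 /dev/sda1

  # Once pvmove completes, drop the old PV from the VG and clear
  # its LVM label so it can be pulled safely.
  vgreduce vg00 /dev/sdb1
  pvremove /dev/sdb1

These commands must run as root, and pvmove can take hours on a drive this size; it is restartable if interrupted.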
I created a new LV.
lvs displays:

  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVM  vg00 -wi-a----- <4.55t
Using the command:
mount /dev/mapper/vg00-LVM /tv
displays this error:
mount: /tv: wrong fs type, bad option, bad superblock on /dev/mapper/vg00-LVM, missing codepage or helper program, or other error.
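That mount error usually means the block device contains no filesystem the kernel recognizes, and a freshly created LV is empty until a filesystem is made on it. A sketch of how to check and, if the old data is truly unrecoverable, start over (mkfs DESTROYS anything still on the LV; ext4 is only an assumption here, use whatever filesystem you had before):

  # See whether the LV contains any recognizable filesystem signature.
  blkid /dev/mapper/vg00-LVM
  file -s /dev/mapper/vg00-LVM

  # If it is empty and you have given up on the old data, create a
  # new filesystem and mount it.
  mkfs.ext4 /dev/mapper/vg00-LVM
  mount /dev/mapper/vg00-LVM /tv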
I do not know what the next step should be.
If I have left anything out please let me know.
Thanks for any assistance,
David
On 7/2/20 8:01 AM, D&R wrote:
I followed instructions to replace the bad drive, but they resulted in failure. The 'bad' drive still works, but the directions for adding a new drive did not work.
Removing a bad drive and adding a replacement is something you can only do when your LVs are redundant across PVs. It doesn't look like yours was.
Can you describe your LVM configuration in more detail? Why did you expect to be able to remove a disk and replace it?