Hi,
I have migrated to thin pools using partclone.
But now that I am using thin pools, my system is completely unstable.
My swap is not activated :
swapon[1004]: swapon: /fedora.swap: read swap header failed
systemd[1]: fedora.swap.swap: Swap process exited, code=exited, status=255/EXCEPTION
systemd[1]: fedora.swap.swap: Failed with result 'exit-code'.
Failed to activate swap /fedora.swap.
Dependency failed for Swap.
swap.target: Job swap.target/start failed with result 'dependency'.
My System logging service is not started:
rsyslog.service: Failed with result 'core-dump'.
Failed to start System Logging Service.
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rsyslog comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
rsyslog.service: Scheduled restart job, restart counter is at 1.
And device mapper gives me weird messages during startup like:
kernel: device-mapper: btree spine: node_check failed: csum 3318704195 != wanted 3318554075
kernel: device-mapper: block manager: btree_node validator check failed for block 220
(the same two messages repeat three more times)
Also some dracut module is missing:
dracut-initqueue[867]: /usr/sbin/thin_check: execvp failed: No such file or directory
dracut-initqueue[867]: WARNING: Check is skipped, please install recommended missing binary /usr/sbin/thin_check!
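For what it's worth, on Fedora the thin_check binary comes from the device-mapper-persistent-data package. A possible recovery sketch (not a guaranteed fix; "fedora/pool00" is a placeholder, substitute your own VG/pool names from `lvs`):

```shell
# Install the thin-provisioning tools that provide /usr/sbin/thin_check
sudo dnf install device-mapper-persistent-data

# Rebuild the initramfs so dracut can find thin_check at early boot
sudo dracut --force

# With the pool deactivated, attempt a metadata repair.
# "fedora/pool00" is a placeholder pool name.
sudo lvchange -an fedora/pool00
sudo lvconvert --repair fedora/pool00
```

lvconvert --repair writes repaired metadata to a spare area and keeps the old metadata in a backup LV, so it is the standard first step when thin pool metadata checksums fail.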
In short my system has gone to hell.
This all started when I did a fstrim like this:
sudo fstrim -v /
And it returned an input/output error.
My logical volumes were marked with two 'X's in their attribute field; I don't know what that means. After that I restarted.
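For reference, those characters appear in the `lvs` attribute string, where an 'X' means "unknown", i.e. LVM could not determine that field's state (typically because the pool metadata could not be read). A quick way to inspect this:

```shell
# Show LV attributes plus the explicit health field; an "X" in the
# attr string means the corresponding state is unknown
sudo lvs -o lv_name,lv_attr,lv_health_status,pool_lv
```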
What can I do to get my system back to normal?
Please help.
On 11/26/20 6:44 AM, Sreyan Chakravarty wrote:
In short my system has gone to hell.
This all started when I did a fstrim like this:
sudo fstrim -v /
And it returned an input/output error.
That is a serious problem. Something went really wrong. Since you have very recent backups, I suggest restarting this process again.
On Fri, Nov 27, 2020 at 6:28 AM Samuel Sieb samuel@sieb.net wrote:
That is a serious problem. Something went really wrong. Since you have very recent backups, I suggest restarting this process again.
I no longer have those backups.
On 11/27/20 2:13 AM, Sreyan Chakravarty wrote:
On Fri, Nov 27, 2020 at 6:28 AM Samuel Sieb <samuel@sieb.net mailto:samuel@sieb.net> wrote:
That is a serious problem. Something went really wrong. Since you have very recent backups, I suggest restarting this process again.
I no longer have those backups.
Then I suggest backing up whatever you need to save and doing a reinstall. There is a thin LV option you can select in the installer, although I think btrfs is probably the better option going forward.
On Sat, Nov 28, 2020 at 1:58 AM Samuel Sieb samuel@sieb.net wrote:
Then I suggest backing up whatever you need to save and doing a reinstall. There is a thin LV option you can select in the installer, although I think btrfs is probably the better option going forward.
Is BTRFS stable?
I thought that was still experimental.
On 28/11/2020 13:52, Sreyan Chakravarty wrote:
On Sat, Nov 28, 2020 at 1:58 AM Samuel Sieb <samuel@sieb.net mailto:samuel@sieb.net> wrote:
Then I suggest backing up whatever you need to save and doing a reinstall. There is a thin LV option you can select in the installer, although I think btrfs is probably the better option going forward.
Is BTRFS stable?
I thought that was still experimental.
It is, as of the release of F33:
https://fedoramagazine.org/btrfs-coming-to-fedora-33/
--- The key to getting good answers is to ask good questions.
On Sat, Nov 28, 2020 at 12:28 PM Ed Greshko ed.greshko@greshko.com wrote:
With the release of F33.
I have no idea how BTRFS is a better alternative.
The snapshotting is so complex that I haven't been able to get it working yet.
On Sat, 2020-11-28 at 11:22 +0530, Sreyan Chakravarty wrote:
On Sat, Nov 28, 2020 at 1:58 AM Samuel Sieb samuel@sieb.net wrote:
Then I suggest backing up whatever you need to save and doing a reinstall. There is a thin LV option you can select in the installer, although I think btrfs is probably the better option going forward.
Is BTRFS stable?
I thought that was still experimental.
BTRFS is the default filesystem for new installations of F33. It has been available as an option for quite a few years now, so I'd say it's safe to regard it as stable.
poc
On Thu, Nov 26, 2020 at 7:45 AM Sreyan Chakravarty sreyan32@gmail.com wrote:
Hi,
I have migrated to thin pools using partclone.
But now that I am using thin pools my system is completely unstable.
My swap is not activated :
swapon[1004]: swapon: /fedora.swap: read swap header failed
systemd[1]: fedora.swap.swap: Swap process exited, code=exited, status=255/EXCEPTION
systemd[1]: fedora.swap.swap: Failed with result 'exit-code'.
Failed to activate swap /fedora.swap.
Dependency failed for Swap.
swap.target: Job swap.target/start failed with result 'dependency'.
You can't have a swapfile on a file system that's on an LVM thin volume.
And device mapper gives me weird messages during startup like:
kernel: device-mapper: btree spine: node_check failed: csum 3318704195 != wanted 3318554075
kernel: device-mapper: block manager: btree_node validator check failed for block 220
(the same two messages repeat three more times)
Also some dracut module is missing:
dracut-initqueue[867]: /usr/sbin/thin_check: execvp failed: No such file or directory
dracut-initqueue[867]: WARNING: Check is skipped, please install recommended missing binary /usr/sbin/thin_check!
In short my system has gone to hell.
This all started when I did a fstrim like this:
sudo fstrim -v /
Without complete start-to-finish logs showing the first instance of problems, there's not much to go on.
On Sat, Nov 28, 2020 at 4:48 AM Chris Murphy lists@colorremedies.com wrote:
You can't have a swapfile on a file system that's on an LVM thin volume.
Where exactly are you getting this from?
I have been using swap on an LVM thin volume pretty much the whole time, up until this crash.
On Fri, Nov 27, 2020 at 10:54 PM Sreyan Chakravarty sreyan32@gmail.com wrote:
Where exactly are you getting this from?
I have been using swap on an LVM thin volume pretty much the whole time, up until this crash.
It's possibly stale information, back from when the installer first added support for LVM thinp. I forget if it was someone on the installer or LVM team, but I can't find any reference. The installer, to this day, creates swap on a conventional "thick" provisioned LV, even when choosing the LVM thinp layout.
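The installer's layout can be reproduced by hand. A sketch (the VG name "fedora" and the 8G size are placeholders):

```shell
# Create a conventional (thick-provisioned) LV for swap alongside
# the thin pool; "fedora" is a placeholder volume group name
sudo lvcreate -L 8G -n swap fedora
sudo mkswap /dev/fedora/swap
sudo swapon /dev/fedora/swap

# Make it persistent across reboots
echo '/dev/fedora/swap none swap defaults 0 0' | sudo tee -a /etc/fstab
```

A thick LV gives swap a fixed, fully allocated block range, which sidesteps any question of thin-pool allocation happening under memory pressure.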
On 11/28/20 4:13 PM, Chris Murphy wrote:
On Fri, Nov 27, 2020 at 10:54 PM Sreyan Chakravarty sreyan32@gmail.com wrote:
Where exactly are you getting this from?
I have been using swap on an LVM thin volume pretty much the whole time, up until this crash.
It's possibly stale information, back from when the installer first added support for LVM thinp. I forget if it was someone on the installer or LVM team, but I can't find any reference. The installer, to this day, creates swap on a conventional "thick" provisioned LV, even when choosing the LVM thinp layout.
I don't see why it would be any different on a thin LV as long as the swap file is fully provisioned. It's still just a fixed set of blocks on the hard drive.
On Sun, Nov 29, 2020 at 10:55 AM Samuel Sieb samuel@sieb.net wrote:
I don't see why it would be any different on a thin LV as long as the swap file is fully provisioned. It's still just a fixed set of blocks on the hard drive.
I need some advice.
I have checked the filesystem with e2fsck, and it reports that it's OK.
So if I take a partclone backup now, will there be any issues when I restore it?
I mean, what chance is there of the backup being corrupted?
On 11/28/20 10:57 PM, Sreyan Chakravarty wrote:
On Sun, Nov 29, 2020 at 10:55 AM Samuel Sieb <samuel@sieb.net mailto:samuel@sieb.net> wrote:
I don't see why it would be any different on a thin LV as long as the swap file is fully provisioned. It's still just a fixed set of blocks on the hard drive.
I need some advice.
I have checked the filesystem with e2fsck, and it reports that it's OK.
So if I take a partclone backup now, will there be any issues when I restore it?
I mean, what chance is there of the backup being corrupted?
Pretty low. If e2fsck is happy with it, then it should be good. Although that doesn't verify that the file data is all there, so there is still potential for hitting an I/O error somewhere during the process. But trying won't hurt anything.
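That check-then-image sequence might look like the following sketch (the device path and backup location are placeholders; run from live media or with the filesystem unmounted):

```shell
# Force a full filesystem check first; /dev/mapper/fedora-root
# is a placeholder device path
sudo e2fsck -f /dev/mapper/fedora-root

# Image only the used blocks with partclone (-c = clone to image,
# -s = source device, -o = output file)
sudo partclone.ext4 -c -s /dev/mapper/fedora-root -o /mnt/backup/root.img

# Later, restore the image back to a device with -r:
# sudo partclone.ext4 -r -s /mnt/backup/root.img -o /dev/mapper/fedora-root
```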
On Sun, Nov 29, 2020 at 12:37 PM Samuel Sieb samuel@sieb.net wrote:
Pretty low. If e2fsck is happy with it, then it should be good. Although that doesn't verify that the file data is all there, so there is still potential for hitting an I/O error somewhere during the process. But trying won't hurt anything.
I have already lost some data, namely my wallpapers.
On Sat, Nov 28, 2020 at 10:25 PM Samuel Sieb samuel@sieb.net wrote:
On 11/28/20 4:13 PM, Chris Murphy wrote:
On Fri, Nov 27, 2020 at 10:54 PM Sreyan Chakravarty sreyan32@gmail.com wrote:
Where exactly are you getting this from?
I have been using swap on an LVM thin volume pretty much the whole time, up until this crash.
It's possibly stale information, back from when the installer first added support for LVM thinp. I forget if it was someone on the installer or LVM team, but I can't find any reference. The installer, to this day, creates swap on a conventional "thick" provisioned LV, even when choosing the LVM thinp layout.
I don't see why it would be any different on a thin LV as long as the swap file is fully provisioned. It's still just a fixed set of blocks on the hard drive.
It's complicated. I'm suspicious because even in the corner of storage I mostly hang out in (Btrfsland) there are various logical to physical mapping issues. Hence all the limitations for swapfiles listed in man 5 btrfs. I know there isn't a shared/standard interface for finding the physical offset during early boot in order to restore a hibernation image, which is why we have an extra step to figure out that physical offset on Btrfs and add it as a boot parameter.
Given my skepticism, I decided to start an upstream discussion. Unfortunately there's a problem with the Red Hat list archive not showing the thread, but it is on lore, which was set up just a couple of days ago: https://lore.kernel.org/linux-lvm/608365664c2a18db4e756f524c0e76da@assyoma.i...
Lore is awesome because it's searchable, and quite a few other upstream Linux lists are archived there (including the filesystem lists).
Anyway, there are related swap issues on the LVM list; e.g. swap is not recommended on top of LVM cache:
https://lore.kernel.org/linux-lvm/57207A94.5040004@redhat.com/
Switching back to hibernation: an additional issue is that the hibernation image must be made of contiguous blocks on the physical storage, and this isn't certain with either Btrfs or LVM thin. That's probably yet another thing we need a standard interface for when initially creating the logical-to-physical mapping. Or we need an updated hibernation image format that can be discontiguous.
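For context, the extra resume-offset step mentioned above looks roughly like this on Btrfs (a sketch; the `map-swapfile` subcommand requires btrfs-progs 6.1 or newer, and the swapfile path is a placeholder):

```shell
# Print the physical resume offset of a Btrfs swapfile
# (/swap/swapfile is a placeholder path)
sudo btrfs inspect-internal map-swapfile -r /swap/swapfile

# The printed number is then passed on the kernel command line, e.g.:
#   resume=UUID=<filesystem-uuid> resume_offset=<number>
```

On older btrfs-progs the same offset had to be computed manually from `filefrag -v` output, which is exactly the kind of ad hoc step a standard interface would remove.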
-- Chris Murphy
On Sun, Nov 29, 2020 at 5:43 AM Chris Murphy lists@colorremedies.com wrote:
It's possibly stale information, back from when the installer first added support for LVM thinp. I forget if it was someone on the installer or LVM team, but I can't find any reference. The installer, to this day, creates swap on a conventional "thick" provisioned LV, even when choosing the LVM thinp layout.
-- Chris Murphy
Well, I was using a swap file without any hiccups.
On Sat, Nov 28, 2020 at 11:57 PM Sreyan Chakravarty sreyan32@gmail.com wrote:
On Sun, Nov 29, 2020 at 5:43 AM Chris Murphy lists@colorremedies.com wrote:
It's possibly stale information, back from when the installer first added support for LVM thinp. I forget if it was someone on the installer or LVM team, but I can't find any reference. The installer, to this day, creates swap on a conventional "thick" provisioned LV, even when choosing the LVM thinp layout.
-- Chris Murphy
Well, I was using a swap file without any hiccups.
Except the thin pool just blew up on you, which is what this thread is about. And also somewhere you reported hibernation resume problems. There is nowhere near enough information to know if these things are related or not.
-- Chris Murphy