Hello there again! I've been using Stratis as my rootfs for a bit now, and it seems to be working pretty well...with one particular catch...
For some context: I have a single 2TB pool on a single disk, with two filesystems inside. When I created those filesystems, they both seem to have been sized at 1TB, which was a bit confusing since I thought they would start smaller and expand as I wrote to them. Currently, one of them (the rootfs) is only ~200GB full, while the other is ~800GB. Of course, the latter is approaching its current 1TB limit, which is where things get...weird. Based on the system logs, I seem to have hit this: https://github.com/stratis-storage/stratisd/issues/1466
From what I can tell, this is just Stratis trying to expand the filesystem. However, it never seems to succeed, since the size stays the same. Despite that, it still runs whenever I write a large amount of new data, resulting in some *very* brutally slow I/O speeds (despite being on an NVMe disk), to the extent that I can't even open a terminal (to be fair, I have quite a few oh-my-zsh plugins that could be contributing to this...)
This leads me to two questions:
- How exactly does filesystem resizing work? I know XFS can't be shrunk, but I believe the Stratis design paper references reclaiming unused space via trims. Is it even possible for the rootfs to "shrink" (in effect, if not literally) to free up space for the other filesystem to expand into?
- If the other filesystem can't expand, is it possible to just tell Stratis to stop trying, to avoid the major lag? Or could something in my configuration be making this significantly slower than it's supposed to be (disk schedulers, maybe)?
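For reference, here's roughly how I've been inspecting the sizes while this happens. The pool name `p1` is just a stand-in for my actual pool name, and the snippet skips itself if stratis-cli isn't installed:

```shell
# "p1" is a hypothetical pool name -- substitute your own.
if command -v stratis >/dev/null 2>&1; then
    # Pool-level view: total size and how much is physically allocated.
    stratis pool list
    # Filesystem-level view: each filesystem's virtual size vs. space used.
    stratis filesystem list p1
    # XFS's own idea of the rootfs size, for comparison.
    xfs_info /
else
    echo "stratis-cli not installed here"
fi
```

The `stratis filesystem list` output is what shows the 1TB figures staying put even while the expansion attempts from the linked issue show up in the logs.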