On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
testing something more real-world (20T ... 500T?) might still be interesting.
Here's my test script:
qemu-img create -f qcow2 test1.img 500T && \
guestfish -a test1.img \
    memsize 4096 : run : \
    part-disk /dev/vda gpt : mkfs ext4 /dev/vda1
The guestfish "mkfs" command translates directly to "mke2fs -t ext4" in this case.
500T: fails with the same error:
/dev/vda1: Cannot create filesystem with requested number of inodes while setting up superblock
By a process of bisection I found that I get the same error for all sizes >= 255T.
For 254T, I get:
/dev/vda1: Memory allocation failed while setting up superblock
I wasn't able to give the VM enough memory to make this succeed; I've only got 8G on this laptop. Should creating these filesystems require large amounts of memory?
At 100T it doesn't run out of memory, but the man behind the curtain starts to show. The underlying qcow2 file grows to several gigs and I had to kill it. I need to play with the lazy init features of ext4.
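(Not from Rich's mail, just a sketch of the lazy-init angle: mke2fs has extended options to defer inode-table and journal zeroing, which is most of what balloons the backing file at mkfs time. The image name and 4G size below are placeholders for the real target, and lazy_journal_init needs a reasonably recent e2fsprogs.)

```shell
# Defer inode-table and journal initialization so mkfs writes far less
# data up front; the kernel finishes itable init lazily after mount.
# test.img and 4G are placeholders standing in for the 100T+ case.
truncate -s 4G test.img
mke2fs -F -q -t ext4 -E lazy_itable_init=1,lazy_journal_init=1 test.img
```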
Rich.
Bleah. Care to use xfs? ;)
Anyway, interesting; when I tried the larger sizes I got many other problems, but never the "requested number of inodes" error.
I just created a large sparse file on xfs, and pointed mke2fs at that.
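(For reference, a rough sketch of that setup; the file name, the 1T size, and the host filesystem are placeholders, and 1T stands in for the 254T/500T sizes above. The host filesystem must support files that large; xfs does.)

```shell
# Create a large sparse file: truncate extends the size instantly
# without allocating any blocks, so the file costs nothing on disk
# until mke2fs starts writing into it.
truncate -s 1T scratch.img
stat -c 'size=%s blocks=%b' scratch.img   # blocks=0 while fully sparse
# then point mke2fs at it, e.g.: mke2fs -t ext4 scratch.img
```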
But I'm using bleeding-edge git, roughly the latest WIP snapshot (which I haven't put into rawhide yet, because it doesn't actually build for me without a couple of patches I'd like upstream to ACK first).
-Eric