On 10/4/11 2:09 AM, Farkas Levente wrote:
On 10/04/2011 01:03 AM, Eric Sandeen wrote:
On 10/3/11 5:53 PM, Farkas Levente wrote:
On 10/04/2011 12:33 AM, Eric Sandeen wrote:
On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
I wasn't able to give the VM enough memory to make this succeed. I've only got 8G on this laptop. Should I need large amounts of memory to create these filesystems?
At 100T it doesn't run out of memory, but the man behind the curtain starts to show. The underlying qcow2 file grows to several gigs and I had to kill it. I need to play with the lazy init features of ext4.
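For reference, a minimal sketch of the kind of experiment being described: back the filesystem with a sparse file and pass ext4's lazy-initialization extended options to mkfs, so the inode tables (and journal) are not zeroed at mkfs time. This assumes an e2fsprogs new enough to support `lazy_itable_init`/`lazy_journal_init` and the `64bit` feature (needed above 16T), and a host filesystem that can hold a sparse file of that apparent size; the filenames are illustrative.

```shell
# Create a sparse backing file: 100T apparent size, no blocks allocated yet.
# (Assumes the host filesystem supports files this large, e.g. XFS.)
truncate -s 100T disk.img

# mkfs.ext4 with lazy init: inode tables and journal are left uninitialized
# and zeroed by the kernel after first mount, so mkfs writes far less
# metadata up front and the backing file grows much less.
mkfs.ext4 -q -O 64bit -E lazy_itable_init=1,lazy_journal_init=1 disk.img

# Apparent size vs. blocks actually allocated on the host:
ls -lh disk.img    # reports the full 100T
du -h disk.img     # far smaller; only metadata actually written is allocated
```

Even with lazy init, mkfs still writes superblock backups and group descriptors for every block group, which is where the multi-gigabyte growth of the qcow2 file comes from at this scale.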
Rich.
Bleah. Care to use xfs? ;)
Why do we have to use xfs? Really? Does nobody use large filesystems on Linux? Or does nobody use RHEL? Why doesn't e2fsprogs have more upstream support for this? With 2-3TB disks, the 16TB filesystem limit is really funny... or not so funny :-(
XFS has been proven at this scale on Linux for a very long time, is all.
Then why does RH NOT support it on 32-bit? There are still systems that have to run on 32-bit :-(
32-bit machines have a 32-bit index into the page cache; with 4K pages on x86, that limits a single file or block device to 16T, for XFS as well. So 32-bit is really not that interesting for large filesystem use.
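The arithmetic behind that 16T figure, as a quick sketch: a 32-bit page index can address at most 2^32 pages, and each x86 page is 4096 bytes.

```python
# On a 32-bit kernel the page cache indexes a file by a 32-bit page
# number, so at most 2**32 distinct pages of one file are addressable.
PAGE_SIZE = 4096           # x86 page size in bytes
MAX_PAGES = 2 ** 32        # 32-bit page index

limit_bytes = MAX_PAGES * PAGE_SIZE
print(limit_bytes // 2 ** 40, "TiB")  # → 16 TiB
```

A larger on-disk format doesn't help here: the page cache sits in front of every buffered read and write, so the index width caps what the kernel can usefully address regardless of the filesystem.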
If you need really scalable filesystems, I'd suggest a 64-bit machine.
-Eric