As mentioned in the IRC meeting yesterday, I've put together a CloudFS feature proposal at https://fedoraproject.org/wiki/Features/CloudFS. Feedback would be most welcome.
On Fri, 12 Nov 2010 13:14:45 -0500 Jeff Darcy jdarcy@redhat.com wrote:
As mentioned in the IRC meeting yesterday, I've put together a CloudFS feature proposal at https://fedoraproject.org/wiki/Features/CloudFS. Feedback would be most welcome.
I have a question: why not adopt Ceph instead?
-- Pete
Jeff had mentioned on irc the idea of packaging RBD, which uses the lower layer of Ceph (per Jeff's description when we were talking).
My thoughts: If folks are willing to help with packaging and/or reviewing, I think it would benefit us to have a variety of building blocks to choose from. I personally would be delighted to see both in Fedora.
-robyn
On Sat, 13 Nov 2010 19:58:37 -0700 Robyn Bergeron robyn.bergeron@gmail.com wrote:
Jeff had mentioned on irc the idea of packaging RBD, which uses the lower layer of Ceph (per Jeff's description when we were talking).
See, my agenda was to ask next whether Jeff's CloudFS had enough consistency to back a block device emulator and thus displace Sheepdog, which I see as too ad-hoc. Now if Jeff himself prefers to package RBD (RADOS), then this may be an admission that CloudFS is not up to it. Or perhaps he does not like the idea for some other reason. I basically just want him to discuss the alternatives. I know he knows more about them than I do, hence the question.
-- Pete
On 11/13/2010 09:50 PM, Pete Zaitcev wrote:
See, my agenda was to ask next whether Jeff's CloudFS had enough consistency to back a block device emulator and thus displace Sheepdog, which I see as too ad-hoc.
It is true that sheepdog is not generically useful outside of block storage, and specifically QEMU-enabled systems (and perhaps Xen). However, in the Fedora performance case, I expect sheepdog, with its lower memory-copy count and tight integration with QEMU, to produce better performance results and lower CPU utilization, resulting in a higher density of VMs on equivalent hardware. There is only one way to validate this speculation: integrate the various solutions into Fedora and see which has the best mix of real-world attributes.
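A minimal sketch, in Python, of the kind of measurement that comparison implies, assuming each backend is already exposed as a file or device at the (made-up) paths below:

    import os, time

    def write_throughput(path, total_mb=512, block_kb=64):
        """Sequentially write total_mb megabytes with O_SYNC and return MB/s."""
        block = b"\0" * (block_kb * 1024)
        count = (total_mb * 1024) // block_kb
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
        start = time.time()
        for _ in range(count):
            os.write(fd, block)
        os.close(fd)
        return total_mb / (time.time() - start)

    # Hypothetical locations; point each at wherever that backend is exposed.
    for name, path in [("glusterfs", "/mnt/gluster/bench.img"),
                       ("sheepdog-via-loop", "/dev/loop0"),
                       ("local-disk", "/var/tmp/bench.img")]:
        print(name, "%.1f MB/s" % write_throughput(path))

(CPU utilization and many-VM density would need a fuller harness, but the same idea applies: identical workload, different backends.)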
Regards -steve
Any plan to package Sheepdog?
On 11/13/2010 11:50 PM, Pete Zaitcev wrote:
See, my agenda was to ask next whether Jeff's CloudFS had enough consistency to back a block device emulator and thus displace Sheepdog, which I see as too ad-hoc. Now if Jeff himself prefers to package RBD (RADOS), then this may be an admission that CloudFS is not up to it.
On the first point, I believe the answer is yes: GlusterFS can support block devices via loopback, and I know of several sites using it that way. It could probably be improved somewhat, as, for that matter, could the loopback driver, which somebody broke a while back by making it single-threaded per device.
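For anyone unfamiliar with the arrangement, it is roughly this (a sketch in Python; the mount point and image name are placeholders, and it assumes a GlusterFS volume is already mounted):

    import subprocess

    MOUNT = "/mnt/gluster"          # hypothetical GlusterFS mount point
    IMAGE = MOUNT + "/vm-disk.img"  # backing file lives on the distributed volume

    # Create a sparse 10 GB backing file on the GlusterFS volume.
    subprocess.run(["truncate", "-s", "10G", IMAGE], check=True)

    # Attach it to the first free loop device; losetup prints the device name.
    loopdev = subprocess.run(["losetup", "--find", "--show", IMAGE],
                             capture_output=True, text=True,
                             check=True).stdout.strip()

    # loopdev (e.g. /dev/loop0) is now an ordinary block device whose data
    # actually lives on GlusterFS, and can be handed to a guest, e.g.:
    #   qemu-kvm -drive file=/dev/loop0,if=virtio ...
    print("block device backed by GlusterFS:", loopdev)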
On the second point, I wouldn't say that CloudFS is "not up to it" so much as that CloudFS is not designed for it. It can handle that need *adequately*, and if a user/provider wanted to limit the number of different storage technologies in play (a reasonable goal, IMO) then that would be the way to go. However, CloudFS is all about the "multi" - multi-tenant, multi-site, etc. - with all of the coherency issues that implies in a world of complex namespaces and non-block-aligned access. Sheepdog or RBD, which don't have to worry about any of that, can and should take advantage of the simpler requirements to perform better in that particular situation. I don't even know that GlusterFS would "lose" the performance comparison I suggested to Steven a while back, but whether it would or not isn't particularly relevant to CloudFS's goals.
As with the comparison to Ceph, the real issue is balancing a user's (or provider's) need for operational simplicity against their need for optimality of each component. If they want to keep things simple, I'd propose using just GlusterFS/CloudFS with block devices via loopback. If they want to differentiate themselves with higher performance for virtual block devices, at a cost in complexity (including complexity for "secondary" functions such as backups), then they could augment that with RBD or Sheepdog.
On 11/14/2010 05:52 PM, wariola@gmail.com wrote:
Any plan to package Sheepdog?
Short answer: yes.
The sheepdog upstream repo needs one more man page, which is sitting on my hard disk in an incomplete state. Once that is done, sheepdog needs a new upstream release with the current set of software. Once that is complete, the userland component will be packaged; the QEMU portion is already available in rawhide.
Hopefully we can find a package reviewer once this is complete - the package is very simple: two binaries, two man pages, and an init script. If you are interested in reviewing the package, ping me off-list.
Not sure how far we will get on libvirt integration in F15, which is why this technology will be "tech preview" and may require manual launching of VMs.
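In the meantime, manual launching would look roughly like this (a sketch in Python; the VDI and ISO names are invented, and it assumes a running sheep cluster plus the sheepdog-enabled QEMU):

    import subprocess

    VDI = "fedora-guest"  # hypothetical sheepdog virtual disk image name

    # Create a 10 GB VDI on the sheepdog cluster, using QEMU's
    # "sheepdog:<vdiname>" protocol syntax.
    subprocess.run(["qemu-img", "create", "sheepdog:%s" % VDI, "10G"],
                   check=True)

    # Launch a guest against it by hand, without libvirt.
    subprocess.Popen(["qemu-kvm", "-m", "1024",
                      "-drive", "file=sheepdog:%s,if=virtio" % VDI,
                      "-cdrom", "Fedora-install.iso"])  # placeholder medium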
Regards -steve
On 11/13/2010 09:03 PM, Pete Zaitcev wrote:
I have a question: why not adopt Ceph instead?
Why does either have to be adopted *instead* of the other? They're both admirable pieces of work, with slightly different strengths. The technology in Ceph (e.g. RADOS, CRUSH) is somewhat more advanced, and it would generally be a great choice for anyone looking for a solution in the traditional distributed-filesystem space. When it comes to a *cloud* filesystem specifically, though, and the features that implies, I think the modular (translator-based) architecture of GlusterFS carries much greater weight. It provides several significant advantages; a toy sketch of the translator idea follows the list.
* It allows functionality to be added without perturbing a core which many users already depend on.
* It moves functionality out to where needed library support (e.g. for auth/crypto) already exists, without tedious interfaces and brittle coupling between kernel and user-space components (e.g. mountd, lockd, gssd).
* It moves functionality with high memory requirements (e.g. maintaining long "patch" lists for multi-site replication) or long sequential control flows out of the kernel where such things don't belong.
* It accelerates development by avoiding the endless churn in the VFS and block layers, and by making more development tools available.
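To make the "modular" point concrete, here is the promised toy sketch (plain Python, not GlusterFS code; every name is invented) of how translator stacking lets features be layered entirely in user space:

    import os

    class Posix:
        """Bottom of the stack: real storage under one directory ("brick")."""
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)
        def write(self, name, data):
            with open(os.path.join(self.root, name), "wb") as f:
                f.write(data)

    class Encrypt:
        """A layer added for cloud use, e.g. per-tenant crypto.
        (XOR is a stand-in for a real cipher.)"""
        def __init__(self, child, key):
            self.child, self.key = child, key
        def write(self, name, data):
            self.child.write(name, bytes(b ^ self.key for b in data))

    class Replicate:
        """A layer that fans each write out to several children."""
        def __init__(self, *children):
            self.children = children
        def write(self, name, data):
            for child in self.children:
                child.write(name, data)

    # Composition is the whole trick: a new feature is a new layer, added
    # without touching the layers already deployed underneath it.
    stack = Encrypt(Replicate(Posix("/tmp/brick1"), Posix("/tmp/brick2")),
                    key=0x5A)
    stack.write("demo.dat", b"hello")

Each of the bullets above is, in one way or another, a consequence of this kind of composition being cheap in user space.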
I'd estimate that developing something like multi-site replication would take two to four times as long on a Ceph base, so I'd be proposing it as a feature for Fedora 18 instead. ;) Even if the goal were to implement CloudFS in the kernel eventually, I'd do it in user space first to get all the protocol issues sorted out before dealing with low-level implementation details. I fully intend to continue supporting and recommending both Ceph and GlusterFS/CloudFS for their respective use cases, as I consider them quite complementary.