I met some of the folks last night at OSCON - they're willing to help with / learn packaging stuff. I haven't looked to see how hairy it is - but they did seem excited to perhaps work with us, which is awesome. I'm going to track them down in the expo area and see if we can chat a bit more.
On 7/20/10, Pete Zaitcev <zaitcev@redhat.com> wrote:
So, is anyone looking at it?
-- Pete
_______________________________________________
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
You can find people on Freenode in #openstack if you have more questions. If you are looking for specific details, the voiced folks in that channel should be able to answer your questions.
If the group would prefer a more formal meeting with some of the devs to discuss it, just let me know and I'll get the ball rolling.
-- Major Hayden
On Jul 20, 2010, at 14:54, Robyn Bergeron wrote:
I met some of the folks last night at OSCON - they're willing to help with / learn packaging stuff. I haven't looked to see how hairy it is - but they did seem excited to perhaps work with us, which is awesome. I'm going to track them down in the expo area and see if we can chat a bit more.
On 7/20/10, Pete Zaitcev <zaitcev@redhat.com> wrote:
So, is anyone looking at it?
-- Pete
-- Sent from my mobile device
Major Hayden wrote:
You can find people on Freenode in #openstack if you have more questions. If you are looking for specific details, the voiced folks in that channel should be able to answer your questions.
If the group would prefer a more formal meeting with some of the devs to discuss it, just let me know and I'll get the ball rolling.
I asked a few people from #openstack to stop by #fedora-meeting to answer any questions that might pop up during today's SIG meeting. We can always follow up via email or something more formal if necessary.
On Tue, Jul 20, 2010 at 8:41 PM, Pete Zaitcev <zaitcev@redhat.com> wrote:
So, is anyone looking at it?
I've had a very brief look at it and might soon get some work time to contribute to further packaging.
Peter
On Wed, 21 Jul 2010, pbrobinson@gmail.com wrote:
On Tue, Jul 20, 2010 at 8:41 PM, Pete Zaitcev <zaitcev@redhat.com> wrote:
So, is anyone looking at it?
I've had a very brief look at it and might soon get some work time to contribute to further packaging.
Can anyone give a brief architectural overview? (Having not looked at it at all, I was wondering about stuff like:)
"Mostly single server, node communication via XML-RPC. Written almost entirely in Python. Node auth uses SSL. Only a few deps on libraries (maybe 10), uses a MySQL database" type thing.
-Mike
On Thu, Jul 22, 2010 at 7:02 PM, Mike McGrath <mmcgrath@redhat.com> wrote:
On Wed, 21 Jul 2010, pbrobinson@gmail.com wrote:
On Tue, Jul 20, 2010 at 8:41 PM, Pete Zaitcev <zaitcev@redhat.com> wrote:
So, is anyone looking at it?
I've had a very brief look at it and might soon get some work time to contribute to further packaging.
Can anyone give a brief architectural overview? (Having not looked at it at all, I was wondering about stuff like:)
"Mostly single server, node communication via XML-RPC. Written almost entirely in Python. Node auth uses SSL. Only a few deps on libraries (maybe 10), uses a MySQL database" type thing.
KVM nodes, the NASA Nova cloud controller, AMQP for messaging, and a storage component that I don't believe is released yet, but it's object-based and S3-compatible. I think some components will already be in Fedora (AMQP/KVM) and some won't be. I noticed NTT is involved, and they recently released the "Sheepdog" [1] distributed storage that is now integrated into qemu, so I wonder if it might be based on that.
There was a good breakdown diagram but I can't seem to find it at the moment.
Mike McGrath wrote:
Can anyone give a brief architectural overview? (Having not looked at it at all, I was wondering about stuff like:)
For the storage part (Swift) the authoritative source is http://swift.openstack.org/overview_architecture.html or various files in the docs/source subdir of the tarball, but I'll try to provide a brief overview.
The storage model is very similar to Amazon's S3: an HTTP interface, a single-level hierarchy of containers (S3 buckets) and objects, mostly whole-object get/put, some support for attributes and ACLs on objects, etc. The main difference I've found is that Swift requires the API user to obtain a token through a (semi-)separate auth service and then use that same token for all subsequent requests, while S3 does its own per-request auth.

Internally, the whole thing is based on consistent hashing, but it's consistent hashing based on partitions rather than hashing directly from item to server. IOW, the item is hashed to a partition rather than a server, and the assignment of partitions to (N-way-replicated) servers is done offline via a "ring-builder" utility. There are separate rings and partition/server sets for objects, containers, and accounts, with each server using its own internal sqlite3 database to store metadata and plain files for data.

There are also one or more HTTP proxy servers, based on WebOb and eventlet, which provide API service. Incoming objects are hashed to the appropriate partition, the replica servers are looked up in the global partition/server map, and then the object contents are streamed directly (I confirmed this with the devs) to the N object servers which will hold copies. There are also some background processes to do re-replication, auditing, etc. It's all Python, and comes with a semi-decent set of unit and functional tests which could also be used for other similar data stores.
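In case it helps make the partition idea concrete, here's a rough Python sketch of hash-to-partition lookup with an offline ring build. This is purely illustrative - the names and the round-robin placement are mine, not Swift's actual ring-builder logic, which does much smarter device/zone-aware placement:

```python
import hashlib

PART_POWER = 4                 # 2**4 = 16 partitions; real deployments use far more
PART_COUNT = 2 ** PART_POWER
REPLICAS = 3                   # N-way replication

def build_ring(servers, replicas=REPLICAS):
    """Offline step (analogous to the ring-builder): assign each
    partition to `replicas` distinct servers. Round-robin here just
    for illustration."""
    ring = []
    for part in range(PART_COUNT):
        ring.append([servers[(part + i) % len(servers)] for i in range(replicas)])
    return ring

def get_part(name):
    """Hash an item name to a partition number, not directly to a server."""
    digest = hashlib.md5(name.encode()).hexdigest()
    return int(digest, 16) % PART_COUNT

def get_replicas(ring, name):
    """Look up which servers hold copies of this object."""
    return ring[get_part(name)]

servers = ["obj1", "obj2", "obj3", "obj4", "obj5"]
ring = build_ring(servers)
print(get_replicas(ring, "account/container/object"))
```

The point of the extra indirection is that when servers are added or removed, only the partition-to-server map changes (offline, then pushed out); items never need to be re-hashed, and only whole partitions move.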