// Deploying fedora infrastructure (koji) across clouds
// As promised, steps to deploy kojihub to an openstack instance communicating with builders on ec2
// Any provider supported by deltacloud will work
// (lots of 'em: http://deltacloud.apache.org/drivers.html)
// A short video of these steps in action can be seen here:
// http://youtu.be/qF2ctg7ItNc
// install deltacloud & drivers
$ sudo yum install wget deltacloud-core deltacloud-core-openstack deltacloud-core-ec2
// start it
$ deltacloudd -i mock
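// (optional sanity check: deltacloudd should be serving its REST API on port 3001
//  by default, so a quick request against the top-level entry point should confirm
//  the daemon is up before going further)
$ curl http://localhost:3001/api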
// small wrapper script around deltacloud:
$ wget https://raw.github.com/movitto/mycloud/master/mycloud.rb
$ chmod +x mycloud.rb
// template describing kojihub cloud deployment
$ wget https://raw.github.com/aeolus-incubator/templates/master/fedora_infra/koji/f...
// template describing kojibuild cloud deployment
$ wget https://raw.github.com/aeolus-incubator/templates/master/fedora_infra/koji/f...
// edit kojihub deployment to contain openstack credentials
$ vim koji_f17.xml
// start up an instance on openstack w/ kojihub set up and ready to go
$ ./mycloud.rb koji_f17.xml
// grab public address from output, grab kojibuild ssl credentials from new instance
$ scp -i ~/os.key ec2-user@kojihub:/etc/pki/koji/kojiadmin.pem .
$ scp -i ~/os.key ec2-user@kojihub:/etc/pki/koji/koji_ca_cert.crt .
// edit kojibuild deployment to deploy to ec2 w/ correct koji credentials & hub address
$ vim koji_builder_f17.xml
// start up an instance on ec2 w/ kojibuild communicating w/ the hub
$ ./mycloud.rb koji_builder_f17.xml
// open up a web browser and navigate to http://kojihub/koji to use your Koji instance!
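// (and, assuming the koji admin cert grabbed above is configured locally, the new
//  builder should show up in the hub's host list -- e.g.:)
$ koji list-hosts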
-Mo
----- Original Message -----
From: "Mo Morsi" mmorsi@redhat.com To: "aeolus-devel" aeolus-devel@lists.fedorahosted.org, "Fedora Cloud SIG" cloud@lists.fedoraproject.org, "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Wednesday, October 31, 2012 2:22:47 AM Subject: Deploying fedora infrastructure (koji) across clouds
A few questions:
#1: How does this take things like ARM or other archs into the picture - ie: I am guessing we can't really build ARM on ec2? :)
#2: Could there be a way to take a (working) nightly build, build one's package against that nightly in a personal build of some sort, and somehow have a verification process that it built in that "personal build" before it goes into rawhide, etc? (or even... unit tests, etc.)?
I'm mostly thinking about things like "how to not have perpetually broken rawhide" (avoid checking in things that will likely break the build in the first place).
Re: the endless anaconda-killing-f18 thread: I know there's been some discussion about whether or not the devel process really accommodates what needs to happen with something like a full-blown anaconda re-write - and while I know that "THE CLOUD" is not the entire solution to that (there are obviously, as others have graciously pointed out, many other feature/fesco/planning/etc processes intertwined here that also need love) - it seems like having these capabilities might fit in to a solution, or at least, something on the road to a better devel/build process.
There are plenty of projects doing CI/CD - and having cloud infra makes this significantly easier to enable - though obviously there are not a lot of cases of people doing it for a whole OS.
-robyn
On 10/31/2012 12:40 PM, Robyn Bergeron wrote:
A few questions:
#1: How does this take things like ARM or other archs into the picture - ie: I am guessing we can't really build ARM on ec2? :)
No ARM on EC2, but you can mix and match clouds as you will. Providers are now just config options so you can easily add and change the environments and resources that you leverage anytime you want.
There's really a world of possibilities available, we just have to identify the correct level of abstraction the Fedora community is looking for.
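For instance, pointing the daemon at a different provider should just be a matter of swapping the driver (and a port flag, assuming your deltacloud version has one, lets several run side by side):

$ deltacloudd -i openstack -p 3001
$ deltacloudd -i ec2 -p 3002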
#2: Could there be a way to take a (working) nightly build, build one's package against that nightly in a personal build of some sort, and somehow have a verification process that it built in that "personal build" before it goes into rawhide, etc? (or even... unit tests, etc.)?
Absolutely, repositories and packages can be added on the fly, and we can incorporate any image already pushed to the cloud as well as build new ones for our purposes.
Even what I demoed in the screencast (just the tip of the iceberg concerning Deltacloud's capabilities) can be automated such that new cloud instances w/ any stack we want can be provisioned on demand anywhere we want.
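For example, nothing beyond the wrapper and templates above is strictly needed -- in principle both deployments could be kicked off back to back with something as small as:

$ for tmpl in koji_f17.xml koji_builder_f17.xml; do ./mycloud.rb $tmpl; done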
I'm mostly thinking about things like "how to not have perpetually broken rawhide" (avoid checking in things that will likely break the build in the first place).
Really, the workflow is the only thing that's still vague to me at this point; once that's nailed down I'm sure it can be implemented.
Perhaps people can send their thoughts or we can discuss it at the next cloud sig meeting? (Wednesdays at 10AM EST on #fedora-meeting-1 for those that are interested)
Re: the endless anaconda-killing-f18 thread: I know there's been some discussion about whether or not the devel process really accommodates what needs to happen with something like a full-blown anaconda re-write - and while I know that "THE CLOUD" is not the entire solution to that (there are obviously, as others have graciously pointed out, many other feature/fesco/planning/etc processes intertwined here that also need love) - it seems like having these capabilities might fit in to a solution, or at least, something on the road to a better devel/build process.
There are plenty of projects doing CI/CD - and having cloud infra makes this significantly easier to enable - though obviously there are not a lot of cases of people doing it for a whole OS.
-robyn
I've said it before and I'll say it again: the cloud is _not_ the be-all and end-all. :-) It's a great tool, and brings a powerful computational resource to the table, but it's never going to completely replace local infrastructure.
I think hybrid technologies are the future; being able to leverage cloud resources in addition to and alongside local ones, seamlessly and in an open manner, will enable us to do some really cool things. I believe Fedora can make great headway and lead on this front; we just have to find what works for people and do it! :-)
-Mo
On Wed, 31 Oct 2012, Mo Morsi wrote:
#2: Could there be a way to take a (working) nightly build, build one's package against that nightly in a personal build of some sort, and somehow have a verification process that it built in that "personal build" before it goes into rawhide, etc? (or even... unit tests, etc.)?
Absolutely, repositories and packages can be added on the fly, and we can incorporate any image already pushed to the cloud as well as build new ones for our purposes.
Is this a new feature in koji, then? B/c in the past adding repositories has been a major limiting factor in koji. Especially untrusted, remote repositories.
I think hybrid technologies are the future; being able to leverage cloud resources in addition to and alongside local ones, seamlessly and in an open manner, will enable us to do some really cool things. I believe Fedora can make great headway and lead on this front; we just have to find what works for people and do it! :-)
Have you been following what the infrastructure team has been doing with our private cloud instances and deployment/provisioning there?
some info here if you're interested:
https://fedoraproject.org/wiki/Infrastructure_private_cloud
-sv
On 10/31/2012 01:10 PM, Seth Vidal wrote:
On Wed, 31 Oct 2012, Mo Morsi wrote:
#2: Could there be a way to take a (working) nightly build, build one's package against that nightly in a personal build of some sort, and somehow have a verification process that it built in that "personal build" before it goes into rawhide, etc? (or even... unit tests, etc.)?
Absolutely, repositories and packages can be added on the fly, and we can incorporate any image already pushed to the cloud as well as build new ones for our purposes.
Is this a new feature in koji, then? B/c in the past adding repositories has been a major limiting factor in koji. Especially untrusted, remote repositories.
Hrm, I was referring more to the repos necessary to bootstrap the cloud instance for the koji builder. Which component imposes this limitation on koji, the hub or the builder? Would being able to build custom cloud images on the fly for different clouds assist with this? We are able to do that w/ our imagefactory / oz tools (written in Python incidentally [1][2]).
-Mo
[1] https://github.com/aeolusproject/imagefactory
[2] https://github.com/clalancette/oz
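To give a flavor of [2], building a base image locally is roughly a one-liner once you have a template (the TDL file name here is made up, and exact options vary by oz version):

$ oz-install f17-koji-builder.tdl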
Heh, speaking of timing, imagefactory was just featured on the latest FLOSS Weekly:
http://www.youtube.com/watch?v=H5z-tpYS0Ng
-Mo
On Thu, 1 Nov 2012, Mo Morsi wrote:
Hrm, I was referring more to the repos necessary to bootstrap the cloud instance for the koji builder.
I see - I thought you were referring to package repos.
Which component imposes this limitation on koji, the hub or the builder?
the hub.
Would being able to build custom cloud images on the fly for different clouds assist with this?
No - where it is built has nothing to do with it.
-sv
// small wrapper script around deltacloud:
$ wget https://raw.github.com/movitto/mycloud/master/mycloud.rb
$ chmod +x mycloud.rb
// template describing kojihub cloud deployment
$ wget https://raw.github.com/aeolus-incubator/templates/master/fedora_infra/koji/f...
// template describing kojibuild cloud deployment
$ wget https://raw.github.com/aeolus-incubator/templates/master/fedora_infra/koji/f...
// edit kojihub deployment to contain openstack credentials
$ vim koji_f17.xml
// start up an instance on openstack w/ kojihub set up and ready to go
$ ./mycloud.rb koji_f17.xml
// grab public address from output, grab kojibuild ssl credentials from new instance
$ scp -i ~/os.key ec2-user@kojihub:/etc/pki/koji/kojiadmin.pem .
$ scp -i ~/os.key ec2-user@kojihub:/etc/pki/koji/koji_ca_cert.crt .
// edit kojibuild deployment to deploy to ec2 w/ correct koji credentials & hub address
$ vim koji_builder_f17.xml
// start up an instance on ec2 w/ kojibuild communicating w/ the hub
$ ./mycloud.rb koji_builder_f17.xml
// open up a web browser and navigate to http://kojihub/koji to use your Koji instance!
Mo,
Interesting!
You can orchestrate all of these steps across/between multiple systems using ansible: http://ansible.cc - I've been documenting spinning up and provisioning instances on my blog in the last week or so. You might take a look - it should solve the problem of the above being such a manual process, and it requires nothing other than ssh to be installed on the machine you're trying to configure/control.
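For example (the host group, inventory file, and module arguments below are purely illustrative), once the builder instances are up, ad-hoc ansible commands over plain ssh can push the pieces out and start them:

$ ansible builders -i ec2_hosts -u ec2-user --sudo -m yum -a "name=koji-builder state=installed"
$ ansible builders -i ec2_hosts -u ec2-user --sudo -m service -a "name=kojid state=started enabled=yes"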
The problems we have encountered in the fedora infrastructure with koji builders are:
1. having to bring them up and down while waiting for them to complete the build process (koji enable-host / disable-host -- see the example just below this list)
2. them having no way to recover a build w/o manual intervention if a builder were to crash or go dead during a build
3. the way the builders connect to the hub to get jobs rather than pushing out from the hub to the builders. Mainly this makes it a pain to deal with 1 and 2 above.
4. For completely arbitrary repositories the tagging process for koji seems pretty cumbersome, especially for transient scratch/chain builds.
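To make point 1 concrete, the per-builder dance is along these lines (hostname made up):

$ koji disable-host builder01.cloud.fedoraproject.org
// wait for any in-flight tasks to finish, tear down or respin the instance, then:
$ koji enable-host builder01.cloud.fedoraproject.org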
I've talked about some of these on the buildsys list (buildsys@lists.fedoraproject.org)
If you're interested in discussing this further drop by there.
Thanks, -sv
On 10/31/2012 01:07 PM, Seth Vidal wrote:
Mo,
Interesting!
You can orchestrate all of these steps across/between multiple systems using ansible: http://ansible.cc - I've been documenting spinning up and provisioning instances on my blog in the last week or so. You might take a look - it should solve the problem of the above being such a manual process, and it requires nothing other than ssh to be installed on the machine you're trying to configure/control.
Cool, thanks for the info Seth. Ansible looks interesting; it's a configuration orchestration component akin to Puppet / Chef, is it not? Does it do any provisioning itself?
The ec2_create utility on your blog seems to call out to euca2ools to do the actual provisioning on ec2, correct? You'd still want a component such as Deltacloud to abstract the commands across different cloud providers, would you not?
-Mo
On Thu, 1 Nov 2012, Mo Morsi wrote:
Cool, thanks for the info Seth. Ansible looks interesting; it's a configuration orchestration component akin to Puppet / Chef, is it not? Does it do any provisioning itself?
Ansible is more like this:
Combining puppet and chef in one item, then making it so you can run the entire tool w/o having to first set up the config mgmt system on the node, and without requiring any sort of central server.
So - ansible is those things, and it doesn't make you install a giant ruby blob on a system.
The ec2_create utility on your blog seems to call out to euca2ools to do the actual provisioning on ec2, correct? You'd still want a component such as Deltacloud to abstract the commands across different cloud providers, would you not?
I doubt it. It's easier to simply use the euca2ools w/ the ec2 api against openstack/cloudstack/euca/etc. Or, if need be, write an ostack_create to use nova.
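For example (the endpoint URL and image id below are illustrative), the same euca2ools invocation can be aimed at EC2 or at an openstack deployment exposing the ec2 api just by switching the endpoint:

$ export EC2_URL=http://nova.example.org:8773/services/Cloud
$ euca-run-instances -k mykey -t m1.small ami-00000001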
I'm not interested in supporting the whole world of clouds with this - we only have euca and openstack set up - nothing else.
-sv