Seth, is it possible to set up builders so that they will have very limited network connections? Probably just to copr-be or copr-fe?
I'm asking because currently you can do nearly everything on a builder. For example, I made evil.spec, which has:
%prep
wget http://www.googl.com -O index.html
cat index.html
This is executed without a problem: http://copr-be.cloud.fedoraproject.org/results/msuchy/copr/fedora-18-x86_64/...
Of course, instead of wget I can e.g. start my own ssh server or a botnet, and I will own that machine until our timeout expires.
On Koji we do not have such a problem, because the builders are configured to not allow any internet connections.
Right now we need an internet connection on the builders, because they need to download the src.rpm. But this can be fixed so that the user will not provide a URL to the src.rpm, but will upload it to cloud-fe instead. This will be safe, because we will store and treat the src.rpm just as a binary file: we only store it in some directory and never parse it.
The builder can then pick it up from copr-fe, either by downloading it from some HTTP export or via a read-only NFS share.
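A minimal sketch of this "store the src.rpm as a binary file and never parse it" idea (Python; all names here are invented for illustration - this is not actual copr code):

```python
import os
import shutil

def store_srpm(upload_stream, upload_filename, storage_dir):
    """Store an uploaded src.rpm as an opaque binary blob.

    The file is never parsed as an RPM - it is only written to disk
    so that a builder can later fetch it over HTTP or a read-only
    NFS share."""
    # Drop any path components the client sent; keep only the basename.
    safe_name = os.path.basename(upload_filename)
    if not safe_name.endswith(".src.rpm"):
        raise ValueError("only .src.rpm uploads are accepted")
    dest = os.path.join(storage_dir, safe_name)
    with open(dest, "wb") as out:
        shutil.copyfileobj(upload_stream, out)
    return dest
```

The point is that nothing here ever invokes rpm on the upload; it stays plain bytes on disk until a throwaway builder picks it up.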
Opinions?
Mirek
On Wed, 26 Jun 2013 16:43:39 +0200 Miroslav Suchy msuchy@redhat.com wrote:
Seth, is it possible to set up builders so that they will have very limited network connections? Probably just to copr-be or copr-fe?
I'm asking because currently you can do nearly everything on a builder. For example, I made evil.spec, which has:
%prep
wget http://www.googl.com -O index.html
cat index.html
This is executed without a problem: http://copr-be.cloud.fedoraproject.org/results/msuchy/copr/fedora-18-x86_64/...
Of course, instead of wget I can e.g. start my own ssh server or a botnet, and I will own that machine until our timeout expires.
On Koji we do not have such a problem, because the builders are configured to not allow any internet connections.
Right now we need an internet connection on the builders, because they need to download the src.rpm. But this can be fixed so that the user will not provide a URL to the src.rpm, but will upload it to cloud-fe instead. This will be safe, because we will store and treat the src.rpm just as a binary file: we only store it in some directory and never parse it.
The builder can then pick it up from copr-fe, either by downloading it from some HTTP export or via a read-only NFS share.
Opinions?
Mirek,
All of the above is on purpose and intentional.
Here's why:
1. remember that the user can add random repos to their mock configs in addition to the urls for their own srpms. Those repos can be from anywhere - not just from coprs. This is on purpose - so we can support a wide variety of systems and packages - w/ buildreqs from all over.
2. uploading a 400-1GB pkg to cloud-fe will murder it. Fetching the file and putting limits on the fetch are MUCH easier
3. the whole point of evil in the builder is that we don't care - the builders are destroyed once they are used and they do not contain ANY sensitive information. Furthermore, they are timed out if they are hanging out for too long.
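Point 2 ("putting limits on the fetch") can be as simple as a capped streaming download. A sketch in Python - the cap value and the function names are made up, not anything copr actually ships:

```python
from urllib.request import urlopen

MAX_FETCH_BYTES = 1024 * 1024 * 1024  # a made-up 1GB cap

def fetch_limited(stream, dest_path, limit=MAX_FETCH_BYTES, chunk=64 * 1024):
    """Copy from a stream (e.g. urlopen(url)) to dest_path, aborting
    as soon as more than `limit` bytes have been read."""
    total = 0
    with open(dest_path, "wb") as out:
        while True:
            data = stream.read(chunk)
            if not data:
                break
            total += len(data)
            if total > limit:
                raise IOError("fetch exceeded %d bytes, aborting" % limit)
            out.write(data)
    return total

# usage sketch: fetch_limited(urlopen(srpm_url), "/tmp/pkg.src.rpm")
```

Because the copy is chunked, the cap is enforced while the transfer is in flight, not after the whole file has already landed on disk.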
Remember coprs is not koji and koji is not coprs. Coprs is not a koji replacement. Coprs is for the space that Koji does not want to and should not occupy - The untrusted build.
Koji is about creating trusted builds from trusted sources and trusted contributors.
Coprs is about building pkgs from untrusted sources and potentially untrusted contributors. This is why we don't touch the rpm on any system AS an rpm. We only deal with it as files.
So if we were to implement your suggestions we would eliminate:
1. external repositories of any kind
2. pkgs above a certain size
3. pkgs requiring interesting network access to build.
That's not what we want, and it's not our scope or goal, either.
Does that help explain?
-sv
On 06/26/2013 04:53 PM, seth vidal wrote:
- remember that the user can add random repos to their mock configs in
addition to the urls for their own srpms. Those repos can be from anywhere - not just from coprs. This is on purpose - so we can support a wide variety of systems and packages - w/ buildreqs from all over.
Why do we have such a requirement? No build service (including the very open OBS) allows this. For example, OBS allows building for a very wide range of repositories: Arch, Debian, Suse, Fedora, RHEL, Mandriva, Ubuntu and many others. But they always have to be added by a build service administrator first.
- uploading a 400-1GB pkg to cloud-fe will murder it. Fetching the file
and putting limits on the fetch are MUCH easier
Why? We will have to work with big packages anyway. For example, the package 0ad-data has more than 1GB. I do not see a difference whether this file is downloaded by the builder or uploaded to cloud-fe. OK, there is a subtle difference: we will not need to store the src.rpm. But that is at most 1/2 of the stored data, and if you build for more targets (which most users will), then it is e.g. 1/8 of the data. So I do not see a problem here.
BTW: we will need storage with terabytes of free space for sure. Do we have some, or do I have to start looking around?
- the whole point of evil in the builder is that we don't care - the
builders are destroyed once they are used and they do not contain ANY sensitive information. Furthermore, they are timed out if they are hanging out for too long.
But the timeout is a few hours. And how many builders will we have? OBS has 400; we will start with a smaller number. I (as an attacker) would care less about the sensitive information (there is only a little of it, it is hard to get, and it requires a lot of work). I would rather welcome the possibility of getting a bunch of machines for a few hours for free. And once they time out, I can get them again in a few minutes. Ideal for some botnet or as a source of a DoS attack.
Remember coprs is not koji and koji is not coprs. Coprs is not a koji replacement. Coprs is for the space that Koji does not want to and should not occupy - The untrusted build.
Koji is about creating trusted builds from trusted sources and trusted contributors.
Coprs is about building pkgs from untrusted sources and potentially untrusted contributors. This is why we don't touch the rpm on any system AS an rpm. We only deal with it as files.
Therefore our whole environment should be more secure than Koji's, including the builders.
So if we were to implement your suggestions we would eliminate:
- external repositories of any kind
fine with me.
- pkgs above a certain size
Why? I see no problem with uploading a file of GB size.
- pkgs requiring interesting network access to build.
Do such packages exist? Can you give me an example? This is given as an example of bad packaging at every packaging workshop.
Does that help explain?
A little bit. Let's continue the discussion :)
Mirek
On Wed, 26 Jun 2013 21:25:05 +0200 Miroslav Suchy msuchy@redhat.com wrote:
On 06/26/2013 04:53 PM, seth vidal wrote:
- remember that the user can add random repos to their mock
configs in addition to the urls for their own srpms. Those repos can be from anywhere - not just from coprs. This is on purpose - so we can support a wide variety of systems and packages - w/ buildreqs from all over.
Why do we have such a requirement? No build service (including the very open OBS) allows this. For example, OBS allows building for a very wide range of repositories: Arch, Debian, Suse, Fedora, RHEL, Mandriva, Ubuntu and many others. But they always have to be added by a build service administrator first.
Which is the point. That really only lets you build from the blessed locations. It's an extra bit of red tape to cut through. Why would we want to restrict our users in that way?
- uploading a 400-1GB pkg to cloud-fe will murder it. Fetching the
file and putting limits on the fetch are MUCH easier
Why? We will have to work with big packages anyway. For example, the package 0ad-data has more than 1GB. I do not see a difference whether this file is downloaded by the builder or uploaded to cloud-fe. OK, there is a subtle difference: we will not need to store the src.rpm. But that is at most 1/2 of the stored data, and if you build for more targets (which most users will), then it is e.g. 1/8 of the data. So I do not see a problem here.
Have you uploaded a 1GB file to a website from a home network connection? It's AWFUL. Not to mention we'd need to store this on the -fe system, and the whole goal was to have a separation between -fe and -be so they didn't need to communicate in any way but, ultimately, REST
Here's how I would use this:
1. construct my spec file(s) locally or over ssh.
2. build srpms on a system <out_in_the_world>
3. put srpms in a web-accessible location
4. paste the srpm urls into the box
5. wait for builds
6. win
BTW: we will need storage with terabytes of free space for sure. Do we have some, or do I have to start looking around?
Our existing private cloud infrastructure has some space - but not infinite space. The answer I was given was - if this is useful we'll be able to manifest space for it by adding space onto the private cloud and more importantly if it is very popular then running out of space is a good problem to have.
- the whole point of evil in the builder is that we don't care -
the builders are destroyed once they are used and they do not contain ANY sensitive information. Furthermore, they are timed out if they are hanging out for too long.
But the timeout is a few hours. And how many builders will we have? OBS has 400; we will start with a smaller number.
I have no idea what OBS has to do with this.
Could you explain what you're talking about here?
I (as an attacker) would care less about the sensitive information (there is only a little of it, it is hard to get, and it requires a lot of work). I would rather welcome the possibility of getting a bunch of machines for a few hours for free. And once they time out, I can get them again in a few minutes. Ideal for some botnet or as a source of a DoS attack.
Your concern here is someone using these systems as a botnet? I think there are an array of ways to forestall such a thing from happening w/o restricting all access out from the system.
Off the top of my head I can think of a couple of trivial ways to verify that the places we're pulling from are repos and not random websites - whitelisting them through.
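One such trivial check could be probing whether a URL actually serves yum repo metadata - a real repository has to expose repodata/repomd.xml under its base URL, while an arbitrary website will not. A sketch of the heuristic (my own, not anything copr implements):

```python
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

def looks_like_yum_repo(base_url, timeout=10):
    """Heuristic: a yum/dnf repository must serve repodata/repomd.xml
    under its base URL; an arbitrary website will not."""
    if urlparse(base_url).scheme not in ("http", "https"):
        return False  # refuse anything but plain web URLs
    if not base_url.endswith("/"):
        base_url += "/"
    probe = urljoin(base_url, "repodata/repomd.xml")
    try:
        with urlopen(probe, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False
```

This only filters out the obviously-not-a-repo cases, of course; a determined attacker can serve fake metadata, so it is a sanity check rather than a security boundary.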
What's the REAL risk of this as a botnet? I understand scary theoretical risk. At the moment the plan was to open up use of this only to people who are contributors to fedora. The same level of access required to get access to fedorapeople.org and a fedora alias. That set of users is far more narrow.
Also - why are you so adamant about this? Maybe I'm misunderstanding your tone.
If we want to change what the builders are allowed access to it's not terribly difficult. We just change the security group the instance is created in to reflect the set of hosts we want it to be able to access. It's.... well... trivial.
Remember coprs is not koji and koji is not coprs. Coprs is not a koji replacement. Coprs is for the space that Koji does not want to and should not occupy - The untrusted build.
Koji is about creating trusted builds from trusted sources and trusted contributors.
Coprs is about building pkgs from untrusted sources and potentially untrusted contributors. This is why we don't touch the rpm on any system AS an rpm. We only deal with it as files.
Therefore our whole environment should be more secure than Koji's, including the builders.
They are MORE secure. We are not threatened by the builders. The builders are disposable. If you're concerned that these systems would be dangerous to others, that's not an entirely unreasonable concern. However, I don't think that your 'ALL CLOSED' mechanism is a good way to address that.
So if we were to implement your suggestions we would eliminate:
- external repositories of any kind
fine with me.
- pkgs above a certain size
Why? I see no problem with uploading a file of GB size.
- pkgs requiring interesting network access to build.
Do such packages exist? Can you give me an example? This is given as an example of bad packaging at every packaging workshop.
The whole point of this is to allow steaming pile of crap packages to be built, if people so choose.
COPRs exists b/c the packaging policies of various bodies are too strict and are a barrier to entry. The Powers-That-Be wanted a place where people could build whatever package they wanted to build and not be hampered by bundling or bad build processes or anything.
As to your question of an example, the most recent one I encountered was a perl module that wanted to talk out to the net to test before it was built. In Fedora the test was disabled, iirc. I'm sure members of the Fedora Packaging Committee could comment on some of the nutbar things they've seen.
-sv
On 06/26/2013 10:03 PM, seth vidal wrote:
Which is the point. That really only lets you build from the blessed locations. It's an extra bit of red tape to cut through. Why would we want to restrict our users in that way?
If you (and others) really want this feature, what about this: you provide a URL to that repo, it will be synced to copr, and then the builder will use the synced repo?
Have you uploaded a 1GB file to a website from a home network connection? It's AWFUL. Not to mention we'd need to store this on the
Yes. I have uploaded even 100 GB.
-fe system and the whole goal was to have a separation between -fe and -be so they didn't need to communicate in any way but, ultimately, REST
Why is there such a requirement?
Here's how I would use this:
- construct my spec file(s) locally or over ssh.
- build srpms on a system <out_in_the_world>
- put srpms in a webaccessible location
- paste the srpm urls into the box
- wait for builds
- win
On the other hand, I always build srpms on a local system (which is usually behind NAT or a restrictive firewall). So *I* prefer upload rather than scp-ing somewhere and pasting a URL. Yes, uploading over http has a little overhead, but it is just multiplication by some constant, so not interesting from a complexity POV.
Our existing private cloud infrastructure has some space - but not infinite space. The answer I was given was - if this is useful we'll be
Can we know the order of "some space"? Is it MB, GB, TB or PB?
I have no idea what OBS has to do with this.
Could you explain what you're talking about here?
Because it has the same purpose, the same goals, the same requirements. And it has been running for some time, so it can give us an idea of where we can be one or two years from now.
Off the top of my head I can think of a couple of trivial ways to verify that the places we're pulling from are repos and not random websites - whitelisting them through.
You mean on the iptables level? I would not call that trivial.
Also - why are you so adamant about this? Maybe I'm misunderstanding your tone.
Adamant? Nope. I'm just playing devil's advocate. I'm trying to look at COPR from different angles and I'm thinking out loud. I would rather be prepared for a scenario that will never happen than experience an issue for which we are not prepared.
The whole point of this is to allow steaming pile of crap packages to be built, if people so choose.
COPRs exists b/c the packaging policies of various bodies are too strict and are a barrier to entry. The Powers-That-Be wanted a place where people could build whatever package they wanted to build and not be hampered by bundling or bad build processes or anything.
Really? That differs from my expectations. I do not want to encourage people to build crap (by allowing them to). Maybe it is time to recap what the primary target audiences of Copr are. At least how I see it, because I have never seen that written down or discussed:
1) various upstream projects (e.g. OpenShift, Katello, etc. - but even small one-man projects), which want to build their nightlies in a reproducible manner.
2) projects which consist of a lot of packages, for which it would take a long time to go through the Package Review process (either because it is simply too many packages or because they are e.g. bundling libraries and it takes some time to fix the code to follow the Fedora guidelines), and which want to offer their releases right now.
3) projects which want to offer releases for a different platform than Fedora/EPEL, e.g. Suse; but I would count here even Software Collections and secondary architectures.
4) repos which should be private for some reason (embargoed builds which should be tested)
Do I perceive it correctly? Feel free to correct me or add what I'm missing.
On Thu, Jun 27, 2013 at 10:58:09AM +0200, Miroslav Suchý wrote:
The whole point of this is to allow steaming pile of crap packages to be built, if people so choose.
COPRs exists b/c the packaging policies of various bodies are too strict and are a barrier to entry. The Powers-That-Be wanted a place where people could build whatever package they wanted to build and not be hampered by bundling or bad build processes or anything.
Really? That differs from my expectations. I do not want to encourage people to build crap (by allowing them to). Maybe it is time to recap what the primary target audiences of Copr are. At least how I see it, because I have never seen that written down or discussed:
- various upstream projects (e.g. OpenShift, Katello, etc. - but even small one-man projects), which want to build their nightlies in a reproducible manner.
+1
- projects which consist of a lot of packages and it would take a long time for them to go through the Package Review process (either because it is simply too many packages or because they are e.g. bundling libraries and it takes some time to fix the code to follow the Fedora guidelines) and they want to offer their releases right now.
+1 -- note that this is a subset of the noted "steaming pile of crap packages" :-)
- Projects which want to offer releases for a different platform than Fedora/EPEL, e.g. Suse; but I would count here even Software Collections and secondary architectures.
-1 Software Collections perhaps, but the other uses are out of scope. Secondary arches, for instance, require copr-be hardware for the arch to be available.
- Repos which should be private for some reasons (embargoed builds
which should be tested)
+0 -- I think this is out of scope but I suppose the scope could be expanded to encompass it. I'm a little leery of it though.... Part of what makes the legal ramifications of copr work is that we're crowdsourcing the monitoring of what's being hosted in coprs. Having private repos eliminates that check.
Do I perceive it correctly? Feel free to correct me or add what I'm missing.
Just my thoughts on what the scope is.
-Toshio
On Thu, 27 Jun 2013 10:17:00 -0700 Toshio Kuratomi a.badger@gmail.com wrote:
- Projects which want to offer releases for a different platform than Fedora/EPEL, e.g. Suse; but I would count here even Software Collections and secondary architectures.
-1 Software Collections perhaps, but the other uses are out of scope. Secondary arches, for instance, require copr-be hardware for the arch to be available.
Which, in the case of arm, rumor has it - we will have access to (in one way or another).
- Repos which should be private for some reasons (embargoed builds
which should be tested)
+0 -- I think this is out of scope but I suppose the scope could be expanded to encompass it. I'm a little leery of it though.... Part of what makes the legal ramifications of copr work is that we're crowdsourcing the monitoring of what's being hosted in coprs. Having private repos eliminates that check.
Private repos have been specifically requested by various powers that be.
The gist of the private repos is to allow someone to build something and put a password on the repo to restrict access to it. That's really all. So think of it less as a 'private' repo and more as a 'not public' repo.
I can definitely see the advantage of being able to build something in a copr with a password. Email the password out to a mailing list of people and tell them to go test it. Not for purposes of keeping it private but to keep it from being stumbled upon, either b/c it is not ready for public consumption or b/c you are worried it might eat babies.
Otherwise I agree with the scopes you outlined.
-sv
On Thu, 27 Jun 2013 10:58:09 +0200 Miroslav Suchý msuchy@redhat.com wrote:
On 06/26/2013 10:03 PM, seth vidal wrote:
Which is the point. That really only lets you build from the blessed locations. It's an extra bit of red tape to cut through. Why would we want to restrict our users in that way?
If you (and others) really want this feature, what about this: you provide a URL to that repo, it will be synced to copr, and then the builder will use the synced repo?
As Rex pointed out - the point of this is to lower barriers - not put more up.
Have you uploaded a 1GB file to a website from a home network connection? It's AWFUL. Not to mention we'd need to store this on the
Yes. I have uploaded even 100 GB.
I'm glad you have such good network access. I do not. A lot of the world does not.
If you want to add functionality to upload files I won't block it - but I do not want that at the cost of removing remote file downloads. If you lobby to remove remote file downloads on builders you're going to be creating a lot of problems for yourself.
Please don't make us start having to get approval on individual patches.
-fe system and the whole goal was to have a separation between -fe and -be so they didn't need to communicate in any way but, ultimately, REST
Why is there such requirement?
B/c it makes sense - b/c it cleanly separates the FE and BE?
Here's how I would use this:
- construct my spec file(s) locally or over ssh.
- build srpms on a system <out_in_the_world>
- put srpms in a webaccessible location
- paste the srpm urls into the box
- wait for builds
- win
On the other hand, I always build srpms on a local system (which is usually behind NAT or a restrictive firewall). So *I* prefer upload rather than scp-ing somewhere and pasting a URL. Yes, uploading over http has a little overhead, but it is just multiplication by some constant, so not interesting from a complexity POV.
And yet it means we have to be able to house and cope with those on the frontend and write them out. I'd really rather they never be there, specifically for the security and integrity of our own systems.
Our existing private cloud infrastructure has some space - but not infinite space. The answer I was given was - if this is useful we'll be
Can we know the order of "some space"? Is it MB, GB, TB or PB?
We have about 400GB allocated right now and we could probably add another 600GB before we need to look for more space for cinder volume servers. So let's call it about 1TB at the moment.
Off the top of my head I can think of a couple of trivial ways to verify that the places we're pulling from are repos and not random websites - whitelisting them through.
You mean on the iptables level? I would not call that trivial.
Actually I mean on the private cloud security groups level. Iptables on the builder would be meaningless - anyone who can install a package into the chroot is effectively root. rpm pkgs are installed as root and any root user can walk out of a chroot - counting on iptables on the buildsystem is unsafe - we'd need to do it with security groups in the cloud system.
Adamant? Nope. I'm just playing devil's advocate.
Please don't. You'd be much more help just by helping rather than making me and others defend decisions and plans we made before.
I'm trying to look at COPR from different angles and I'm thinking out loud. I would rather be prepared for a scenario that will never happen than experience an issue for which we are not prepared.
I'd rather we work on the specific items that are left to do, get this available to all fedora packagers and adapt as we need to.
Perfect is the enemy of the good.
Really? That differs from my expectations. I do not want to encourage people to build crap (by allowing them to). Maybe it is time to recap what the primary target audiences of Copr are. At least how I see it, because I have never seen that written down or discussed:
COPR IS FOR ANY PACKAGE NO MATTER THE QUALITY.
I put that in caps to make sure it was not missed.
It is absolutely mandatory that we are not imposing packaging policies or rules on anything that goes in coprs.
Exceptions to this rule include:
1. packages which cannot build (by default of course)
2. packages which are not legal to be built, ie: don't have a valid/acceptable license tag.
otherwise ANYTHING goes.
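Exception 2 could be enforced with nothing more than a tag check against an allowlist. A sketch in Python - the acceptable-license set below is invented for illustration; the real list would come from Fedora's licensing guidance, not from this snippet:

```python
# A made-up subset for illustration; the real acceptable list would
# come from Fedora's licensing guidelines, not from this sketch.
ACCEPTABLE_LICENSES = {"GPLv2", "GPLv2+", "GPLv3", "LGPLv2+", "MIT", "BSD", "ASL 2.0"}

def license_tag_ok(spec_text):
    """Return True if the spec's License: tag only names acceptable
    licenses; specs without a License: tag are rejected outright."""
    for line in spec_text.splitlines():
        if line.lower().startswith("license:"):
            tag = line.split(":", 1)[1].strip()
            # crude handling of compound tags like "MIT and GPLv2+"
            parts = tag.replace(" and ", "\n").replace(" or ", "\n").splitlines()
            return all(p.strip() in ACCEPTABLE_LICENSES for p in parts)
    return False
```

Real compound license expressions can nest with parentheses, which this crude split does not handle - but it shows how small the "is it legal to build" gate can be compared to full packaging review.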
- various upstream projects (e.g. OpenShift, Katello, etc. - but even small one-man projects), which want to build their nightlies in a reproducible manner.
Sure.
- projects which consist of a lot of packages and it would take a long time for them to go through the Package Review process (either because it is simply too many packages or because they are e.g. bundling libraries and it takes some time to fix the code to follow the Fedora guidelines) and they want to offer their releases right now.
Sure.
- Projects which want to offer releases for a different platform than Fedora/EPEL, e.g. Suse; but I would count here even Software Collections and secondary architectures.
Sure - provided we have it.
- Repos which should be private for some reasons (embargoed builds
which should be tested)
sure.
Do I perceive it correctly? Feel free to correct me or add what I'm missing.
Okay
5. Crap.
Let a thousand flowers bloom and we'll pick the best ones later. The crap gets mowed under for compost.
-sv
----- Original Message -----
On Thu, 27 Jun 2013 10:58:09 +0200 Miroslav Suchý msuchy@redhat.com wrote:
On 06/26/2013 10:03 PM, seth vidal wrote:
Which is the point. That really only lets you build from the blessed locations. It's an extra bit of red tape to cut through. Why would we want to restrict our users in that way?
If you (and others) really want this feature, what about this: you provide a URL to that repo, it will be synced to copr, and then the builder will use the synced repo?
As Rex pointed out - the point of this is to lower barriers - not put more up.
IMO having the ability to get build deps from outside repos is very important for Copr. How we want to achieve that is a different question. Syncing a repo may be very expensive - think of a scenario with a 100G repo, out of which Copr builds use only 1M of packages (a corner case, yes, but still...). Generally, I think both approaches have their downsides, but I'm currently more convinced that letting builders download the packages is the way to go.
Have you uploaded a 1GB file to a website from a home network connection? It's AWFUL. Not to mention we'd need to store this on the
Yes. I have uploaded even 100 GB.
I'm glad you have such good network access. I do not. A lot of the world does not.
If you want to add functionality to upload files I won't block it - but I do not want that at the cost of removing remote file downloads. If you lobby to remove remote file downloads on builders you're going to be creating a lot of problems for yourself.
Please don't make us start having to get approval on individual patches.
So IMHO this is more about separating the building process into multiple steps: making the SRPM available -> providing the SRPM to the backend -> building the SRPM. And by "making the SRPM available" I mean in any way possible - upload it somewhere and just point to it, or upload it to the frontend, which will then place it somewhere where the backend can pick it up. We can do both cases easily.
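The "we can do both cases easily" point can be sketched as a single entry step that normalizes both submission styles into one form (Python; all names are illustrative only, not actual copr code):

```python
import os
import shutil

def make_srpm_available(src, storage_dir):
    """Normalize both submission styles into one form the backend
    understands: a URL passes through untouched (the builder will
    download it), while a local file path is staged into the
    frontend's storage directory."""
    if src.startswith(("http://", "https://")):
        return ("url", src)  # builder fetches it itself
    dest = os.path.join(storage_dir, os.path.basename(src))
    shutil.copy(src, dest)   # uploaded file: stage it for the backend
    return ("upload", dest)
```

Everything downstream (dispatch to builders, the mock build itself) then only sees "an SRPM location" and does not care which path it arrived by.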
-fe system and the whole goal was to have a separation between -fe and -be so they didn't need to communicate in any way but, ultimately, REST
Why is there such requirement?
B/c it makes sense - b/c it cleanly separates the FE and BE?
I think that separating the functionality into two communicating entities will pay off in the long run. It allows us to swap the parts without changing the way the other parts work.
Here's how I would use this:
- construct my spec file(s) locally or over ssh.
- build srpms on a system <out_in_the_world>
- put srpms in a webaccessible location
- paste the srpm urls into the box
- wait for builds
- win
On the other hand, I always build srpms on a local system (which is usually behind NAT or a restrictive firewall). So *I* prefer upload rather than scp-ing somewhere and pasting a URL. Yes, uploading over http has a little overhead, but it is just multiplication by some constant, so not interesting from a complexity POV.
And yet it means we have to be able to house and cope with those on the frontend and write them out. I'd really rather they never be there, specifically for the security and integrity of our own systems.
Our existing private cloud infrastructure has some space - but not infinite space. The answer I was given was - if this is useful we'll be
Can we know the order of "some space"? Is it MB, GB, TB or PB?
We have about 400GB allocated right now and we could probably add another 600GB before we need to look for more space for cinder volume servers. So let's call it about 1TB at the moment.
Off the top of my head I can think of a couple of trivial ways to verify that the places we're pulling from are repos and not random websites - whitelisting them through.
You mean on the iptables level? I would not call that trivial.
Actually I mean on the private cloud security groups level. Iptables on the builder would be meaningless - anyone who can install a package into the chroot is effectively root. rpm pkgs are installed as root and any root user can walk out of a chroot - counting on iptables on the buildsystem is unsafe - we'd need to do it with security groups in the cloud system.
Adamant? Nope. I'm just playing devil's advocate.
Please don't. You'd be much more help just by helping rather than making me and others defend decisions and plans we made before.
I'm trying to look at COPR from different angles and I'm thinking out loud. I would rather be prepared for a scenario that will never happen than experience an issue for which we are not prepared.
I'd rather we work on the specific items that are left to do, get this available to all fedora packagers and adapt as we need to.
Perfect is the enemy of the good.
Really? That differs from my expectations. I do not want to encourage people to build crap (by allowing them to). Maybe it is time to recap what the primary target audiences of Copr are. At least how I see it, because I have never seen that written down or discussed:
COPR IS FOR ANY PACKAGE NO MATTER THE QUALITY.
I put that in caps to make sure it was not missed.
It is absolutely mandatory that we are not imposing packaging policies or rules on anything that goes in coprs.
Exceptions to this rule include:
- packages which cannot build (by default of course)
- packages which are not legal to be built, ie: don't have a
valid/acceptable license tag.
otherwise ANYTHING goes.
+1. If we try to enforce any sort of packaging guidelines, we will end up where Fedora currently is with all the guidelines and stuff (which is what we don't want). Let's keep Copr free; if users want to build crap, then let them.
- various upstream projects (e.g. OpenShift, Katello, etc. - but even small one-man projects), which want to build their nightlies in a reproducible manner.
Sure.
- projects which consist of a lot of packages and it would take a long time for them to go through the Package Review process (either because it is simply too many packages or because they are e.g. bundling libraries and it takes some time to fix the code to follow the Fedora guidelines) and they want to offer their releases right now.
Sure.
- Projects which want to offer releases for a different platform than Fedora/EPEL, e.g. Suse; but I would count here even Software Collections and secondary architectures.
Sure - provided we have it.
- Repos which should be private for some reasons (embargoed builds
which should be tested)
sure.
Do I perceive it correctly? Feel free to correct me or add what I'm missing.
Okay
- Crap.
Agreed all 5.
Let a thousand flowers bloom and we'll pick the best ones later. The crap gets mowed under for compost.
-sv
On 07/02/2013 05:53 AM, seth vidal wrote:
Adamant? Nope. I'm just playing devil's advocate.
Please don't. You'd be much more help just by helping rather than making me and others defend decisions and plans we made before.
The problem is that those decisions (and their reasoning) were not written down, so I (and others) do not know about them; hence I'm asking.
On Mon, 1 Jul 2013 23:53:30 -0400 seth vidal skvidal@fedoraproject.org wrote:
Actually I mean on the private cloud security groups level. Iptables on the builder would be meaningless - anyone who can install a package into the chroot is effectively root. rpm pkgs are installed as root and any root user can walk out of a chroot - counting on iptables on the buildsystem is unsafe - we'd need to do it with security groups in the cloud system.
I wanted to add something here - we can easily restrict external access from these boxes to port 80 or 443 on any system anywhere, and nothing else.
I don't think that's an unreasonable limitation on users and it does help prevent someone from leaping off overly much from one of those systems.
-sv
copr-devel@lists.fedorahosted.org