Items for thought:
1. build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.
2. I talked to Jeremy some about this and I think if we do all rpmdb transactions from OUTSIDE of the chroot and only build in the chroot, then we should be able to safely juggle multiple rpmdb versions from host to chroot systems.
3. there's no reason not to develop a specialized script that uses the yum modules and can be run by something like mach-helper for making chroots reasonably correctly (see the sketch after this list).
4. we're going to run into problems with contention for the rpm transaction lock on the host system b/c rpm likes to lock the rpmdb on the host even when operating on the chroot. A queuing mechanism for access to that lock, so we know what else is left in the process, is not a bad idea.
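To make item 3 concrete, here is a rough, hypothetical sketch of what such a script could look like. For simplicity it shells out to yum's --installroot support rather than importing the yum modules directly; the config path and group name are made-up placeholders, not anything that exists today.

    import subprocess

    def make_chroot(rootpath, group, conf='/etc/buildsys/yum.conf'):
        # Populate a build chroot by driving yum's --installroot support.
        # 'group' would be one of our chroot install definitions, e.g. a
        # hypothetical 'chroot-base' group from a build-system comps.xml.
        subprocess.check_call(['yum', '-c', conf,
                               '--installroot=%s' % rootpath,
                               '-y', 'groupinstall', group])

Something like mach-helper would then just be the privileged wrapper that invokes this.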
From what I can think of, breaking the build system up into:
- something that watches cvs for things to be built
- something that makes/handles/cleans up the chroots
- something that spawns the builds
- something that deals with the results
Is it reasonable to focus on these as modules to be developed?
-sv
- something that watches cvs for things to be built
- something that makes/handles/cleans up the chroots
- something that spawns the builds
- something that deals with the results
Is it reasonable to focus on these as modules to be developed?
One more addendum - having the last 3 be a separate package that end users could interact with would be very useful, I suspect.
-sv
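Purely as an illustration of that decomposition, the four modules might stub out like this (all class and method names here are invented, not from any existing code):

    class CVSWatcher(object):
        # notices, or is explicitly told about, things to be built
        def pending_builds(self):
            raise NotImplementedError

    class ChrootManager(object):
        # makes, hands out and cleans up build chroots
        def acquire(self, target):
            raise NotImplementedError
        def release(self, root):
            raise NotImplementedError

    class Builder(object):
        # spawns rpmbuild inside a chroot
        def build(self, srpm, root):
            raise NotImplementedError

    class ResultHandler(object):
        # collects packages and logs, reports status
        def collect(self, job):
            raise NotImplementedError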
On Wed, 2005-03-02 at 17:41 -0500, seth vidal wrote:
- something that watches cvs for things to be built
- something that makes/handles/cleans up the chroots
- something that spawns the builds
- something that deals with the results
Is it reasonable to focus on these as modules to be developed?
One more addendum - having the last 3 be a separate package that end users could interact with would be very useful, I suspect.
Definitely the second and some part of the third. Having the other two in the set of "stuff we let you install and have your own personal copy of the buildsystem" may be overkill. All of it should be available, though; it's just a question of what we advertise as making sense for people to do as their setup.
Jeremy
On Wed, 2005-03-02 at 17:41 -0500, seth vidal wrote:
- something that watches cvs for things to be built
- something that makes/handles/cleans up the chroots
- something that spawns the builds
- something that deals with the results
Is it reasonable to focus on these as modules to be developed?
One more addendum - having the last 3 be a separate package that end users could interact with would be very useful, I suspect.
Definitely the second and some part of the third. Having the other two in the set of "stuff we let you install and have your own personal copy of the buildsystem" may be overkill. All of it should be available, though; it's just a question of what we advertise as making sense for people to do as their setup.
All of it is useful. We want people to be able to set up their own build engines running their favorite not-really-supported platform, including the option to drive it automagically from our repositories and to produce results web pages that look just like the canonical ones.
On Wed, 2005-03-02 at 17:38 -0500, seth vidal wrote:
- we're going to run into problems with contention for the rpm transaction lock on the host system b/c rpm likes to lock the rpmdb on the host even when operating on the chroot. A queuing mechanism for access to that lock, so we know what else is left in the process, is not a bad idea.
Frankly, we should probably consider this a bug and get it fixed. I _think_ it's actually fixed in rpm-4_4, so we should be able to backport it and fix this. Actually, it's apparently even fixed in 4.3.3 ("move global /var/lock/rpm/transaction to dbpath" in CHANGES).
From what I can think of, breaking the build system up into:
- something that watches cvs for things to be built
One thing that comes to my mind is that you probably don't want to be watching CVS. Having it be an explicit "request a build now" makes more sense (which can then be integrated as a makefile target eventually, etc). I just tend to prefer having "do a build" be an explicit action rather than a side effect.
- something that makes/handles/cleans up the chroots
Yes.
- something that spawns the builds
- something that deals with the results
These two are likely to be fairly related. Perhaps even the same thing.
Is it reasonable to focus on these as modules to be developed?
The two big things are probably the "handle chroots" piece and "spawn builds". Especially if we want to go the route of a new chroot for every build. So I'd mostly focus on those two first and I think the other stuff will mostly fall out on its own.
Jeremy
Jeremy Katz wrote :
- something that spawns the builds
- something that deals with the results
These two are likely to be fairly related. Perhaps even the same thing.
Yes and no: since we're probably going to want to support different archs coming from different build machines some day, the build spawning should be per build host, whereas dealing with the results will be partly per build host (the part that you consider fairly related) but also partly in a central location which gathers everything, right?
Other than that, I agree with Jeremy that builds should be explicitly requested and not automatic for every CVS commit, or even every commit matching certain criteria (i.e. a change in version and/or release).
Matthias
Yes and no: since we're probably going to want to support different archs coming from different build machines some day, the build spawning should be per build host, whereas dealing with the results will be partly per build host (the part that you consider fairly related) but also partly in a central location which gathers everything, right?
well from a queuing standpoint - submitting build reqs to a central system should be able to deal with the 'for which archs' and 'where do you go for those archs' questions.
Other than that, I agree with Jeremy that builds should be explicitly requested and not automatic for every CVS commit, or even every commit matching certain criteria (i.e. a change in version and/or release).
sure - I guess I was thinking of what gafton had mentioned before. Some way for someone to tag a release as 'buildmeplease' in cvs and have it just do it.
-sv
On Wed, 2 Mar 2005, Jeremy Katz wrote:
From what I can think of, breaking the build system up into:
- something that watches cvs for things to be built
One thing that comes to my mind is that you probably don't want to be watching CVS. Having it be an explicit "request a build now" makes more sense (which can then be integrated as a makefile target eventually, etc). I just tend to prefer having "do a build" be an explicit action rather than a side effect.
I agree, I would rather have a "cvs tag build" or "cvs tag build-test" or something like that. That will queue a build request and provide some sort of URL where one could watch the status.
The two big things are probably the "handle chroots" piece and "spawn builds". Especially if we want to go the route of a new chroot for every build. So I'd mostly focus on those two first and I think the other stuff will mostly fall out on its own.
But wait, there is more! Ok, so we have chroots, we're spawning builds; what do we do with the resulting packages? What is their path through the process?
So far we have:
A. Buildroot provisioning
- yum-based scriptlet
- users can run it themselves and create their own trees
- for speed reasons, can we assume that buildroots are generic (ie, have the devel stuff installed, but are not customized for the needs of any particular src.rpm build)?
B. Spawning builds
- assuming a queue of some sort of things that need building
- do we have a master controller for builds or do we let all buildhosts fight to empty out the build queue?
- once a buildroot is chosen:
  - we customize it according to the src.rpm's buildrequires
  - launch the "chroot ... rpmbuild --rebuild ..." job
  - stdout and stderr go to a log accessible online in real time?
  - extract the binary packages and drop them somewhere
- after the build is done:
  - dispose of the buildroot?
  - set up a new buildroot again (async?)
C. Package management
- we have a bunch of new packages built for a particular tree
- what is the qualification process?
- QA?
- pushing stuff out?
Does anybody else have any other big components we need to concentrate on?
Cristian
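To make the spawning step in B above concrete, here is a minimal, hypothetical sketch; the /builddir path is made up, and it assumes the src.rpm has already been copied into the root and its buildrequires installed:

    import os, subprocess

    def spawn_build(rootpath, srpm, logpath):
        log = open(logpath, 'w')
        # stdout and stderr both land in one log file, which a web
        # frontend could tail in (near) real time
        rc = subprocess.call(['chroot', rootpath, 'rpmbuild', '--rebuild',
                              '/builddir/%s' % os.path.basename(srpm)],
                             stdout=log, stderr=subprocess.STDOUT)
        log.close()
        return rc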
I agree, I would rather have a "cvs tag build" or "cvs tag build-test" or something like that. That will queue a build request and provide some sort of URL where one could watch the status.
Any reason this should be a weird cvs hook like this? Why not just "curl http://build-it.fedora.redhat.com/pkgname#cvstag" (which is of course done by "make build")?
- assuming a queue of some sort of things that need building
- do we have a master controller for builds or do we let all buildhosts fight to empty out the build queue?
MCR does something in between on this. There may be some wisdom to be had there from its experience with central queueing and driving disparate and sometimes flaky build iron. (Internally, contact testing@redhat.com for the hackers who know MCR.)
- stdout and stderr go to a log accessible online in real time?
Oh please yes. There is cruft around from tinderbox-like hacks to htmlify build logs and give good highlighting and easy navigation for finding the error messages, which is a lot quicker for developers than grovelling through plain logs like beehive users do today.
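As a rough sketch of the kind of htmlifying hack being described (the error pattern is a guess, not taken from any existing tool):

    import html, re

    ERROR_RE = re.compile(r'error:|\*\*\*|No such file', re.IGNORECASE)

    def htmlify_log(lines):
        out = ['<pre>']
        for line in lines:
            escaped = html.escape(line.rstrip())
            if ERROR_RE.search(line):
                # highlight likely error lines for quick navigation
                escaped = '<span class="err">%s</span>' % escaped
            out.append(escaped)
        out.append('</pre>')
        return '\n'.join(out)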
- after the build is done
- dispose of the buildroot?
In the case of failed builds, it would be a nice improvement to have the loser state sit around for a brief time so a developer can go investigate why the buildsystem barfed. Perhaps move it aside, and asynchronously nuke LRU buildroots triggered by free disk space checks. OTOH, with developers always able to do the chroot builds themselves first, perhaps this will not come up nearly so often as it does with beehive.
On Wed, 2 Mar 2005, Roland McGrath wrote:
- assuming a queue of some sort of things that need building
- do we have a master controller for builds or do we let all buildhosts fight to empty out the build queue?
You probably want a queue manager and scheduler to handle things like prioritization and failover.
-- Elliot
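A toy sketch of such a queue manager, just to illustrate prioritization, with failover left to the surrounding scheduler (all names invented):

    import heapq, itertools

    class BuildQueue(object):
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()  # tie-breaker: FIFO within a priority

        def submit(self, priority, pkg, arch):
            heapq.heappush(self._heap, (priority, next(self._seq), pkg, arch))

        def claim(self):
            # an idle build host calls this; if the host dies before
            # reporting back, the scheduler re-submits the job (failover)
            priority, _, pkg, arch = heapq.heappop(self._heap)
            return pkg, arch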
skvidal@phy.duke.edu (seth vidal) writes:
- we're going to run into problems with contention for the rpm transaction lock on the host system b/c rpm likes to lock the rpmdb on the host system even when operating on the chroot system.
Should not be a problem. Just create a new namespace, bind-mount the rpm database into both the host and the chroot system, and then execute rpm.
Enrico
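Roughly what Enrico is describing, sketched in Python; this assumes a kernel and libc with unshare(2), and the paths are only examples:

    import ctypes, os, subprocess

    CLONE_NEWNS = 0x00020000  # from <sched.h>
    libc = ctypes.CDLL('libc.so.6', use_errno=True)

    def bind_rpmdb_into(rootpath):
        # give this process a private mount namespace...
        if libc.unshare(CLONE_NEWNS) != 0:
            raise OSError(ctypes.get_errno(), 'unshare failed')
        # ...then bind-mount the host rpmdb over the chroot's rpmdb,
        # so both views share one database and one lock
        subprocess.check_call(['mount', '--bind', '/var/lib/rpm',
                               os.path.join(rootpath, 'var/lib/rpm')])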
Hi,
- build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.
Not sure what this would achieve? In mach, these three "target" names mean the following:
- minimal: a minimal set of packages that allows you to chroot into it and run bash
- base: the same set of packages, but with all packages needed to make the rpm db consistent added
- build: a bunch of additional packages that rpmbuild likes to have (patch, gcc, ...)
Not sure what (some magic link to comps.xml) would bring more.
- I talked to Jeremy some about this and I think if we do all rpmdb transactions from OUTSIDE of the chroot and only build in the chroot, then we should be able to safely juggle multiple rpmdb versions from host to chroot systems.
Yep - that's how mach 2 has always done it after long discussions with jbj. The alternative is to put specially compiled rpms in the root.
- there's no reason not to develop a specialized script that uses the yum modules and can be run by something like mach-helper for making chroots reasonably correctly.
What does "reasonably correctly" mean here ? I mean, is there anything wrong with the chroots that currently can be created with rpm -- root/apt-get .../yum --installroot=... ?
- we're going to run into problems with contention for the rpm transaction lock on the host system b/c rpm likes to lock the rpmdb on the host even when operating on the chroot. A queuing mechanism for access to that lock, so we know what else is left in the process, is not a bad idea.
Yeah, that'd be nice to get fixed. mach still has a global file lock for this reason.
Thomas
On Thu, 2005-03-03 at 09:36 +0100, Thomas Vander Stichele wrote:
- build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.
Not sure what this would achieve? In mach, these three "target" names mean the following:
- minimal: a minimal set of packages that allows you to chroot into it
and run bash
- base: the same set of packages, but with all packages needed to make
the rpm db consistent added
- build: a bunch of additional packages that rpmbuild likes to have
(patch, gcc, ...)
Not sure what (some magic link to comps.xml) would bring more.
The big thing you gain (imho) is the easy and obvious answer of "what do these targets mean", instead of having it in mach-specific config files somewhere.
Jeremy
Not sure what this would achieve? In mach, these three "target" names mean the following:
- minimal: a minimal set of packages that allows you to chroot into it
and run bash
- base: the same set of packages, but with all packages needed to make
the rpm db consistent added
- build: a bunch of additional packages that rpmbuild likes to have
(patch, gcc, ...)
Not sure what (some magic link to comps.xml) would bring more.
but instead of having to discern the list w/no relationship information from this:
    # Fedora Core Development
    packages['fedora-development-i386-core'] = {
        'dir':     'fedoracore-development-i386',
        'minimal': 'bash glibc yum python createrepo',
        'base':    'coreutils findutils openssh-server',
        'build':   'dev rpm-build make gcc tar gzip patch ' +
                   'unzip bzip2 diffutils cpio elfutils',
    }
we can use an xml format that various folks are already very familiar with.
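For illustration only, such a chroot definition in comps format might look like this (group name invented, package list abbreviated):

    <comps>
      <group>
        <id>build-minimal</id>
        <name>build-minimal</name>
        <uservisible>false</uservisible>
        <packagelist>
          <packagereq type="mandatory">bash</packagereq>
          <packagereq type="mandatory">glibc</packagereq>
          <packagereq type="mandatory">yum</packagereq>
        </packagelist>
      </group>
    </comps>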
- I talked to Jeremy some about this and I think if we do all rpmdb transactions from OUTSIDE of the chroot and only build in the chroot, then we should be able to safely juggle multiple rpmdb versions from host to chroot systems.
Yep - that's how mach 2 has always done it after long discussions with jbj. The alternative is to put specially compiled rpms in the root.
I'd also like to have no calls to the rpm cli binary in any buildroot system. We should never be making the buildroot with --nodeps or --force, so I don't see a reason to use rpm for erasures or additions.
- there's no reason not to develop a specialized script that uses the yum modules and can be run by something like mach-helper for making chroots reasonably correctly.
What does "reasonably correctly" mean here ? I mean, is there anything wrong with the chroots that currently can be created with rpm -- root/apt-get .../yum --installroot=... ?
Okay, so what do we get out of making the buildsystem capable of using yum/apt-get/rpm --aid/whatever for doing the installs?
What's the perk? If we're building this for fedora, why not just make a script that imports the yum modules and works out of the available infrastructure? Is there something that's needed in the yum modules to make this work?
-sv
Hi,
On Thu, 2005-03-03 at 23:37 -0500, seth vidal wrote:
Not sure what this would achieve? In mach, these three "target" names mean the following:
- minimal: a minimal set of packages that allows you to chroot into it
and run bash
- base: the same set of packages, but with all packages needed to make
the rpm db consistent added
- build: a bunch of additional packages that rpmbuild likes to have
(patch, gcc, ...)
Not sure what (some magic link to comps.xml) would bring more.
but instead of having to discern the list w/no relationship information from this:
well, this glosses over the fact that:
a) comps.xml doesn't have this concept of minimal
b) comps.xml IIRC has a completely different understanding of "base" than what I just said (the minimum self-consistent set of packages that give you bash)
c) there are distros mach is used for that do not have comps.xml files.
So, sure, I can use comps.xml. It's just that it doesn't give me exactly what I need in these cases.
Thomas
well, this glosses over the fact that a) comps.xml doesn't have this concept of minimal
Why not? Just make a new group and call it minimal.
b) comps.xml IIRC has a completely different understanding of "base" than what I just said (the minimum self-consistent set of packages that give you bash)
again - call it chroot-base - but the point is the same.
c) there are distros mach is used for that do not have comps.xml files.
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
-sv
skvidal@phy.duke.edu (seth vidal) writes:
c) there are distros mach is used for that do not have comps.xml files.
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
What would be the advantage of this? You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm.
Enrico
c) there are distros mach is used for that do not have comps.xml files.
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
What would be the advantage of this? You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm.
Right, I'm having trouble trying to figure out why we're bothering with support for other depsolvers at this time. We just need to build now.
-sv
skvidal@phy.duke.edu (seth vidal) writes:
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
What would be the advantage of this? You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm.
Right, I'm having trouble trying to figure out why we're bothering with support for other depsolvers at this time. We just need to build now.
If we want to build now, I do not understand why code for a new technology (comps.xml) should be added while existing technology (manual package lists) is already working...
Enrico
On Fri, 2005-03-04 at 16:52 +0100, Enrico Scholz wrote:
skvidal@phy.duke.edu (seth vidal) writes:
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
What would be the advantage of this? You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm.
Right, I'm having trouble trying to figure out why we're bothering with support for other depsolvers at this time. We just need to build now.
If we want to build now, I do not understand why code for a new technology (comps.xml) should be added while existing technology (manual package lists) is already working...
comps.xml isn't new technology. All the support is there. No waiting. Zero-day.
-sv
skvidal@phy.duke.edu (seth vidal) writes:
I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots.
What would be the advantage of this? You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm.
Right, I'm having trouble trying to figure out why we're bothering with support for other depsolvers at this time. We just need to build now.
If we want to build now, I do not understand why code for a new technology (comps.xml) should be added while existing technology (manual package lists) is already working...
comps.xml isn't new technology. All the support is there.
mach2 already has support for specifying the location of the comps.xml file? And the comps.xml files for the buildroots already exist?
Enrico
skvidal@phy.duke.edu (seth vidal) writes:
comps.xml isn't new technology. All the support is there.
mach2 already has support for specifying the location of the comps.xml file? And the comps.xml files for the buildroots already exist?
yep, in yum.
it just has to use groupinstall :)
Where can the location of the comps.xml files be specified? E.g. so that builds for FC3 use comps-A.xml while builds for devel use comps-B.xml? Does this really work out-of-the-box without adding code to mach2?
Enrico
On Fri, 2005-03-04 at 18:37 +0100, Enrico Scholz wrote:
skvidal@phy.duke.edu (seth vidal) writes:
comps.xml isn't new technology. All the support is there.
mach2 already has support for specifying the location of the comps.xml file? And the comps.xml files for the buildroots already exist?
yep, in yum.
it just has to use groupinstall :)
Where can the location of the comps.xml files be specified? E.g. so that builds for FC3 use comps-A.xml while builds for devel use comps-B.xml? Does this really work out-of-the-box without adding code to mach2?
in the repository metadata.
you can even have a repository that has ONLY comps information.
-sv
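For example (path hypothetical), the group data is attached when the repository metadata is generated:

    createrepo -g comps.xml /srv/build/repo/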
seth vidal wrote:
Items for thought:
- build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.
I have no objection to yum groupinstall, but in my opinion none of the current defined groups are suitable for a minimal buildroot. There have been objections in the past to this with the opinion that the "Development" tools group and -devel packages should be assumed to be installed in this minimum buildroot. However this is a bad assumption because the set of -devel packages has been arbitrary, and dependencies don't make sure that particular tools exist in the buildroot.
For this reason none of the existing groups are suitable for the most important goal of the minimum buildroot: reproducible binary payloads.
I believe the following requirements describe a minimal buildroot:
* Absolute minimum needed for rpmbuild to function.
* BuildRequires should describe explicitly what is needed beyond the minimum buildroot to build a reproducible binary payload.
* Must NOT include autoconf*, automake*, gettext* or libtool. While this seems counter-productive at first, it makes sense because sources theoretically shouldn't need them to build. And in cases where patches require them during rpmbuild, they often require an explicit version of auto*.
* EXCEPTIONS: Stuff like gcc or g++ are included because it would be silly to list them explicitly in every package. These exceptions should be VERY rare.
bash bzip2 coreutils cpio diffutils fedora-release gcc gcc-c++ gzip make patch perl python rpm-build redhat-rpm-config sed tar unzip
Something like this list describes a very well tested minimum buildroot. The dependencies pulled in by these packages form the minimum set necessary for rpmbuild to function.
We may also want to consider providing a "fake-build-provides" package in a buildroot repository that provides something like "kernel = 999", since stuff in the buildroot Requires kernel but it isn't actually needed to build stuff.
Warren Togami wtogami@redhat.com
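A rough sketch of what such a fake-build-provides package's spec might contain (entirely illustrative, not an actual package):

    Name:      fake-build-provides
    Version:   1
    Release:   1
    Summary:   Satisfies install-time deps that builds never actually use
    License:   GPL
    Group:     Development/Build Tools
    BuildArch: noarch
    Provides:  kernel = 999

    %description
    Empty package that pacifies install-time dependencies (e.g. on the
    kernel) inside buildroots.

    %files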
On Tue, 2005-03-08 at 14:38 -1000, Warren Togami wrote:
I have no objection to yum groupinstall, but in my opinion none of the current defined groups are suitable for a minimal buildroot. There have been objections in the past to this with the opinion that the "Development" tools group and -devel packages should be assumed to be installed in this minimum buildroot. However this is a bad assumption because the set of -devel packages has been arbitrary, and dependencies don't make sure that particular tools exist in the buildroot.
As seth has pointed out before, we don't have to use the existing comps file; we can create our own groups, but in comps format. That is the real question: adding comps format support to the build system.
I agree with the rest about the minimal build environment, with buildreqs covering everything else.
On Tue, 8 Mar 2005, Warren Togami wrote:
seth vidal wrote:
Items for thought:
- build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.
I have no objection to yum groupinstall, but in my opinion none of the current defined groups are suitable for a minimal buildroot. There have been objections in the past to this with the opinion that the "Development" tools group and -devel packages should be assumed to be installed in this minimum buildroot. However this is a bad assumption because the set of -devel packages has been arbitrary, and dependencies don't make sure that particular tools exist in the buildroot.
For this reason none of the existing groups are suitable for the most important goal of the minimum buildroot: reproducible binary payloads.
Minimal buildroot isn't necessary for reproducible builds, a *consistently* populated buildroot is. You'll get a consistent environment by dropping in Base + Devel groups with yum groupinstall even with the stock comps.xml.
An absolute bare-minimum buildroot is a nice bonus in a way, but by no means a requirement for the build system to be useful IMHO. Methinks concentrating on getting a *working* build system, now, is at this point far more important than playing "how minimal can you get" games. Just my 5 cents. :)
- Panu -
On Wed, 9 Mar 2005, Panu Matilainen wrote:
Minimal buildroot isn't necessary for reproducible builds, a *consistently* populated buildroot is. You'll get a consistent environment by dropping in Base + Devel groups with yum groupinstall even with the stock comps.xml.
Providing consistent buildroots actually works against reproducible builds in the long term, because of the effect those buildroots have on the way people choose to package things.
For maximum quality control, packages should not be affected by having an unrelated (non-BuildRequires and non-base) package installed in the buildroot. If package X is unrelated to the ongoing build of package Y, then package Y's build should not be affected by the absence OR presence of package X in the buildroot.
The root cause of the problem here is not really having consistent buildroots, but having improper packaging that doesn't account for all possible variables. One thing we have internally at Red Hat is a mass rebuild system that creates a buildroot with all packages installed, attempts rebuilds of all packages, and for the builds that succeed, it compares the resulting binary packages against the original ones to see if things like filelist or dependencies have changed. It'd be nice to get the equivalent of that for Fedora.
Best, -- Elliot
The root cause of the problem here is not really having consistent buildroots, but having improper packaging that doesn't account for all possible variables. One thing we have internally at Red Hat is a mass rebuild system that creates a buildroot with all packages installed, attempts rebuilds of all packages, and for the builds that succeed, it compares the resulting binary packages against the original ones to see if things like filelist or dependencies have changed. It'd be nice to get the equivalent of that for Fedora.
ALL packages installed becomes a bit more complex when you think about fedora extras. ALL could become several thousand packages.
rpm comparison scripts for file list and dependencies abound from the rhel rebuild projects. We can probably just snag one of those.
-sv
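In the same spirit as those scripts, a minimal file-list/dependency comparison using the rpm Python bindings might look like this (a sketch only; real rpmdiff tools check far more):

    import os, rpm

    def _header(path):
        ts = rpm.TransactionSet()
        ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)  # don't choke on unsigned test builds
        fd = os.open(path, os.O_RDONLY)
        try:
            return ts.hdrFromFdno(fd)
        finally:
            os.close(fd)

    def diff_rpms(oldpkg, newpkg):
        h_old, h_new = _header(oldpkg), _header(newpkg)
        for tag, label in ((rpm.RPMTAG_FILENAMES, 'file'),
                           (rpm.RPMTAG_REQUIRENAME, 'requires')):
            old, new = set(h_old[tag] or []), set(h_new[tag] or [])
            for item in sorted(new - old):
                print('+ %s %s' % (label, item))
            for item in sorted(old - new):
                print('- %s %s' % (label, item))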
On Tue, Mar 15, 2005 at 07:25:57PM -0500, Elliot Lee wrote:
On Tue, 15 Mar 2005, seth vidal wrote:
ALL packages installed becomes a bit more complex when you think about fedora extras. ALL could become several thousand packages.
A random sampling should be sufficient.
And it is enough to run this test only once per quarter as a special check; it is not needed that often anyway...
greetings,
Florian La Roche
On Tue, 15 Mar 2005, Elliot Lee wrote:
Minimal buildroot isn't necessary for reproducible builds, a *consistently* populated buildroot is. You'll get a consistent environment by dropping in Base + Devel groups with yum groupinstall even with the stock comps.xml.
Providing consistent buildroots actually works against reproducible builds in the long term, because of the effect those buildroots have on the way people choose to package things.
I agree... in theory...
For maximum quality control, packages should not be affected by having an unrelated (non-BuildRequires and non-base) package installed in the buildroot. If package X is unrelated to the ongoing build of package Y, then package Y's build should not be affected by the absence OR presence of package X in the buildroot.
In the ideal world, yes. In the practical world, a very large amount of software does ./configure-time autodetection of various libraries and other software which may or may not be present in a buildroot, with much of it conditionally enabled/disabled based on the presence or absence of the available libs/etc.
This autodetection is good for Joe Blow downloading something and compiling/installing it by hand into /usr/local, but it kind of works against rpm-based builds in the way of reproducibility.
This puts a large part of the reproducibility factor squarely in the hands of the package maintainer. In order to have a reasonably good chance of every rpm rebuilding exactly the same regardless of what deps are present or absent in the buildroot, all package maintainers need to become much more intimately involved with the rpms they maintain. This would require deeply inspecting all ./configure options with each release of the software, being more involved with the underlying projects in question, and very closely analyzing the output of ./configure to determine whether there are any changes from upstream version to version and from build to build.
While it could be argued "this is already the packager's responsibility", in reality it does not work well, and it isn't likely to ever work well as long as it is not automated in some fashion. Relying on humans to do all of this:
1) Puts a lot of extra burden on humans, who are already greatly overburdened.
2) Makes the human the single point of failure. Very bad idea. Not scalable. Humans make mistakes. Computers do not.
The most scalable systems are those which are as completely automated as possible, requiring little to no human intervention.
So my suggestion to those seeking a solution to this problem is to look at how it can be eliminated or reduced through software automation. rpmdiff is an example of creative use of automation. Perhaps someone can brainstorm an automation tool that could be plugged into rpm or beehive or mach, etc.
The root cause of the problem here is not really having consistent buildroots, but having improper packaging that doesn't account for all possible variables.
Yep.
One thing we have internally at Red Hat is a mass rebuild system that creates a buildroot with all packages installed, attempts rebuilds of all packages, and for the builds that succeed, it compares the resulting binary packages against the original ones to see if things like filelist or dependencies have changed. It'd be nice to get the equivalent of that for Fedora.
If someone were to develop a tool that compared consecutive ./configure runs and reported major differences, that'd be cool. I don't know how difficult that'd be though. I suspect if it were easy someone might have done it by now, but who knows. ;o)
HTH
If someone were to develop a tool that compared consecutive ./configure runs and reported major differences, that'd be cool. I don't know how difficult that'd be though. I suspect if it were easy someone might have done it by now, but who knows. ;o)
The log files might help with some of this. E.g. comparing changes between different archs, or across a glibc update, could sometimes help.
greetings,
Florian La Roche
mharris@redhat.com ("Mike A. Harris") writes:
One thing we have internally at Red Hat is a mass rebuild system that creates a buildroot with all packages installed, attempts rebuilds of all packages, and for the builds that succeed, it compares the resulting binary packages against the original ones to see if things like filelist or dependencies have changed. It'd be nice to get the equivalent of that for Fedora.
If someone were to develop a tool that compared consecutive ./configure runs and reported major differences,
That's not a problem... just run a diff across the 'config.status' files. But what about packages which do not use ./configure but have other kinds of build-time feature checks?
Comparing the resulting packages seems to be a more universal way of detecting missing BuildRequires:.
Enrico
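For the ./configure case, the diff Enrico suggests is nearly a one-liner around difflib (a sketch; where the config.status files are kept between builds is up to the build system):

    import difflib

    def configure_drift(old_status, new_status):
        # compare config.status from two consecutive builds and
        # report any drift in the detected features
        old = open(old_status).readlines()
        new = open(new_status).readlines()
        return ''.join(difflib.unified_diff(old, new,
                                            'previous', 'current'))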