Trying to make this idea a little more concrete, here are two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc., particularly from a QA and rel-eng point of view. Neither of these takes modularity into account in any way; it's "how we could do this with our current distro-building process".
Option 1: Big batched update
1. Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
2. At the beginning of October, stop pushing non-security updates from updates-testing to updates
3. Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
4. Mid-October, require freeze exceptions even for getting into updates-testing.
5. Test all of that together in Some Handwavy Way for serious problems and regressions.
6. Once all good, push from updates-testing to updates at end of October or beginning of November.
Option 2: Branching!
1. Release F26 according to schedule.
2. July/August: branch F26.1 from F26 (not rawhide)
3. Updates to F26 also go into F26.1 (magic happens here?)
4. No Alpha, but do "Beta" freeze and validation as normal for release.
5. And same for F26.1 final
6. And sometime in October/November, release that (but without big press push).
7. GNOME Software presents F26.1 as upgrade option
8. F26 continues in parallel through December
9. In January, update added to F26 which activates the F26.1 repo.
10. And also in January updates stop going to F26.
Some of this idea, by the way, is reminiscent of Spot's suggestions at FUDCon Lawrence in 2013. This is not a complete coincidence - I always liked those ideas!
Option 2 sounds really nice! For Option 1, I do not like the freeze for an already released version. This makes small (non-security) fixes more complicated due to required freeze exceptions, etc.
On 12/08/2016 03:17 PM, Matthew Miller wrote:
Trying to make this idea a little more concrete, here are two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc., particularly from a QA and rel-eng point of view. Neither of these takes modularity into account in any way; it's "how we could do this with our current distro-building process".
Option 1: Big batched update
Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
At the beginning of October, stop pushing non-security updates from updates-testing to updates
Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
Mid-October, require freeze exceptions even for getting into updates-testing.
Test all of that together in Some Handwavy Way for serious problems and regressions.
Once all good, push from updates-testing to updates at end of October or beginning of November.
Option 2: Branching!
Release F26 according to schedule.
July/August: branch F26.1 from F26 (not rawhide)
Updates to F26 also go into F26.1 (magic happens here?)
No Alpha, but do "Beta" freeze and validation as normal for release.
And same for F26.1 final
And sometime in October/November, release that (but without big press push).
GNOME Software presents F26.1 as upgrade option
F26 continues in parallel through December
In January, update added to F26 which activates the F26.1 repo.
And also in January updates stop going to F26.
Some of this idea, by the way, is reminiscent of Spot's suggestions at FUDCon Lawrence in 2013. This is not a complete coincidence - I always liked those ideas!
On 12/08/2016 03:17 PM, Matthew Miller wrote:
Trying to make this idea a little more concrete, here are two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc., particularly from a QA and rel-eng point of view. Neither of these takes modularity into account in any way; it's "how we could do this with our current distro-building process".
With all due respect, Matthew, I don't like any of these proposals and feel both are absurd and non-helpful.
Is it your or Red Hat's plan to kill Fedora?
Ralf
On Thu, Dec 08, 2016 at 03:51:42PM +0100, Ralf Corsepius wrote:
With all due respect, Matthew, I don't like any of these proposals and feel both are absurd and non-helpful.
Okay.
Is it your or Red Hat's plan to kill Fedora?
It's my plan to explore different ideas to continue to make Fedora more successful as measured by user and contributor growth, contributor return on effort, and fulfillment of our mission. If you define that as "killing Fedora", then yes!
On 12/08/2016 11:10 AM, Matthew Miller wrote:
It's my plan to explore different ideas to continue to make Fedora more successful as measured by user and contributor growth, contributor return on effort, and fulfillment of our mission.
Your stated goals are quite high level, so I would like to step back and ask some broad questions before discussing fairly detailed technical choices between your options. I do realize that the 'rolling vs. point release' discussion can't be rehashed in the abstract, but I have always had a hard time separating the technical, psychological, marketing, and organizational arguments in that debate. I think you as the leadership should be thinking it through more clearly than I'm able to.
Specifically, I think that for someone who already has a working Fedora installation, the point releases are a distinction without much difference: I think the users want a smoothly running system being continuously upgraded. They don't care if such an update sometimes takes longer and the changes are deeper than usual. If all goes well, they should always be on a usable system. As a 'reductio ad absurdum' exercise, far out in the future, why should anyone be excited by the arrival of Fedora 139? We're treating the releases as precious pets, but they are really doomed to become cattle :)
By the way, that's partly why I thought your historical plot was misleading. On one of my systems, I was one of the folks responsible for your F24 'unfulfilled potential' effect because I upgraded directly from F23 to F25. I did not have anything against F24---life happened, and I just simply didn't upgrade until I felt the EOL hammer hanging over me. I think that to really look at Fedora penetration and momentum, we'd have to only count new installs, or at least not count upgrades within the EOL window. I don't think we have data to do that.
On the other hand, point releases are useful for new users because they provide the 'new product' message that is a better draw than explaining why now is the right time to jump on a moving train. It's mostly psychology and marketing.
Point releases are of course essential organizational focus points for internal processes such as testing and QA, and they provide a framework to discuss new features and deep changes. I would argue that this is an internal consideration, except that it also might provide a useful marketing message.
So, my TL;DR message is, think carefully what aspects are important (technical? organizational? marketing?), what constituencies are involved in each, what changes are desirable, how to measure their effect, and then come up with processes to effect those changes.
On Thu, Dec 8, 2016 at 12:40 PM, Przemek Klosowski przemek.klosowski@nist.gov wrote:
So, my TL;DR message is, think carefully what aspects are important (technical? organizational? marketing?), what constituencies are involved in each, what changes are desirable, how to measure their effect, and then come up with processes to effect those changes.
I'd like to build on this a little bit. I'm concerned that we've gone straight to discussing the "how" without a clear picture of the "why". What would we want to accomplish with a change in the release schedule? mattdm said:
So, first, putting together a release is a lot of work. If we're stepping on the toes of the previous releases, are we wasting some of that work?
Second, from a press/PR point of view, I think we get less total press from having twice-a-year releases than we would from just having one big one. When it's so frequent, it doesn't feel like news.
The first point is a good question, but what if the answer is "no, we're not wasting some of that work"? For the second point, the solution could be to do a better job on the marketing side, or to focus on a few really kickass features for a given release.
I'm not opposed to making changes, but I'd like to know what it is we're trying to accomplish in a semi-concrete manner. Then we can figure out the changes necessary to get there.
Thanks, BC
On Thu, Dec 08, 2016 at 01:30:33PM -0500, Ben Cotton wrote:
I'd like to build on this a little bit. I'm concerned that we've gone straight to discussing the "how" without a clear picture of the "why".
A fair point - I didn't mean to jump ahead, but sometimes things are easier to talk about if they're less hand-wavy too.
More later. LISA conference continues now. :)
On Thu, 8 Dec 2016 12:40:49 -0500 Przemek Klosowski przemek.klosowski@nist.gov wrote:
On 12/08/2016 11:10 AM, Matthew Miller wrote:
It's my plan to explore different ideas to continue to make Fedora more successful as measured by user and contributor growth, contributor return on effort, and fulfillment of our mission.
Your stated goals are quite high level, so I would like to step back and ask some broad questions before discussing fairly detailed technical choices between your options. I do realize that the 'rolling vs. point release' discussion can't be rehashed in the abstract, but I have always had a hard time separating the technical, psychological, marketing, and organizational arguments in that debate. I think you as the leadership should be thinking it through more clearly than I'm able to.
Specifically, I think that for someone who already has a working Fedora installation, the point releases are a distinction without much difference: I think the users want a smoothly running system being continuously upgraded. They don't care if such an update sometimes takes longer and the changes are deeper than usual. If all goes well, they should always be on a usable system. As a 'reductio ad absurdum' exercise, far out in the future, why should anyone be excited by the arrival of Fedora 139? We're treating the releases as precious pets, but they are really doomed to become cattle :)
...snip...
I just want to note here why I don't think rolling releases are great for everyone: with a rolling release you have to consume changes roughly as the maintainers of your release push them to you. With a point release system you have much more choice about when to switch.
For example, say you are a heavy user of libreoffice and have an important class using it. You don't want to be forced to upgrade while you are busy using the application; you would rather wait until you are in a time of lesser activity and spend the time then to learn the new version.
kevin
On Thursday, December 8, 2016 at 9:17:14 AM CST Matthew Miller wrote:
Trying to make this idea a little more concrete, here are two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc., particularly from a QA and rel-eng point of view. Neither of these takes modularity into account in any way; it's "how we could do this with our current distro-building process".
Option 1: Big batched update
Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
At the beginning of October, stop pushing non-security updates from updates-testing to updates
Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
Mid-October, require freeze exceptions even for getting into updates-testing.
Test all of that together in Some Handwavy Way for serious problems and regressions.
Once all good, push from updates-testing to updates at end of October or beginning of November.
Option 2: Branching!
Release F26 according to schedule.
July/August: branch F26.1 from F26 (not rawhide)
Updates to F26 also go into F26.1 (magic happens here?)
No Alpha, but do "Beta" freeze and validation as normal for release.
And same for F26.1 final
And sometime in October/November, release that (but without big press push).
GNOME Software presents F26.1 as upgrade option
F26 continues in parallel through December
In January, update added to F26 which activates the F26.1 repo.
And also in January updates stop going to F26.
I have been talking with adamw about dropping alpha releases entirely. I was planning to outline all of it at DevConf, but the talk was rejected, so I will have to find another way to lay out the plans I have to change many things.
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
Dennis
On Thu, Dec 08, 2016 at 12:26:21PM -0600, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily.
Also part of Spot's suggestion - yeah, I'm also very much in favor, whether or not part of a larger initiative.
However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
In that case, we might want a way to pull selected packages on a whitelist from updates-testing into Atomic Host directly. (Dusty, other Atomic devs, what do you think?)
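To make that concrete, here's a rough sketch of what I mean -- purely hypothetical, none of these names or data sources exist in our releng tooling today:

    # Hypothetical sketch (names and data sources made up): pull whitelisted
    # builds out of updates-testing for an Atomic Host compose.
    ATOMIC_WHITELIST = {"kernel", "ostree", "rpm-ostree", "docker"}

    def select_for_atomic(pending_updates):
        # pending_updates: iterable of (source package name, NVR) pairs
        # currently sitting in updates-testing.
        return [nvr for name, nvr in pending_updates if name in ATOMIC_WHITELIST]

    pending = [("kernel", "kernel-4.8.15-300.fc25"),
               ("libfoo", "libfoo-1.2-3.fc25")]
    print(select_for_atomic(pending))  # -> ['kernel-4.8.15-300.fc25']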
On 12/08/2016 07:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
What do you expect from monthly batches? I *really* don't like things like "patchdays". Besides security fixes there are also other situations like small but annoying bugs… IMHO the current model with updates repo works fine, I see no reason to make a change here.
On Thu, Dec 08, 2016 at 07:45:55PM +0100, Christian Dersch wrote:
What do you expect from monthly batches? I *really* don't like things like "patchdays". Besides security fixes there are also other situations like small but annoying bugs… IMHO the current model with updates repo works fine, I see no reason to make a change here.
Spot's proposal actually included three levels: updates-testing, updates, updates-batched. I think that's good and would address your concern. We'd make updates-batched the default, but people who want the firehose can still point at the updates repo directly.
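To sketch how the three levels could fit together (a toy model only, not how Bodhi or the push tooling actually work; the cadence and names are invented): updates would keep flowing from updates-testing to updates as they do today, and updates-batched -- the default-enabled repo -- would simply be refreshed from updates on a fixed cadence, with security fixes copied through immediately:

    # Toy model of the three-level flow (updates-testing -> updates ->
    # updates-batched); everything here is hypothetical illustration.
    from datetime import timedelta

    BATCH_PERIOD = timedelta(days=28)  # assume a roughly monthly flush

    def refresh_batched(updates_repo, batched_repo, last_flush, today, is_security):
        # updates_repo / batched_repo are sets of NVRs.
        # Security fixes reach the default (batched) repo immediately.
        batched_repo.update(nvr for nvr in updates_repo if is_security(nvr))
        # Everything else only lands on the periodic flush.
        if today - last_flush >= BATCH_PERIOD:
            batched_repo.update(updates_repo)
            return today
        return last_flush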
On Thu, 8 Dec 2016 19:45:55 +0100, Christian Dersch wrote:
On 12/08/2016 07:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
What do you expect from monthly batches? I *really* don't like things like "patchdays". Besides security fixes there are also other situations like small but annoying bugs… IMHO the current model with updates repo works fine, I see no reason to make a change here.
The apparently random flow of poorly tested "rushed out" updates is a major drawback of Fedora's current release process. It reminds me too much of the infamous dumping ground for packages.
We jump through many hoops to release a "stable" distribution. Then we take it apart as we unleash more and more updates, which move away from what has gone through the Alpha/Beta freeze and testing process. Too many packages get changed and invalidate all the testing of previous releases.
We also change the repo metadata too often due to this flood of updates. Users already wonder why the metadata needs to be refreshed so often. One packager pushes a minor version update for some niche-market font, for example, and the repo changes for everyone.
As much as I'd like to retain the possibility to publish hotfixes -- provided that both the package maintainer *and* any testers (if needed) know what they're doing, and that the package maintainer in particular runs the affected release of Fedora *and* the update -- something is wrong with the updates release process. Patch days make it possible to publish update collections in a way that the entire batch can be evaluated better, because it will be a more clearly defined step from A to B rather than randomly released pieces every couple of days. The process overhead will be significant, however. Everyone will want to get some minor update or random version upgrade included.
On Fri, Dec 09, 2016 at 01:18:31PM +0100, Michael Schwendt wrote:
On Thu, 8 Dec 2016 19:45:55 +0100, Christian Dersch wrote:
On 12/08/2016 07:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
What do you expect from monthly batches? I *really* don't like things like "patchdays". Besides security fixes there are also other situations like small but annoying bugs… IMHO the current model with updates repo works fine, I see no reason to make a change here.
The apparently random flow of poorly tested "rushed out" updates is a major drawback of Fedora's current release process. It reminds me too much of the infamous dumping ground for packages.
We jump through many hoops to release a "stable" distribution. Then we take it apart as we unleash more and more updates, which move away from what has gone through the Alpha/Beta freeze and testing process. Too many packages get changed and invalidate all the testing of previous releases.
We also change the repo metadata too often due to this flood of updates. Users already wonder why the metadata needs to be refreshed so often. One packager pushes a minor version update for some niche-market font, for example, and the repo changes for everyone.
A strawman proposal: when updates are created, we now fill out a "severity" field. Let's make use of this field, and batch the way that updates are pushed out from updates-testing to updates:
- urgent → right now (as fast as we can make it)
- high → daily
- medium → up to a week delay
- low/unspecified → next biweekly batch
(I put unspecified together with low, so that people learn to fill out the field ;))
This would make the updates process nicer for users without adding much more complexity.
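To spell that policy out as pseudo-code (hypothetical, nothing like this exists in Bodhi; delays are days before the testing-to-stable push):

    # Sketch of the proposed severity -> push-delay policy (not Bodhi code).
    PUSH_DELAY_DAYS = {
        "urgent": 0,    # next push run, as fast as we can make it
        "high": 1,      # daily batch
        "medium": 7,    # up to a week delay
        "low": 14,      # next biweekly batch
        None: 14,       # unspecified is treated like low
    }

    def days_until_stable_push(severity):
        # Unknown values also fall back to the biweekly batch.
        return PUSH_DELAY_DAYS.get(severity, 14)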
Zbyszek
On Fri, Dec 9, 2016 at 11:30 AM, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Fri, Dec 09, 2016 at 01:18:31PM +0100, Michael Schwendt wrote:
On Thu, 8 Dec 2016 19:45:55 +0100, Christian Dersch wrote:
On 12/08/2016 07:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
What do you expect from monthly batches? I *really* don't like things like "patchdays". Besides security fixes there are also other situations like small but annoying bugs… IMHO the current model with updates repo works fine, I see no reason to make a change here.
The apparently random flow of poorly tested "rushed out" updates is a major drawback of Fedora's current release process. It reminds me too much of the infamous dumping ground for packages.
We jump through many hoops to release a "stable" distribution. Then we take it apart as we unleash more and more updates, which move away from what has gone through the Alpha/Beta freeze and testing process. Too many packages get changed and invalidate all the testing of previous releases.
We also change the repo metadata too often due to this flood of updates. Users already wonder why the metadata needs to be refreshed so often. One packager pushes a minor version update for some niche-market font, for example, and the repo changes for everyone.
A strawman proposal: when updates are created, we now fill out a "severity" field. Let's make use of this field, and batch the way that updates are pushed out from updates-testing to updates:
- urgent → right now (as fast as we can make it)
- high → daily
- medium → up to a week delay
- low/unspecified → next biweekly batch
(I put unspecified together with low, so that people learn to fill out the field ;))
I did something like this many moons ago. In the end, it was a wash. Push frequency without repository separation basically means the frequency is irrelevant because when the updates show up is entirely dependent upon when the user runs 'dnf update' or similar.
This would make the updates process nicer for users without adding much more complexity.
That is actually untrue because of how our updates backend process works. Doing it this way requires multiple pushes for the same cumulative set of updates and it requires tooling to be written to do the filtering. So it is less complex for users, but more work for those processing the updates.
josh
On Fri, 2016-12-09 at 13:18 +0100, Michael Schwendt wrote:
The apparently random flow of poorly tested "rushed out" updates
<citation needed>
I've had automatic updates, of all kinds, turned on on all of my servers for at least the last four releases, and can think of maybe one time one of them broke? This seems like a severe over-statement.
On Fri, 09 Dec 2016 08:44:26 -0800, Adam Williamson wrote:
On Fri, 2016-12-09 at 13:18 +0100, Michael Schwendt wrote:
The apparently random flow of poorly tested "rushed out" updates
<citation needed>
Nah, not needed at all. Basically, one can update a desktop workstation to death if applying updates too often or not at the right time. Then you suffer from updates causing regressions, unless you search for an even newer update in the updates-testing repo, where pulling out individual packages isn't safe. One faces a growing number of issues: systemd waiting for timeouts during poweroff/reboot, SELinux errors or warnings, GNOME Shell logging you out, applications failing to render or refusing to work, Firefox crashing, ABRT collecting unusable crash data daily, DNF being unable to perform history undo because packages are not found in the repos anymore. You can find failure reports from users who haven't updated their installation for weeks, then applied 200 or more updates at once and afterwards couldn't log in anymore. And if updating to updates-testing, there are still packagers who delete their bodhi pages, so eventually you notice that a distro-sync wants to downgrade updates that have been deleted "silently" (why? negative karma? severe breakage?).
On Fri, 2016-12-09 at 18:37 +0100, Michael Schwendt wrote:
On Fri, 09 Dec 2016 08:44:26 -0800, Adam Williamson wrote:
On Fri, 2016-12-09 at 13:18 +0100, Michael Schwendt wrote:
The apparently random flow of poorly tested "rushed out" updates
<citation needed>
Nah, not needed at all. Basically, one can update a desktop workstation to death if applying updates too often or not at the right time. Then you suffer from updates causing regressions, unless you search for an even newer update in the updates-testing repo, where pulling out individual packages isn't safe. One faces a growing number of issues: systemd waiting for timeouts during poweroff/reboot, SELinux errors or warnings, GNOME Shell logging you out, applications failing to render or refusing to work, Firefox crashing, ABRT collecting unusable crash data daily, DNF being unable to perform history undo because packages are not found in the repos anymore. You can find failure reports from users who haven't updated their installation for weeks, then applied 200 or more updates at once and afterwards couldn't log in anymore. And if updating to updates-testing, there are still packagers who delete their bodhi pages, so eventually you notice that a distro-sync wants to downgrade updates that have been deleted "silently" (why? negative karma? severe breakage?).
This is just a bunch of entirely unsupported assertions, and thus not worth the time to respond to.
But I'll just note that it is not possible to delete updates in Bodhi, and hasn't been since Bodhi 2 arrived, which was years ago.
On Fri, 09 Dec 2016 09:40:08 -0800, Adam Williamson wrote:
This is just a bunch of entirely unsupported assertions, and thus not worth the time to respond to.
Same applies to your usage scenario. Personal experience is just that: personal experience.
But I'll just note that it is not possible to delete updates in Bodhi, and hasn't been since Bodhi 2 arrived, which was years ago.
Great! Then something else is the cause, such as editing bodhi tickets and replacing builds or removing them. Whatever. Or else "dnf" would not find installed packages with no reference in bodhi. And previous releases of a package in the repo still get deleted, breaking history undo.
On Fri, 2016-12-09 at 19:48 +0100, Michael Schwendt wrote:
On Fri, 09 Dec 2016 09:40:08 -0800, Adam Williamson wrote:
This is just a bunch of entirely unsupported assertions, and thus not worth the time to respond to.
Same applies to your usage scenario. Personal experience is just that: personal experience.
Yes, but the burden of proof always lies with those who want to change stuff. I've got the easy job here: I just get to say 'look, if you want to change everything, provide some concrete evidence:
a) that there's a problem
b) that the changes will solve it
c) that they won't create larger problems than the ones they solve'
That's always how it works. You have to provide a justification for change. No justification is really needed for no-change.
Great! Then something else is the cause, such as editing bodhi tickets and replacing builds or removing them. Whatever. Or else "dnf" would not find installed packages with no reference in bodhi. And previous releases of a package in the repo still get deleted, breaking history undo.
Well, yes. I don't think it's ever been claimed that 'history undo' is guaranteed to always work. We've never claimed to keep every build that at some point landed in updates-testing or updates there forever, so far as I know.
On Fri, Dec 9, 2016 at 11:00 AM, Adam Williamson <adamwill@fedoraproject.org> wrote:
Yes, but the burden of proof always lies with those who want to change stuff. I've got the easy job here: I just get to say 'look, if you want to change everything, provide some concrete evidence:
a) that there's a problem
b) that the changes will solve it
c) that they won't create larger problems than the ones they solve'
That's always how it works. You have to provide a justification for change. No justification is really needed for no-change.
Yup... absolutely true. Change for change's sake isn't a reason - and spectral justifications don't cut it.
On Fri, 09 Dec 2016 11:00:45 -0800, Adam Williamson wrote:
Same applies to your usage scenario. Personal experience is just that: personal experience.
Yes, but the burden of proof always lies with those who want to change stuff. I've got the easy job here: I just get to say 'look, if you want to change everything, provide some concrete evidence:
a) that there's a problem
b) that the changes will solve it
c) that they won't create larger problems than the ones they solve'
That's always how it works. You have to provide a justification for change. No justification is really needed for no-change.
Then you get to keep the pieces. Unfortunate as it may sound, I don't see any "burden of proof". I do not "have to provide" anything at all. I voice my opinion, and you are free to listen to it _or_ ignore it. And if you don't listen, you may as well forget about changing bodhi, too, and enjoy updates that get marked stable by the karma threshold system even before they have appeared on the relevant world-wide mirrors.
But hey, in another reply you've mentioned the rather old "logical AND" solution to the problem. Karma threshold AND minimum time spent in repo. Of course, the developers of bodhi would need to be convinced of such a feature, too, and it could be that there is no big kahuna to do exactly that. Oh well.
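For concreteness, that "logical AND" gate would amount to something like this (a hypothetical check with made-up thresholds, not actual bodhi code):

    # Hypothetical gate: karma threshold AND minimum time in updates-testing,
    # instead of either condition on its own. Thresholds are made up.
    def ready_for_stable(karma, days_in_testing, karma_threshold=3, min_days=7):
        return karma >= karma_threshold and days_in_testing >= min_days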
On Fri, 2016-12-09 at 21:29 +0100, Michael Schwendt wrote:
Of course, the developers of bodhi would need to be convinced of such a feature, too, and it could be that there is no big kahuna to do exactly that. Oh well.
Well, no, you could just send a patch. Bodhi is a rather nice codebase and quite easy to work on. I've had several things merged into it.
On Fri, Dec 09, 2016 at 12:54:29PM -0800, Adam Williamson wrote:
On Fri, 2016-12-09 at 21:29 +0100, Michael Schwendt wrote:
Of course, the developers of bodhi would need to be convinced of such a feature, too, and it could be that there is no big kahuna to do exactly that. Oh well.
Well, no, you could just send a patch. Bodhi is a rather nice codebase and quite easy to work on. I've had several things merged into it.
Or even... you know... open a ticket? Bring the discussion to them? It's not like they are going to say no and close the ticket as wontfix without a discussion, and if they do, then you can come here to rant about it, with a good reason :)
Pierre
On 12/08/2016 12:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead.
Where did this concept originate from? What basis and factual data backs up this proposal?
On Thu, 8 Dec 2016 21:05:31 +0100, Emmanuel Seyman wrote:
- Michael Cronenworth [08/12/2016 14:01] :
Where did this concept originate from?
This is a very old proposal that Spot made at the very first Flock way back in 2013.
The idea of monthly batches is much older.
On 8 December 2016 at 15:01, Michael Cronenworth mike@cchtml.com wrote:
On 12/08/2016 12:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead.
Where did this concept originate from? What basis and factual data backs up this proposal?
What basis and factual data would you want to see? Not being clear here means that whatever anyone can provide can be dismissed as not being real data or a real basis.
On 12/08/2016 02:06 PM, Stephen John Smoogen wrote:
What basis and factual data would you want to see? Not being clear here means that what ever anyone can provide can be dismissed as not being real data or basis.
Something that says "this is better" other than because Spot thought it up one day.
Moving updates to a non-default repo would be a dramatically different Fedora experience than what I signed up for, and it would not help. Most users use the default repos. Bugs, instead of being found within a week, would be found once a month. To put it simply: I'm against this proposal. If you want "safer" updates then change Bodhi to require 30 days in testing, but we all know everyone would reject that idea just like I'm rejecting this one.
On Thu, Dec 8, 2016, at 01:26 PM, Dennis Gilmore wrote:
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
Remember the "2 week releases" are only images; they don't relate to the primary update mechanism of ostree commits (nor, for that matter, the rpm-md repos, which is what I think most people here are talking about): https://pagure.io/releng/issue/6545
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Of course we need to, and will continue to do server-related work in Fedora. It's defined to be the upstream. What I want as a developer is a place to integration test things rapidly - more than once a day, and do continuous delivery from our main upstreams (from systemd, anaconda etc.) - with rollback (what ostree was designed to do, and we aren't using).
As far as release cadence from that - monthly feels a little slow but I wouldn't be opposed. In the end though if we containerize more fully, the question gets both easier and more complex as we'd (I'd assume) use the ability to have separate cadences for applications vs OS base vs development tools etc.
On Thu, Dec 8, 2016, at 09:26 PM, Colin Walters wrote:
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Although..except for EPEL. And how EPEL works should obviously be part of this. Things would feel clearer if EPEL lived in CentOS now perhaps.
On Fri, Dec 09, 2016 at 11:07:32AM -0500, Colin Walters wrote:
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Although..except for EPEL. And how EPEL works should obviously be part of this. Things would feel clearer if EPEL lived in CentOS now perhaps.
Right; in mirror traffic, EPEL is to Fedora Workstation as Workstation is to Server. :)
EPEL packages *are* Fedora packages, though — moving the project to CentOS isn't completely crazy, but would require a lot more integration and cooperation between the projects.
That's something I'd like to see anyway. I think there are a lot of opportunities for this with containers and modularity — if you can just run Fedora containers on CentOS or RHEL *directly*, why bother rebuilding them? For a lot of the software that's in EPEL, that's completely sufficient. For other software, where users would like the version to match more closely the long lifecycle, maybe there could be a hand-off from Fedora version to CentOS version.
On Fri, 2016-12-09 at 11:17 -0500, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 11:07:32AM -0500, Colin Walters wrote:
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Although..except for EPEL. And how EPEL works should obviously be part of this. Things would feel clearer if EPEL lived in CentOS now perhaps.
Right; in mirror traffic, EPEL is to Fedora Workstation as Workstation is to Server. :)
EPEL packages *are* Fedora packages, though — moving the project to CentOS isn't completely crazy, but would require a lot more integration and cooperation between the projects.
Right, it would have to be easy for maintainers to contribute in both.
That's something I'd like to see anyway. I think there are a lot of opportunities for this with containers and modularity — if you can just run Fedora containers on CentOS or RHEL *directly*, why bother rebuilding them?
I agree that can be an option in some cases; however, I can think of a few cases in which it cannot: (a) running centos7 in a container - without epel you cannot have that additional software; (b) kernel features which are available in Fedora but not in centos7 may cause the software not to work if it doesn't detect the features at runtime; (c) simplicity - not having to go through the path of having to run special tools for scanning vulnerabilities in running containers.
regards, Nikos
On 9 December 2016 at 11:42, Nikos Mavrogiannopoulos nmav@redhat.com wrote:
On Fri, 2016-12-09 at 11:17 -0500, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 11:07:32AM -0500, Colin Walters wrote:
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Although..except for EPEL. And how EPEL works should obviously be part of this. Things would feel clearer if EPEL lived in CentOS now perhaps.
Right; in mirror traffic, EPEL is to Fedora Workstation as Workstation is to Server. :)
EPEL packages *are* Fedora packages, though — moving the project to CentOS isn't completely crazy, but would require a lot more integration and cooperation between the projects.
Right, it would have to be easy for maintainers to contribute in both.
That's something I'd like to see anyway. I think there are a lot of opportunities for this with containers and modularity — if you can just run Fedora containers on CentOS or RHEL *directly*, why bother rebuilding them?
I agree that can be an option in some cases; however, I can think of a few cases in which it cannot: (a) running centos7 in a container - without epel you cannot have that additional software; (b) kernel features which are available in Fedora but not in centos7 may cause the software not to work if it doesn't detect the features at runtime; (c) simplicity - not having to go through the path of having to run special tools for scanning vulnerabilities in running containers.
There is also the fact that I doubt that containers and modularity are what EL customers are aware they want anytime soon. The majority of EPEL users are on RHEL-6 (and that number is still growing as they move from RHEL-5 to RHEL-6). RHEL-7 is growing but only at a rate which shows the conservative nature of most EL sites.
I expect that containers/modularity etc. will become something EL users want after Fedora considers it not only old but is actively looking to replace it with some new shiny paradigm that will solve the problems left from containers.
On 12/09/2016 11:17 AM, Matthew Miller wrote:
For other software, where users would like the version to match more closely the long lifecycle, maybe there could be a hand-off from Fedora version to CentOS version.
Yeah, hand-offs would be a great feature for the users. Right now, it's tricky to use Fedora in production because of the 18-month support cycle; a smooth hand-off would make it much easier to manage Fedora installations, and therefore would help adoption. I know I would use Fedora more in production if I could rely on hand-off.
It would be a great selling point: Fedora offers cutting-edge features now, and transitions gently to long term support.
Strangely, this also affects Red Hat commercial products: we've run into situations where we deployed paid, supported systems and then something happened and they fell off the list, thus losing updates. I thought it'd be a nice feature in Red Hat to hand off unsupported systems to CentOS. Right now, I buy commercial support for the most important systems, deploy CentOS for not very important ones that don't need the latest versions, and use Fedora when I need the most recent features. Hand-offs that work across all three(*) would make managing this stuff much easier.
I had a conversation with Red Hat, arguing that whatever revenue they lost on those systems would be offset by having more systems overall, because people like me would be less hesitant to deploy Red Hat; long-term support considerations would be decoupled from technical issues.
(*) of course only in reasonable configurations: I could only see two useful hand-offs, Fedora -> CentOS and Red Hat -> CentOS---I can't see how it'd make sense to hand off from e.g. Red Hat to Fedora.
On 9 December 2016 at 11:07, Colin Walters walters@verbum.org wrote:
On Thu, Dec 8, 2016, at 09:26 PM, Colin Walters wrote:
Anyways, in the big picture, while I don't speak for everyone on the Project Atomic side, I personally point users at CentOS first, unless I have some reason to think they want Fedora. Something like 80% of Fedora usage hitting the mirrors was desktop systems, right? I don't expect that to change personally.
Although..except for EPEL. And how EPEL works should obviously be part of this. Things would feel clearer if EPEL lived in CentOS now perhaps.
It might; however, EPEL relies on a large amount of infrastructure that is tied deep into Fedora. Trying to 'move' it over to CentOS is not easy without a lot of that infrastructure also moving over to CentOS or starting from scratch over in CentOS. It also would take a lot of time and effort that no one wants to fund but would like people to do for them for free. That, and every time it is brought up, various groups want to use it as the time to restart every argument they lost sometime in the past, from repotags to putting it all in /opt to "the logo looks like a horse's ass".
[By the way, this isn't a "it shouldn't happen" as much as "beware the surgery you are trying to do.. pus ridden gangrene will quickly set in if not done well and will probably show up anyway."]
On Thursday, 08 December 2016 at 19:26, Dennis Gilmore wrote: [...]
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except once a month.
You gave just one disadvantage of this proposal and no advantages at all. Why do you think the above is a good idea? I, for one, do not like waiting a month to get bug fixes that are not security-related. We are not RHEL or Microsoft or Adobe. I'm convinced that having bug fixes available as soon as they're ready is valuable (even if you choose to wait before installing them). Also, as was pointed out elsewhere in this subthread, updates very often get tested only after they're released to stable, so it's also valuable to get the feedback earlier rather than in a month.
Regards, Dominik
On Tue, 2016-12-20 at 10:32 +0100, Dominik 'Rathann' Mierzejewski wrote:
You gave just one disadvantage of this proposal and no advantages at all. Why do you think the above is a good idea? I, for one, do not like waiting a month to get bug fixes that are not security-related. We are not RHEL or Microsoft or Adobe. I'm convinced that having bug fixes available as soon as they're ready is valuable (even if you choose to wait before installing them). Also, as was pointed out elsewhere in this subthread, updates very often get tested only after they're released to stable, so it's also valuable to get the feedback earlier rather than in a month.
Batched updates are something I really want to do regardless. Of course having fixes available sooner is valuable, but you have to weigh that against the cost of releasing a *botched* update. The advantage of batched updates is we reduce the risk of releasing botched updates. If we batch the updates together and release them all at once, possibly with new installation media, then that's something that we can QA, and that reduces the risk of a botched update.
Last year we released several botched hawkey/hif updates (I lost count, but I think it was three total?) that broke PackageKit updates, so nontechnical users who don't know command line foo to recover their systems got stuck forever, never to receive an update again. Ideally that would never happen. Delaying updates by a couple of weeks seems like a small price to reduce this risk.
Michael
On 20/12/16 14:23, Michael Catanzaro wrote:
Batched updates are something I really want to do regardless. Of course having fixes available sooner is valuable, but you have to weigh that against the cost of releasing a *botched* update. The advantage of batched updates is we reduce the risk of releasing botched updates. If we batch the updates together and release them all at once, possibly with new installation media, then that's something that we can QA, and that reduces the risk of a botched update.
Surely it's more likely that it just delays the discovery of the botched update?
The only way it reduces the risk of releasing a botched update is that the updates somehow get more testing just by staying in the testing channel longer.
Which raises the question of whether botched updates happen because not enough people use testing, or because there are enough people using it but they don't have enough time to spot the problems before the updates get pushed.
Tom
On Tue, 2016-12-20 at 14:27 +0000, Tom Hughes wrote:
Surely it's more likely that it just delays the discovery of the botched update?
I don't think updates-testing should be batched. Testers should of course still get all test updates ASAP.
The only way it reduces the risk of releasing a botched update is that the updates somehow get more testing just by staying in the testing channel longer.
...and actual QA, from the professionals and volunteers on the QA team, who are very good at finding bugs pre-release but currently do zero QA on our updates because it's an unmanageable rolling stream of a bazillion separate updates. With batched updates, you can test a batch with the same overall criteria used for releases to see if it's botched. That's the advantage of batching over simply extending the amount of time spent in updates-testing.
Which raises the question of whether botched updates happen because not enough people use testing, or because there are enough people using it but they don't have enough time to spot the problems before the updates get pushed.
We indeed do not need batched updates to extend the length of time updates remain in testing. We could (and should) do that immediately.
Michael
On 20/12/16 16:48, Michael Catanzaro wrote:
On Tue, 2016-12-20 at 14:27 +0000, Tom Hughes wrote:
Surely it's more likely that it just delays the discovery of the botched update?
I don't think updates-testing should be batched. Testers should of course still get all test updates ASAP.
I didn't think updates-testing would be; it's just that I don't think many people use it, so I'm not sure having things there for longer will actually help.
The only way it reduces the risk of releasing a botched update is that the updates somehow get more testing just by staying in the testing channel longer.
...and actual QA, from the professionals and volunteers on the QA team, who are very good at finding bugs pre-release but currently do zero QA on our updates because it's an unmanageable rolling stream of a bazillion separate updates. With batched updates, you can test a batch with the same overall criteria used for releases to see if it's botched. That's the advantage of batching over simply extending the amount of time spent in updates-testing.
Well yes obviously if those batched updates get some formal QA then that's a different matter, but I didn't realise that was proposed.
Tom
On Tue, 2016-12-20 at 17:15 +0000, Tom Hughes wrote:
I didn't think updates-testing would be; it's just that I don't think many people use it, so I'm not sure having things there for longer will actually help.
We do in fact have numbers on this. For instance, since F25 came out, 218 people have filed 1,404 items of feedback on F25 updates in Bodhi.
On 20/12/16 17:40, Adam Williamson wrote:
On Tue, 2016-12-20 at 17:15 +0000, Tom Hughes wrote:
I didn't think updates-testing would be; it's just that I don't think many people use it, so I'm not sure having things there for longer will actually help.
We do in fact have numbers on this. For instance, since F25 came out, 218 people have filed 1,404 items of feedback on F25 updates in Bodhi.
I wonder how many have updates-testing enabled, and how many have just installed particular updates to test them...
I don't have updates-testing enabled, but I will test specific updates that relate to bugs I have filed by updating that package with "--enablerepo=updates-testing" and then file feedback.
Tom
I'll repost this because I believe Kevin had a good point:
I don't understand why we are trying to reinvent the wheel here. The infrastructure for Kevin's suggestion is in place now - KDE has been using it and it works.
On Thu, Dec 8, 2016 at 9:07 PM, Kevin Kofler kevin.kofler@chello.at wrote:
However, I also do not see why we cannot just do such big updates through the regular update process rather than in a big .1 drop. The KDE SIG has experience with pushing big grouped updates that look a lot like a .1 release for Plasma users. They go through the regular update process just fine. Grouping them together with updates to GNOME, LibreOffice etc. in one batch is not necessary and would only add unnecessary delays.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
That is an excellent point. KDE for some time has been pushing out large updates using the regular update process. What is the issue with just doing this? It certainly seems much more straightforward and easier than ~.x updates. Fedora version releases could then be reserved for structural / architectural concerns rather than software updates and bug fixes.
Fedora stays fast moving and Fedora X releases come less often - seems like a win / win.
On Tue, Dec 20, 2016 at 10:01:57AM -0800, Gerald B. Cox wrote:
I'll repost this because I believe Kevin had a good point:
I don't understand why we are trying to reinvent the wheel here. The infrastructure for Kevin's suggestion is in place now - KDE has been using it and it works.
This has the same downside as a rolling release to end users. It asks them to take a big user interface / user experience update whenever we push it, or else disable all updates including security fixes and bugfixes that do not change user experience.
Modularity, however, will allow us to update module stacks — like GNOME or KDE — while still also maintaining older versions for some time... without tying the whole release to that cycle.
On Tue, Dec 20, 2016 at 10:08 AM, Matthew Miller mattdm@fedoraproject.org wrote:
On Tue, Dec 20, 2016 at 10:01:57AM -0800, Gerald B. Cox wrote:
I'll repost this because I believe Kevin had a good point:
I don't understand why we are trying to reinvent the wheel here. The infrastructure for Kevin's suggestion is in place now - KDE has been using it and it works.
This has the same downside as a rolling release to end users. It asks them to take a big user interface / user experience update whenever we push it, or else disable all updates including security fixes and bugfixes that do not change user experience.
Modularity, however, will allow us to update module stacks — like GNOME or KDE — while still also maintaining older versions for some time... without tying the whole release to that cycle.
Well, it isn't some theoretical construct... it's being done now with KDE and has been working just fine. It stays in updates-testing until you decide to push it to stable. KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing. Seems like we're just making this more complicated than it is.
On Tue, Dec 20, 2016 at 10:22:41AM -0800, Gerald B. Cox wrote:
Well, it isn't some theoretical construct... it's being done now with KDE and has been working just fine. It stays in updates-testing until you decide to push it to stable. KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing. Seems like we're just making this more complicated than it is.
Right, KDE on Fedora is more like a rolling release. TBH, this is something of a luxury because none of the Editions are dependent on KDE. If Workstation were KDE-based, I'd be inclined to push back against the practice.
I don't think anyone said we want the GNOME updates to "age" for six months. What I'm saying is that the release model allows us to provide a new shiny version quickly after the upstream release, but users get to choose if they want it right now. If we did this by putting a big GNOME update into updates-testing, a) people would have to opt into getting testing updates to get it, or do the even more advanced thing of cherry-picking from the updates repo, and b) once having done that, would presumably get all future updates to that stack through updates-testing, and c) if there's a fix to the older GNOME, we wouldn't have a way to provide it.
On Tue, Dec 20, 2016 at 1:23 PM Gerald B. Cox wrote:
Well, it isn't some theoretical construct... it's being done now with KDE and has been working just fine. It stays in updates-testing until you decide to push it to stable. KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing. Seems like we're just making this more complicated than it is.
You can't keep things simmering in updates-stable for a long time. What if you need to push a bug fix or security fix that is not tied to a new major upstream release?
Rahul
On Tue, Dec 20, 2016 at 5:45 PM, Rahul Sundaram metherid@gmail.com wrote:
Well, it isn't some theoretical construct... it's being done now with KDE
and has been working just fine. It stays in updates-testing until you decide to push it to stable. KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing. Seems like we're just making this more complicated than it is.
You can't keep things simmering in updates-stable for a long time. What if you need to push a bug fix or security fix that is not tied to a new major upstream release?
Rahul
I was just repeating what I thought was a good suggestion - which is based upon what has already been implemented using the current infrastructure. Reserve "new" releases only for things that absolutely require it and let everything else be updated piecemeal.
On Tue, Dec 20, 2016 at 9:26 PM Gerald B. Cox wrote:
I was just repeating what I thought was a good suggestion - which is based upon what has already been implemented using the current infrastructure. Reserve "new" releases only for things that absolutely require it and let everything else be updated piecemeal.
Right. I understand that, but the solution of letting things stay in updates-testing for a long time isn't a great way to implement that. It's an abuse of updates-testing.
Rahul
On Tue, Dec 20, 2016 at 6:30 PM, Rahul Sundaram metherid@gmail.com wrote:
Right. I understand that, but the solution of letting things stay in updates-testing for a long time isn't a great way to implement that. It's an abuse of updates-testing.
No one is doing that. You have to read the whole thread.
On Tue, Dec 20, 2016 at 9:33 PM Gerald B. Cox gbcox@bzb.us wrote:
Right. I understand that, but the solution of letting things stay in updates-testing for a long time isn't a great way to implement that. It's an abuse of updates-testing.
No one is doing that. You have to read the whole thread.
What makes you assume I didn't? I am quoting you again:
" KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing."
On Tue, Dec 20, 2016 at 6:41 PM, Rahul Sundaram metherid@gmail.com wrote:
" KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing."
Obviously you missed it. Again, you have to take that comment in context of the entire thread.
On Tue, Dec 20, 2016 at 9:59 PM Gerald B. Cox wrote:
" KDE folks by and large want the updates as fast as possible. If the GNOME folks would like their updates to age for six months, they can just keep them in updates-testing."
Obviously you missed it. Again, you have to take that comment in context of the entire thread.
I don't see any context missing in a direct quote which I responded to. If you believe otherwise, feel free to summarize your position and include any context you need to.
Rahul
On Tue, Dec 20, 2016 at 7:10 PM, Rahul Sundaram metherid@gmail.com wrote:
I don't see any context missing in a direct quote which I responded to. If you believe otherwise, feel free to summarize your position and include any context you need to.
That's ok... I don't think you'd get the point - as I said the context was the thread.
On Tue, Dec 20, 2016 at 10:25 PM Gerald B. Cox wrote:
On Tue, Dec 20, 2016 at 7:10 PM, Rahul Sundaram wrote:
I don't see any context missing in a direct quote which I responded to. If you believe otherwise, feel free to summarize your position and include any context you need to.
That's ok... I don't think you'd get the point - as I said the context was the thread.
I have read the thread. If you are going to insist that I missed a context repeatedly, I would recommend you explicitly state what it is without making any assumptions about whether the other person can understand it.
Rahul
Hey everyone -- clearly there's a bit of a miscommunication here. Working it out through further discussion is good, but in the future, it's probably better to briefly take that off list and come back when both sides are satisfied that understanding has been reached. Otherwise, it adds a lot of noise to threads and makes the mailing list less welcoming.
On Tue, 2016-12-20 at 10:48 -0600, Michael Catanzaro wrote:
On Tue, 2016-12-20 at 14:27 +0000, Tom Hughes wrote:
Surely it's more likely that it just delays the discovery of the botched update?
I don't think updates-testing should be batched. Testers should of course still get all test updates ASAP.
The only way it reduces the risk of releasing a botched update is if the updates somehow get more testing just by staying in the testing channel longer.
...and actual QA, from the professionals and volunteers on the QA team, who are very good at finding bugs pre-release but currently do zero QA on our updates because it's an unmanageable rolling stream of a bazillion separate updates.
This is an exaggeration. We do test updates. We could always test everything *better*, and that applies to updates, but it is not true to say we 'do zero QA' on them.
With batched updates, you can test a batch with the same overall criteria used for releases to see if it's botched. That's the advantage of batching over simply extending the amount of time spent in updates-testing.
This is also an exaggeration, or at least it's a long way from proven. I don't think we could say that just batching updates would suddenly allow us to QA them as extensively as we QA a release. Release testing involves a lot of work by a lot of people; especially desktop validation is rather onerous. If we're talking about *weekly* batched updates, no, it is not at all practical to assume we'll magically be able to find the time to do release-validation level testing of each update batch every week.
We could in theory apply what automated functional testing we have to batched updates, but it's not at all a simple thing to do, and we could in fact apply it to *non*-batched updates too. It's something I've been wanting to do for a while, just have not had time for yet.
On Tue, 2016-12-20 at 09:33 -0800, Adam Williamson wrote:
If we're talking about *weekly* batched updates, no, it is not at all practical to assume we'll magically be able to find the time to do release-validation level testing of each update batch every week.
Of course it wouldn't make sense to do a weekly batch. I was thinking monthly.
Michael
The only way it reduces the risk of releasing a botched update is if the updates somehow get more testing just by staying in the testing channel longer.
...and actual QA, from the professionals and volunteers on the QA team, who are very good at finding bugs pre-release but currently do zero QA on our updates because it's an unmanageable rolling stream of a bazillion separate updates. With batched updates, you can test a batch with the same overall criteria used for releases to see if it's botched. That's the advantage of batching over simply extending the amount of time spent in updates-testing.
I've not seen that proposed anywhere; I'm not sure QA has the resources to actually do that.
Which raises the question of whether botched updates happen because not enough people use testing, or because there are enough people using it but they don't have enough time to spot the problems before the updates get pushed.
We indeed do not need batched updates to extend the length of time updates remain in testing. We could (and should) do that immediately.
At the moment the time is a week; basically I don't see any real proposal to extend that overall, just to batch updates out on a Monday (not sure that is the best day if no one tests over a weekend). Most of the updates that go out quicker than a week are due to receiving the explicitly requested amount of karma.
On Wed, Dec 21, 2016 at 01:35:56AM +0000, Peter Robinson wrote:
...and actual QA, from the professionals and volunteers on the QA team, who are very good at finding bugs pre-release but currently do zero QA on our updates because it's an unmanageable rolling stream of a bazillion separate updates. With batched updates, you can test a batch with the same overall criteria used for releases to see if it's botched. That's the advantage of batching over simply extending the amount of time spent in updates-testing.
I've not seen that proposed anywhere; I'm not sure QA has the resources to actually do that.
It was part of Spot's proposal at FUDCon Lawrence, and we talked about it more at Flock Charleston (where, if I remember right, several people from QA also said they didn't have the resources to do that).
At the moment the time is a week; basically I don't see any real proposal to extend that overall, just to batch updates out on a Monday (not sure that is the best day if no one tests over a weekend). Most of the updates that go out quicker than a week are due to receiving the explicitly requested amount of karma.
I'm not set on Monday; that's a fairly arbitrary suggestion. Shouldn't be a weekend or Friday, though!
On 12/20/2016 06:27 AM, Tom Hughes wrote:
On 20/12/16 14:23, Michael Catanzaro wrote:
Batched updates are something I really want to do regardless. Of course having fixes available sooner is valuable, but you have to weigh that against the cost of releasing a *botched* update. The advantage of batched updates is we reduce the risk of releasing botched updates. If we batch the updates together and release them all at once, possibly with new installation media, then that's something that we can QA, and that reduces the risk of a botched update.
Surely it's more likely that it just delays the discovery of the botched update?
The only way it reduces the risk of releasing a botched update is if the updates somehow get more testing just by staying in the testing channel longer.
Which raises the question of whether botched updates happen because not enough people use testing, or because there are enough people using it but they don't have enough time to spot the problems before the updates get pushed.
Batched updates are valuable when testing happens on the whole batch. It sorts out complex interactions between multiple package updates by testing them all together. It's a thing that could be adopted whether or not Fedora moves to a once-a-year release, and it could be done in addition to rolling updates.
On Tue, 2016-12-20 at 08:48 -0800, Brendan Conoboy wrote:
Batched updates are valuable when testing happens on the whole batch. It sorts out complex interactions between multiple package updates by testing them all together.
Of course, a corollary of this is that you have to try and figure out which bit of the batched update actually caused the bug you saw.
On 12/20/2016 09:34 AM, Adam Williamson wrote:
On Tue, 2016-12-20 at 08:48 -0800, Brendan Conoboy wrote:
Batched updates are valuable when testing happens on the whole batch. It sorts out complex interactions between multiple package updates by testing them all together.
Of course, a corollary of this is that you have to try and figure out which bit of the batched update actually caused the bug you saw.
Or to be even more specific:
1. Batched update testing is more work.
2. Fixing bugs found in batched updates is more work.
Of the two I think we already end up doing #2, it's just a question of when.
Maybe a bit off topic WRT $Subject, sorry if that's the case.
On Tuesday, December 20, 2016 8:23:12 AM CET Michael Catanzaro wrote:
Batched updates are something I really want to do regardless. Of course having fixes available sooner is valuable, but you have to weigh that against the cost of releasing a *botched* update. The advantage of batched updates is we reduce the risk of releasing botched updates. If we batch the updates together and release them all at once, possibly with new installation media, then that's something that we can QA, and that reduces the risk of a botched update.
Not always, unless we have https://fedorahosted.org/bodhi/ticket/663 fixed first. Packages with dependency chains like:
A -> concrete_version_of(B) -> concrete_version_of(C)
... are now updated in "batches". People interested in 'C' give karma to 'C', but also approve 'A' and 'B' (without paying attention to testing those packages independently). But breaking 'A' or 'B' breaks the stable release anyway. So a batched update _increases_ the risk of a "botched" update.
Pavel
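To make the point above concrete, here is a minimal toy sketch (an editor's illustration only, not Bodhi's real data model; the BatchedUpdate class, its field names, and the +3 threshold are all invented for the example). Karma earned by the leaf package 'C' ends up approving the pinned 'A' and 'B' builds as well, so a regression in either of them can ship with the batch:

    # Toy model only: shows how batch-level karma can approve packages
    # that nobody tested directly. Not Bodhi's actual implementation.
    from dataclasses import dataclass, field

    @dataclass
    class BatchedUpdate:
        packages: list                                    # e.g. ["A", "B", "C"], pinned to exact versions
        karma: int = 0
        feedback_for: list = field(default_factory=list)  # which package each tester actually exercised

        def add_karma(self, points, tested_package):
            self.karma += points
            self.feedback_for.append(tested_package)

        def autopush_ready(self, threshold=3):
            # The whole batch is judged on total karma, regardless of whether
            # every member package received any direct testing.
            return self.karma >= threshold

    update = BatchedUpdate(packages=["A", "B", "C"])
    for _ in range(3):
        update.add_karma(1, tested_package="C")           # testers only cared about C

    print(update.autopush_ready())                          # True
    print(set(update.packages) - set(update.feedback_for))  # {'A', 'B'}: never directly tested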
On Tue, Dec 20, 2016 at 2:32 AM, Dominik 'Rathann' Mierzejewski dominik@greysector.net wrote:
On Thursday, 08 December 2016 at 19:26, Dennis Gilmore wrote: [...]
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except for once a month.
You gave just one disadvantage of this proposal and no advantages at all. Why do you think the above is a good idea? I, for one, do not like waiting a month to get bug fixes that are not security-related. We are not RHEL or Microsoft or Adobe. I'm convinced that having bug fixes available as soon as they're ready is valuable (even if you choose to wait before installing them). Also, as was pointed out elsewhere in this subthread, very often updates get tested only after they're released to stable, so it's also valuable to get the feedback earlier rather than in a month.
I keep hearing different opinions on update frequency, and it suggests a discoverable dial is needed on the users' end of this equation.
On Tue, Dec 20, 2016 at 2:32 AM, Dominik 'Rathann' Mierzejewski dominik@greysector.net wrote:
On Thursday, 08 December 2016 at 19:26, Dennis Gilmore wrote: [...]
I would like to see us stop pushing non-security updates from updates-testing to updates entirely and do it in monthly batches instead. We would push security fixes and updates-testing daily. However, this would make Atomic Host two-week releases much less useful, as there would be no updates except for once a month.
You gave just one disadvantage of this proposal and no advantages at all. Why do you think the above is a good idea? I, for one, do not like waiting a month to get bug fixes that are not security-related. We are not RHEL or Microsoft or Adobe. I'm convinced that having bug fixes available as soon as they're ready is valuable (even if you choose to wait before installing them). Also, as was pointed out elsewhere in this subthread, very often updates get tested only after they're released to stable, so it's also valuable to get the feedback earlier rather than in a month.
Having bug fixes available sooner also means having regressions. It's inevitable. And that's why there's an updates-testing repo, and why it's not enabled by default on release.
Why is user opt-in to updates-testing insufficient?
On Thu, Dec 8, 2016 at 3:17 PM, Matthew Miller mattdm@fedoraproject.org wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Which problem are you trying to solve with those proposals?
Your other mail:
"explore different ideas to continue to make Fedora more successful as measured by user and contributor growth, contributor return on effort, and fulfillment of our mission"
just has some pretty vague phrases without any explanation of how changing the release process helps achieve those goals.
On Thu, Dec 08, 2016 at 07:41:13PM +0100, drago01 wrote:
Which problem are you trying to solve with those proposals?
From my *other* other mail:
* predictable calendar dates, to help with long-term planning
* not being on a hamster wheel which routinely bursts into flame
* maintaining the high level of QA we have for releases (or, you know, even increasing it)
* doesn't increase work for packagers
* including time for QA and Rel-Eng to a) breathe and b) invest in infrastructure
* satisfying upstream projects which depend on us as an early delivery mechanism to users (GNOME, GCC, glibc, have spoken up before, but not limited to just those)
* maximum PR and user growth
and just to expand a little bit: although we have a nominal six-month cycle, the natural tendency seems to be to expand to eight- or nine-month cycles. That's not necessarily terrible, except a) it's not well-aligned with upstreams and b) it makes longer-term planning difficult because release times are unpredictable year-to-year.
The alternative we just tried was: if one cycle goes over six months, still target the next one as if it _hadn't_ - that is, a shorter "make up" cycle. In this case, we came out with a great release (again, awesome work everyone), but we didn't have much breathing room (and ended up slipping into the holidays again, with real risk of running into Christmas/end-of-year). And we certainly didn't have, in that, time for the teams to work on infrastructure.
So, I'm trying to come up with different ways to do it which still have the properties above.
On Thu, Dec 8, 2016 at 7:59 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Thu, Dec 08, 2016 at 07:41:13PM +0100, drago01 wrote:
Which problem are you trying to solve with those proposals?
From my *other* other mail:
- predictable calendar dates, to help with long-term planning
Longer cycles do not necessarily mean no slips.
- not being on a hamster wheel which routinely bursts into flame
[...] mechanism to users (GNOME, GCC, glibc, have spoken up before, but not limited to just those)
How so? By having less frequent releases we'd be skipping more of them.
- maximum PR and user growth
How does less PR (only one event per year instead of two) lead to "maximum PR"?
and just to expand a little bit: although we have a nominal six-month cycle, the natural tendency seems to be to expand to eight- or nine-month cycles. That's not necessarily terrible, except a) it's not well-aligned with upstreams and b) it makes longer-term planning difficult because release times are unpredictable year-to-year.
Longer-term planning of what exactly? And by whom? Are you talking about Fedora's planning or the users'?
The alternative we just tried was: if one cycle goes over six months, still target the next one as if it _hadn't_ - that is, a shorter "make up" cycle. In this case, we came out with a great release (again, awesome work everyone), but we didn't have much breathing room (and ended up slipping into the holidays again,
There is no evidence that we slipped into the holidays because of the shorter cycle (it happens all the time, hence even you wrote "again" ;) )
So, I'm trying to come up with different ways to do it which still have the properties above.
Well, I am trying to understand what you are trying to do before thinking of solutions. I think the 6-month cycle has worked pretty well so far, so I'd rather only change it for good reasons.
On Thu, Dec 08, 2016 at 09:20:50PM +0100, drago01 wrote:
Which problem are you trying to solve with those proposals?
From my *other* other mail:
- predictable calendar dates, to help with long-term planning
Longer cycles do not necessarily mean no slips.
I wasn't referring to slipping, but rather to what happens when we schedule by starting from whenever a release ships and adding 6-8 months to that.
- not being on a hamster wheel which routinely bursts into flame
[...] mechanism to users (GNOME, GCC, glibc, have spoken up before, but not limited to just those)
How so? By having less frequent releases we'd be skipping more of them.
Well, that's where the .1 release idea here came from, rather than just going to purely once-a-year.
- maximum PR and user growth
How does less PR (only one event per year instead of two) lead to "maximum PR"?
Two releases a year ends up barely being an "event", so it's hard to drum up new enthusiasm. I think that adds up to less interest total than we'd get for an annual release. I don't have data for it, but as someone working to do the drumming I'm inclined to give some weight to my own intuition.
nine-month cycles. That's not necessarily terrible, except a) it's not well-aligned with upstreams and b) it makes longer-term planning difficult because release times are unpredictable year-to-year.
Longer-term planning of what exactly? And by whom? Are you talking about Fedora's planning or the users'?
Three things. First, Fedora's overall strategic planning. Second, developers planning when and how to land features — especially ones which will take more than one release. And yeah, finally, users, who definitely want a predictable lifetime and upgrade pattern.
The alternative we just tried was: if one cycle goes over six months, still target the next one as if it _hadn't_ - that is, a shorter "make up" cycle. In this case, we came out with a great release (again, awesome work everyone), but we didn't have much breathing room (and ended up slipping into the holidays again,
There is no evidence that we slipped into the holidays because of the shorter cycle (it happens all the time, hence even you wrote "again" ;) )
But if we had a longer cycle, we could plan breathing room around not doing that. Particularly, October-November-December-January is a minefield while May-June is not so much.
On 8 December 2016 at 15:31, Matthew Miller mattdm@fedoraproject.org wrote:
- maximum PR and user growth
How does less PR (only one event per year instead of two) lead to "maximum PR"?
Two releases a year ends up barely being an "event", so it's hard to drum up new enthusiasm. I think that adds up to less interest total than we'd get for an annual release. I don't have data for it, but as someone working to do the drumming I'm inclined to give some weight to my own intuition.
I am going to agree that 2 major releases a year won't be an event, but that is mainly because distros aren't really interesting to anyone anymore. Computer operating systems are the indoor plumbing of the late 20th and 21st century. We were really exciting in the 1880's when we first came out and everyone had to go see that someone had put a toilet in their house (and it didn't explode). You might even upgrade your toilet every year to the latest model as they were always fixing and adding some new feature. (Also because you were extremely wealthy and having the newest model was expected) But by the 1910's it was pretty much a done deal. You can move around the parts some amount but people knew what their kind of toilet was and wouldn't want one that looked or was different. No one updated it yearly just because the 1917 crapper had a pivot handle and last year was just a chain. Instead you liked your chain and you would keep it even if no one made them anymore.
We are going to have to come to terms with the fact that our days of exciting the masses to switch are well past us. It doesn't mean we can't and shouldn't work on marketing ourselves... just that we should be aware that doing multiple events a year isn't going to be a big win in growth.
[This post was made possible by a grant from the odd searches one does when your plumbing is broken.]
On Thu, Dec 8, 2016 at 2:05 PM, Stephen John Smoogen smooge@gmail.com wrote:
On 8 December 2016 at 15:31, Matthew Miller mattdm@fedoraproject.org wrote:
- maximum PR and user growth
How does less PR (only one event per year instead of two) lead to "maximum PR"?
Two releases a year ends up barely being an "event", so it's hard to drum up new enthusiasm. I think that adds up to less interest total than we'd get for an annual release. I don't have data for it, but as someone working to do the drumming I'm inclined to give some weight to my own intuition.
I am going to agree that 2 major releases a year won't be an event, but that is mainly because distros aren't really interesting to anyone anymore. Computer operating systems are the indoor plumbing of the late 20th and 21st century. We were really exciting in the 1880's when we first came out and everyone had to go see that someone had put a toilet in their house (and it didn't explode). You might even upgrade your toilet every year to the latest model as they were always fixing and adding some new feature. (Also because you were extremely wealthy and having the newest model was expected) But by the 1910's it was pretty much a done deal. You can move around the parts some amount but people knew what their kind of toilet was and wouldn't want one that looked or was different. No one updated it yearly just because the 1917 crapper had a pivot handle and last year was just a chain. Instead you liked your chain and you would keep it even if no one made them anymore.
We are going to have to come to terms with the fact that our days of exciting the masses to switch are well past us. It doesn't mean we can't and shouldn't work on marketing ourselves... just that we should be aware that doing multiple events a year isn't going to be a big win in growth.
[This post was made possible by a grant from the odd searches one does when your plumbing is broken.]
Stable plumbing definitely has value, builds loyalty and grows the market. There are distros that specialize in stable, but they're really boring development-wise. The trick is building something bleeding edge while also stable enough to avoid hemorrhage, and I think that's actually being done in Fedora. But we are also taking fewer risks. If Atomic Host helps us be more aggressive by actually expecting some people will have to do rollbacks, and those rollbacks are essentially bulletproof, that's quite a sweet spot.
On Thu, Dec 8, 2016 at 1:20 PM, drago01 drago01@gmail.com wrote:
Well, I am trying to understand what you are trying to do before thinking of solutions. I think the 6-month cycle has worked pretty well so far, so I'd rather only change it for good reasons.
I think that ignores that we're pretty much always slipping, and they aren't in fact 6 month cycles. There's always pressure to get the fall release done on time to avoid hitting the end-of-year holiday season, and we have actually busted that also, which has a few times really imploded the following release schedule. And then the next problem is that QA and releng at least do not have enough of their own development time.
On Thu, Dec 08, 2016 at 01:33:41PM -0700, Chris Murphy wrote:
I think that ignores that we're pretty much always slipping, and they aren't in fact 6 month cycles. There's always pressure to get the fall release done on time to avoid hitting the end-of-year holiday season, and we have actually busted that also, which has a few times really imploded the following release schedule. And then the next problem is that QA and releng at least do not have enough of their own development time.
Thanks Chris — yes. I'm not at all saying that the suggestions I put out are the best solutions, but there are definitely some things we can improve _somehow_.
On Thu, 2016-12-08 at 13:33 -0700, Chris Murphy wrote:
I think that ignores that we're pretty much always slipping, and they aren't in fact 6 month cycles.
Well, since 22 we've got a lot closer to being back to May/November. We've had:
22: May
23: November
24: June
25: November
so we've only missed once. 21 was early December.
On Thu, Dec 8, 2016 at 1:38 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 13:33 -0700, Chris Murphy wrote:
I think that ignores that we're pretty much always slipping, and they aren't in fact 6 month cycles.
Well, since 22 we've got a lot closer to being back to May/November. We've had:
22: May
23: November
24: June
25: November
so we've only missed once. 21 was early December.
There is a variant of "get-home-itis" known as "U.S.S. Ship It" that happens much more at fall release time. That it ships in November doesn't tell the whole sausage-making story.
On Thu, Dec 8, 2016 at 8:38 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 13:33 -0700, Chris Murphy wrote:
I think that ignores that we're pretty much always slipping, and they aren't in fact 6 month cycles.
Well, since 22 we've got a lot closer to being back to May/November. We've had:
22: May 23: November 24: June 25: November
so we've only missed once. 21 was early December.
And that was the 1-year cycle, which slipped a LOT*, so on that evidence it might be damning for the new proposal.
* Some of this was due to the rel-eng process at the time, where we didn't compose everything nightly, so we often didn't know what was broken until we attempted a test compose at Alpha. This was identified and has been fixed with the new "pungi 4" process where we do a full compose every night.
On Thu, Dec 8, 2016 at 6:59 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Thu, Dec 08, 2016 at 07:41:13PM +0100, drago01 wrote:
Which problem are you trying to solve with those proposals?
From my *other* other mail:
- predictable calendar dates, to help with long-term planning
- not being on a hamster wheel which routinely bursts into flame
Some of the above proposal looks to be creating more hamster wheels, not fewer; by this I mean branching off the branch. What do we gain by this other than more branches to deal with, which generally means more work/maintenance? Maybe we'd be better off using tags to document rather than branching.
- maintaining the high level of QA we have for releases (or, you know, even increasing it)
Again, more hamster wheels if not done properly (automated CI as opposed to human), but I've not seen any concrete proposal for implementation; again, just hand-wavy stuff.
- doesn't increase work for packagers
How? With more branches it seems like it would... would we still keep N+1 releases, i.e. have a release around for 2 years, not 1?
- including time for QA and Rel-Eng to a) breathe and b) invest in infrastructure
Again, with bundled updates every 6 months you're still releasing, potentially with major desktop/virt rebases (and no doubt Docker too). I don't see how this is actually close to your reality, so I'd like to see more solid detail on this rather than hand-wavy bullet points.
- satisfying upstream projects which depend on us as an early delivery mechanism to users (GNOME, GCC, glibc, have spoken up before, but not limited to just those)
- maximum PR and user growth
I think that needs to be two separate points as I don't see them directly linked.
and just to expand a little bit: although we have a nominal six-month cycle, the natural tendency seems to be to expand to eight- or nine-month cycles. That's not necessarily terrible, except a) it's not well-aligned with upstreams and b) it makes longer-term planning difficult because release times are unpredictable year-to-year.
The alternative we just tried was: if one cycle goes over six months, still target the next one as if it _hadn't_ - that is, a shorter "make up" cycle. In this case, we came out with a great release (again, awesome work everyone), but we didn't have much breathing room (and ended up slipping into the holidays again, with real risk of running into Christmas/end-of-year). And we certainly didn't have, in that, time for the teams to work on infrastructure.
So, I'm trying to come up with different ways to do it which still have the properties above.
-- Matthew Miller mattdm@fedoraproject.org Fedora Project Leader
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
On Thu, Dec 8, 2016 at 1:13 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
Yeah, I'm kinda thinking the same. Exactly what is the time frame for atomic host? What work can be done to make that less of a hand-wavy future? If that's where we really want to be, then eat that frog rather than coming up with more frogs.
On Thu, Dec 8, 2016 at 1:18 PM, Chris Murphy lists@colorremedies.com wrote:
On Thu, Dec 8, 2016 at 1:13 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
Yeah, I'm kinda thinking the same. Exactly what is the time frame for atomic host? What work can be done to make that less of a hand-wavy future? If that's where we really want to be, then eat that frog rather than coming up with more frogs.
Conversely...
Can the kernel rebase act as a template for rebasing other things, without the monolithic effort of a full release? How much time and effort is put into creating, testing, qualifying and releasing images? That can't be zero, but maybe it's not as big as it seems. The other pile of work is stabilizing Rawhide from the time of branch to release, and in both Option 1 and 2 that's being dropped. So how much time and effort goes into that work? I sorta like the idea of branching off stable to make a dot release, but that means Rawhide ends up simmering much, much longer than it usually does.
On Thu, 2016-12-08 at 13:29 -0700, Chris Murphy wrote:
How much time and effort is put into creating, testing, qualifying and releasing images?
On this note...
https://openqa.stg.fedoraproject.org/tests/overview?distri=fedora&versio...
On Thu, Dec 8, 2016 at 1:34 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 13:29 -0700, Chris Murphy wrote:
How much time and effort is put into creating, testing, qualifying and releasing images?
On this note...
https://openqa.stg.fedoraproject.org/tests/overview?distri=fedora&versio...
?
OK, I mean the entirety of the process, including filling out test matrices, baremetal testing by the bags of mostly water, every single blocker review. All of it. And then what parts of those would still happen for a hypothetical dot release? Probably still blocker review, but it'd probably be a whole lot less because there'd be no installation images to test.
On Thu, 2016-12-08 at 13:37 -0700, Chris Murphy wrote:
On Thu, Dec 8, 2016 at 1:34 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 13:29 -0700, Chris Murphy wrote:
How much time and effort is put into creating, testing, qualifying and releasing images?
On this note...
https://openqa.stg.fedoraproject.org/tests/overview?distri=fedora&versio...
?
OK, I mean the entirety of the process, including filling out test matrices, baremetal testing by the bags of mostly water, every single blocker review. All of it. And then what parts of those would still happen for a hypothetical dot release? Probably still blocker review, but it'd probably be a whole lot less because there'd be no installation images to test.
I just thought I'd drop it in there as it was rather on the same topic, and I set that up yesterday...that's openQA running on the respun live images that southern_gentlem builds periodically and which live in https://dl.fedoraproject.org/pub/alt/live-respins/ .
On Thu, Dec 8, 2016 at 8:13 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
I couldn't have summarised it better myself!
On 8 December 2016 at 21:56, Peter Robinson pbrobinson@gmail.com wrote:
On Thu, Dec 8, 2016 at 8:13 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
I couldn't have summarised it better myself!
My take-away from this and some other threads is that Matthew shouldn't post ideas to the list when he is at a conference where he
* isn't going to be able to respond except during breaks from floor time.
* doesn't have time to fully outline the idea enough to let people know what he is thinking.
or people should just wait until he is able to do those things versus killing the idea by a thousand cuts before it has any chance of being filled out. :)
On 9 Dec 2016 14:02, "Stephen John Smoogen" smooge@gmail.com wrote:
On 8 December 2016 at 21:56, Peter Robinson pbrobinson@gmail.com wrote:
On Thu, Dec 8, 2016 at 8:13 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
I couldn't have summarised it better myself!
My take-away from this and some other threads is that Matthew shouldn't post ideas to the list when he is at a conference where he
* isn't going to be able to respond except during breaks from floor time.
* doesn't have time to fully outline the idea enough to let people know what he is thinking.
or people should just wait until he is able to do those things versus killing the idea by a thousand cuts before it has any chance of being filled out. :)
Well, I'm on PTO and travelled to get my laptop to reply to this so I could ensure my voice is heard too, so it works both ways!
On Thu, Dec 08, 2016 at 12:13:59PM -0800, Adam Williamson wrote:
Frankly, this all seems like a lot of churn and mess and process change for no very obvious benefit. I'm a hell of a lot more interested in looking at smaller and more frequent 'release' events than larger less frequent ones.
In a completely theoretical universe (spherical cow, anyone?), we'd have something that we'd feel *could* be a perfect release every month, week, or even nightly. Then, we'd decide whether or not to *make* it a release based entirely on other factors.
Maybe I'm not understanding very well, but none of this makes me terribly excited.
So, *did* you feel that the F25 cycle felt compressed? If we're close enough to the theoretical-world above that we feel like we can do, say, four month cycles to stay on track without experiencing (particular) pain, maybe that's okay.
On Fri, 2016-12-09 at 11:03 -0500, Matthew Miller wrote:
So, *did* you feel that the F25 cycle felt compressed? If we're close enough to the theoretical-world above that we feel like we can do, say, four month cycles to stay on track without experiencing (particular) pain, maybe that's okay.
This seems like an impossible question to answer. Our release cycles are entirely arbitrary; they're precisely what we say they are. So I'm not sure how to say whether one "feels compressed", or understand how "four month cycles" would make us "stay on track". *What* track would we be staying on?
When I mentioned shorter cycles, I wasn't suggesting we do all the same stuff we do now, only in a smaller space of time. That would be awful. I was honestly thinking more about far more automated and less significant 'release events'. But really, my larger point is that what you're proposing sounded like a large amount of work for (particularly) release engineering, but came with no clear justification beyond "I have an unquantifiable feeling that we can get better press coverage if we do one release a year", which is extremely thin. At a bare minimum, any significant release cycle change needs to come with a ground-up and coherent justification of why *that* is the best way, right now, for the Fedora project to produce little baby Fedoras.
It also seems bizarre to be having a 'release' conversation that doesn't really seem to tie in at all with what's going on with Modularity and Factory 2.0...since I thought those were the primary drivers of planned major change to how we deliver Fedora.
On Fri, Dec 09, 2016 at 08:50:06AM -0800, Adam Williamson wrote:
So, *did* you feel that the F25 cycle felt compressed? If we're close enough to the theoretical-world above that we feel like we can do, say, four month cycles to stay on track without experiencing (particular) pain, maybe that's okay.
This seems like an impossible question to answer. Our release cycles are entirely arbitrary; they're precisely what we say they are. So I'm not sure how to say whether one "feels compressed", or understand how "four month cycles" would make us "stay on track". *What* track would we be staying on?
Roughly Mother's Day / Halloween, and not unpredictably cycling around the calendar. Entirely arbitrary in general, in the sense that we make them up, is fine. Entirely arbitrary *each time*, where we don't know where they'll be in the future until after the current release is done, is bad for users, Fedora developers, upstream developers, downstreams, and basically every group I can think of.
you're proposing sounded like a large amount of work for (particularly) release engineering, but came with no clear justification beyond "I have an unquantifiable feeling that we can get better press coverage if we do one release a year", which is extremely thin. At a bare minimum, any significant release cycle change needs to come with a ground-up and coherent justification of why *that* is the best way, right now, for the Fedora project to produce little baby Fedoras.
I'm sorry — I'll blame some of this on what Smooge said, about emailing ideas from the conference floor. I didn't mean for the release adoption curve and PR cycles to be the justification — that's just what got me thinking about it right now.
I'm not sure what the best way is, right now.
It also seems bizarre to be having a 'release' conversation that doesn't really seem to tie in at all with what's going on with Modularity and Factory 2.0...since I thought those were the primary drivers of planned major change to how we deliver Fedora.
Somewhere back in the early part of one of these threads, that *was* in there — Generational Core on _three month_ cycles following new kernel releases, userspace modules updated on their own natural cycles, and big release events annually.
Langdon is sitting right next to me right now and I'm going to tag him in for more on Modularity.
On 12/09/2016 02:52 PM, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 08:50:06AM -0800, Adam Williamson wrote:
So, *did* you feel that the F25 cycle felt compressed? If we're close enough to the theoretical-world above that we feel like we can do, say, four month cycles to stay on track without experiencing (particular) pain, maybe that's okay.
This seems like an impossible question to answer. Our release cycles are entirely arbitrary; they're precisely what we say they are. So I'm not sure how to say whether one "feels compressed", or understand how "four month cycles" would make us "stay on track". *What* track would we be staying on?
Roughly Mother's Day / Halloween, and not unpredictably cycling around the calendar. Entirely arbitrary in general, in the sense that we make them up, is fine. Entirely arbitrary *each time*, where we don't know where they'll be in the future until after the current release is done, is bad for users, Fedora developers, upstream developers, downstreams, and basically every group I can think of.
you're proposing sounded like a large amount of work for (particularly) release engineering, but came with no clear justification beyond "I have an unquantifiable feeling that we can get better press coverage if we do one release a year", which is extremely thin. At a bare minimum, any significant release cycle change needs to come with a ground-up and coherent justification of why *that* is the best way, right now, for the Fedora project to produce little baby Fedoras.
I'm sorry — I'll blame some of this on what Smooge said, about emailing ideas from the conference floor. I didn't mean for the release adoption curve and PR cycles to be the justification — that's just what got me thinking about it right now.
I'm not sure what the best way is, right now.
It also seems bizarre to be having a 'release' conversation that doesn't really seem to tie in at all with what's going on with Modularity and Factory 2.0...since I thought those were the primary drivers of planned major change to how we deliver Fedora.
Somewhere back in the early part of one of these threads, that *was* in there — Generational Core on _three month_ cycles following new kernel releases, userspace modules updated on their own natural cycles, and big release events annually.
Langdon is sitting right next to me right now and I'm going to tag him in for more on Modularity.
So, what I hope for with gen-core/modularity is that the decision to "release" becomes entirely unrelated to engineering. In other words, at any given time there is a fully working, fully tested, up-to-date gen-core and all the applications (or modules) that sit on it. Those applications will also, likely, be able to run on multiple gen-cores. As a result, the processes to produce working artifacts that users can install will always be running. Hopefully, with enough CI (read: automated testing), there will be little to no human involvement in ensuring that everything is in "good shape."
If we can get to that point, then we can make "release" and "lifecycle" decisions based purely on "not code" reasons. In other words, we can decide how many versions of things are currently available based on the effort required to maintain them. We can also decide when a "release" makes sense based on marketing or other considerations and just "pull the trigger" on that day. Or we could allow users to decide for themselves by opting in to a "rolling release" style of deployment.
Right now, we decide on the server side when a "release" happens, for lots of reasons. In the modularized world, there is no reason that we can't let users decide when they get new versions of things, and they may even want different rules for different software. Or, as that is likely to be pretty confusing (particularly at first), we could have the Editions decide their policies/releases and have the client tools "enforce" them.
Langdon
On Fri, Dec 09, 2016 at 03:19:58PM -0500, langdon wrote:
Langdon is sitting right next to me right now and I'm going to tag him in for more on Modularity.
[...]
them. We can also decide when a "release" makes sense based on marketing or other considerations and just "pull the trigger" on that day. Or we could allow users to decide for themselves by opting in to a "rolling release" style of deployment.
This all sounds suspiciously like what I said would be the ideal. How far from that, in the actual world, are we going to be with Modularity when we get to, say, October 2017?
On Friday, December 9, 2016, Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Dec 09, 2016 at 03:19:58PM -0500, langdon wrote:
Langdon is sitting right next to me right now and I'm going to tag him in for more on Modularity.
[...]
them. We can also decide when a "release" makes sense based on marketing or other considerations and just "pull the trigger" on that day. Or we could allow users to decide for themselves by opting in to a "rolling release" style of deployment.
This all sounds suspiciously like what I said would be the ideal. How far from that, in the actual world, are we going to be with Modularity when we get to, say, October 2017?
That does not work... Without install media it might not be possible to install Fedora on hardware unsupported by the last release's kernel for a long time. So not really "ideal".
-- Matthew Miller mattdm@fedoraproject.org Fedora Project Leader
On 12/09/2016 03:26 PM, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 03:19:58PM -0500, langdon wrote:
Langdon is sitting right next to me right now and I'm going to tag him in for more on Modularity.
[...]
them. We can also decide when a "release" makes sense based on marketing or other considerations and just "pull the trigger" on that day. Or we could allow users to decide for themselves by opting in to a "rolling release" style of deployment.
This all sounds suspiciously like what I said would be the ideal. How far from that, in the actual world, are we going to be with Modularity when we get to, say, October 2017?
Well, I think it depends. We need a lot of community help to convert things to modules, integrate tests, etc. We also need factory-2 to be fully online (or at least mostly). We also are not completely sure how small we can make the "gen-core". The folks working on gen-core are gonna make it super small, but then modularity is gonna come and add a bunch of stuff that we need to make available to applications until we have the opportunity to repackage (e.g. if you have a lib and it is packaged with a command line tool, even if the lib can be parallel installed, the command line tool can't, so we have to re-package). Basically, the gen-core will be ~equivalent to a distribution that we are shrinking by pulling content into the applications.
So, making the "decision to release" based on marketing will likely be possible. The impact of that decision on users being "no biggie" will probably take longer. I know that is a little cryptic, but the modularity project has always been trying to lay the groundwork in a non-disruptive way. So you will be able to build modules and have a disconnect between gen-core and app lifecycles, but the gen-core will probably be big at first, not all apps will be modularized, many leaf apps will sit in an "everything else" module, etc. Over time (and it is of semi-unpredictable length), we will get closer and closer to the ideal.
Langdon
Matthew Miller wrote:
Option 1: Big batched update
Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
At the beginning of October, stop pushing non-security updates from updates-testing to updates
Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
Mid-October, freeze exceptions for getting into updates-testing even.
Test all of that together in Some Handwavy Way for serious problems and regressions.
Once all good, push from updates-testing to updates at end of October or beginning of November.
This is highly impractical. We really want bug fixes to go out! (IMHO, this is a showstopper in that proposal.) The branching option is more realistic.
However, I also do not see why we cannot just do such big updates through the regular update process rather than in a big .1 drop. The KDE SIG has experience with pushing big grouped updates that look a lot like a .1 release for Plasma users. They go through the regular update process just fine. Grouping them together with updates to GNOME, LibreOffice etc. in one batch is not necessary and would only add unnecessary delays.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
Kevin Kofler
On Thu, Dec 8, 2016 at 9:07 PM, Kevin Kofler kevin.kofler@chello.at wrote:
However, I also do not see why we cannot just do such big updates through the regular update process rather than in a big .1 drop. The KDE SIG has experience with pushing big grouped updates that look a lot like a .1 release for Plasma users. They go through the regular update process just fine. Grouping them together with updates to GNOME, LibreOffice etc. in one batch is not necessary and would only add unnecessary delays.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
That is an excellent point. KDE for some time has been pushing out large updates using the regular update process. What is the issue with just doing this? It certainly seems much more straightforward and easier than ~.x updates. Fedora version releases could then be reserved for structural/architectural concerns rather than software updates and bug fixes.
Fedora stays fast-moving and Fedora X releases come less often - seems like a win/win.
On Fri, 09 Dec 2016 06:07:02 +0100, Kevin Kofler wrote:
However, I also do not see why we cannot just do such big updates through the regular update process rather than in a big .1 drop. The KDE SIG has experience with pushing big grouped updates that look a lot like a .1 release for Plasma users. They go through the regular update process just fine. Grouping them together with updates to GNOME, LibreOffice etc. in one batch is not necessary and would only add unnecessary delays.
And there it is again, the rush to get out updates. Quickly! Quickly! What has been released before is not bug-free, and the updates are not bug-free either, and even if no user has reported a bug, the flow of updates will ensure that the user will be affected by a new bug eventually.
If as a maintainer you don't release version upgrades quickly, some users complain everywhere they are permitted to post. Except for bugzilla. And if you make upgrades available quickly, the users will complain if they think they are affected by bugs.
Upstream release cycles are not aligned with Fedora's dist release schedule anyway.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
The latter turns all users of the stable "updates" repo into testers once those updates are unleashed so quickly. And those brave ones, albeit only a few, who would be willing to evaluate "Test Updates" for some time don't get any real chance to do so, because updates are rushed out.
On 12/09/2016 01:51 PM, Michael Schwendt wrote:
On Fri, 09 Dec 2016 06:07:02 +0100, Kevin Kofler wrote:
If as a maintainer you don't release version upgrades quickly, some users complain everywhere they are permitted to post. Except for bugzilla. And if you make upgrades available quickly, the users will complain if they think they are affected by bugs.
And? What's the problem? It's part of a packager's job to balance the tradeoffs and find a viable compromise.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
Agreed. Swapping one large bowl over users doesn't help anybody.
Frankly, I feel some people do not comprehend the fundamental differences between RHEL, CentOS and Fedora, and between a community project and an enterprise project.
Ralf
On Fri, 9 Dec 2016 19:41:28 +0100, Ralf Corsepius wrote:
And? What's the problem? It's part of a packager's job to balance the tradeoffs and find a viable compromise.
You don't need to agree. In the reply you've truncated, I've only pointed out how I feel about the updates flood. It's my number one reason why I've pretty much given up spending karma points in Bodhi, as all too often an update has been pushed before I could vote -1. Rushing out updates defeats the purpose of Test Updates, IMO. And nothing is done to make the updates-testing repo more sexy.
"Viable" is such a vague word.
Frankly, I feel some people do not comprehend the fundamental differences between RHEL, CentOS and Fedora, and between a community project and an enterprise project.
You mean, like EPEL, which is flooded with updates too? ;)
On Fri, 2016-12-09 at 20:46 +0100, Michael Schwendt wrote:
On Fri, 9 Dec 2016 19:41:28 +0100, Ralf Corsepius wrote:
And? What's the problem? It's part of a packager's job to balance the tradeoffs and find a viable compromise.
You don't need to agree. In the reply you've truncated, I've only pointed out how I feel about the updates flood. It's my number one reason why I've pretty much given up spending karma points in Bodhi, as all too often an update has been pushed before I could vote -1. Rushing out updates defeats the purpose of Test Updates, IMO. And nothing is done to make the updates-testing repo more sexy.
There are much simpler ways to deal with this, if it's really a problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
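(A purely illustrative sketch in Python of the gating logic described above. The parameter names — karma, karma_threshold, entered_testing, min_days_in_testing — are invented for this example and are not taken from Bodhi's actual code; it only assumes an update records when it actually reached updates-testing.)

    from datetime import datetime, timedelta

    def eligible_for_autopush(karma, karma_threshold, entered_testing,
                              min_days_in_testing=2, now=None):
        """Hypothetical gate for automatic pushes: require both the karma
        threshold and a minimum time actually spent in updates-testing
        (not merely queued for a push)."""
        now = now or datetime.utcnow()
        if karma < karma_threshold:
            return False
        if entered_testing is None:
            # Still queued for a push; not in updates-testing yet.
            return False
        return now - entered_testing >= timedelta(days=min_days_in_testing)

    # Example: enough karma, but only one day in testing -> no autopush yet.
    print(eligible_for_autopush(3, 3, datetime.utcnow() - timedelta(days=1)))

A manual push would bypass such a gate entirely, in keeping with the proposal above.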
On Fri, Dec 09, 2016 at 11:55:26AM -0800, Adam Williamson wrote:
problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
On Fri, 2016-12-09 at 15:05 -0500, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 11:55:26AM -0800, Adam Williamson wrote:
problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
I wouldn't hate it. On a visceral level I've never bought the 'batched updates' idea at all, but if it only affects autopushes I don't mind tweaking around. It doesn't involve too much work to change, it's easy to change back, and manual pushes are still available.
On Fri, Dec 9, 2016 at 1:12 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Fri, 2016-12-09 at 15:05 -0500, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 11:55:26AM -0800, Adam Williamson wrote:
problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
I wouldn't hate it. On a visceral level I've never bought the 'batched updates' idea at all, but if it only affects autopushes I don't mind tweaking around. It doesn't involve too much work to change, it's easy to change back, and manual pushes are still available.
For whatever reason I've gotten three update notifications in Gnome Software this week alone, and I've done the restart and install each time. This is Fedora 25. And then on 3-4 separate occasions I've needed to install some things from the command line, and each time dnf does a full fedora and updates repo metadata download of around 40MB, which is what I thought dnf-makecache.timer was supposed to do in the background so I'd never see and have to wait for it just to get a 53K program installed.
So batching would be vastly preferred to what I've been experiencing this week, even though I agree that the Windows update-Tuesdays model has its own shortcomings, not least of which is the connotation.
On Fri, 2016-12-09 at 13:21 -0700, Chris Murphy wrote:
On Fri, Dec 9, 2016 at 1:12 PM, Adam Williamson adamwill@fedoraproject.org wrote:
On Fri, 2016-12-09 at 15:05 -0500, Matthew Miller wrote:
On Fri, Dec 09, 2016 at 11:55:26AM -0800, Adam Williamson wrote:
problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
I wouldn't hate it. On a visceral level I've never bought the 'batched updates' idea at all, but if it only affects autopushes I don't mind tweaking around. It doesn't involve too much work to change, it's easy to change back, and manual pushes are still available.
For whatever reason I've gotten three update notifications in Gnome Software this week alone, and I've done the restart and install each time. This is Fedora 25. And then on 3-4 separate occasions I've needed to install some things from the command line, and each time dnf does a full fedora and updates repo metadata download of around 40MB, which is what I thought dnf-makecache.timer was supposed to do in the background so I'd never see and have to wait for it just to get a 53K program installed.
GNOME Software doesn't use dnf's caches.
Software will check for new updates at most every 48 hours, so if there happen to *be* new updates every 48 hours and your system is running the whole time, yeah, you can get 3-and-a-bit update notifications per week.
On Fri, 2016-12-09 at 12:25 -0800, Adam Williamson wrote:
Software will check for new updates at most every 48 hours, so if there happen to *be* new updates every 48 hours and your system is running the whole time, yeah, you can get 3-and-a-bit update notifications per week.
Not *quite* -- Software does check for updates daily (not 48 hours, unless this changed...?) but it notifies the user at most once per week, unless there is a security update, in which case it notifies immediately.
The problem is that we have very minor security updates all the time, and each one causes Software to present all updates to you. That really shouldn't happen IMO; only "important" security updates should trigger this, for some value of "important".
Michael
On Fri, 2016-12-09 at 14:31 -0600, Michael Catanzaro wrote:
On Fri, 2016-12-09 at 12:25 -0800, Adam Williamson wrote:
Software will check for new updates at most every 48 hours, so if there happen to *be* new updates every 48 hours and your system is running the whole time, yeah, you can get 3-and-a-bit update notifications per week.
Not *quite* -- Software does check for updates daily (not 48 hours, unless this changed...?) but it notifies the user at most once per week, unless there is a security update, in which case it notifies immediately.
Oh yeah, sorry, I was forgetting the logic. You're right, it's 24 hours. I just punted the clock 48 hours to be safe (in the test where I have to work around this behaviour...)
problem. The fact that updates default to auto-push after +3 karma is entirely plucked out of the air, it's just something someone made up one day. We could *certainly* change that. I'd be quite interested in a tweak where there's a minimum-time-in-testing value for autopush too, which would default to, say, 2 days. The way that would work is automatic push would never happen until the update had actually been in updates-testing (not queued for push) for that long. *Manual* push could still be done during that time, and the update submitter could make the minimum-time-in-testing value larger or smaller (as they can make the karma threshold for autopush greater or smaller). 2 days would just be the default (and is similarly a number I've just made up; we could make it something else).
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
I wouldn't hate it. On a visceral level I've never bought the 'batched updates' idea at all, but if it only affects autopushes I don't mind tweaking around. It doesn't involve too much work to change, it's easy to change back, and manual pushes are still available.
For whatever reason I've gotten three update notifications in Gnome Software this week alone, and I've done the restart and install each time. This is Fedora 25. And then on 3-4 separate occasions I've needed to install some things from the command line, and each time dnf does a full fedora and updates repo metadata download of around 40MB, which is what I thought dnf-makecache.timer was supposed to do in the background so I'd never see and have to wait for it just to get a 53K program installed.
I actively disable the dnf makecache functionality and just run "dnf --refresh" each time I need to use it, as I find the makecache functionality is just broken and pulls down vast amounts of data repeatedly.
dnf also has an issue where it pulls down all the repo data each time. The full file list is generally not needed for plain updates/package installs, and yum had functionality where it would only pull that down if and when it was explicitly needed. From memory that is only for things like repoquery against file lists not in the usual bin directories (the bin directory file lists are included in the standard repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Peter
On Fri, Dec 09, 2016 at 10:22:44PM +0000, Peter Robinson wrote:
repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Isn't the problem that the SAT solver used by DNF for dependency resolution doesn't know in advance if it will be needed or not, and there isn't an easy way of adding it in midway?
On Fri, Dec 9, 2016 at 10:58 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Dec 09, 2016 at 10:22:44PM +0000, Peter Robinson wrote:
repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Isn't the problem that the SAT solver used by DNF for dependency resolution doesn't know in advance if it will be needed or not, and there isn't an easy way of adding it in midway?
No idea of the exact details, but by default yum didn't use all that data, so it didn't download it (it's basically a full file list of everything in all rpms) until it actually needed it, which was never for standard use cases. dnf downloads it all the time; this IMO is a regression which can cost not-insignificant amounts of bandwidth (e.g. my parents have a 5Gb allocation a month; 40Mb a day * 30 days a month is 1.2Gb, or over 20% of their allocation).
Peter
On Fri, Dec 9, 2016 at 5:58 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Dec 09, 2016 at 10:22:44PM +0000, Peter Robinson wrote:
repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Isn't the problem that the SAT solver used by DNF for dependency resolution doesn't know in advance if it will be needed or not, and there isn't an easy way of adding it in midway?
This is correct. While it is possible to omit the data, the solver will then generate unsolvable solutions in some cases, as well as not be able to install using file paths, as there's no easy path to identify that you're just missing data vs not having a solution at all.
That being said, there was some work on a prototype to make it so our metadata downloads didn't suck so bad[1][2]. However, I've not seen any progress on developing this and integrating it into createrepo_c and libdnf for PackageKit and DNF. This would be an excellent way to make our larger metadata much more tolerable for all kinds of environments. Maybe it might become important and be revived soon?
[1]: https://github.com/rh-lab-q/deltametadata-prototype [2]: https://github.com/rh-lab-q/deltametadata-prototype/wiki/Deltametadata-of-re...
On Sun, Dec 11, 2016 at 3:10 PM, Neal Gompa ngompa13@gmail.com wrote:
On Fri, Dec 9, 2016 at 5:58 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Dec 09, 2016 at 10:22:44PM +0000, Peter Robinson wrote:
repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Isn't the problem that the SAT solver used by DNF for dependency resolution doesn't know in advance if it will be needed or not, and there isn't an easy way of adding it in midway?
This is correct. While it is possible to omit the data, the solver will then generate unsolvable solutions in some cases, as well as not be able to install using file paths, as there's no easy path to identify that you're just missing data vs not having a solution at all.
That being said, there was some work on a prototype to make it so our metadata downloads didn't suck so bad[1][2]. However, I've not seen any progress on developing this and integrating it into createrepo_c and libdnf for PackageKit and DNF. This would be an excellent way to make our larger metadata much more tolerable for all kinds of environments. Maybe it might become important and be revived soon?
The other issue is that dnf and PackageKit each download their own copies of this metadata. Delete /var/cache/dnf and /var/cache/PackageKit, reboot and within a couple hours du will show:
108M dnf 132M PackageKit
On Dec 11, 2016 9:00 PM, "Chris Murphy" lists@colorremedies.com wrote:
On Sun, Dec 11, 2016 at 3:10 PM, Neal Gompa ngompa13@gmail.com wrote:
On Fri, Dec 9, 2016 at 5:58 PM, Matthew Miller mattdm@fedoraproject.org
wrote:
On Fri, Dec 09, 2016 at 10:22:44PM +0000, Peter Robinson wrote:
repo data). The full list makes up most of the 40Mb downloaded. The dnf developers seem to think that everyone has lots of data/bandwidth and don't see the problem with it.
Isn't the problem that the SAT solver used by DNF for dependency resolution doesn't know in advance if it will be needed or not, and there isn't an easy way of adding it in midway?
This is correct. While it is possible to omit the data, the solver will then generate unsolvable solutions in some cases, as well as not be able to install using file paths, as there's no easy path to identify that you're just missing data vs not having a solution at all.
That being said, there was some work on a prototype to make it so our metadata downloads didn't suck so bad[1][2]. However, I've not seen any progress on developing this and integrating it into createrepo_c and libdnf for PackageKit and DNF. This would be an excellent way to make our larger metadata much more tolerable for all kinds of environments. Maybe it might become important and be revived soon?
The other issue is that dnf and PackageKit each download their own copies of this metadata. Delete /var/cache/dnf and /var/cache/PackageKit, reboot and within a couple hours du will show:
108M dnf 132M PackageKit
My hope is that this issue can be addressed before Fedora 26 releases. The necessary pieces of the puzzle to unify at least the cache are in place in Rawhide now. One of the DNF developers can further elaborate on this, though.
On Fri, Dec 09, 2016 at 12:12:56PM -0800, Adam Williamson wrote:
What if we combined this time threshold with having auto-pushes happen only on Monday (or whatever)?
I wouldn't hate it. On a visceral level I've never bought the 'batched updates' idea at all, but if it only affects autopushes I don't mind tweaking around. It doesn't involve too much work to change, it's easy to change back, and manual pushes are still available.
I submitted these ideas as Bodhi RFEs:
https://github.com/fedora-infra/bodhi/issues/1156 https://github.com/fedora-infra/bodhi/issues/1157
To necromance this old thread, I wanted to give a heads up that we're about to get a cool feature in Bodhi in response to this thread:
https://github.com/fedora-infra/bodhi/pull/1678
With that pull request, there will be a new request state called "batched". When non-priority[0] updates reach the karma threshold, they will go into request:batched instead of request:stable (they will remain status:testing). Once a week, a cron script will look for all updates in the batched state and will switch them all to request:stable. Then they will continue on as they do today. This should help us to reduce the daily churn of Fedora updates for end-users to only be updates they truly need. It may also make the masher be a little faster on 6 days of the week (and slower on one ☺).
There will still be a little more polish work to do after that pull request is merged. For example, for non-autokarma updates we want to change the "push to stable" button to be "push to batched".
[0] The code considers two things to determine whether the update is priority or not: security updates are considered high priority, and urgent updates are considered high priority. All other updates are considered "normal" and will go through the new batched workflow.
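(As a rough Python sketch of the weekly promotion step described above: the Update class and field values here are invented for illustration; the real Bodhi models are more involved, so treat this only as a picture of the batched-to-stable flip, not as the actual cron script.)

    from dataclasses import dataclass

    @dataclass
    class Update:
        title: str
        request: str  # e.g. "testing", "batched", "stable"

    def promote_batched(updates):
        """Hypothetical weekly cron step: flip every update currently in
        the 'batched' request state over to 'stable'; everything else is
        left for the normal push machinery."""
        for u in updates:
            if u.request == "batched":
                u.request = "stable"
        return updates

    # Example: the two batched updates get promoted; the testing one is untouched.
    queue = [Update("foo-1.0-1", "batched"),
             Update("bar-2.3-1", "testing"),
             Update("baz-0.9-2", "batched")]
    promote_batched(queue)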
On Mon, Jul 31, 2017 at 09:34:24PM -0000, Randy Barlow wrote:
request:stable. Then they will continue on as they do today. This should help us to reduce the daily churn of Fedora updates for end-users to only be updates they truly need. It may also make the masher be a little faster on 6 days of the week (and slower on one ☺).
This is awesome, Randy! Thanks!
On Mon, Jul 31, 2017 at 5:34 PM, Randy Barlow bowlofeggs@fedoraproject.org wrote:
To necromance this old thread, I wanted to give a heads up that we're about to get a cool feature in Bodhi in response to this thread:
https://github.com/fedora-infra/bodhi/pull/1678
With that pull request, there will be a new request state called "batched". When non-priority[0] updates reach the karma threshold, they will go into request:batched instead of request:stable (they will remain status:testing). Once a week, a cron script will look for all updates in the batched state and will switch them all to request:stable. Then they will continue on as they do today. This should help us to reduce the daily churn of Fedora updates for end-users to only be updates they truly need. It may also make the masher be a little faster on 6 days of the week (and slower on one ☺).
There will still be a little more polish work to do after that pull request is merged. For example, for non-autokarma updates we want to change the "push to stable" button to be "push to batched".
[0] The code considers two things to determine whether the update is priority or not: security updates are considered high priority, and urgent updates are considered high priority. All other updates are considered "normal" and will go through the new batched workflow.
I have two questions about this:
1. Are you saying that this feature will be *activated* once it's merged, or just that it will be available should Fedora decide to turn it on as a policy decision? I'm assuming it's the latter, as I don't think I've seen a change proposal or anything be formally filed about this, and I would have expected that for this kind of change, but it's not entirely clear to me from this email.
2. If we do implement this, could we consider not batching new package updates, in addition to security and "urgent" updates? New package updates wouldn't get downloaded onto users' systems upon running "dnf upgrade", so the update process would still *feel* batched from an end-user point of view. But we would simultaneously be able to deliver new software quickly to users, or at least as quickly as we do today. (I find that people rarely test new package updates, or at least rarely test them and give karma, which means that a newpackage request generally sits the full 7 or 14 days in Bodhi -- so I don't think we should add up to 7 days to that timetable.)
I guess if this were done, there might need to be a check put in place to stop someone from flagging their Bodhi update as "newpackage" when it's not, in fact, a new package in order to bypass the batching, but this seems like something that should be easy to do.
Thanks, Ben Rosser
On Mon, 2017-07-31 at 22:13 -0400, Ben Rosser wrote:
1. Are you saying that this feature will be *activated* once it's merged, or just that it will be available should Fedora decide to turn it on as a policy decision? I'm assuming it's the latter, as I don't think I've seen a change proposal or anything be formally filed about this, and I would have expected that for this kind of change, but it's not entirely clear to me from this email.
Hi Ben!
It would be activated whenever the Bodhi that has it is deployed. However, it won't be a forced policy - developers will still be free to click "push to stable" if they please. The autokarma feature will simply move updates to batched now. Once the UI work is completed, the plan is for the UI to offer a "push to batched" option for testing updates that meet the 7 day criteria, and a "push to stable" button for all batched updates. Thus, I didn't think it would be necessary to file for a change, but I'd be happy to do so if it is necessary.
2. If we do implement this, could we consider not batching new package updates, in addition to security and "urgent" updates? New package updates wouldn't get downloaded onto users' systems upon running "dnf upgrade", so the update process would still *feel* batched from an end-user point of view. But we would simultaneously be able to deliver new software quickly to users, or at least as quickly as we do today. (I find that people rarely test new package updates, or at least rarely test them and give karma, which means that a newpackage request generally sits the full 7 or 14 days in Bodhi -- so I don't think we should add up to 7 days to that timetable.)
That's a good suggestion that I hadn't thought about. Sure, I think that's a good idea - care to propose it on the pull request yourself since it was your idea? This is the line where an "or self.type is newpackage" would go:
https://github.com/fedora-infra/bodhi/pull/1678/files#diff-6406e7faaf2526305...
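(For illustration only — the attribute names and string values below are guesses for this sketch, not Bodhi's actual enum values — the combined exemption being discussed might read roughly like this:)

    def skips_batching(update_type, severity):
        """Hypothetical predicate: security, urgent, and brand-new-package
        updates would go straight to request:stable; everything else waits
        in request:batched for the weekly push."""
        return (update_type == "security"
                or severity == "urgent"
                or update_type == "newpackage")

    # A routine bugfix waits for the batch; a brand-new package does not.
    print(skips_batching("bugfix", "low"))      # False
    print(skips_batching("newpackage", "low"))  # True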
I guess if this were done there might need to be a check put in place to stop someone from flagging their bodhi update as "newpackage" when it's not, in fact, a new package to bypass the batching, but this seems like something that should be easy to do.
Since it's not a forced policy I don't think we need to worry about anyone trying to work around the system. Developers will be able to keep pushing to stable as they see fit. This just offers another path for those times where you have an update that is more on the minor side and you don't mind it waiting for the next batch.
On Mon, Jul 31, 2017 at 10:51 PM, Randy Barlow bowlofeggs@fedoraproject.org wrote:
It would be activated whenever the Bodhi that has it is deployed. However, it won't be a forced policy - developers will still be free to click "push to stable" if they please. The autokarma feature will simply move updates to batched now. Once the UI work is completed, the plan is for the UI to offer a "push to batched" option for testing updates that meet the 7 day criteria, and a "push to stable" button for all batched updates. Thus, I didn't think it would be necessary to file for a change, but I'd be happy to do so if it is necessary.
Oh! Somehow I misunderstood what the change actually was. This seems entirely reasonable. :)
That's a good suggestion that I hadn't thought about. Sure, I think that's a good idea - care to propose it on the pull request yourself since it was your idea? This is the line where an "or self.type is newpackage" would go:
https://github.com/fedora-infra/bodhi/pull/1678/files#diff-6406e7faaf2526305...
Certainly; I've left a comment on the PR suggesting this.
Ben Rosser
On Mon, 2017-07-31 at 22:51 -0400, Randy Barlow wrote:
On Mon, 2017-07-31 at 22:13 -0400, Ben Rosser wrote:
2. If we do implement this, could we consider not batching new package updates, in addition to security and "urgent" updates? New package updates wouldn't get downloaded onto users' systems upon running "dnf upgrade", so the update process would still *feel* batched from an end-user point of view. But we would simultaneously be able to deliver new software quickly to users, or at least as quickly as we do today. (I find that people rarely test new package updates, or at least rarely test them and give karma, which means that a newpackage request generally sits the full 7 or 14 days in Bodhi -- so I don't think we should add up to 7 days to that timetable.)
That's a good suggestion that I hadn't thought about. Sure, I think that's a good idea - care to propose it on the pull request yourself since it was your idea? This is the line where an "or self.type is newpackage" would go:
https://github.com/fedora-infra/bodhi/pull/1678/files#diff-6406e7faaf25263056c68009517cf66dR2376
If a new package needs an updated library from another package, then the update in Bodhi would contain both a new package and an update.
Should that still go directly to request:stable? Or does the (non-urgent) update make it go to request:batched?
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two weeks for the update to reach users once it enters batched, which I suspect may not have been intended. We are really looking at weekly updates with an additional one-week delay. Probably you were intending to implement weekly updates without that delay, which seems more desirable? If so, coordination with the Software developers will be needed.
Also, if I mark a security update as low priority, that means it really is low priority. There's no need for many security updates to skip batched. Many are e.g. minor DoS vulnerabilities that are unlikely to be exploited ever, let alone in the next two weeks. Of course remote code execution problems should probably skip batched, but those are unlikely to be marked as low priority. ;)
Michael
On Tue, Aug 01, 2017 at 08:26:04AM +0100, Michael Catanzaro wrote:
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two
Doesn't it check daily but only *alert* weekly? AFAIK there's no way to just ask our servers for security updates; the process is to ask for any updates, and then select only the security ones. (Right?)
Also, if I mark a security update as low priority, that means it really is low priority. There's no need for many security updates to skip batched. Many are e.g. minor DoS vulnerabilities that are unlikely to be exploited ever, let alone in the next two weeks. Of course remote code execution problems should probably skip batched, but those are unlikely to be marked as low priority. ;)
+1 to this.
On 08/01/2017 02:35 PM, Matthew Miller wrote:
On Tue, Aug 01, 2017 at 08:26:04AM +0100, Michael Catanzaro wrote:
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two
Doesn't it check daily but only *alert* weekly? AFAIK there's no way to just ask our servers for security updates; the process is to ask for any updates, and then select only the security ones. (Right?)
Yes, this is correct. Check daily; alert daily if there are security updates. Otherwise alert weekly.
On Tue, Aug 01, 2017 at 02:41:57PM +0100, Kalev Lember wrote:
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two
Doesn't it check daily but only *alert* weekly? AFAIK there's no way to just ask our servers for security updates; the process is to ask for any updates, and then select only the security ones. (Right?)
Yes, this is correct. Check daily; alert daily if there are security updates. Otherwise alert weekly.
And in the meantime, even without an alert, the updates are _available_ if you look in Software, right?
On 08/01/2017 03:58 PM, Matthew Miller wrote:
On Tue, Aug 01, 2017 at 02:41:57PM +0100, Kalev Lember wrote:
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two
Doesn't it check daily but only *alert* weekly? AFAIK there's no way to just ask our servers for security updates; the process is to ask for any updates, and then select only the security ones. (Right?)
Yes, this is correct. Check daily; alert daily if there are security updates. Otherwise alert weekly.
And in the meantime, even without an alert, the updates are _available_ if you look in Software, right?
Yep, exactly. gnome-software downloads updates daily and prepares them in the background and makes them available if you open up gnome-software and go to the updates page. They are also available in the gnome-shell shutdown dialog as soon as they are downloaded.
The user notifications however are done weekly if there are no security updates.
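(To make that cadence concrete, here is a tiny Python sketch of the notification policy as described in this thread — purely illustrative pseudologic, not gnome-software's actual implementation, which is written in C.)

    from datetime import datetime, timedelta

    def should_notify(has_security_updates, has_other_updates,
                      last_notified, now=None):
        """Sketch of the policy described above: updates are fetched and
        prepared daily; the user is notified immediately when security
        updates are present, otherwise at most once per week."""
        now = now or datetime.utcnow()
        if has_security_updates:
            return True
        if has_other_updates:
            if last_notified is None:
                return True
            return now - last_notified >= timedelta(weeks=1)
        return False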
On Tue, 2017-08-01 at 08:26 +0100, Michael Catanzaro wrote:
Keep in mind that GNOME Software already only checks for non-security updates weekly. So it will actually take as much as two weeks for the update to reach users once it enters batched, which I suspect may not have been intended. We are really looking at weekly updates with an additional one-week delay. Probably you were intending to implement weekly updates without that delay, which seems more desirable? If so, coordination with the Software developers will be needed.
Also, if I mark a security update as low priority, that means it really is low priority. There's no need for many security updates to skip batched. Many are e.g. minor DoS vulnerabilities that are unlikely to be exploited ever, let alone in the next two weeks. Of course remote code execution problems should probably skip batched, but those are unlikely to be marked as low priority. ;)
Michael
gnome-software really should follow what we ship; with what you describe, we have the potential of making some parts worse for users, because there is a non-null probability that users will lose the ability to install deltarpms. I would really like to see us have a single unified view on update management at a distro level and not having different tools implementing their own behaviours.
Dennis
On Tue, 2017-08-01 at 11:02 -0500, Dennis Gilmore wrote:
I would really like to see us have a single unified view on update management at a distro level and not having different tools implementing their own behaviours.
I agree with Dennis here - not all users of Fedora use the Gnome Software client.
On Tue, Aug 1, 2017 at 3:10 PM, Randy Barlow bowlofeggs@fedoraproject.org wrote:
On Tue, 2017-08-01 at 11:02 -0500, Dennis Gilmore wrote:
I would really like to see us have a single unified view on update management at a distro level and not having different tools implementing their own behaviours.
I agree with Dennis here - not all users of Fedora use the Gnome Software client.
There are several things missing to make the experience more consistent. I think some of these issues have been known[1]; it's just that there's not much interest in collecting them all and prioritizing fixing them. Which is a real shame.
[1]: https://ctrl.blog/entry/packagekit-dnf
On Tue, 2017-08-01 at 08:26 +0100, Michael Catanzaro wrote:
Also, if I mark a security update as low priority, that means it really is low priority. There's no need for many security updates to skip batched. Many are e.g. minor DoS vulnerabilities that are unlikely to be exploited ever, let alone in the next two weeks. Of course remote code execution problems should probably skip batched, but those are unlikely to be marked as low priority. ;)
I feel a bit on the fence about this, but I see that mattdm +1'd it. If you feel strongly about it, please comment on the pull request to this effect.
On Tue, Aug 01, 2017 at 03:09:22PM -0400, Randy Barlow wrote:
Also, if I mark a security update as low priority, that means it really is low priority. There's no need for many security updates to skip batched. Many are e.g. minor DoS vulnerabilities that are unlikely to be exploited ever, let alone in the next two weeks. Of course remote code execution problems should probably skip batched, but those are unlikely to be marked as low priority. ;)
I feel a bit on the fence about this, but I see that mattdm +1'd it. If you feel strongly about it, please comment on the pull request to this effect.
I think this is why we don't just automatically make security fixes all high priority but instead have a separate field. Many security updates fix problems which only happen in unlikely configurations, or have extremely minor consequences. (Exploits which get you the exact level of privilege you had in the first place, for example.)
Of course, this does require packagers to think a little more in classifying their updates. Maybe we could add a little bit of text explaining what our norms are for both security and bugfix severity. I'd look to the security team for wording. (Hmmm. And should enhancement and new packages _get_ a severity option? Maybe that should be locked to "unspecified"?)
On Tue, 2017-08-01 at 15:31 -0400, Matthew Miller wrote:
(Hmmm. And should enhancement and new packages _get_ a severity option? Maybe that should be locked to "unspecified"?)
Hahaha, "This newpackage update is urgently severe! Have some severe new features!"
On Tue, Aug 01, 2017 at 03:50:45PM -0400, Randy Barlow wrote:
(Hmmm. And should enhancement and new packages _get_ a severity option? Maybe that should be locked to "unspecified"?)
Hahaha, "This newpackage update is urgently severe! Have some severe new features!"
Yeah, exactly. Do you want a new RFE issue for this?
On Tue, 2017-08-01 at 16:11 -0400, Matthew Miller wrote:
Yeah, exactly. Do you want a new RFE issue for this?
Sure, it makes sense to me. Though I will say that there probably isn't much tangible harm done leaving it as it is, even though it doesn't make sense.
On Tue, Aug 01, 2017 at 04:34:34PM -0400, Randy Barlow wrote:
Yeah, exactly. Do you want a new RFE issue for this?
Sure, it makes sense to me. Though I will say that there probably isn't much tangible harm done leaving it as it is, even though it doesn't make sense.
What about the opposite? Should we require classification for bugfix and security updates?
On Tue, 2017-08-01 at 16:47 -0400, Matthew Miller wrote:
What about the opposite? Should we require classification for bugfix and security updates?
I'd say it wouldn't hurt to require it. It always makes data nice if the parser of the data can know that a field is guaranteed to exist so they don't have to do this kind of nonsense:
if hasattr(thing, attr) and thing.attr == whatever:
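(A minimal sketch of what requiring the field might look like, assuming hypothetical update-type and severity strings — these names are illustrative, not Bodhi's actual schema:)

    def validate_severity(update_type, severity):
        """Hypothetical check: bugfix and security updates must carry an
        explicit severity, while enhancement and newpackage updates are
        pinned to 'unspecified'."""
        if update_type in ("enhancement", "newpackage"):
            return "unspecified"
        if severity in (None, "unspecified"):
            raise ValueError("%s updates require an explicit severity" % update_type)
        return severity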
On Tue, 2017-08-01 at 15:31 -0400, Matthew Miller wrote:
I think this is why we don't just automatically make security fixes all high priority but instead have a separate field. Many security updates fix problems which only happen in unlikely configurations, or have extremely minor consequences. (Exploits which get you the exact level of privilege you had in the first place, for example.)
This sounds reasonable to me. Since nobody seems to be opposed so far, I suggest commenting on the PR that only "urgent" severity items should skip batched by default.
On Mon, Jul 31, 2017 at 11:34 PM, Randy Barlow bowlofeggs@fedoraproject.org wrote:
To necromance this old thread, I wanted to give a heads up that we're about to get a cool feature in Bodhi in response to this thread:
https://github.com/fedora-infra/bodhi/pull/1678
With that pull request, there will be a new request state called "batched". When non-priority[0] updates reach the karma threshold, they will go into request:batched instead of request:stable (they will remain status:testing). Once a week, a cron script will look for all updates in the batched state and will switch them all to request:stable.
Randy, can you please describe how it's going to change in terms of koji tags? How the (new) koji tags are going to be named, when packages enter them, when they leave them (including the -pending tags). Or is there a diagram somewhere? I'd like to know whether I need to do some adjustments in Taskotron tasks.
Thanks.
On 08/03/2017 05:41 AM, Kamil Paral wrote:
Randy, can you please describe how it's going to change in terms of koji tags? How the (new) koji tags are going to be named, when packages enter them, when they leave them (including the -pending tags). Or is there a diagram somewhere? I'd like to know whether I need to do some adjustments in Taskotron tasks.
Hey Kamil!
So far there are no plans to change koji tags. One detail I didn't make clear is that the update's "status" isn't going to change when it is "Request: batched". This means that batched updates will still be in the testing repo (and thus, the testing koji tags). Thus, I don't believe this will affect Taskotron.
The original thread did have a suggestion to make a new repo that would include the batched updates but not the testing updates. We could consider doing that later, but I think mirror bandwidth and disk space might become a concern. And of course, then we would be introducing new koji tags. I'm certainly open to the idea, but it would be a more significant change and would require coordination between larger groups.
I hope you are enjoying your afternoon!
On Fri, Dec 9, 2016 at 4:51 AM, Michael Schwendt mschwendt@gmail.com wrote:
And there it is again, the rush to get out updates. Quickly! Quickly! What has been released before is not bug-free, and the updates are not bug-free either, and even if no user has reported a bug, the flow of updates will ensure that the user will be affected by a new bug eventually.
No one is saying that. There is a process now to test changes - KDE utilizes that, and it has been working, and you're not waiting on an artificially imposed criterion to get new releases.
If as a maintainer you don't release version upgrades quickly, some users complain everywhere they are permitted to post. Except for bugzilla. And if you make upgrades available quickly, users will complain if they think they are affected by bugs.
Upstream release cycles are not aligned with Fedora's dist release schedule anyway.
True, and holding back a release because of the distribution's release schedule really doesn't make any sense. The current system handles it. KDE has proved that.
I think pushing all updates in a big drop will actually make them LESS tested than if they just trickle through one at a time.
The latter turns all users of the stable "updates" repo into testers, because those updates are unleashed so quickly. And those brave few who would be willing to evaluate "Test Updates" for some time don't get any real chance to do so, because updates are rushed out.
No, there is a Testing repo. They aren't pushed to stable until the maintainer decides they are ready.
On Thu, 2016-12-08 at 09:17 -0500, Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Option 1: Big batched update
1. Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
2. At the beginning of October, stop pushing non-security updates from updates-testing to updates
3. Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
4. Mid-October, freeze exceptions for getting into updates-testing even.
5. Test all of that together in Some Handwavy Way for serious problems and regressions.
6. Once all good, push from updates-testing to updates at end of October or beginning of November.
Option 2: Branching!
1. Release F26 according to schedule.
2. July/August: branch F26.1 from F26 (not rawhide)
3. Updates to F26 also go into F26.1 (magic happens here?)
4. No Alpha, but do "Beta" freeze and validation as normal for release.
5. And same for F26.1 final
6. And sometime in October/November, release that (but without big press push).
7. GNOME Software presents F26.1 as upgrade option
8. F26 continues in parallel through December
9. In January, update added to F26 which activates the F26.1 repo.
10. And also in January updates stop going to F26.
I like the idea of option 2 or any idea that may give us more stable releases. And I think we should work on this idea since I am not alone :)
I wrote something close to that years ago: F26.1 final would be a new base, i.e. all updates go into the base repo when F26.1 is released. I'm also thinking about the post-release idea, so maybe the plan could be something like: F26 GA; one month later, F24 EOL and F26.1 at the same time; and just another month later, the F27 branch. This idea avoids duplicating QA work on the stable and devel branches at the same time, and QA would only begin to work on the devel branch one (or two) month(s) later.
Best regards.
Some of this idea, by the way, is reminiscent of Spot's suggestions at FUDCon Lawrence in 2013. This is not completely coincidence - I always liked those ideas!
On Thursday, December 8, 2016 9:17:14 AM CET Matthew Miller wrote:
Trying to make this idea a little more concrete. Here's two suggestions for how it might work. These are strawman ideas -- please provide alternates, poke holes, etc. And particularly from a QA and rel-eng point of view. Both of these are not taking modularity into account in any way; it's "how we could do this with our current distro-building process".
Option 1: Big batched update
Release F26 according to schedule https://fedoraproject.org/wiki/Releases/26/Schedule
At the beginning of October, stop pushing non-security updates from updates-testing to updates
Bigger updates (desktop environment refreshes, etc.) allowed into updates-testing at this time.
Mid-October, freeze exceptions for getting into updates-testing even.
Test all of that together in Some Handwavy Way for serious problems and regressions.
Once all good, push from updates-testing to updates at end of October or beginning of November.
[..]
I'm lost. I'm against prolonging delays before pushes from updates-testing to updates when karma has been given, even for non-security stuff. If that's not enough, we should reshape the karma process.
Option 2: Branching! [..]
Sounds really complicated to me. What's the purpose?
--
I probably lost the context ... what real-world problems are we trying to fix? Everything which comes to my mind should be solved by better tooling for updates-testing testers.
Have you considered the recent "bodhi for rawhide" proposal, too?
Pavel
On Tue, Dec 20, 2016 at 04:48:44PM +0100, Pavel Raiskup wrote:
I probably lost the context ... what real-world problems are we trying to fix? Everything which comes to my mind should be solved by better tooling for updates-testing testers.
I've given this in several ways across the thread, but I don't mind restating. :)
1. I believe in the value of releases, for the project and for end users — as opposed to a "rolling release" system. But major releases are a lot of work across the project — not just release engineering, but marketing, ambassadors, design, docs, and others. One possible way to reduce this is to have major releases less frequently. I want a cadence that gives us the highest return on effort. Maybe that's six months — and maybe it isn't.
2. I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake. This doesn't seem like it's the ideal for the above — maybe we can get the engineering processes streamlined enough to make it comfortable, but there's still the matter of marketing and the rest.
3. The modularity initiative will mean that different big chunks of what we use to compose the OS can update at different speeds and have different lifecycles. That gives us a lot more flexibility in the above, and I'd like us to start thinking about what we *want* to.
I suggested one release a year as an alternative to the current two per year. I guess three per year would be possible (but seems counter to the above); other plans like eight- or nine-month cycles don't have the fixed-calendar property I'm looking for (and I'm pretty sure no one wants to go to one every two years).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.) I hoped we could find ways to make them also reduce release effort for developers, packagers, releng, and QA, but from the feedback so far people don't really feel like those particular suggestions do.
Another possibility would be to simply keep releases as normal but go revisit the "tick-tock" cadence we talked about a while ago: that is, a May/June release aimed at features, and a faster Oct/Nov release where we concentrate on infrastructure — and then call that second release each year the ".1".
And yet another possibility is that we keep things as they are. If that's the overall consensus, okay. :)
On Tue, Dec 20, 2016, at 05:20 PM, Matthew Miller wrote:
On Tue, Dec 20, 2016 at 04:48:44PM +0100, Pavel Raiskup wrote:
I probably lost the context ... what real-world problems are we trying to fix? Everything which comes to my mind should be solved by better tooling for updates-testing testers.
I've given this in several ways across the thread, but I don't mind restating. :)
- I believe in the value of releases, for the project and for end users — as opposed to a "rolling release" system. But major releases are a lot of work across the project — not just release engineering, but marketing, ambassadors, design, docs, and others. One possible way to reduce this is to have major releases less frequently. I want a cadence that gives us the highest return on effort. Maybe that's six months — and maybe it isn't.
If we prepare to do more "significant" updates during the release cycle, we are going to need to do some of this streamlining regardless. It sounds like this is worth exploring solely as an area where we need to grow.
- I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake. This doesn't seem like it's the ideal for the above — maybe we can get the engineering processes streamlined enough to make it comfortable, but there's still the matter of marketing and the rest.
We build Fedora for a lot of reasons, and I think this one is as important as the others. I believe that we will find an easier time creating energy around our releases if they are more known in time. I am not sure they have to be once per year on a fixed calendar, but they need to represent the culmination of work to introduce significant features. I actually would prefer to see a feature-gated release option as opposed to only thinking in terms of time-gates. I think having something to say is more important than knowing when you're going to speak.
- The modularity initiative will mean that different big chunks of what we use to compose the OS can update at different speeds and have different lifecycles. That gives us a lot more flexibility in the above, and I'd like us to start thinking about what we *want* to.
Building on this, major module releases might be the feature-gate trigger we need to do a new "release" while incremental improvement gets pushed out as a .X release.
I suggested one release a year as an alternative to the current two per year. I guess three per year would be possible (but seems counter to the above); other plans like eight- or nine-month cycles don't have the fixed-calendar property I'm looking for (and I'm pretty sure no one wants to go to one every two years).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.) I hoped we could find ways to make them also reduce release effort for developers, packagers, releng, and QA, but from the feedback so far people don't really feel like those particular suggestions do.
Another possibility would be to simply keep releases as normal but go revisit the "tick-tock" cadence we talked about a while ago: that is, a May/June release aimed at features, and a faster Oct/Nov release where we concentrate on infrastructure — and then call that second release each year the ".1".
Tick-tock makes me worried that people will begin to assume the Tick isn't worthwhile and they should wait on Tock.
And yet another possibility is that we keep things as they are. If that's the overall consensus, okay. :)
Now you're talking crazy :P j/k!
regards,
bex
On Tuesday, December 20, 2016 11:20:49 AM CET Matthew Miller wrote:
- I believe in the value of releases, for the project and for end users — as opposed to a "rolling release" system. But major releases are a lot of work across the project — not just release engineering, but marketing, ambassadors, design, docs, and others. One possible way to reduce this is to have major releases less frequently. I want a cadence that gives us the highest return on effort. Maybe that's six months — and maybe it isn't.
I believe in both -- and I believe Fedora could have both -- "rolling release" and "major releases" as separate "products".
There are people in the wild who will never use Fedora as their workstation system because they seek a rolling distro (while Rawhide is _almost_ there). It is sad we lose those users.
I suggested one release a year as an alternative to the current two per year.
I don't have a strong opinion here ... but I personally like the idea of an annual "major release" cycle (supporting one stable Fedora for 2Y+).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.)
Would the 'rolling release' approach help WRT upstream stakeholders, even if we had a longer major release cycle?
Pavel
On Tue, Dec 20, 2016 at 05:51:33PM +0100, Pavel Raiskup wrote:
I believe in both -- and I believe Fedora could have both -- "rolling release" and "major releases" as separate "products".
There are people in the wild who will never use Fedora as their workstation system because they seek a rolling distro (while Rawhide is _almost_ there). It is sad we lose those users.
I have a two-pronged approach here.
First, I very frequently hear this: "Fedora should have an LTS — or be a rolling release." These two things are very far apart in actual implication, but they have one big thing in common, and when pressed, it usually comes down to: "Upgrades are painful and scary." We have been working really hard on making upgrades fast and seamless, so we need to deliver that message to users (and of course work to make further improvements).
Second, yeah, for the enthusiasts and people who really _do_ want the *bleeding* edge and do not mind all that entails, let's improve Rawhide (and/or Bikeshed).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.)
Would the 'rolling release' approach help WRT upstream stakeholders, even if we had a longer major release cycle?
Maybe? I think the value in getting the upstream software into Fedora is getting it to more mainstream users, and I think rolling-Fedora via Rawhide/Bikeshed would still be niche.
On Tuesday, December 20, 2016 12:11:32 PM CET Matthew Miller wrote:
First, I very frequently hear this: "Fedora should have an LTS — or be a rolling release." These two things are very far apart in actual implication, but they have one big thing in common, and when pressed, it usually comes down to: "Upgrades are painful and scary." We have been working really hard on making upgrades fast and seamless, so we need to deliver that message to users (and of course work to make further improvements).
Indeed, I don't remember the last time I had trouble with an N->N+1 major Fedora upgrade (though I always do distro-sync) on _my workstation_.
Upgrades on production servers (services built on top of Fedora) are probably what scares users (with N->N+1 you can always expect a lot of library API changes). But maybe this is the thing which might be solved by modularity: one "module" version spanning multiple Fedora major versions...
Pavel
On 12/20/2016 08:20 AM, Matthew Miller wrote:
On Tue, Dec 20, 2016 at 04:48:44PM +0100, Pavel Raiskup wrote:
I probably lost the context ... what real-world problems are we trying to fix? Everything which comes to my mind should be solved by better tooling for updates-testing testers.
I've given this in several ways across the thread, but I don't mind restating. :)
I believe in the value of releases, for the project and for end users — as opposed to a "rolling release" system. But major releases are a lot of work across the project — not just release engineering, but marketing, ambassadors, design, docs, and others. One possible way to reduce this is to have major releases less frequently. I want a cadence that gives us the highest return on effort. Maybe that's six months — and maybe it isn't.
I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake. This doesn't seem like it's the ideal for the above — maybe we can get the engineering processes streamlined enough to make it comfortable, but there's still the matter of marketing and the rest.
The modularity initiative will mean that different big chunks of what we use to compose the OS can update at different speeds and have different lifecycles. That gives us a lot more flexibility in the above, and I'd like us to start thinking about what we *want* to.
I'd like to clarify what people have in mind here because it's pretty fundamental to how to take the proposal. More on my interpretation below.
I suggested one release a year as an alternative to the current two per year. I guess three per year would be possible (but seems counter to the above); other plans like eight- or nine-month cycles don't have the fixed-calendar property I'm looking for (and I'm pretty sure no one wants to go to one every two years).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.) I hoped we could find ways to make them also reduce release effort for developers, packagers, releng, and QA, but from the feedback so far people don't really feel like those particular suggestions do.
Another possibility would be to simply keep releases as normal but revisit the "tick-tock" cadence we talked about a while ago: that is, a May/June release aimed at features, and a faster Oct/Nov release where we concentrate on infrastructure — and then call that second release each year the ".1".
And yet another possibility is that we keep things as they are. If that's the overall consensus, okay. :)
You can't implement modularity *and* keep things as they are. So here's how I take your proposal:
Once per year a new base Fedora release comes out. It has a nice new stable glibc, gcc, etc. This is the content that all editions and spins have in common.
Each edition or spin makes releases of their content layered on top of the above package stream, but they can inject packages that are unique to their edition. So the desktop edition can still make multiple releases per year if they want, but they're layering on top of the basic annual Fedora release.
Is that what people have in mind, or something else?
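Spelling that interpretation out as a purely hypothetical sketch (the repo ids, version numbers, and URLs are invented for illustration): a client under that model might end up with two repo definitions, one for the shared annual base and one for the edition's faster-moving layer.

  [fedora-base-26]
  name=Fedora 26 - annual base (glibc, gcc, common platform)
  baseurl=https://example.org/fedora/base/26/$basearch/
  enabled=1

  [fedora-workstation-26.2]
  name=Fedora 26 Workstation - second edition release on the F26 base
  baseurl=https://example.org/fedora/workstation/26.2/$basearch/
  enabled=1

The edition repo would only carry the packages that edition layers on top of (or swaps out in) the base, so a mid-year desktop refresh becomes a push to the edition repo rather than a whole new Fedora release.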
On 20 December 2016 at 11:20, Matthew Miller mattdm@fedoraproject.org wrote:
On Tue, Dec 20, 2016 at 04:48:44PM +0100, Pavel Raiskup wrote:
I probably lost the context ... what real-world problems are we trying to fix? Everything that comes to mind should be solvable by better tooling for updates-testing testers.
I've given this in several ways across the thread, but I don't mind restating. :)
I believe in the value of releases, for the project and for end users — as opposed to a "rolling release" system. But major releases are a lot of work across the project — not just release engineering, but marketing, ambassadors, design, docs, and others. One possible way to reduce this is to have major releases less frequently. I want a cadence that gives us the highest return on effort. Maybe that's six months — and maybe it isn't.
I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake. This doesn't seem like it's the ideal for the above — maybe we can get the engineering processes streamlined enough to make it comfortable, but there's still the matter of marketing and the rest.
The modularity initiative will mean that different big chunks of what we use to compose the OS can update at different speeds and have different lifecycles. That gives us a lot more flexibility in the above, and I'd like us to start thinking about what we *want* to do.
I am having a hard time reconciling 2 and 3. We want to have regular releases AND we want them to come whenever we want... this is quantum mechanics all over again... the release is both a particle and a wave... and the cat is both alive and dead.
The difference is that only one of the two outcomes is 'real' after you have opened the box. Do we end up with a release that is regular? Or do we end up with a release that has different life-cycles? If the answer is 'yes', OK, but we will need to be clearer about when and how we measure both.
I suggested one release a year as an alternative to the current two per year. I guess three per year would be possible (but seems counter to the above); other plans like eight- or nine-month cycles don't have the fixed-calendar property I'm looking for (and I'm pretty sure no one wants to go to one every two years).
The proposals previously in this thread are ideas aimed at presenting users with an annual release from a marketing/ambassadors/design, etc., point of view, but also addressing our upstream stakeholders' desire to have Fedora ship their software fast. (For example, GNOME.) I hoped we could find ways to make them also reduce release effort for developers, packagers, releng, and QA, but from the feedback so far people don't really feel like those particular suggestions do.
The only way I can see that working is if QA, releng, etc. only deal with the small part of the OS that the rest of the OS is built from. Everything above that is either a sack of potatoes or a well-oiled machine, but it depends on the group (be it KDE, GNOME, Docker/Atomic/etc., i386/ppc/arm, etc.) putting in the work to make it so. It may make things look like 'second-class citizens', but every group has called itself that when it doesn't get its way, so it just makes things clearer.
On Tue, 2016-12-20 at 11:20 -0500, Matthew Miller wrote:
- I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake.
This is a problem IMHO. We shouldn't shorten the next release cycle when we slip. But I remember that when we had a big slip (because Fedora had one of the first releases to support Secure Boot), I saw a big concern from marketing that the slip gave a bad image, etc. So I think this rule was created for marketing / Fedora's image to the outside. So at the very least we should accept that we may do fewer than two releases per year and break the cycle of May/Jun and Nov/Dec releases. One more note: I didn't agree that the slip was bad. Fedora is based on a lot of upstream software; if other parts slip, we may/should also slip until things get stable. So maybe this is more a question for marketing, to allow more freedom in the choice of cycles. Maybe we could make the schedule 3 or 4 weeks longer and then, instead of slipping, release early. I don't know, just another idea.
This doesn't seem like it's the ideal for the above — maybe we can get the engineering processes streamlined enough to make it comfortable, but there's still the matter of marketing and the rest.
On 12/20/2016 11:14 PM, Sérgio Basto wrote:
On Tue, 2016-12-20 at 11:20 -0500, Matthew Miller wrote:
- I really want releases to come at a known time every year, +/- two weeks. Keeping to this with six month targets means that if (when!) we slip, the next release may only have five or four months to bake.
This is a problem IMHO. We shouldn't shorten the next release cycle when we slip. But I remember that when we had a big slip (because Fedora had one of the first releases to support Secure Boot), I saw a big concern from marketing that the slip gave a bad image, etc. So I think this rule was created for marketing / Fedora's image to the outside. So at the very least we should accept that we may do fewer than two releases per year and break the cycle of May/Jun and Nov/Dec releases. One more note: I didn't agree that the slip was bad. Fedora is based on a lot of upstream software; if other parts slip, we may/should also slip until things get stable. So maybe this is more a question for marketing, to allow more freedom in the choice of cycles. Maybe we could make the schedule 3 or 4 weeks longer and then, instead of slipping, release early. I don't know, just another idea.
If Fedora doesn't ship on predictable boundaries, its alignment with other projects that do is compromised. This includes projects like glibc and gcc, which are significant underpinnings of what makes a Fedora release.