Every time I see this during a 'yum update' I wonder what it really means or if it is a bug.
<delta rebuild> 22% [=== ] 277 kB/s | 50 MB 10:35 ETA
Isn't the kB/s being displayed for a CPU activity? Usually this speed output is associated with the network download speed. It seems strange to see it in this context. Is it really telling us anything useful?
John
On Wed, 2009-09-30 at 21:52 -0700, John Poelstra wrote:
> Every time I see this during a 'yum update' I wonder what it really means or if it is a bug.
> <delta rebuild> 22% [=== ] 277 kB/s | 50 MB 10:35 ETA
> Isn't the kB/s being displayed for a CPU activity? Usually this speed output is associated with the network download speed. It seems strange to see it in this context. Is it really telling us anything useful?
it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
(I don't know if that's the reasoning behind displaying it that way, but it seems to work that way to me.)
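A back-of-the-envelope sketch of that comparison (with made-up package and link numbers — this is not anything yum itself computes): the delta path downloads a small delta and then rebuilds the full package at the displayed rebuild speed, while the plain path downloads the whole package.

```python
# Hypothetical numbers, not from the thread: a rough model of when a
# delta RPM saves time overall.

def plain_download_time(pkg_kb, download_kbps):
    """Seconds to download the full package directly."""
    return pkg_kb / download_kbps

def delta_path_time(pkg_kb, delta_kb, download_kbps, rebuild_kbps):
    """Seconds to download the delta plus rebuild the full package."""
    return delta_kb / download_kbps + pkg_kb / rebuild_kbps

# Example: 50 MB package, 5 MB delta, 100 kB/s link, 277 kB/s rebuild.
pkg_kb, delta_kb = 50 * 1024, 5 * 1024
plain = plain_download_time(pkg_kb, 100)
with_delta = delta_path_time(pkg_kb, delta_kb, 100, 277)
print(f"plain: {plain:.0f}s  delta: {with_delta:.0f}s  "
      f"delta wins: {with_delta < plain}")
```

With these numbers the delta path wins easily, but if the rebuild speed drops to a few tens of kB/s (as Joachim observes below) the rebuild term dominates and the advantage can vanish.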
On 10/01/2009 07:02 AM, Adam Williamson wrote:
> On Wed, 2009-09-30 at 21:52 -0700, John Poelstra wrote:
>> Every time I see this during a 'yum update' I wonder what it really means or if it is a bug.
>> <delta rebuild> 22% [=== ] 277 kB/s | 50 MB 10:35 ETA
>> Isn't the kB/s being displayed for a CPU activity? Usually this speed output is associated with the network download speed. It seems strange to see it in this context. Is it really telling us anything useful?
> it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
But sometimes it is reduced to some tens of kB/s :-)
> (I don't know if that's the reasoning behind displaying it that way, but it seems to work that way to me.)
On Thu, 2009-10-01 at 07:30 +0200, Joachim Backes wrote:
> On 10/01/2009 07:02 AM, Adam Williamson wrote:
>> On Wed, 2009-09-30 at 21:52 -0700, John Poelstra wrote:
>>> Every time I see this during a 'yum update' I wonder what it really means or if it is a bug.
>>> <delta rebuild> 22% [=== ] 277 kB/s | 50 MB 10:35 ETA
>>> Isn't the kB/s being displayed for a CPU activity? Usually this speed output is associated with the network download speed. It seems strange to see it in this context. Is it really telling us anything useful?
>> it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
> But sometimes it is reduced to some tens of kB/s :-)
there's a discussion on -devel-list atm; we're using a much too intensive compression setting for xz. that should get reduced, which would make the rebuild stage much faster.
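To see why the compression setting matters, here is a small sketch using Python's lzma module (the same xz format); the preset numbers and sample data are illustrative, not the actual settings under discussion. The delta rebuild step has to recompress the package payload on the user's machine, so a higher preset costs much more CPU time there for a modest size gain.

```python
import lzma
import time

# Illustrative, highly compressible sample data, not a real RPM payload.
data = b"fedora deltarpm sample payload " * 50_000  # ~1.5 MB

# Compare xz preset levels: compression time grows steeply with the
# preset, while the output size shrinks only modestly.
for preset in (1, 6, 9):
    start = time.perf_counter()
    out = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    print(f"preset {preset}: {len(out):>8} bytes in {elapsed:.3f} s")
```

Decompression speed is largely unaffected by the preset; it is the compression side — which the rebuild stage repeats on every user's machine — that pays for a high setting.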
> it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
Do you mean there is a parameter a user can employ to control the balance between network and processor resources for an update? I surmised the object of the delta strategy was primarily to reduce data traffic at the servers, with a secondary benefit of faster delivery to users with small network bandwidth.
I have found the rebuild speeds displayed on my F12 test system one-quarter to one-tenth of my network bandwidth. This has not been onerous, but certainly time would be saved if I could say "Do not use delta rpms." to yum.
Recent experience has delivered the worst of both worlds: about half the delta rpms fail integrity checks, and complete copies of those files have to be downloaded after the delta rebuild failures. An earlier post to this list explained a format change causes these failures, and they will abate as packages are rebuilt.
On Thu, 1 Oct 2009, Richard Ryniker wrote:
>> it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
> Do you mean there is a parameter a user can employ to control the balance between network and processor resources for an update? I surmised the object of the delta strategy was primarily to reduce data traffic at the servers, with a secondary benefit of faster delivery to users with small network bandwidth.
> I have found the rebuild speeds displayed on my F12 test system one-quarter to one-tenth of my network bandwidth. This has not been onerous, but certainly time would be saved if I could say "Do not use delta rpms." to yum.
yum --disableplugin=presto ....
or yum remove yum-presto
your problem solved. -sv
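For reference, yum plugins can also be switched off permanently via their config files; the path below follows the standard yum plugin layout (not something stated in the thread), so verify it exists on your system before editing.

```shell
# One-off: skip delta RPMs for a single transaction.
yum --disableplugin=presto update

# Permanent: disable the plugin in its config file. The path follows
# the usual yum plugin convention (/etc/yum/pluginconf.d/<name>.conf);
# check that the file exists on your system first.
sed -i 's/^enabled=1/enabled=0/' /etc/yum/pluginconf.d/presto.conf
```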
On Thu, 2009-10-01 at 08:41 -0400, Richard Ryniker wrote:
>> it's a simple way to compare whether the delta RPM thing is saving you any time. If that speed is faster than your download speed, it is...
> Do you mean there is a parameter a user can employ to control the balance between network and processor resources for an update?
No, I hope I didn't suggest that as it's not what I meant. :) You can only turn delta usage on or off; there's no 'control' over the process.
> I surmised the object of the delta strategy was primarily to reduce data traffic at the servers, with a secondary benefit of faster delivery to users with small network bandwidth.
Both goals are considered important, AFAIK.
> I have found the rebuild speeds displayed on my F12 test system one-quarter to one-tenth of my network bandwidth. This has not been onerous, but certainly time would be saved if I could say "Do not use delta rpms." to yum.
See the earlier comment about the too-high compression level used.
> Recent experience has delivered the worst of both worlds: about half the delta rpms fail integrity checks, and complete copies of those files have to be downloaded after the delta rebuild failures. An earlier post to this list explained a format change causes these failures, and they will abate as packages are rebuilt.
I'm not sure we've actually dealt with that problem in any way yet, though we've spent lots of time arguing about it. It's not about a format change, exactly: the problem is that xz doesn't produce exactly the same compressed output on PPC architectures as on x86 architectures, and some noarch deltaRPMs were/are being generated on PPC buildhosts, so when they come to be reconstructed on x86 hosts, the check fails. There are various ways this could be addressed (stop building noarch packages on PPC buildhosts, or adjust xz so it doesn't have this particular arch difference any more), but as usual the -devel list took it as an excuse for a good old 'robust discussion' about whether xz was evil and what would be the really, really correct way to fix it ('either of the simple and obvious fixes are just band-aids and don't fix the real underlying problem that xz is evil!'), rather than actually dealing with the problem first. Good times!
On Thu, 2009-10-01 at 12:37 -0400, Bill Nottingham wrote:
> Adam Williamson (awilliam@redhat.com) said:
>> I'm not sure we've actually dealt with that problem in any way yet, though we've spent lots of time arguing about it...
> It's being worked on with upstream.
ah, good to know there's some light as well as heat =)