Hi all
A question to packagers: what would you think of a policy to add the library soname to library package names? For example, I have a libkexif package, which provides libkexif.so.0, and at least 3 applications depend on it. Now there is an update to libkexif, which provides libkexif.so.1. I can't update libkexif without updating the applications depending on it. OK, this is probably something that you know much better than me, and that you've run into several times before, so you probably already know the solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora? Is it the best solution to this problem?
Aurélien
On Mon, Jan 24, 2005 at 11:34:03AM +0100, Aurelien Bompard wrote:
A question to packagers: what would you think of a policy to add the library soname to library package names? [...]
If I understand your question correctly: this is all done automatically when rpm calculates the provides/requires (using the "find-provides" script) when building the package.
Look at the package you generated with "rpm -qp --provides package.rpm" and see what it shows.
Jos Vos wrote:
A question to packagers: what would you think of a policy to add the library soname in package libraries ? [...]
If I understand your question correctly: this is all done automatically when rpm calculates the provides/requires (using the "find-provides" script) when building the package.
Look at the package you generated with "rpm -qp --provides package.rpm" and see what it shows.
$ rpm -qp --provides libkexif-0.2.1-0.fdr.1.i386.rpm
libkexif.so.1
libkexif = 0:0.2.1-0.fdr.1
$ sudo rpm -Uvh libkexif-0.2.1-0.fdr.1.i386.rpm libkexif-devel-0.2.1-0.fdr.1.i386.rpm
error: Failed dependencies:
        libkexif.so.0 is needed by (installed) kipi-plugins-0.1-0.fdr.0.1.beta1
        libkexif.so.0 is needed by (installed) digikam-0.7-0.fdr.1
        libkexif.so.0 is needed by (installed) showimg-0.9.4.1-0.fdr.1.2
My point is that if the rpm name contained the soname, version 0.1 and version 0.2 could be installed at the same time, and I would not have to update all the packages depending on it immediately. With only 3 applications, it's not a very big problem, but it could be a lot worse with a more important library.
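For illustration, a hedged sketch of what the Mandrake-style naming would look like for this case (the package names below are hypothetical; this is not current Fedora practice):

```spec
# Old and new library live in differently-named packages,
# so both can stay installed side by side:
#
#   libkexif0-0.1.x  ->  ships /usr/lib/libkexif.so.0.*
#   libkexif1-0.2.x  ->  ships /usr/lib/libkexif.so.1.*
#
# e.g. at the top of the new library's spec:
Name:    libkexif1
Version: 0.2.1
```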
Aurélien
On Monday, 24 January 2005 at 11:58 +0100, Aurelien Bompard wrote:
$ sudo rpm -Uvh libkexif-0.2.1-0.fdr.1.i386.rpm
Try with "rpm -i".
My point is that if the rpm name contained the soname, version 0.1 and version 0.2 could be installed at the same time, and I would not have to update all the packages depending on it immediately.
If packages don't conflict you can install multiple packages with the same name. This is done with the kernel package:
[fmatias@one i386]$ rpm -q kernel
kernel-2.6.10-1.736_FC3.mat.1
kernel-2.6.10-1.750_FC3.mat.1
[snip]
solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli...
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
and then:
Name: %{name}
Version: %{version}
Release: %{release}
I've seen it in a number of spec files.
And please no "Well, Mandrake sucks!".
/Peter
Hi.
Peter Backlund peter.backlund@home.se wrote:
I've seen it in a number of spec files.
I'm doing this in my spec files so these three variables are at the very top of the file, while "Name: ..." and the others may be some way further down.
On Mon, 2005-01-24 at 11:42 +0100, Ralf Ertzinger wrote:
Hi.
Peter Backlund peter.backlund@home.se wrote:
I've seen it in a number of spec files.
I'm doing this in my spec files so these three variables are at the very top of the file, while "Name: ..." and the others may be some way further down.
Another way to achieve this is to... *drum roll*... place those three real "Name:, Version:, Release:" lines at the very top of the spec file! ;) See e.g. the spec templates in fedora-rpmdevtools in pre-extras.
On Mon, Jan 24, 2005 at 11:45:30AM +0100, Peter Backlund wrote:
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
and then:
Name: %{name}
Version: %{version}
Release: %{release}
I've seen it in a number of spec files.
Technically useless, except that the idea probably is that the %define's are at the top of the spec file and can be found (even) more easily than the corresponding headers when you want to change them. It's a matter of taste (and I personally don't like this).
Peter Backlund wrote:
[snip]
solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli...
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
and then:
Name: %{name}
Version: %{version}
Release: %{release}
I've seen it in a number of spec files.
This is the "Marc Ewing School of Packaging" style, adopted by GNOME, and one of the first attempts at generating *.spec files automagically from scripts; it still flourishes (like a milkweed does).
There is no reason whatsoever to use %define's for name/version/release however.
And please no "Well, Mandrake sucks!".
MDK is RHEL optimized for the i686!
Perhaps MDK would be willing to take over hosting of the endless optimization discussion from fedora-devel too.
73 de Jeff
On Monday, 24 January 2005 at 11:45 +0100, Peter Backlund wrote:
[snip]
solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli...
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
and then:
Name: %{name}
Version: %{version}
Release: %{release}
I've seen it in a number of spec files.
What about :
%define _name gtkmm
%define _version 1.2.4
%define _release 1mdk
and then:
%define name %{_name}
%define version %{_version}
%define release %{_release}
and then:
Name: %{name}
Version: %{version}
Release: %{release}
And please no "Well, Mandrake sucks!".
And please no "Well, you suck!".
/Peter
On Mon, 24 Jan 2005, Peter Backlund wrote:
[snip]
solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli...
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
and then:
Name: %{name}
Version: %{version}
Release: %{release}
I've seen it in a number of spec files.
I don't see the point of defining name/version/release *twice*. ??
Also, this reply didn't really answer the original question, which was how to release rpms that provide multiple versions of the same shared library.
-- Rex
Unrelated, but I have to ask: what is the purpose of defining name/version/release like this:
%define name gtkmm
%define version 1.2.4
%define release 1mdk
I do this -
at very top of spec file:
%define my_build yjl.1
Then in the normal place
Name: foobar
Version: 20.12
Release: 3.%{my_build}
That makes more sense to me than defining name and then using Name: with the defined %{name} macro.
But that's just me. In the future I may define which release/distro it is being built for - but I don't think so; my packages are intended to be installed via yum, which should take care of that by grabbing them from the right directory.
On Monday, 24 January 2005 at 11:34 +0100, Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname to library package names? For example, I have a libkexif package, which provides libkexif.so.0, and at least 3 applications depend on it. Now there is an update to libkexif, which provides libkexif.so.1. I can't update libkexif without updating the applications depending on it. OK, this is probably something that you know much better than me, and that you've run into several times before, so you probably already know the solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name: http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora? Is it the best solution to this problem?
This is done when needed. Example for FC3: gtkhtml2-2.6.2-1.i386.rpm and gtkhtml3-3.3.2-3.i386.rpm.
btw:
[fmatias@one i386]$ rpm -q --provides -p gtkhtml2-2.6.2-1.i386.rpm
libgtkhtml-2.so.0   <===
gtkhtml2 = 2.6.2-1
[fmatias@one i386]$ rpm -q --provides -p gtkhtml3-3.3.2-3.i386.rpm
libgnome-gtkhtml-editor-3.1.so
libgtkhtml-3.1.so.11   <===
gtkhtml3 = 3.3.2-3
Other packages must use "Requires: libgtkhtml-2.so.0" and not "libgtkhtml3" or "gtkhtml3".
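A hedged sketch of what that looks like in practice (the application package name below is hypothetical). Note that you normally don't write the soname Requires by hand — rpmbuild's find-requires records it automatically when the binary links against the library:

```spec
# Hypothetical application packaged against gtkhtml2.
# BuildRequires pulls in the -devel package by name;
# the run-time dependency on the soname (libgtkhtml-2.so.0)
# is generated automatically at build time by find-requires.
Name:          someapp
BuildRequires: gtkhtml2-devel
# The resulting binary rpm effectively ends up with:
#   Requires: libgtkhtml-2.so.0
```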
Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname in package libraries ? For example, I have a libkexif package, which provides libkexif.so.0, and at least 3 applications depend on it. Now there is an update to libkexif, which provides libkexif.so.1. I can't update libkexif without updating the applications depending on it. OK, this is probably something that you know much better than me, and that you've run into several times before, so you probably already know the solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name : http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora ? Is it the best solution to this problem ?
Ick. I see the entropic death of packaging, one file per package. Silly.
73 de Jeff
Jeff Johnson wrote:
Ick. I see the entropic death of packaging, one file per package. Silly.
Of course not :) Shared libraries are a different problem however, and it may be nice to install different versions at the same time.
Aurélien
Aurelien Bompard wrote:
Jeff Johnson wrote:
Ick. I see the entropic death of packaging, one file per package. Silly.
Of course not :)
I shall have to find the package in Debian that not only added the library soname, but every Provides: and configure flag, to the package name as well. The final package name is >4KB.
Seriously, there are other metadata elements than package name, even if Debian is still confused.
Shared libraries are a different problem however, and it may be nice to install different versions at the same time.
Installing multiple libraries with different sonames, and adding sonames to package NEVR, are very different issues.
73 de Jeff
Jeff Johnson wrote:
Installing multiple libraries with different sonames, and adding sonames to package NEVR, are very different issues.
Really? How could I install two different versions of the same library without changing the package name, then?
Thanks
Aurélien
Aurelien Bompard wrote:
Jeff Johnson wrote:
Installing multiple libraries with different sonames, and adding sonames to package NEVR, are very different issues.
Really? How could I install two different versions of the same library without changing the package name, then?
Try with rpm -i.
73 de Jeff
Jeff Johnson wrote:
Try with rpm -i.
Yeah, OK. How about something that would be understood by depsolvers, then?
This must be a common problem, mustn't it? What do you do when an important library changes its soname in the next version?
Aurélien
Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah, OK. How about something that would be understood by depsolvers, then?
This must be a common problem, mustn't it? What do you do when an important library changes its soname in the next version?
1) Rebuild all packages that depend on that lib with the new library.
2) Every sane depsolver will then pick up the new packages, solve those deps and update the depending packages (the whole deptree) too.
If you don't have any updated packages you're screwed anyway; no depsolver can help you with that (and I can't see a depsolver then deciding: "Oh, a new library has come out, let's grab the srpms, automatically rebuild and install them".... *shudder*)
Read ya, Phil
Phil Knirsch wrote:
- Rebuild all packages that depend on that lib with the new library
- Every sane depsolver will then pick up the new packages, solve
those deps and update the depending packages (the whole deptree) too.
If you don't have any updated packages you're screwed anyway; no depsolver can help you with that (and I can't see a depsolver then deciding: "Oh, a new library has come out, let's grab the srpms, automatically rebuild and install them".... shudder)
Well, I think that adding the soname to the rpm name could help with that: dependent apps will still require the old package until they are rebuilt, and then automatically require the new package. The only con I see is that there may be libraries around which are not required anymore. But that can easily be solved.
I think this problem should be addressed all the more now that Fedora is becoming a community-driven distribution: the maintainer of the library is not necessarily the maintainer of the dependent apps. There must be an easy upgrade path for those situations.
Aurélien
On Monday, 24 January 2005 at 13:23 +0100, Aurelien Bompard wrote:
Phil Knirsch wrote:
- Rebuild all packages that depend on that lib with the new library
- Every sane depsolver will then pick up the new packages, solve
those deps and update the depending packages (the whole deptree) too.
If you don't have any updated packages you're screwed anyway; no depsolver can help you with that (and I can't see a depsolver then deciding: "Oh, a new library has come out, let's grab the srpms, automatically rebuild and install them".... shudder)
Well, I think that adding the soname to the rpm name could help with that: dependent apps will still require the old package until they are rebuilt, and then automatically require the new package. The only con I see is that there may be libraries around which are not required anymore. But that can easily be solved.
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
Regards,
On Mon, 24 Jan 2005, Nicolas Mailhot wrote:
On Monday, 24 January 2005 at 13:23 +0100, Aurelien Bompard wrote:
Well, I think that adding the soname to the rpm name could help with that:
...
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
Agreed. Avoid the extra hassle of multiply installed libfoo.so's unless absolutely necessary. Otherwise, it's added complexity and bloat for relatively little gain.
-- Rex
On Mon, 2005-01-24 at 06:45 -0600, Rex Dieter wrote:
On Mon, 24 Jan 2005, Nicolas Mailhot wrote:
On Monday, 24 January 2005 at 13:23 +0100, Aurelien Bompard wrote:
Well, I think that adding the soname to the rpm name could help with that:
...
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
Agreed. Avoid the extra hassle of multiply installed libfoo.so's unless absolutely necessary. Otherwise, it's added complexity and bloat for relatively little gain.
It's the entire reason why we _have_ library sonames...
We _have_ had this problem, btw. The problem is that it's not generally developers that notice it. It's the user that just wants to have their machine work. I go to install third party app Foo from Foo's web site, it needs libbar.so.2, Fedora only has libbar.so.1, and many other apps on the net require libbar.so.1.
You can't just upgrade apps to use libbar.so.2 because it's a *different library*, it might require massive code changes in order to use - if it were something compatible there wouldn't be a problem at all.
Both libraries can be installed at once (again, the whole bloody reason we _have_ sonames), but because the packaging creates an artificial incompatibility that only exists because the package is, put plainly, braindead and broken, the user can't install Foo.
Now you, as a developer or experienced Linux admin, can easily solve this. Maybe you install the library from source. Maybe you rpm -i the new library package. Both are things that the average person - even if they _are_ an experienced Linux user - shouldn't have to waste time doing. Every hour of your life that you spend working around broken, lazy, braindead library packaging is an hour you could have spent with your family, friends, doing something you enjoy, working on some new Free Software, etc.
The system is designed so that multiple libraries that are incompatible can be installed at the same time. Let's not have the packaging system continue to break that because of something so incredibly trite and meaningless as the aesthetics of the rpm -qa output when you have the two versions installed.
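To make the mechanism concrete, here's a minimal self-contained sketch (dummy files, hypothetical libbar) of the symlink layout that lets two incompatible library versions coexist:

```shell
# Two incompatible builds of libbar installed side by side.
cd "$(mktemp -d)"
touch libbar.so.1.0.0 libbar.so.2.0.0   # the real files (empty dummies here)
ln -s libbar.so.1.0.0 libbar.so.1       # soname link: what old apps load
ln -s libbar.so.2.0.0 libbar.so.2       # soname link: what new apps load
ln -s libbar.so.2 libbar.so             # dev link: what "-lbar" resolves to at link time
ls -l libbar.so*
```

Old binaries keep loading libbar.so.1 and new ones libbar.so.2; only the unversioned dev link (conventionally owned by the -devel package) has to pick a side.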
On Mon, 24 Jan 2005 09:15:14 -0500, Sean Middleditch elanthis@awesomeplay.com wrote:
We _have_ had this problem, btw. The problem is that it's not generally developers that notice it. It's the user that just want to have their machine work. I go to install third party app Foo from Foo's web site, it needs libbar.so.2, Fedora only has libbar.so.1, and many other apps on the net require libbar.so.1.
A third-party website is packaging libbar.so.2 in a package with the same package name as Fedora's libbar.so.1? Why would a third-party site do that? Unless the intention was to replace the Fedora package? Isn't this an example of the care 3rd party packagers should be taking to make sure their packages work well with Core?
And I might add that while users and admins might want to install many other apps from anywhere on the net that they find them... this is not necessarily advisable behavior. If you continue to cater to this sort of thing you will end up with people installing very old libraries that are no longer being maintained so that they can install very old applications that are no longer being maintained and could have unresolved but well-understood security problems. I'm really not sure it's in anyone's best interest to make it really drop-dead easy to install unmaintained software that might be exploitable simply because the package was created in 2000.
-jef
On Mon, 2005-01-24 at 10:02 -0500, Jeff Spaleta wrote:
On Mon, 24 Jan 2005 09:15:14 -0500, Sean Middleditch elanthis@awesomeplay.com wrote:
We _have_ had this problem, btw. The problem is that it's not generally developers that notice it. It's the user that just want to have their machine work. I go to install third party app Foo from Foo's web site, it needs libbar.so.2, Fedora only has libbar.so.1, and many other apps on the net require libbar.so.1.
A third-party website is packaging libbar.so.2 in a package with the same package name as Fedora's libbar.so.1? Why would a third-party site do that? Unless the intention was to replace the Fedora package? Isn't this an example of the care 3rd party packagers should be taking to make sure their packages work well with Core?
Sure, they just make up their own package name. Then FC4 comes along and includes a package that provides the same library, but in a different package name because there's no standard, and the user's system breaks until they magically become experienced enough to fix it.
And I might add that while users and admins might want to install many other apps from anywhere on the net that they find them... this is not necessarily advisable behavior. If you continue to cater to this
Because Fedora is going to provide every application that every user could ever want with the latest version with the latest features such that no user will ever, ever need anything not on the Fedora Core/Extras CD, ever, under any circumstance, ever... right?
sort of thing you will end up with people installing very old libraries that are no longer being maintained so that they can install very old applications that are no longer being maintained and could have unresolved but well-understood security problems. I'm really not sure it's in anyone's best interest to make it really drop-dead easy to install unmaintained software that might be exploitable simply because the package was created in 2000.
So, because a user might install an old app, you want to make sure users can't install any app...?
Hmm, the user might download an old app from source and install it! Even an inexperienced user can follow a README or HOWTO. I suggest that FC4 disables all Internet access and does not ship with a compiler so that users don't inadvertently install an insecure or buggy app. Advanced users who are knowledgeable about security will still be able to manually configure network access and find compiler binaries off the 'net, so this change won't reduce the usefulness of Fedora, but simply protect users who don't know any better. </sarcastic-extremism> ;-)
-jef
On Mon, 24 Jan 2005 10:02:52 -0500, Jeff Spaleta wrote:
And I might add that while users and admins might want to install many other apps from anywhere on the net that they find them... this is not necessarily advisable behavior. If you continue to cater to this sort of thing you will end up with people installing very old libraries that are no longer being maintained so that they can install very old applications that are no longer being maintained and could have unresolved but well-understood security problems. I'm really not sure it's in anyone's best interest to make it really drop-dead easy to install unmaintained software that might be exploitable simply because the package was created in 2000.
Wow - 2000 is only 5 years ago, guys. There are *lots* of people still running programs designed for Windows 95, which is now 10 years old!
Face it: people will run the software they want. If you make it difficult or annoying for them out of a misguided sense that security-through-obnoxiousness is OK, they'll just use Windows, which doesn't do much for security at all but at least makes it easy for the user to achieve their goal.
The best solution is for libraries to not break backwards compatibility every other week, that way security fixes are magically present even for 5 year old apps.
Seriously, 5 years is really nothing, it's all about mindset.
thanks -mike
On Mon, 24 Jan 2005 19:25:44 +0000, Mike Hearn mike@navi.cx wrote:
Face it: people will run the software they want. If you make it difficult or annoying for them out of a misguided sense that security-through-obnoxiousness is OK, they'll just use Windows which doesn't do much for security at all but at least makes it easy for the user to achieve their goal.
Yeah... I like AOL's new commercials about virus protection, which speak to your point about Windows. Achieving one goal quickly can have very serious long-term effects thanks to the insecurity of the quick solution. Design decisions meant to make things easier upfront can have serious security implications. There is always tension between security and quick solutions.
Let them use Windows... I have no problem with people choosing to use insecure technology. But I do have a problem with setting up this project in a way that makes it "very simple" for inexperienced users of Fedora to run old, unmaintained, vulnerable libraries. You can do some pretty flexible things on the command line with rpm if you really want to, and I'm not arguing that ability should be taken away. But I don't want to encourage the general user base to use packaged libraries from old trees that are no longer being maintained just because it happens to be a package they find on the net on an old ftp. And I definitely want to encourage package builders to rebuild against libraries that are being maintained.
The best solution is for libraries to not break backwards compatibility every other week, that way security fixes are magically present even for 5 year old apps.
This is orthogonal to packaging issues... and frankly... not something a distributor of libraries can dictate to each upstream project. Please take your crusade to each and every component project so no package distributor will ever have to deal with these questions.
Seriously, 5 years is really nothing, it's all about mindset.
If this were Debian... with Debian timescales for development and end-of-life... 5 years isn't that long. But this isn't Debian... and this project doesn't have those sorts of timescales... so with respect to FC's timetable, 5 years is definitely a long time.
-jef
On Mon, 2005-01-24 at 14:57 -0500, Jeff Spaleta wrote:
Let them use Windows... I have no problem with people choosing to use insecure technology. But I do have a problem with setting up this project in a way that makes it "very simple" for inexperienced users of Fedora to run old, unmaintained, vulnerable libraries. You can do some pretty flexible
You're not going to stop anyone from installing old libraries; you're just stopping people from running modern applications that depend on last week's libraries. A user's system basically becomes impossible to upgrade and impossible to install new software on until the entire Open Source world recompiles all their packages for the new library. If two libraries could be installed at once the user wouldn't be trapped during the transition - they could just get on with life as normal.
be a package they find on the net on an old ftp. And I definitely want to encourage package builders to rebuild against libraries that are being maintained.
Is Fedora supposed to be an exercise in speedy RPM rebuilding, or an operating system?
The best solution is for libraries to not break backwards compatibility every other week, that way security fixes are magically present even for 5 year old apps.
This is orthogonal to packaging issues... and frankly... not something a distributor of libraries can dictate to each upstream project.
Please take your crusade to each and every component project so no package distributor will ever have to deal with these questions.
Oh, but they will, eventually. Looks like Fedora added a gtk2 package instead of just updating the gtk package to the 2.x series. You guys did great with gtk, so what's the problem with other packages? gtk1 is completely unmaintained and not only installed on many users' machines, but even shipped with Fedora. ;-)
Unfortunately, Fedora seems to be moving towards relying on huge massive centralization of software packages to resolve broken packaging and lazy development.
If it isn't shipped with Fedora Core/Extras, users aren't allowed to use it?
On Monday, 24 January 2005 at 15:39 -0500, Sean Middleditch wrote:
Is Fedora supposed to be an exercise in speedy RPM rebuilding, or an operating system?
Fedora is (or tries to be):
http://fedora.redhat.com/about/objectives.html Create a complete general-purpose operating system with capabilities equivalent to competing operating systems, built for and by a community — those who not only consume, but also produce for the good of other community members.
* Build the operating system exclusively from open source software.
Easy to recompile.
* Do as much of the development work as possible directly in the upstream packages. This includes updates; our default policy will be to upgrade to new versions for security as well as for bugfix and new feature update releases of packages.
Sync with upstream.
* Provide a robust development platform for building software, particularly open source software.
Open source (again)
* Be on the leading edge of open source technology, by adopting and helping develop new features and version upgrades.
Promote new technologies.
Fedora is not (or tries to avoid):
Non-Objectives of Fedora Core:
1. Slow rate of change.
2. ....
3. Being a dumping ground for unmaintained or poorly designed software.
On Mon, 2005-01-24 at 22:14 +0100, Féliciano Matias wrote:
Fedora is (or tries to be):
http://fedora.redhat.com/about/objectives.html Create a complete general-purpose operating system with capabilities equivalent to competing operating systems, built for and by a community — those who not only consume, but also produce for the good of other community members.
"capabilities equivalent to competing operating systems"... does that not include "install stuff" for some reason?
The Open Source community is pretty good these days about using sonames properly. It's silly to have packages go and break all that effort. Sonames exist for a reason.
Fedora is not (or tries to avoid):
Non-Objectives of Fedora Core:
1. Slow rate of change.
2. ....
3. Being a dumping ground for unmaintained or poorly designed software.
Neither of which have I ever asked for, nor would I.
The number of incredibly absurd ways people seem to interpret "let's not break stuff pointlessly" is constantly amazing, and starting to become rather dumbfounding.
On Monday, 24 January 2005 at 16:23 -0500, Sean Middleditch wrote:
"capabilities equivalent to competing operating systems"... does that not include "install stuff" for some reason?
The Open Source community is pretty good these days about using sonames properly. It's silly to have packages go and break all that effort. Sonames exist for a reason.
I replied to "Fedora supposed to be an exercise in speedy RPM rebuilding, or an operating system?".
I don't have any problem with libthis0, libthis1, libthat0, ... If this helps, I think it can be a good practice.
But it's not the primary focus of Fedora. This does not mean Fedora should ignore compatibility issues.
On Mon, 24 Jan 2005 14:57:57 -0500, Jeff Spaleta wrote:
Yeah... i like AOL's new commercials about virus protection which speak to your point about Windows
I'll skip the "backwards compatibility == viruses" stuff as it really isn't relevant here and doesn't stand up to close inspection anyway (hint: is it useful to prevent people who rely on a particular program from getting *any* security updates at all, because one breaks their program? No other desktop OS vendor has said yes here).
Let them use windows... i have no problem with people choosing to use insecure technology. But i do have a problem setting up this project in a way that makes it "very simple" to run old, unmaintained, vulnerable libraries by inexperienced users of Fedora. You can do some pretty flexible things on the commandline with rpm if you really want to do it and I'm not arguing that ability should be taken away. But i don't want encourage the general user base to use packaged libraries from old trees that are no longer being maintained just because it happens to be a package they find on the net in an old ftp. And i definitely want to encourage package builders to rebuild against libraries that are being maintained.
Since when is this just about "rebuilding" stuff? Do you think apps are magically ported to GTK+ 2 by running gcc on them? What about OpenSSL?
This is not simply a matter of running gcc or rebuilding packages. It's a much deeper issue.
This is orthogonal to packaging issues...
Not in the slightest, it's fundamental to packaging issues.
and frankly... not something a distributor of libraries can dictate to each upstream project.
Why is Fedora including unstable libraries as discrete packages at all? Why not just statically link them into the packages that need them?
Yes I'm aware of the disadvantages of static linking. Are you aware of the disadvantages of dynamic linking?
If this were Debian... with Debian timescales for development and end-of-life... 5 years isn't that long. But this isn't Debian, and this project doesn't have those sorts of timescales... so with respect to FC's timetable 5 years is definitely a long time.
Outside of the Linux community (ie, in the *desktop world*) the current rate of instability is simply unacceptable. Why do you think Red Hat makes money by selling what is essentially an old version of Fedora?
<sigh>
I don't know why I bother, really, Sean is quite right - the number of ways people justify massive "platform" (haha) instability to themselves is astonishing. I should keep a note of them all or something. This sort of thing keeps coming up again and again because it causes users *pain*, and each time it does people write it off as "not our problem", "unfixable", "only proprietary software needs that" or "DO YOU HATE INNOVATION!?!" type crap.
thanks -mike
On Mon, Jan 24, 2005 at 10:02:52AM -0500, Jeff Spaleta wrote:
On Mon, 24 Jan 2005 09:15:14 -0500, Sean Middleditch elanthis@awesomeplay.com wrote:
We _have_ had this problem, btw. The problem is that it's not generally developers that notice it. It's the user that just wants to have their machine work. I go to install third party app Foo from Foo's web site, it needs libbar.so.2, Fedora only has libbar.so.1, and many other apps on the net require libbar.so.1.
A third party website is packaging libbar.so.2 in a package with the same package name as Fedora's libbar.so.1? Why would a third party site do that, unless the intention was to replace the Fedora package? Isn't this an example of the care 3rd party packagers should be taking to make sure their packages work well with Core?
I would reverse the setup: shouldn't the core distribution have a scheme to allow for doing so without blowing away half of the system?
After all sonames were "invented" upstream to allow coexistence of libraries of different sonames, so having the soname-in-rpmname scheme follows this philosophy and allows 3rd party packagers (and the vendor itself) to have clean ways of updating/coinstalling a new library.
The current solutions are too hackish to be even considered a natural approach. Look at gcc34 and the required obsoletes in gcc to deal with this cruft. A proper scheme of coexisting packages for certain classes (libraries, compilers, interpreters) laid out once and for all will bring peace here forever ;)
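As a hedged illustration of the soname-in-rpmname idea (package names, versions, and file paths below are hypothetical, loosely modeled on the libkexif case from the start of the thread):

```spec
# libkexif0.spec -- ships only the old soname
Name:    libkexif0
Version: 0.1.5
%files
%{_libdir}/libkexif.so.0*

# libkexif1.spec -- ships only the new soname
Name:    libkexif1
Version: 0.2.1
%files
%{_libdir}/libkexif.so.1*
```

Because the package names no longer collide, both can stay installed side by side: digikam and friends keep resolving libkexif.so.0 while new builds link against libkexif.so.1. The unversioned libkexif.so symlink would live in a single -devel package.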
On Mon, 24 Jan 2005 23:08:47 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
The current solutions are too hackish to be even considered a natural approach. Look at gcc34 and the required obsoletes in gcc to deal with this cruft. A proper scheme of coexisting packages for certain classes (libraries, compilers, interpreters) laid out once and for all will bring peace here forever ;)
But is there a clean way to indicate to a user that an older version of the library is no longer being maintained, has "expired", and won't be getting any security updates? This is not just an issue of finding the unused libs. An unmaintained library package could still be in use by an application. How do you make the admin aware that a library package they are using is no longer being maintained, so they can review whether or not to keep it and the applications using it installed? Unless there is a mechanism by which admins are informed of an expiring library so they can make an informed decision, I don't feel it's worthwhile to encourage the accumulation of older libraries at all.
-jef
On Mon, Jan 24, 2005 at 05:29:55PM -0500, Jeff Spaleta wrote:
On Mon, 24 Jan 2005 23:08:47 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
The current solutions are too hackish to be even considered a natural approach. Look at gcc34 and the required obsoletes in gcc to deal with this cruft. A proper scheme of coexisting packages for certain classes (libraries, compilers, interpreters) laid out once and for all will bring peace here forever ;)
But is there a clean way to indicate to a user that an older version of the library is no longer being maintained, has "expired", and won't be getting any security updates? This is not just an issue of finding the unused libs. An unmaintained library package could still be in use by an application. How do you make the admin aware that a library package they are using is no longer being maintained, so they can review whether or not to keep it and the applications using it installed? Unless there is a mechanism by which admins are informed of an expiring library so they can make an informed decision, I don't feel it's worthwhile to encourage the accumulation of older libraries at all.
Not sure how this fits in here. These are valid points you make, but they are valid for both the current and a soname-in-the-rpmname scheme :)
The problem you are addressing is much larger: if you do a RH7.3 -> RH8.0 -> ... -> FC3 upgrade party you'll find that your system has quite a lot of old unsupported cruft left over. Any deprecated but not replaced package will still be there, including its security implications.
On Mon, 24 Jan 2005 23:40:43 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
Not sure how this fits in here. These are valid points you make, but they are valid for both the current and a soname-in-the-rpmname scheme :)
Keeping the package the same name... regardless of the soname... means that on distro upgrade the old version gets removed. This works to expire an older library package, but breaks externally built apps that need the old library. You want to keep those things from breaking, and I want expired libs off the system. I would prefer to use the sonames in the package names as sparingly as possible... to minimize the amount of deprecated libraries on the system... until there is a solution to the larger question of how expiring of a package is supposed to work.
-jef
On Mon, Jan 24, 2005 at 05:54:20PM -0500, Jeff Spaleta wrote:
On Mon, 24 Jan 2005 23:40:43 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
Not sure how this fits in here. These are valid points you make, but they are valid for both the current and a soname-in-the-rpmname scheme :)
Keeping the package the same name... regardless of the soname... means that on distro upgrade the old version gets removed. This works to expire an older library package, but breaks externally built apps that need the old library. You want to keep those things from breaking, and I want expired libs off the system. I would prefer to use the sonames in the package names as sparingly as possible... to minimize the amount of deprecated libraries on the system... until there is a solution to the larger question of how expiring of a package is supposed to work.
Already posted in different siblings of this thread and implemented at ATrpms. Auto-expiring packages that should be disposed of if there is no dependency on them should simply provide a fake dependency to hook a garbage collector to.
The concept is solid, proven on various distros and even within the Red Hat world at ATrpms.
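A minimal sketch of that garbage-collector hook, assuming the virtual provide is spelled `shared-library-package` (the symbol name is not standardized anywhere; it is whatever such a policy would settle on):

```spec
# In every soname-versioned library package:
Provides: shared-library-package
```

A cleanup tool could then enumerate candidates with something like `rpm -q --whatprovides shared-library-package` and offer to erase any that no installed package requires.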
On Tue, 25 Jan 2005 00:21:13 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
Already posted in different siblings of this thread and implemented at ATrpms. Auto-expiring packages that should be disposed of if there is no dependency on them should simply provide a fake dependency to hook a garbage collector to.
No.. you missed my point... expiring because nothing depends on them is NOT what I'm talking about. I'm talking about expiring a package because the package author is no longer maintaining that version of the library, regardless of whether externally built applications from somewhere else are still using that version of the library. If I haven't made my point clear enough, I apologize. Your leaf detection mechanism is not going to address this issue.
-jef
On Mon, January 24, 2005 7:33 pm, Jeff Spaleta said:
No.. you missed my point... expiring because nothing depends on them is NOT what I'm talking about. I'm talking about expiring a package because the package author is no longer maintaining that version of the library, regardless of whether externally built applications from somewhere else are still using that version of the library. If I haven't made my point clear enough, I apologize. Your leaf detection mechanism is not going to address this issue.
Jeff,
Why not propose a solution that does what you want instead of imposing this artificial barrier on others trying to solve different problems?
Sean
On Mon, 24 Jan 2005 19:54:30 -0500 (EST), Sean seanlkml@sympatico.ca wrote:
Why not propose a solution that does what you want instead of imposing this artificial barrier on others trying to solve different problems?
I'm explaining a situation that the current naming method handles better than the proposed per-soname scheme does. I'm not arguing for any policy change at all; you are. If you want to argue for a policy change I would hope that you are open-minded enough to keep issues other than your primary concern in mind. The soname-in-package-name policy you would like to see for all packages will negatively impact the aspect of packaging I bring up, compared to the current method of doing the per-soname naming on an as-needed basis. Packaging policy is a complex issue, and while you would like to focus on one aspect and build a policy that makes that one aspect easier to deal with... it has consequences for other aspects of packaging. I'd love a perfect solution that everyone will like and that solves all problems, but in the meantime I'm not horribly upset with the 'as needed' policy as a compromise that no one likes and that solves no problem perfectly.
How about we back up... and imagine reasons why, historically, the per-soname scheme by and large hasn't been used by Red Hat... perhaps if we did that there would be other aspects of packaging besides the one I bring up that constrain your soname solution.
-jef"but its much more fun to think Red Hat packagers are just stupid or malicious and have chosen a package naming scheme deliberately to irk other people"spaleta
On Mon, Jan 24, 2005 at 07:33:15PM -0500, Jeff Spaleta wrote:
On Tue, 25 Jan 2005 00:21:13 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
Already posted in different siblings of this thread and implemented at ATrpms. Auto-expiring packages that should be disposed of if there is no dependency on them should simply provide a fake dependency to hook a garbage collector to.
No.. you missed my point... expiring because nothing depends on them is NOT what I'm talking about. I'm talking about expiring a package because the package author is no longer maintaining that version of the library, regardless of whether externally built applications from somewhere else are still using that version of the library. If I haven't made my point clear enough, I apologize. Your leaf detection mechanism is not going to address this issue.
OK, neither will the soname-in-the-rpmname address this in any way, positive or negative, and as said the issue you raise is far more involved and w/o any good solution. The leaf detection would circumstantially help here, but that wouldn't be the main focus.
OK, neither will the soname-in-the-rpmname address this in any way, positive or negative, and as said the issue you raise is far more involved and w/o any good solution. The leaf detection would circumstantially help here, but that wouldn't be the main focus.
okay I'm going to throw out some silly ideas for gathering info on the 'age' of a package.
1. panu's leaf code - I've used it - it does turn up things that nothing depends on and does it nicely.
2. look for shared objects in the filelists of the things turned up in 1
3. look for packages that have been untouched in > N days/months/years
4. look for packages whose build date is many many months ago.
None of these will get rid of every case - but it will help us identify odd situations and left-over cruft.
thoughts?
-sv
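The four heuristics above could be sketched roughly as follows. The package names, dates, and dependency data here are mocked up for illustration; a real tool would query the rpm database (leaf detection, filelists, build dates) instead:

```python
from datetime import date

# Mocked rpm metadata: name -> (build date, file list, reverse deps).
# All names and dates below are hypothetical.
packages = {
    "libfoo1": (date(2003, 2, 1),  ["/usr/lib/libfoo.so.1"], []),
    "libfoo2": (date(2004, 11, 5), ["/usr/lib/libfoo.so.2"], ["myapp"]),
    "myapp":   (date(2004, 11, 6), ["/usr/bin/myapp"],       []),
}

def leaves(pkgs):
    """Heuristic 1: packages that nothing depends on (panu's leaf idea)."""
    return [name for name, (_, _, req_by) in pkgs.items() if not req_by]

def leaf_libraries(pkgs):
    """Heuristic 2: leaves whose file list contains shared objects."""
    return [name for name in leaves(pkgs)
            if any(".so" in f for f in pkgs[name][1])]

def older_than(pkgs, cutoff):
    """Heuristics 3/4: packages untouched/built before a cutoff date."""
    return [name for name, (built, _, _) in pkgs.items() if built < cutoff]

print(leaf_libraries(packages))                # ['libfoo1']
print(older_than(packages, date(2004, 1, 1)))  # ['libfoo1']
```

As the email says, none of these is conclusive on its own; libfoo1 shows up in both lists here, which is the kind of "odd situation" worth flagging for human review rather than automatic removal.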
seth vidal wrote:
okay I'm going to throw out some silly ideas for gathering info on the 'age' of a package.
1. panu's leaf code - I've used it - it does turn up things that nothing depends on and does it nicely.
2. look for shared objects in the filelists of the things turned up in 1
3. look for packages that have been untouched in > N days/months/years
4. look for packages whose build date is many many months ago.
None of these will get rid of every case - but it will help us identify odd situations and left-over cruft.
thoughts?
Sounds interesting. On the other hand, maybe we could do it manually (from a packager's point of view): if you package library foo, and you know that version 0.1 is not supported anymore (no one knows that better than you since you maintain it), you can add an Obsoletes tag to your libfoo5 package, and force the removal. Would that solve Jeff's issue?
Aurélien
Sounds interesting. On the other hand, maybe we could do it manually (from a packager's point of view): if you package library foo, and you know that version 0.1 is not supported anymore (no one knows that better than you since you maintain it), you can add an Obsoletes tag to your libfoo5 package, and force the removal. Would that solve Jeff's issue?
Obsoletes should be used when a package changes name. Not when someone thinks a new version gets rid of an old version.
-sv
seth vidal wrote:
Obsoletes should be used when a package changes name. Not when someone thinks a new version gets rid of an old version.
But "Obsoletes" could be used for this, couldn't it? And it's not just any "someone" who would use it, but the packager, who knows the lib. Actually, the package *does* change name, since the proposal is to include the soname in it. What are the shortcomings?
Aurélien
On Tue, 25 Jan 2005, Aurelien Bompard wrote:
seth vidal wrote:
Obsoletes should be used when a package changes name. Not when someone thinks a new version gets rid of an old version.
But "Obsoletes" could be used for this, couldn't it ?
It depends on your definition of Obsoletes. In my mind, it could be used for either purpose.
-- Rex
On Tue, 25 Jan 2005 08:53:17 -0500, seth vidal wrote:
Sounds interesting. On the other hand, maybe we could do it manually (from a packager's point of view): if you package library foo, and you know that version 0.1 is not supported anymore (no one knows that better than you since you maintain it), you can add an Obsoletes tag to your libfoo5 package, and force the removal. Would that solve Jeff's issue?
Obsoletes should be used when a package changes name. Not when someone thinks a new version gets rid of an old version.
Where are such strict semantics of the "Obsoletes" field defined?
Isn't it rather free to use? As in "we don't need that package any longer, it's obsolete and can be erased"?
If, for instance, functionality of one package is supplied by another package, that's not a rename, but a relocation of package capabilities. Package "foo" would "Obsoletes: bar <= 1.0". If an old library API/ABI is not used anymore and hence considered obsolete, a new version of the library could "Obsoletes: libfoo <= 0.9" just fine.
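In spec terms, the two uses Michael describes might look like this (package names and versions are hypothetical):

```spec
# Relocation: package "foo" takes over what "bar" used to ship.
Obsoletes: bar <= 1.0
Provides:  bar = 1.0

# Retirement: a new library package supersedes an obsolete API/ABI package.
Obsoletes: libfoo <= 0.9
```

The paired Provides in the relocation case keeps dependencies on "bar" resolvable after the erase.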
Where are such strict semantics of the "Obsoletes" field defined?
Isn't it rather free to use? As in "we don't need that package any longer, it's obsolete and can be erased"?
If, for instance, functionality of one package is supplied by another package, that's not a rename, but a relocation of package capabilities. Package "foo" would "Obsoletes: bar <= 1.0". If an old library API/ABI is not used anymore and hence considered obsolete, a new version of the library could "Obsoletes: libfoo <= 0.9" just fine.
Actually I was expressing my opinion on how I think it should be used.
that's why I said 'should'
-sv
On Tue, January 25, 2005 10:11 am, seth vidal said:
Where are such strict semantics of the "Obsoletes" field defined?
Isn't it rather free to use? As in "we don't need that package any longer, it's obsolete and can be erased"?
If, for instance, functionality of one package is supplied by another package, that's not a rename, but a relocation of package capabilities. Package "foo" would "Obsoletes: bar <= 1.0". If an old library API/ABI is not used anymore and hence considered obsolete, a new version of the library could "Obsoletes: libfoo <= 0.9" just fine.
Actually I was expressing my opinion on how I think it should be used.
that's why I said 'should'
Is there a good reason it shouldn't be used to mark packages as obsoleting others? If so, what is it?
Sean
On Tue, 25 Jan 2005 10:21:56 -0500, seth vidal wrote:
Is there a good reason it shouldn't be used to mark packages as obsoleting others? If so, what is it?
For right now? mostly b/c it's just obsoleting an old version - not a change of package name.
we shouldn't use obsoletes to do what a normal version update should do.
We don't. This is still about
libfoo obsoletes libfoo10
isn't it?
Michael Schwendt wrote:
On Tue, 25 Jan 2005 08:53:17 -0500, seth vidal wrote:
Sounds interesting. On the other hand, maybe we could do it manually (from a packager's point of view) : if you package library foo, and you know that version 0.1 is not supported anymore (noone knows that better than you since you maintain it), you can add an Obsoletes tag to your libfoo5 package, and force the removal. Would that solve Jeff's issue ?
Obsoletes should be used when a package changes name. Not when someone thinks a new version gets rid of an old version.
Where are such strict semantics of the "Obsoletes" field defined?
The strict semantics are hard coded in up2date and yum, for starters; hard coded is strict enough for me.
Isn't it rather free to use? As in "we don't need that package any longer, it's obsolete and can be erased"?
Sure, feel free to do whatever. Without well defined packaging guidelines, packages will break, but reports will be filed, packages will be fixed, and life goes on as usual.
If, for instance, functionality of one package is supplied by another package, that's not a rename, but a relocation of package capabilities. Package "foo" would "Obsoletes: bar <= 1.0". If an old library API/ABI is not used anymore and hence considered obsolete, a new version of the library could "Obsoletes: libfoo <= 0.9" just fine.
Obsoletes: has changed to erase a package that contains a virtual provides for exactly this reason.
In fact, dependency comparisons haven't used package NEVR at all for several years now, in order to handle functionality shifts that are not package renames.
73 de Jeff
On Tue, Jan 25, 2005 at 10:56:17AM -0500, Jeff Johnson wrote:
Obsoletes: has changed to erase a package that contains a virtual provides for exactly this reason.
Unless it is provided by the same package like the usual
Provides: foo Obsoletes: foo [<= ...]
But it is even worse: if two different packages provide/obsolete foo, now they obsolete each other, too. That's unexpected behaviour IMHO.
Furthermore Provides: currently also effectively implies Obsoletes, but only for non-virtual Provides (e.g. a package name). That's the bug^Wfeature that has troubled PyVault and others so much.
So I agree with Jeff. There is a high need for strict definitions of what these semantics express and what rpm (and thus rpm-based resolvers) should do with it.
Aurelien Bompard wrote:
Sounds interesting. On the other hand, maybe we could do it manually (from a packager's point of view): if you package library foo, and you know that version 0.1 is not supported anymore (no one knows that better than you since you maintain it), you can add an Obsoletes tag to your libfoo5 package, and force the removal. Would that solve Jeff's issue?
Or you release a new release of that version which provides the garbage collector symbol, Axel suggested.
Harald Hoyer wrote:
Or you release a new release of that version which provides the garbage collector symbol, Axel suggested.
I'd be very happy to do that, but I'd like to know: is this the scheme Red Hat is going to be moving to? Soname in the package name and a "Provides: shared-library-package" in the spec file? Do we agree on this?
I also think it is a great solution, which correctly addresses all the major issues raised here. I'd be very happy to see it become a packaging policy.
Thanks
Aurélien
On Thu, 2005-01-27 at 16:02 +0100, Aurelien Bompard wrote:
Harald Hoyer wrote:
Or you release a new release of that version which provides the garbage collector symbol, Axel suggested.
I'd be very happy to do that, but I'd like to know: is this the scheme Red Hat is going to be moving to? Soname in the package name and a "Provides: shared-library-package" in the spec file? Do we agree on this?
I also think it is a great solution, which correctly addresses all the major issues raised here. I'd be very happy to see it become a packaging policy.
I missed Axel's original posting so I may well be missing something: I thought the discussion was moving towards how to have the user specify at install time that a package is okay to delete if nothing depends on it. Isn't a "Garbage collector symbol" via a Provides done by the packager at rpm creation time instead? If so, the user will still have to manually intervene with the list of garbage collections. (Although, it might not be as bad as some of the other schemes out there.)
I'm also still waiting to see why the current de facto scheme of:
current = libname
previous = libname[Version]
is _compellingly_ wrong... Perhaps there just needs to be a summary of Pros and Cons so we can see the tradeoffs.
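For concreteness, the de facto scheme Toshio describes reads roughly like this (library names are illustrative):

```spec
# Current Core practice: compat packages are created only on demand.
#   libfoo   -> ships libfoo.so.2  (tracks the current upstream soname)
#   libfoo1  -> ships libfoo.so.1  (added only when something still needs it)
#
# Proposed policy: every library package always carries its soname,
# so libfoo1 and libfoo2 both exist from the start and can coexist.
```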
-Toshio --
On Thu, 27 Jan 2005 12:49:12 -0500, Toshio toshio@tiki-lounge.com wrote:
I'm also still waiting to see why the current de facto scheme of:
current = libname
previous = libname[Version]
is _compellingly_ wrong... Perhaps there just needs to be a summary of Pros and Cons so we can see the tradeoffs.
If I understand the argument that people are making... it is that doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
And by association it's also a burden on users who are trying to use applications from outside of current Core that still need the older libs, until a 3rd party is able to rebuild a package with the older libs or the application developers retool to support the new library.
My counter argument is that doing it this way historically has provided a mechanism by which Core (and rhl before it) explicitly and deliberately chooses to expire older libraries that Core is no longer maintaining and that nothing inside Core still needs.
Another argument which has been made for continuing in this fashion is that standardizing on using sonames in all library packages potentially lowers the bar to backward-compatibility cruft. Such libraries would linger in an unknown maintainership state and be difficult to drop at any point because there will always be users who are using legacy application that needs that legacy library.
-jef
Jeff Spaleta wrote:
If I understand the argument that people are making... it is that doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
I also find it much easier for new contributors to Extras to have a clear policy about this. Since more and more people are going to be contributing to Fedora, I think that we have to agree on policies.
Another argument which has been made for continuing in this fashion is that standardizing on using sonames in all library packages potentially lowers the bar to backward-compatibility cruft. Such libraries would linger in an unknown maintainership state and be difficult to drop at any point because there will always be users who are using legacy application that needs that legacy library.
Very true, and that's what Axel's proposal to introduce a virtual Provides is trying to solve. With it, a simple "rpm -q --whatprovides shared-library-package" returns all the library packages.
Now we need a way to actually expire the old libraries. Proposals have been made in this direction to flag what the user has directly asked for and what was brought in as a dependency. In that case the library could be garbage-collected (if nothing depends on it anymore). This would be great, but I don't know how much work it would require to implement it in rpmlib (and in wrappers maybe? maybe not needed)
Another solution would be to select those on which nothing depends and to ask the user whether they should be removed. This would be much easier to implement, of course.
Does this sum it up correctly?
I think that both solutions are not mutually exclusive. We could start the slow move to add virtual provides in the shared libs, and sonames in the rpm names, and write the second script. We would have something usable before rpmlib gets the "garbage-collect" feature.
Did I miss any points?
Thanks
Aurélien
On Thu, 27 Jan 2005 21:52:18 +0100, Aurelien Bompard gauret@free.fr wrote:
Another argument which has been made for continuing in this fashion is that standardizing on using sonames in all library packages potentially lowers the bar to backward-compatibility cruft. Such libraries would linger in an unknown maintainership state and be difficult to drop at any point because there will always be users who are using legacy application that needs that legacy library.
Very true, and that's what Axel's proposal to introduce a virtual Provides is trying to solve. With it, a simple "rpm -q --whatprovides shared-library-package" returns all the library packages.
I don't think the shared-library-package provides solves the cruft problem from the project perspective. From the user perspective, yes, the provides hack will be a somewhat useful mechanism for users/admins to cull cruft from their boxes. But I don't think that's what Ville had in mind when making the argument. I think Ville was talking about how the project as a whole prevents itself from accumulating layers and layers of library cruft that become harder and harder to maintain as time goes by, because there is less and less upstream interest in maintaining the aging codebase. It's easy to package up an old library and offer it... it gets harder and harder to 'maintain' such a package with critical and security related patches if the upstream development interest has dried up. I think Ville's argument is aimed at making sure that whatever policy Core is using doesn't encourage the accumulation of difficult-to-maintain legacy libraries which users expect to be maintained to a high standard.
Now we need a way to actually expire the old libraries. Proposals have been made in this direction to flag what the user has directly asked for and what was brought in as a dependency. In that case the library could be garbage-collected (if nothing depends on it anymore). This would be great, but I don't know how much work it would require to implement it in rpmlib (and in wrappers maybe? maybe not needed)
Again, these solutions have focused on the user's perspective of how to garbage-collect unused libraries. That is not the crux of my concern. I've been trying to talk about a mechanism by which the package authors can expire a library... thus notifying ALL users of that particular package that the author is no longer going to be providing any maintenance for that library. And when the package authors have decided to expire the library, the secure thing to do is to remove that library from the system regardless of what applications are still using it, until a new package author can be found who is willing to 'maintain' the library. This is what effectively happens when Core drops libfoo.so.1 from package libfoo and adds libfoo.so.2 instead. When the installer is used to upgrade to the new Core, any package that depends on that old libfoo.so.1 library breaks or gets removed. Sucks for the user that doesn't care about running vulnerable code... sucks marginally less for a user who wants to make sure they aren't going to be opening themselves up to a future vulnerability that won't get patched with an update.
The garbage collecting ideas address a different problem altogether than the 'expiration' issue I see the current Core naming scheme being implicitly used for in some situations. Even if you garbage collect... you still don't have a clue about whether the original author is going to be issuing updates anymore for a libfoo1 package that you are still using for a number of applications under the soname-in-packagename scheme. Keeping the libfoo package name across soname versions implicitly defines an enforced expiration policy by the package author.
-jef
On Thu, January 27, 2005 5:43 pm, Jeff Spaleta said:
I don't think the shared-library-package provides solves the cruft problem from the project perspective. From the user perspective, yes, the provides hack will be a somewhat useful mechanism for users/admins to cull cruft from their boxes. But I don't think that's what Ville had in mind when making the argument. I think Ville was talking about how the project as a whole prevents itself from accumulating layers and layers of library cruft that become harder and harder to maintain as time goes by, because there is less and less upstream interest in maintaining the aging codebase. It's easy to package up an old library and offer it... it gets harder and harder to 'maintain' such a package with critical and security related patches if the upstream development interest has dried up. I think Ville's argument is aimed at making sure that whatever policy Core is using doesn't encourage the accumulation of difficult-to-maintain legacy libraries which users expect to be maintained to a high standard.
Jeff,
What you're talking about is not available today. Except to the extent people only install officially sanctioned core rpms. You're right that making it easier to install 3rd party rpms may lead to some poor user having a few old libraries kicking around. But this should not be an overarching concern.
Unless you stop all 3rd party rpms from being installed there will be absolutely no way to know when some arbitrary 3rd party rpm falls into the category of unmaintained. This issue you keep raising just seems like a boondoggle in the face of trying to solve real issues of interoperability.
If you want to write an app that scans people's hard drives or rpm databases and warns them of any old unmaintained or security-risk software... all the power to you. Can we please get on with the task of making software installation, along with the necessary dependencies, easier? If there is really a market for some software to scan for problem software, I'm sure it will get written.
Sean
On Thu, 27 Jan 2005 19:08:36 -0500 (EST), Sean seanlkml@sympatico.ca wrote:
This issue you keep raising just seems like a boondoggle in the face of trying to solve real issues of interoperability.
Feel free to read any intent you feel you need to into what I'm saying. I'm looking at what I see as the historical 'implied' usage of the library naming scheme in use. I'd love to be told I am wrong, and that this hasn't been a reason why the particular naming scheme has been used in the past. But if this has been a reason in the past... it's worth noting and trying to understand why it was deemed important before. You feel this isn't an important issue... fine... your opinion. But I want to make sure that everyone in this discussion has a competent understanding of WHY we have the current naming scheme... before myopic decisions are made with regard to what to do in the future.
You don't think this is an important issue... fine... but your opinion of its importance doesn't change whether or not the issue of 'expiring' is a reason why the naming scheme is currently in use. Regardless of what you think the correct path forward is, I think it's vitally important to have an understanding of WHY the current naming scheme is being used, to underpin any competent discussion about how to change it. I'm not sure any of the vocal proponents of the naming policy change have an understanding of why the current naming scheme exists.
Can we please get on with the task of making software installation, along with the necessary dependencies, easier?
Once everyone has a good feel for why the current naming scheme exists... maybe... it sort of depends on whether the 'right' people's priorities line up with yours. I'm not even sure the 'right' people are even reading this thread any more, so any further discussion may very well be moot.
-jef"playing the jester to the moot court"spaleta
On Thu, January 27, 2005 7:59 pm, Jeff Spaleta said:
Feel free to read any intent you feel you need to into what I'm saying. I'm looking at what I see as the historical 'implied' usage of the library naming scheme in use. I'd love to be told I am wrong, and that this hasn't been a reason why the particular naming scheme has been used in the past. But if this has been a reason in the past, it's worth noting and trying to understand why it was deemed important before. You feel this isn't an important issue... fine... your opinion. But I want to make sure that everyone in this discussion has a competent understanding of WHY we have the current naming scheme... before myopic decisions are made with regard to what to do in the future.
Perhaps you could come up with some better objections to the proposals than a vague concern that someone might have some old, unmaintained software left on their system. Perhaps you could elucidate how you intend to make such a determination for every third party rpm in the ether.
You don't think this is an important issue... fine... but your opinion of its importance doesn't change whether or not the issue of 'expiring' is a reason why the naming scheme is currently in use. Regardless of what you think the correct path forward is, I think it's vitally important to have an understanding of WHY the current naming scheme is being used, to underpin any competent discussion about how to change it. I'm not sure any of the vocal proponents of the naming policy change have an understanding of why the current naming scheme exists.
Perhaps a more constructive conversation could ensue if it weren't interrupted by notions of providing protection from "unmaintained" software, protection that doesn't even exist today. You're demanding features from the proponents of change that the status quo doesn't provide either. It's not helpful or constructive.
Once everyone has a good feel for why the current naming scheme exists... maybe... it sort of depends on whether the 'right' people's priorities line up with yours. I'm not even sure the 'right' people are even reading this thread any more, so any further discussion may very well be moot.
Frankly, I don't think this point that seems so dear to you has helped anyone get a feel for why the current naming scheme exists. It certainly doesn't protect against unmaintained 3rd party RPMs today.
Sean
Jeff Spaleta wrote:
Again, these solutions have focused on the user's perspective of how to garbage-collect unused libraries. That is not the crux of my concern. I've been trying to talk about a mechanism by which the package authors can expire a library, thus notifying ALL users of that particular package that the author is no longer going to be providing any maintenance for that library.
I see your point. OK, this is a dirty hack, but could we have, say, the fedora-release package "Conflict" with expiring library packages? (Told you it was dirty...) My point is: could we use our existing packages and the Conflicts mechanism (or another one) to expire a library?
I really think that this problem is worth taking some time to solve.
Aurélien
On Sun, January 30, 2005 7:06 pm, Aurelien Bompard said:
I see your point. OK, this is a dirty hack, but could we have, say, the fedora-release package "Conflict" with expiring library packages? (Told you it was dirty...) My point is: could we use our existing packages and the Conflicts mechanism (or another one) to expire a library?
I really think that this problem is worth taking some time to solve.
What exactly is the problem? Users uninstall old applications they no longer use, or upgrade to new versions that use new libraries. When nothing references an old library any more, it can be garbage collected.
Sean
Sean wrote:
What exactly is the problem? Users uninstall old applications they no longer use, or upgrade to new versions that use new libraries. When nothing references an old library any more, it can be garbage collected.
I think Jeff wants the garbage-collecting process to be initiated by the distribution, and not to depend on the user running a garbage collector by himself.
Aurélien
On Mon, January 31, 2005 5:54 am, Aurelien Bompard said:
I think Jeff wants the garbage-collecting process to be initiated by the distribution, and not to depend on the user running a garbage collector by himself.
Your idea would work, as would a couple of other options. It just seems like the wrong time to worry about such a minor corner case that won't really change the end-user experience. If Jeff or someone else wants to worry about it now, there are ways they can proceed without interjecting the demand that it be solved by everyone else working on other improvements to rpm.
Cheers, Sean
On Mon, 31 Jan 2005 06:18:25 -0500 (EST), Sean seanlkml@sympatico.ca wrote:
Your idea would work, as would a couple of other options. It just seems like the wrong time to worry about such a minor corner case that won't really change the end-user experience. If Jeff or someone else wants to worry about it now, there are ways they can proceed without interjecting the demand that it be solved by everyone else working on other improvements to rpm.
You keep missing my point. I think some packagers have been worrying about this all along, and it is one of the reasons why soname-in-packagename has never been the norm for Red Hat. Whether or not you think it's a corner case, while a fascinating bit of information for me to know, doesn't change whether or not the issue I raise has been an important factor in the past for the packagers who are not using the soname-in-packagename model. And if it has been an important factor in the past, I'd like to know what has changed.
-jef
On Mon, 31 Jan 2005, Jeff Spaleta wrote:
I think some packagers have been worrying about this all along, and it is one of the reasons why soname-in-packagename has never been the norm for Red Hat.
One reasonable option is to use the 'soname in rpm name' scheme only when/if the packager ever expects/needs multiple installed versions. If not, keep the status quo.
-- Rex
On Mon, January 31, 2005 8:35 am, Jeff Spaleta said:
You keep missing my point. I think some packagers have been worrying about this all along, and it is one of the reasons why soname-in-packagename has never been the norm for Red Hat. Whether or not you think it's a corner case, while a fascinating bit of information for me to know, doesn't change whether or not the issue I raise has been an important factor in the past for the packagers who are not using the soname-in-packagename model. And if it has been an important factor in the past, I'd like to know what has changed.
Jeff,
You keep missing the point. Just because you don't think it's a corner case, while a fascinating bit of information, doesn't mean it's an important factor for anyone. You continue to miss that people have already offered workable solutions. Anyway, you've failed to show it has ever been an important factor in the past or would even be one in the future. Nothing has changed.
Sean
On Mon, 31 Jan 2005 10:12:02 -0500 (EST), Sean seanlkml@sympatico.ca wrote:
You keep missing the point. Just because you don't think it's a corner case, while a fascinating bit of information, doesn't mean it's an important factor for anyone.
I don't think I've made a judgement as to whether or not this is a corner case. What I care about is understanding why the current naming scheme was chosen, so we can have an informed debate. So far no one has told me my observations are incorrect. Whether or not this historical usage is an important consideration is a discussion that comes after we have an understanding of what the historical usages are. Without an understanding of why the current naming scheme was chosen, and how it has been used, we are not in a solid position to evaluate a change in policy. I challenge you, as a proponent of change, to give us your understanding of why the current naming policy is in place.
You continue to miss that people have already offered workable solutions.
Actually, they haven't. There have been proposals to address garbage collection, which is not what I'm talking about at all. Unused libraries just take up space and can be garbage collected. Used but unmaintained libraries could become significant security concerns over time. It's a hard issue, with no clean solution: how, as a package vendor, do you do the due diligence to prevent users from unknowingly running unmaintained libraries? It's about being as honest as possible with the users who are relying on updates to packages. I believe that Red Hat's current naming scheme is a hacked-up attempt to be as honest as possible with the userbase (inside the constraints of the packaging system) about the maintenance state of shared library packages. Is it a hack? Yep, absolutely. But this is the role I see the current naming scheme playing.
Anyway, you've failed to show it has ever been an important factor in the past or would even be one in the future.
Only the Red Hat packagers who have been using the naming scheme over the multiple releases of RHL, RHEL and Fedora can give credible insight into why the naming scheme is being used. If I'm wrong, I'm wrong. But I think a serious discussion about changing the naming scheme requires a serious understanding of why the current naming scheme is being used. I challenge you, as a proponent of change, to give me your understanding of why you think the current naming scheme is being used.
Nothing has changed.
Great... nothing has changed... whatever priorities have underpinned the usage of the current naming scheme are still the same, and we can continue to use the current naming scheme.
-jef
On Mon, January 31, 2005 11:30 am, Jeff Spaleta said:
-- I don't think I've made a judgement as to whether or not this is a corner case. What I care about is understanding why the current naming scheme was chosen, so we can have an informed debate. So far no one has told me my observations are incorrect. Whether or not this historical usage is an important consideration is a discussion that comes after we have an understanding of what the historical usages are. Without an understanding of why the current naming scheme was chosen, and how it has been used, we are not in a solid position to evaluate a change in policy. I challenge you, as a proponent of change, to give us your understanding of why the current naming policy is in place. --
I'm just a proponent of you coming to terms with the fact that your instinct to become involved in every single thread just wasn't helpful or insightful this time. Perhaps there's a chance... however small... that you might actually be... gasp... wrong. Really, a viable option was even presented in this thread to deal with your concern. But if you have no historical insight or judgement of your own on the matter, let someone who does pipe in if they think it's important, instead of sidetracking the conversation into oblivion. Not everything has to be explained to _you_ before proceeding.
TTN, Sean
On Thu, 27 Jan 2005 14:46:01 -0500, Jeff Spaleta wrote:
On Thu, 27 Jan 2005 12:49:12 -0500, Toshio toshio@tiki-lounge.com wrote:
I'm also still waiting to see why the current de facto scheme of:

current  = libname
previous = libname[Version]
is _compellingly_ wrong... Perhaps there just needs to be a summary of Pros and Cons so we can see the tradeoffs.
If I understand the argument that people are making, doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
Whenever that happens - when a Core package is renamed like this - the 3rd party packagers need to update their spec files to buildrequire libname[Version]-devel instead.
Similarly, if the soname's major library version were always put into the package name, the Core packagers and everyone else who wants to build rpms with the new library would need to update the build requirements in all spec files when the major library version is increased.
And what does the rule look like with packages which contain multiple libraries with different major versions?
And by association it is also a burden on users who are trying to use applications from outside of current Core that still need the older libs, until a 3rd party is able to rebuild a package with the older libs or the application developers retool to support the new library.
That I don't see. Thanks to automatic dependencies on versioned sonames, the user doesn't need to know where libfoo.so.3 comes from, i.e. whether it is to be found in package libfoo3-3.0.1-1 or libfoo-3.0.3-1.
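To illustrate that point (the package name and version here are hypothetical, not a real package), the dependency is on the soname capability, so the same query resolves under either naming scheme:

```
$ rpm -q --whatprovides 'libfoo.so.3'
libfoo3-3.0.1-1
```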
On Fri, 28 Jan 2005, Michael Schwendt wrote:
If I understand the argument that people are making, doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
Whenever that happens - when a Core package is renamed like this - the 3rd party packagers need to update their spec files to buildrequire libname[Version]-devel instead.
I thought the proposal included that each package include "Provides: libname = %version", or was that also determined to be problematic?
-- Rex
On Thu, 27 Jan 2005 20:45:47 -0600 (CST), Rex Dieter wrote:
If I understand the argument that people are making, doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
Whenever that happens - when a Core package is renamed like this - the 3rd party packagers need to update their spec files to buildrequire libname[Version]-devel instead.
I thought the proposal included that each package include "Provides: libname = %version", or was that also determined to be problematic?
Unfortunately, that's a rather short example. Can you extend it a bit, please, or point me to the full-blown proposal? What is %version here? The same old version as we know it? How exactly does it look in a library package, its -devel counterpart, and a spec file which buildrequires this thing? So far, we've had "BuildRequires: taglib-devel". What would it look like instead?
On Thu, Jan 27, 2005 at 08:45:47PM -0600, Rex Dieter wrote:
On Fri, 28 Jan 2005, Michael Schwendt wrote:
If I understand the argument that people are making, doing it this way is a burden on 3rd party packagers, who have to try to predict when and if Core is going to introduce a libname[Version] for previous versions.
Whenever that happens - when a Core package is renamed like this - the 3rd party packagers need to update their spec files to buildrequire libname[Version]-devel instead.
I thought the proposal included that each package include "Provides: libname = %version", or was that also determined to be problematic?
That would be one way, but I would prefer to be less intrusive right now: just use foo-devel and have foo-devel require libfoo<major>.
I.e. the libfoo<major> package is nowhere explicitly required outside the package itself. That way all packages can be refactored asynchronously, without changing the dependencies of other packages.
Example:
old: foo-1.2.3-4.src.rpm generates
    foo-1.2.3-4.i386.rpm
    foo-devel-1.2.3-4.i386.rpm

new: foo-1.2.3-4.src.rpm generates
    foo-1.2.3-4.i386.rpm
    libfoo5-1.2.3-4.i386.rpm
    foo-devel-1.2.3-4.i386.rpm
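A minimal sketch of what that refactored spec could contain. The names (foo, libfoo5, the soname major 5) and the file lists are illustrative only, not a tested spec; the "shared-library-package" marker follows the proposal elsewhere in this thread:

```spec
Name:    foo
Version: 1.2.3
Release: 4
Summary: Example library package

# Library-only subpackage, named after the soname major (libfoo.so.5).
%package -n libfoo5
Summary: Shared library for foo
# Marker proposed in this thread, so a garbage collector can find
# disposable library-only packages once nothing depends on them.
Provides: shared-library-package

# -devel keeps its old name; other spec files can continue to say
# "BuildRequires: foo-devel" and never reference libfoo5 directly.
%package devel
Summary: Development files for foo
Requires: libfoo5 = %{version}-%{release}

%files -n libfoo5
%{_libdir}/libfoo.so.5*

%files devel
%{_libdir}/libfoo.so
```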
On Thu, Jan 27, 2005 at 04:02:14PM +0100, Aurelien Bompard wrote:
Harald Hoyer wrote:
Or you release a new release of that version which provides the garbage collector symbol, as Axel suggested.
I'd be very happy to do that, but I'd like to know: is this the scheme Red Hat is going to be moving to? Soname in the package name and a "Provides: shared-library-package" in the spec file? Do we agree on this?
I hope so. There were no strict arguments against using that scheme.
I also think it is a great solution, which correctly addresses all the major issues raised here. I'd be very happy to see it become a packaging policy.
Just to summarize:
%{_libdir}/libfoo.so.N*      gets packaged to libfooN
%{_libdir}/libfoo-X.Y.so.N*  gets packaged to libfoo-X.Y_N
(i.e. if the part before ".so.N*" ends with a digit, separate the library's major version with an underscore).
Both variants have a "Provides: shared-library-package" to help identify the disposable-if-not-depended-upon packages.
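Applied to a pair of hypothetical library files (libglade is only used as an illustrative name here), the two rules would map:

```
%{_libdir}/libglade.so.0*      ->  package libglade0
%{_libdir}/libglade-2.0.so.0*  ->  package libglade-2.0_0
```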
On Tue, 25 Jan 2005 02:02:51 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
OK, neither will the soname-in-the-rpmname address this in any way, positive or negative, and, as said, the issue you raise is far more involved and without any good solution. The leaf detection would circumstantially help here, but that wouldn't be the main focus.
On the other hand, keeping packages the same name when an older soname expires handles the case I bring up exceedingly well: package libfoo in FC3 provides libfoo.so.1; package libfoo in FC4 provides libfoo.so.2. Upgrade from FC3 to FC4 and libfoo.so.1 is no longer on the system. It works like a charm at keeping unmaintained library versions off the system. It works so well, in fact, that it almost seems like it was a design goal for naming library packages without sonames.
And that's the point I'm trying to make. Moving to a per-soname naming when it's absolutely not needed has consequences for other aspects of packaging. As soon as the proponents of soname-in-the-rpmname have a workable proposed solution to the problem I bring up, I'll gladly stop bringing it up.
-jef
On Mon, Jan 24, 2005 at 08:27:59PM -0500, Jeff Spaleta wrote:
On Tue, 25 Jan 2005 02:02:51 +0100, Axel Thimm Axel.Thimm@atrpms.net wrote:
OK, neither will the soname-in-the-rpmname address this in any way, positive or negative, and, as said, the issue you raise is far more involved and without any good solution. The leaf detection would circumstantially help here, but that wouldn't be the main focus.
On the other hand, keeping packages the same name when an older soname expires handles the case I bring up exceedingly well: package libfoo in FC3 provides libfoo.so.1; package libfoo in FC4 provides libfoo.so.2. Upgrade from FC3 to FC4 and libfoo.so.1 is no longer on the system. It works like a charm at keeping unmaintained library versions off the system. It works so well, in fact, that it almost seems like it was a design goal for naming library packages without sonames.
That's taken care of by the garbage collection mentioned here.
What is not taken care of, and is a real problem you touched on, is when a package becomes deprecated with no upgrade path and rots in your N-times-upgraded system. But that's off-topic in this context. The soname-in-rpmname neither helps nor hurts.
And that's the point I'm trying to make. Moving to a per-soname naming when it's absolutely not needed
Not needed? Let's not reiterate the whole discussion. There is added value in having the libraries coinstallable, like they are meant to be. If you need arguments for it, just insert the arguments for having library versioning at all at this spot.
has consequences for other aspects of packaging. As soon as the proponents of soname-in-the-rpmname have a workable proposed solution to the problem I bring up, I'll gladly stop bringing it up.
Well, you raised a topic that is not really related to the soname-in-rpmname. And whatever is indeed tangentially related, the Provides: hook and garbage collection method have their ways of dealing with it, better than the current compat-whatever packaging.
So at the end of the day you have a win-win situation.
Axel Thimm wrote:
Already posted in different siblings of this thread and implemented at ATrpms: auto-expiring packages, which should be disposed of if there is no dependency on them, should simply provide a fake dependency to hook a garbage collector to.
The concept is solid, proven on various distros and even within the Red Hat world at ATrpms.
Very nice idea!
On Mon, 24 Jan 2005, Jeff Spaleta wrote:
On Mon, 24 Jan 2005 09:15:14 -0500, Sean Middleditch elanthis@awesomeplay.com wrote:
We _have_ had this problem, btw. The problem is that it's not generally developers that notice it; it's the user that just wants to have their machine work. I go to install third party app Foo from Foo's web site, it needs libbar.so.2, Fedora only has libbar.so.1, and many other apps on the net require libbar.so.1.
A third party website is packaging libbar.so.2 under the same package name as Fedora's libbar.so.1? Why would a third party site do that, unless the intention was to replace the Fedora package? Isn't this an example of the care 3rd party packagers should be taking to make sure their packages work well with Core?
Jeff, this implementation forces both 3rd party packagers as well as Red Hat and Fedora Extras to be stricter than they need to be. It is technically possible to lift these limitations, and that's what I think Sean is talking about.
Why a 3rd party packager is updating a core package may be for various reasons, some more valid than others. Not all 3rd party packagers replace core (library) packages, because we know it potentially introduces problems. But technically speaking there is no real reason to have this limitation.
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [all I want is a warm bed and a kind word and unlimited power]
Nicolas Mailhot wrote:
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them? For this task, Debian has something called deborphan (which might do more than that) and Mandrake has "urpm-find-leaves" IIRC. Maybe that could be a start.
Aurélien
On Monday 24 January 2005 21:06, Aurelien Bompard wrote:
Nicolas Mailhot wrote:
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them? For this task, Debian has something called deborphan (which might do more than that) and Mandrake has "urpm-find-leaves" IIRC. Maybe that could be a start.
It'll be slow to materialize. The key factor is that this is usually a result of:

1. Using third party repositories.
2. Doing dist upgrades.
Neither of which finds amicable or fruitful discussion. Bad-blood carryover from earlier days.
On Mon, 24 Jan 2005, Aurelien Bompard wrote:
Nicolas Mailhot wrote:
Well, if it can be easily solved, how come we still have this problem? There are several packages that use the soname approach in FC, and they are the ones that are a PITA to clean up now and then.
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them? For this task, Debian has something called deborphan (which might do more than that) and Mandrake has "urpm-find-leaves" IIRC. Maybe that could be a start.
Check out this for example: http://lists.freshrpms.net/pipermail/freshrpms-list/2003-March/003391.html
- Panu -
On Monday, 24 January 2005 at 14:06 +0100, Aurelien Bompard wrote:
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them? For this task, Debian has something called deborphan (which might do more than that) and Mandrake has "urpm-find-leaves" IIRC. Maybe that could be a start.
Not perfect:
$ rpm -q -a | xargs -i bash -c "rpm -e --test {} > /dev/null 2>&1 && echo {}"
Hi.
Aurelien Bompard gauret@free.fr wrote:
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Ralf Ertzinger wrote:
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Panu's script looks in the file list for .so files, which is interesting. He also looks at the RPM Group tag, apparently. There is probably room for improvement, but that's a good start.
Aurélien
On Mon, 2005-01-24 at 15:16 +0100, Aurelien Bompard wrote:
Ralf Ertzinger wrote:
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Panu's script looks in the file list for .so files, which is interesting. He also looks at the RPM Group tag, apparently. There is probably room for improvement, but that's a good start.
Run the script with the -a option to see why only packages with .so files are considered: without it there are far too many false positives for the output to be useful (never mind dangerous stuff like 'grub' being listed as unneeded). One can probably improve the heuristics somewhat, but it'll always be just a checklist of *potentially* unneeded stuff.
- Panu -
Panu Matilainen wrote:
One can probably improve the heuristics somewhat but it'll always be just a checklist of potentially unneeded stuff.
Just like what deborphan or urpm-find-leaves do. This is why this list should be proposed to the admin, who can choose what should be removed and what should be kept. Nothing automatic is possible IMHO, but it can still be useful for finding leftover packages.
Aurélien
On Mon, 2005-01-24 at 15:11 +0100, Ralf Ertzinger wrote:
Hi.
Aurelien Bompard gauret@free.fr wrote:
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Though you won't -- at this moment -- find out 100% whether a library package is needed, because the library can be dlopen()ed, and this is usually not encoded into package requirements (yet); cf. the current "suggests/requires in rpm" thread.
Nils
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
Hi. Aurelien Bompard gauret@free.fr wrote:
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end-user value, and mark everything else brought in to satisfy dependencies such that it is (or can be) removed as soon as no dependencies remain.
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
Hi. Aurelien Bompard gauret@free.fr wrote:
I did not know. Is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else brought in to satisfy dependencies such that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library-only packages with
Provides: shared-library-package
so these packages can be identified with

rpm -q --whatprovides shared-library-package
and be probed for garbage collection.
I.e. there is no need to extend rpm, you have everything already in place.
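As a sketch of how a garbage collector could use such a hook, here is a toy model in Python. The dictionaries stand in for the rpm database (it does not use real rpm bindings), and the package names echo the libkexif example that started this thread:

```python
# Toy model of the proposed garbage collection: packages carrying the
# "shared-library-package" provide are disposable once no installed
# package requires any capability they provide.

def find_disposable(packages):
    """Return names of marked library-only packages nothing depends on."""
    # Collect every capability that some package still requires.
    needed = set()
    for pkg in packages:
        needed.update(pkg["requires"])
    disposable = []
    for pkg in packages:
        if "shared-library-package" not in pkg["provides"]:
            continue  # only library-only packages are candidates
        # Keep the package if any of its provides is still required.
        if not needed.intersection(pkg["provides"]):
            disposable.append(pkg["name"])
    return disposable

if __name__ == "__main__":
    db = [
        {"name": "digikam", "provides": {"digikam"},
         "requires": {"libkexif.so.1"}},
        {"name": "libkexif1", "requires": set(),
         "provides": {"libkexif.so.1", "shared-library-package"}},
        {"name": "libkexif0", "requires": set(),
         "provides": {"libkexif.so.0", "shared-library-package"}},
    ]
    print(find_disposable(db))  # prints ['libkexif0']
```

libkexif1 survives because digikam still requires libkexif.so.1; the orphaned libkexif0 is flagged for removal.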
Axel Thimm wrote:
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
Hi. Aurelien Bompard gauret@free.fr wrote:
I did not know, is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them ?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else brought in to satisfy dependencies such that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library-only packages with
Provides: shared-library-package
so these packages can be identified with

rpm -q --whatprovides shared-library-package
and be probed for garbage collection.
I.e. there is no need to extend rpm, you have everything already in place.
And there's no need for the *.spec churn and the marker in dependencies:
rpm -qa 'classdict=*shared object*'
73 de Jeff
On Mon, Jan 24, 2005 at 02:32:43PM -0500, Jeff Johnson wrote:
Axel Thimm wrote:
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
Hi. Aurelien Bompard gauret@free.fr wrote:
I did not know, is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them ?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else brought in to satisfy dependencies such that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library-only packages with
Provides: shared-library-package
so these packages can be identified with

rpm -q --whatprovides shared-library-package
and be probed for garbage collection.
I.e. there is no need to extend rpm, you have everything already in place.
And there's no need for the *.spec churn and the marker in dependencies:
rpm -qa 'classdict=*shared object*'
wow, nice trick, I'm impressed! With rpm one cannot stop learning :)
Indeed, if all shared libs were already packaged in the suggested scheme, this would catch them all.
I don't think everything should be packaged that way (distribution invariants like glibc, for instance, should not be), and such an idiom would also need time to spread into rawhide, so having the Provides: shared-library-package is a good hook to distinguish leaf-disposable packages from others that bundle libs together with further stuff.
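As an aside, the probing Axel describes could be scripted roughly as below. This is a toy sketch with made-up package names: "libpkgs" stands in for the output of `rpm -q --whatprovides shared-library-package`, and requires() stands in for `rpm -q --whatrequires <pkg>`; it is not a real garbage collector.

```shell
#!/bin/sh
# Toy sketch of the garbage-collection probe over a fake package set.
libpkgs="libkexif libfoo libbar"

requires() {
    # Made-up reverse-dependency table.
    case $1 in
        libkexif) echo "digikam" ;;   # still needed by an application
        libfoo)   ;;                  # leaf: nothing requires it
        libbar)   echo "someapp" ;;
    esac
}

for p in $libpkgs; do
    if [ -z "$(requires "$p")" ]; then
        echo "candidate for removal: $p"
    fi
done
```

A real tool would replace the two stand-ins with the actual rpm queries and then prompt before erasing anything.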
Harald Hoyer wrote:
Jeff Johnson wrote:
And there's no need for the *.spec churn and the marker in dependencies:
rpm -qa 'classdict=*shared object*'
73 de Jeff
which fails on RHEL4, listing PIE packages as well...
That is a data problem: the file magic has to be maintained with the latest and greatest rules, so that the appropriate classification is included in the metadata when packages are built.
Try "file your-favorite-PIE-executable"; rpm gets the same answer as file, for exactly the same reasons.
Patches cheerfully accepted!
73 de Jeff
On Jan 24, 2005, Axel Thimm Axel.Thimm@ATrpms.net wrote:
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else as brought in to satisfy dependencies, so that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library only packages with
Provides: shared-library-package
so these packages can be identified with
rpm --whatprovides shared-library-package
and be probed for garbage collection.
The weak point of your argument is that it assumes that the only kind of package that doesn't provide "end user value" is the kind that provides shared-library-package. This is just not true, although I must admit it's the most common case.
Having package installers pin user-selected packages, or unpin packages brought in only to satisfy dependencies, would enable all cases to work, not only the shared library case, even without a special provides or the too-inclusive mechanism proposed by Jeff.
I.e. there is no need to extend rpm, you have everything already in place.
Not quite. Consider that I might actually want to keep a shared lib around (say libdvdcss, only used as a plugin by libdvdread). With your scheme, there's no way to tell it from any other shared lib-providing package, so it could be garbage collected along with other libs. Sure enough, I could install my own meta-package with an explicit requires to keep the lib-providing package installed, but why should I have to go through these hoops if rpm might instead offer a `user-requested' bit to keep a package installed even if nothing else requires it?
Alexandre Oliva wrote:
On Jan 24, 2005, Axel Thimm Axel.Thimm@ATrpms.net wrote:
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else as brought in to satisfy dependencies, so that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library only packages with
Provides: shared-library-package
so these packages can be identified with
rpm --whatprovides shared-library-package
and be probed for garbage collection.
The weak point of your argument is that it assumes that the only kind of package that doesn't provide "end user value" is the kind that provides shared-library-package. This is just not true, although I must admit it's the most common case.
Having package installers pin user-selected packages, or unpin packages brought in only to satisfy dependencies, would enable all cases to work, not only the shared library case, even without a special provides or the too-inclusive mechanism proposed by Jeff.
The concept of "pinning" can and will stop a depsolver from doing unattended, batch-mode upgrades.
But up2date certainly has (at least) the two essential "pinning" concepts: a) never change this package, and b) never change this file. And yum has (at least) a).
I.e. there is no need to extend rpm, you have everything already in place.
Not quite. Consider that I might actually want to keep a shared lib around (say libdvdcss, only used as a plugin by libdvdread). With your scheme, there's no way to tell it from any other shared lib-providing package, so it could be garbage collected along with other libs. Sure enough, I could install my own meta-package with an explicit requires to keep the lib-providing package installed, but why should I have to go through these hoops if rpm might instead offer a `user-requested' bit to keep a package installed even if nothing else requires it?
The real problem with "pinning" is one of mechanism vs. policy.
The pain that you -- an active nerd ;-) -- might be willing to accommodate and tolerate is very different from what most users are willing to tolerate. And QA on upgrades in the face of "pinning" is way, way more complex.
Erasing unused packages has never been seriously attempted, mostly because the primary goal of installers and depsolvers is Upgrade! not otherwise.
73 de Jeff
On Jan 25, 2005, Jeff Johnson n3npq@nc.rr.com wrote:
Alexandre Oliva wrote:
Having package installers pin user-selected packages, or unpin packages brought in only to satisfy dependencies, would enable all cases to work, not only the shared library case, even without a special provides or the too-inclusive mechanism proposed by Jeff.
The concept of "pinning" can and will stop a depsolver from doing unattended, batch-mode upgrades.
Not the same pinning. I'm talking about adding a bit that would tell whether a particular package was requested by the user or brought in just to satisfy a dependency of another package. Packages of the latter group that no longer have any dependency in the database may be garbage collected. Packages of the former group shouldn't.
The pain that you -- an active nerd ;-) -- might be willing to accommodate and tolerate is very different from what most users are willing to tolerate. And QA on upgrades in the face of "pinning" is way, way more complex.
We're talking about different sorts of pinning. Please re-read my suggestion with the paragraph I wrote above in mind, and hopefully it will make sense, and not seem too complicated of a concept.
Erasing unused packages has never been seriously attempted, mostly because the primary goal of installers and depsolvers is Upgrade! not otherwise.
Yeah, but this discussion got into the topic of removing unused packages. A number of different heuristics were suggested, but none of them were more than heuristics. What I suggest (marking packages in the database as having been brought in only to satisfy dependencies) would enable such packages to be easily located and removed when the other packages that caused them to be brought in are removed.
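To make the idea concrete, here is a toy shell sketch of the proposed "install reason" bit. All package names and tables below are made up for illustration; a real implementation would store the bit in the rpm database and query it.

```shell
#!/bin/sh
# reason() models the proposed per-package bit: "user" if explicitly
# requested, "dep" if pulled in only to satisfy a dependency.
reason() {
    case $1 in
        librsync) echo dep ;;
        libpopt)  echo dep ;;
        *)        echo user ;;
    esac
}
# requires() models the remaining reverse dependencies after
# rdiff-backup (which dragged in librsync) has been erased.
requires() {
    case $1 in
        librsync) ;;                  # nothing needs it any more
        libpopt)  echo "rpm-tools" ;; # still required
    esac
}
for p in librsync libpopt; do
    if [ "$(reason "$p")" = dep ] && [ -z "$(requires "$p")" ]; then
        echo "garbage-collect: $p"
    fi
done
```

Only packages that were both dep-installed and are no longer required get collected; user-requested packages are never touched.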
Not the same pinning. I'm talking about adding a bit that would tell whether a particular package was requested by the user or brought in just to satisfy a dependency of another package. Packages of the latter group that no longer have any dependency in the database may be garbage collected. Packages of the former group shouldn't.
Right but the problem is this.
Sometimes, when I know a dep a package needs, I'll request it too.
so instead of: yum install rdiff-backup
I'll run: yum install rdiff-backup librsync
So, I've requested it - but only b/c I know it's a dep.
-sv
seth vidal wrote:
Right but the problem is this. Sometimes, when I know a dep a package needs, I'll request it too. So instead of: yum install rdiff-backup I'll run: yum install rdiff-backup librsync So, I've requested it - but only b/c I know it's a dep.
I don't think many people do that, only those who know what is required by what. Besides, since you've added it manually on install, you'll probably remove it manually too, won't you? So there will be no leftover lib.
Aurélien
On Tue, 2005-01-25 at 14:44 +0100, Aurelien Bompard wrote:
seth vidal wrote:
Right but the problem is this. Sometimes, when I know a dep a package needs, I'll request it too. So instead of: yum install rdiff-backup I'll run: yum install rdiff-backup librsync So, I've requested it - but only b/c I know it's a dep.
I don't think many people do that, only those who know what is required by what. Besides, since you've added it manually on install, you'll probably remove it manually too, won't you? So there will be no leftover lib.
No, not necessarily.
My point is that marking it based on what you _think_ the user is doing is not going to produce reliable results. -sv
seth vidal wrote:
Not the same pinning. I'm talking about adding a bit that would tell whether a particular package was requested by the user or brought in just to satisfy a dependency of another package. Packages of the latter group that no longer have any dependency in the database may be garbage collected. Packages of the former group shouldn't.
Right but the problem is this.
Sometimes, when I know a dep a package needs, I'll request it too.
so instead of: yum install rdiff-backup
I'll run: yum install rdiff-backup librsync
So, I've requested it - but only b/c I know it's a dep.
Why did you do so? You of all people should know that yum will install it too. Is it faster? Is there _any_ reason for this?
On Tue, Jan 25, 2005 at 02:24:06AM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Axel Thimm Axel.Thimm@ATrpms.net wrote:
On Mon, Jan 24, 2005 at 03:05:29PM -0200, Alexandre Oliva wrote:
On Jan 24, 2005, Ralf Ertzinger fedora-devel@camperquake.de wrote:
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
Could we perhaps add such a flag to the rpm database? Then the installer and the various other package installation front-ends could mark user- (or comps-)requested packages as having end user value, and everything else brought in to satisfy dependencies such that it is (or can be) removed as soon as no dependencies remain.
ATrpms has started marking library only packages with
Provides: shared-library-package
so these packages can be identifies with
rpm --whatprovides shared-library-package
and be probed for garbage collection.
The weak point of your argument is that it assumes that the only kind of package that doesn't provide "end user value" is the kind that provides shared-library-package. This is just not true, although I must admit it's the most common case.
Well, "names are but sound and smoke". Originally I had "rtp" for runtime package, but that sounded like it came from the Windows side of the world.
Since shared libs are currently the greatest pain, I decided to get more specific. I wouldn't mind an alternative suggestion. The important thing is that the mechanism works.
I.e. there is no need to extend rpm, you have everything already in place.
Not quite. Consider that I might actually want to keep a shared lib around (say libdvdcss, only used as a plugin by libdvdread). With your scheme, there's no way to tell it from any other shared lib-providing package, so it could be garbage collected along with other libs.
Well, give the garbage collector a config file with a filter of user-configurable hold-backs. That isn't rocket science, is it? ;)
On Tue, 2005-01-25 at 02:24 -0200, Alexandre Oliva wrote:
Having package installers pin user-selected packages, or unpin packages brought in only to satisfy dependencies, would enable all cases to work, not only the shared library case, even without a special provides or the too-inclusive mechanism proposed by jeff.
The pacman package manager from Arch Linux (which I used to package GNOME for) seems to have recently gained a feature like this. It's fortunately better than just pinning; pinning means the package can't be upgraded.
Basically, pacman marks for each installed package whether the package was pulled in explicitly by the user or as a dependency of another package. There is then a command that can remove any package which was not explicitly requested by the user and which is not depended on.
If this was combined and made a little smarter to deal with RPM groups (something else that pacman supports, albeit in a much different way) then the method could be used in rpm and would probably work very well.
Debian also has the deborphan utility, although it isn't very intelligent at all; it just bases its decisions on whether a package has dependencies, which can lead to a lot of manual intervention after it generates a package removal list. I think the model used by pacman is superior here.
On Mon, 24 Jan 2005, Ralf Ertzinger wrote:
Aurelien Bompard gauret@free.fr wrote:
I did not know, is it possible to have a tool find the rpms that no rpms depend on, and ask to remove them ?
The problem with this is that RPM does not indicate whether a package has "end user value" (a command line or GUI program, or a daemon), or is just a support library needed by said end user programs, which can be removed if not needed by anyone.
That's why a long time ago I proposed a policy that forces the use of lib%name in cases where a shared library is used by more than one application.
If these lib%name packages were designed so they would not conflict, smart package managers could handle them the same way as kernel packages. Or at least that was the idea back then.
Something like this will only work if Red Hat internally sticks to a deterministic naming policy that is coherent. Until today it failed to do that :) Mandrake and Debian are much better in this regard.
-- dag wieers, dag@wieers.com, http://dag.wieers.com/ -- [all I want is a warm bed and a kind word and unlimited power]
On Monday 24 January 2005 at 12:59 +0100, Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
This must be a common problem, isn't it? What do you do when an important library changes its soname in the next version?
OK, look at Mandrake: libglib1.2-1.2.10-14mdk.i586.rpm libglib2.0_0-2.4.6-1mdk.i586.rpm
Fedora: glib-1.2.10-15.i386.rpm glib2-2.4.7-1.i386.rpm
It's six of one and half a dozen of the other :-)
Aurélien
Féliciano Matias wrote:
It's six of one and half a dozen of the other :-)
Agreed, we already do that. If it's considered "good", then could it be a packaging policy?
Aurélien
On Mon, 24 Jan 2005 13:24:42 +0100, Aurelien Bompard wrote:
Féliciano Matias wrote:
It's six of one and half a dozen of the other :-)
Agreed, we already do that. If it's considered "good", then could it be a packaging policy?
Do what's necessary. If you no longer need the older libkexif, because you would rebuild its dependencies against the newer libkexif, you could simply upgrade and need not worry about moving the old one into a package like libkexif0. Also note that multiple -devel packages often conflict (e.g. gpgme03-devel, gpgme-devel in pre-extras), unless you move libraries and headers into separate versioned sub-directories.
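For illustration, a hedged spec-file sketch of the versioned-subdirectory idea (all paths and package names below are hypothetical, not taken from any actual package):

```
# In a hypothetical libkexif0 package, keep the headers and the .so
# symlink in their own versioned tree so a later libkexif-devel can
# be installed in parallel without file conflicts:
%files devel
%{_includedir}/libkexif-0/
%{_libdir}/libkexif-0/libkexif.so
```

Applications building against the old version would then point their include and linker paths at the versioned directories.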
Michael Schwendt wrote:
Do what's necessary. If you no longer need the older libkexif, because you would rebuild its dependencies against the newer libkexif, you could simply upgrade and need not worry about moving the old one into a package like likexif0.
I see. That's what I'm going to do, because by chance I also maintain the applications depending on libkexif. The point of my post is not to solve my own problem, but rather to extract a policy for new packagers running into this problem.
So the unofficial "policy" about this is to keep the regular name across upgrades, and to provide an rpm with the soname in the "Name" tag for older versions (if needed). OK, fine.
Aurélien
On Monday 24 January 2005 at 13:24 +0100, Aurelien Bompard wrote:
Féliciano Matias wrote:
It's six of one and half a dozen of the other :-)
Agreed, we already do that. If it's considered "good", then could it be a packaging policy?
Why not.
But : http://fedora.redhat.com/about/objectives.html
Non-Objectives of Fedora Core: (...) 3-Being a dumping ground for unmaintained or poorly designed software.
The objective of Fedora is not to provide support for legacy libraries.
I have used RH/Fedora for many years and personally, I don't really care about multiple lib .so versions. I don't remember being annoyed by this.
On Monday 24 January 2005 21:21, Féliciano Matias wrote:
I don't remember being annoyed by this.
You probably don't remember because, if you don't build apps that require older libs, it is a bit difficult to be bitten by this. When upstream doesn't update API calls in time for a full dist release, then it's gonna be a problem. Consider openssl; other examples abound.
Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
Depsolvers (at least correctly written ones) use Provides:, not Name:, for choosing what packages to install.
The only reason for ornamenting the package name with gunk is to attempt to provide a clue of differences through primitive HTTP/FTP browser GUI's.
This must be a common problem, isn't it? What do you do when an important library changes its soname in the next version?
Usually a soname is slam-dunked, the library and every package that uses the library are changed at the same time. That works for the distro itself.
73 de Jeff
On Mon, 24 Jan 2005 07:13:38 -0500, Jeff Johnson wrote:
Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
Depsolvers (at least correctly written ones) use Provides:, not Name:, for choosing what packages to install.
We need to support what we do have right now. And neither Yum nor "rpm -Uvh" would keep the old libfoo installed; both would upgrade package libfoo to a newer libfoo.
The only reason for ornamenting the package name with gunk is to attempt to provide a clue of differences through primitive HTTP/FTP browser GUI's.
This must be a common problem, isn't it? What do you do when an important library changes its soname in the next version?
Usually a soname is slam-dunked, the library and every package that uses the library are changed at the same time. That works for the distro itself.
Does it? Then why do we have packages like openmotif21 and openmotif, libpng10, libpng10-devel, libpng, libpng-devel in the distro?
It's not different from what we've done in fedora.us packages. Include parts of the soname version in the package name to make multiple library versions coexist nicely, i.e. also during upgrades. Package resolvers pick the right package based on automatic Provides/Requires.
Michael Schwendt wrote:
On Mon, 24 Jan 2005 07:13:38 -0500, Jeff Johnson wrote:
Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
Depsolvers (at least correctly written ones) use Provides:, not Name:, for choosing what packages to install.
We need to support what we do have right now. And neither Yum nor "rpm -Uvh" would keep the old libfoo installed; both would upgrade package libfoo to a newer libfoo.
From multiply installed rpm -i? Sure, no application gets that right.
I question "need", however, as "Don't do that." has been a workable alternative for many years now, and there are other techniques, like readline43 and compat-db, that are sufficient for the MUSTHAVE problems.
And after incompatible sonames come issues with --relocate, which no one has ever attempted to solve the upgrade cases for. <shrug>
But I'm sure the java heads will break *.rpm packaging with --relocate for their *.jar files if/when they discover --relocate. Again <shrug>, there's invariably something along the lines of "Don't do that." that can be devised.
The only reason for ornamenting the package name with gunk is to attempt to provide a clue of differences through primitive HTTP/FTP browser GUI's.
This must be a common problem, isn't it? What do you do when an important library changes its soname in the next version?
Usually a soname is slam-dunked, the library and every package that uses the library are changed at the same time. That works for the distro itself.
Does it? Then why do we have packages like openmotif21 and openmotif, libpng10, libpng10-devel, libpng, libpng-devel in the distro?
Because while it works for the distro, slam-dunking does not work for 3rd party packaging. In fact, slam-dunking is not gonna work for 2nd party packaging like Fedora Extras either. What will happen instead (imho) is that library sonames simply won't change, even if they should.
It's not different from what we've done in fedora.us packages. Include parts of the soname version in the package name to make multiple library versions coexist nicely, i.e. also during upgrades. Package resolvers pick the right package based on automatic Provides/Requires.
So put sonames into package names if that floats your fedora.us boat. Sooner or later you will run into kernel file system imposed limits on package file names. <shrug>
73 de Jeff
On Monday 24 January 2005 at 07:33 -0500, Jeff Johnson wrote:
But I'm sure the java heads will break *.rpm packaging with --relocate for their *.jar files if/when they discover --relocate. Again <shrug>, there's invariably something along the lines of "Don't do that." that can be devised.
Actually java is so bad at searching jars that they'll be the last thing anyone wants to relocate. Too many hardcoded classpaths everywhere...
Nicolas Mailhot wrote:
On Monday 24 January 2005 at 07:33 -0500, Jeff Johnson wrote:
But I'm sure the java heads will break *.rpm packaging with --relocate for their *.jar files if/when they discover --relocate. Again <shrug>, there's invariably something along the lines of "Don't do that." that can be devised.
Actually java is so bad at searching jars that they'll be the last thing anyone wants to relocate. Too many hardcoded classpaths everywhere...
So I hear. What's sad is that "hardcoded classpaths" is exactly what rpm (and packaging) might help rationalize, mapping the classpath mechanism into package dependencies.
rpm will get to java dependencies some day. I'm not gonna hold my breath, though, because java culture is so vastly different from other coding cultures, and the java heads seem to wish to exist separate-but-equal until they get around to writing a kernel in java that runs everywhere.
73 de Jeff
On Mon, 24 Jan 2005 07:33:24 -0500, Jeff Johnson wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
Depsolvers (at least correctly written ones) use Provides:, not Name:, for choosing what packages to install.
We need to support what we do have right now. And neither Yum nor "rpm -Uvh" would keep the old libfoo installed; both would upgrade package libfoo to a newer libfoo.
From multiply installed rpm -i? Sure, no application gets that right.
No. The scenario is like this:
Installed is: libfoo-0.9-3 (which provides libfoo.so.0)
Packager releases: libfoo-1.0-1 (which provides libfoo.so.1)
Then "rpm -ivh libfoo-1.0-1.i386.rpm" works just fine and installs the new library package in parallel, provided that no file conflicts between libfoo-0.9-3 and libfoo-1.0-1 exist. On the contrary, "rpm -Uvh libfoo-1.0-1.i386.rpm" and "yum -y update" would get rid of the old libfoo, running into broken dependencies if other installed packages still require the libfoo.so.0 soname.
It's not different from what we've done in fedora.us packages. Include parts of the soname version in the package name to make multiple library versions coexist nicely, i.e. also during upgrades. Package resolvers pick the right package based on automatic Provides/Requires.
So put sonames into package names if that floats your fedora.us boat. Sooner or later you will run into kernel file system imposed limits on package file names.
<shrug>
<sigh>
Jeff Johnson wrote:
So put sonames into package names if that floats your fedora.us boat. Sooner or later you will run into kernel file system imposed limits on package file names. <shrug>
How many characters could it add? 2 digits? 3 digits max? How about this:
$ rpm -q bash-completion
bash-completion-0.0-0.fdr.4.20041017
I see your point: adding something other than the name into the package Name tag opens the door to crazy things, like configure tags (is there really a Debian package with configure tags in the name???). Still, I think that it could be very beneficial to provide easy upgrade paths. I think the advantages outweigh the shortcomings in this situation.
Aurélien
On Mon, 24 Jan 2005 07:33:24 -0500, Jeff Johnson wrote:
Because while it works for the distro, slam-dunking does not work for 3rd party packaging. In fact, slam-dunking is not gonna work for 2nd party packaging like Fedora Extras either. What will happen instead (imho) is that library sonames simply won't change, even if they should.
This whole mess could be avoided if libkexif had not broken backwards compatibility of course ... it's perhaps not too late to go back and do a new release that doesn't do so?
Mike Hearn wrote:
This whole mess could be avoided if libkexif had not broken backwards compatibility of course ... it's perhaps not too late to go back and do a new release that doesn't do so?
Well, libkexif is a few months old. We can expect instability in new projects, and as one of Fedora's aims is to be on the bleeding edge, we will see this problem more and more frequently.
Aurélien
On Mon, 24 Jan 2005 23:26:02 +0100, Aurelien Bompard wrote:
Well, libkexif is a few months old. We can expect instability in new projects, and as one of Fedora's aims is to be on the bleeding edge, we will see this problem more and more frequently.
If it's so unstable just statically link it. Nobody should be dynamically linking against very unstable libraries (libICU *ahem*)
thanks -mike
Mike Hearn wrote:
If it's so unstable just statically link it. Nobody should be dynamically linking against very unstable libraries
I'm not really talking about my particular package here, I can deal with 4 rebuilds at the same time (actually I've already done so). I'm trying to extract some kind of packaging policy for other libraries, on which many more packages could depend. Soname changes happen sometimes, even with stable libraries. See for example the libpng mess that occurred a few years ago. Statically linking should be an exception, and I'm trying to propose a general solution.
Aurélien
On Mon, Jan 24, 2005 at 12:59:57PM +0100, Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
installonly packages are already well understood by depsolvers. Look at the kernel packages.
yum.conf:
installonlypkgs=
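For reference, a hedged sketch of what such a yum.conf entry might look like (the package list here is purely illustrative, not yum's actual default):

```
[main]
installonlypkgs=kernel kernel-smp libkexif
```

Packages listed there are installed side by side rather than upgraded in place, which is exactly the kernel behavior Charles mentions.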
Charles R. Anderson wrote:
installonly packages are already well understood by depsolvers. Look at the kernel packages.
True, but we would need something which can be set in the package itself. It looks like something unofficial is already used by Red Hat: if other packages depend on the old library, provide it in a different package containing the library's soname. For example, libpng and libpng10.
Aurélien
On Mon, 24 Jan 2005 09:47:39 -0500, Charles R. Anderson wrote:
On Mon, Jan 24, 2005 at 12:59:57PM +0100, Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
installonly packages are already well understood by depsolvers. Look at the kernel packages.
yum.conf:
installonlypkgs=
Nah, that's only a work-around. There's no way for a repository to flag packages as install-only.
On Mon, 2005-01-24 at 16:01 +0100, Michael Schwendt wrote:
On Mon, 24 Jan 2005 09:47:39 -0500, Charles R. Anderson wrote:
On Mon, Jan 24, 2005 at 12:59:57PM +0100, Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then?
installonly packages are already well understood by depsolvers. Look at the kernel packages.
yum.conf:
installonlypkgs=
Nah, that's only a work-around. There's no way for a repository to flag packages as install-only.
Using the rpm install method also breaks upgrades. Say you have libfoo-1.0 and libfoo-2.0. Many apps use libfoo-1.0 and will continue to do so because it's a massive API upgrade to libfoo-2.0. think gtk1 vs gtk2. Now a security bug is found in the 1.0 branch, and the developers realize that a lot of users are stuck with apps using that branch, so they release 1.1 with the security update (as the version indicates, it's backwards compatible with 1.0), so libfoo-1.1 is released. With rpm install-only packages libfoo-1.1 would not be installed unless the user manually told the system to do it, since libfoo-2.0 is installed and is newer.
The only way for this to work is make RPM recognize that libfoo-1.0 and libfoo-2.0 are completely different packages. Just because they're both libfoo is irrelevant, they are different APIs and different ABIs.
In GTK's case the second version just got a two appended. Packages are gtk and gtk2.
While that works, in the general case using the soversion probably just makes more sense. The packages could be, for example, libfoo1.0 and libfoo2.0. The specifics aren't that important, just so long as it works.
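As a toy illustration of deriving such names, here is a shell sketch. The mapping rule below is my own invention, loosely imitating the Mandrake style quoted earlier in the thread; it is not any distribution's actual policy.

```shell
#!/bin/sh
# Derive a versioned package name from a library soname, e.g.
#   libkexif.so.1     -> libkexif1
#   libglib-2.0.so.0  -> libglib-2.0_0
soname_to_pkgname() {
    lib=${1%%.so.*}     # part before ".so." (library base name)
    ver=${1##*.so.}     # major soversion after ".so."
    case $lib in
        *[0-9]) echo "${lib}_${ver}" ;;  # base ends in a digit:
                                         # separate with "_" so the
                                         # numbers don't run together
        *)      echo "${lib}${ver}" ;;
    esac
}
soname_to_pkgname libkexif.so.1
soname_to_pkgname libglib-2.0.so.0
```

With a rule like this, a soname bump automatically yields a distinct package name, so the old and new library packages can coexist during upgrades.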
Gentlemen:
I have a situation where I need to make a number of identical computers all with the same fedoraCore2.
I have tried putting in a second (hdb) disk, doing a "df" on hda and, based on the number of 1024 blocks, going:
dd if=/dev/hda of=/dev/hdb bs=1024 count=<TheNumberDfShows>
and it blinks the light a long time and then croaks.
I know I did something similar to this a couple of years ago on a different project. Can someone tell me where I am going awry and perhaps educate me a little bit more.
Signed, OldDogTryingToRememberNewTrick
On Wednesday 26 January 2005 at 11:04 -0800, cfk wrote:
Gentlemen:
I have a situation where I need to make a number of identical computers all with the same fedoraCore2.
I have tried putting a second (hdb) disk, doing a "df" on hda and based on the number of 1024 blocks going:
dd if=/dev/hda of=/dev/hdb bs=1024 count=<TheNumberDfShows>
and it blinks the light a long time and then croaks.
I know I did something similar to this a couple of years ago on a different project. Can someone tell me where I am going awry and perhaps educate me a little bit more.
If you can set up an NFS or FTP/HTTP server somewhere, the fastest method is a network install via kickstart (using PXE if you can).
It might take a little longer to set up, but you'll make up the time very fast. And if you do not have exactly the same hardware (same components & firmware versions) it's way safer.
Regards,
On Wed, 26 Jan 2005 at 21:19, Nicolas Mailhot wrote:
On Wednesday 26 January 2005 at 11:04 -0800, cfk wrote:
Gentlemen:
I have a situation where I need to make a number of identical computers all with the same fedoraCore2.
I have tried putting a second (hdb) disk, doing a "df" on hda and based on the number of 1024 blocks going:
dd if=/dev/hda of=/dev/hdb bs=1024 count=<TheNumberDfShows>
and it blinks the light a long time and then croaks.
I know I did something similar to this a couple of years ago on a different project. Can someone tell me where I am going awry and perhaps educate me a little bit more.
If you can set up an NFS or FTP/HTTP server somewhere, the fastest method is a network install via kickstart (using PXE if you can).
It might take a little longer to set up, but you'll make up the time very fast. And if you do not have exactly the same hardware (same components & firmware versions), it's way safer.
Or you could do the setup you said, just use dd to copy the stuff over. Just boot it off a cdrom.
On Wed, 26 Jan 2005 23:40:15 +0100, Kyrre Ness Sjobak kyrre@solution-forge.net wrote:
On Wed, 26.01.2005 at 21:19, Kyrre Ness Sjobak wrote:
On Wednesday, January 26, 2005 at 11:04 -0800, cfk wrote:
Gentlemen:
I have a situation where I need to make a number of identical computers all with the same fedoraCore2.
I have tried putting a second (hdb) disk, doing a "df" on hda and based on the number of 1024 blocks going:
dd if=/dev/hda of=/dev/hdb bs=1024 count=<TheNumberDfShows>
and it blinks the light a long time and then croaks.
I know I did something similar to this a couple of years ago on a different project. Can someone tell me where I am going awry and perhaps educate me a little bit more.
If you can set up an NFS or FTP/HTTP server somewhere, the fastest method is a network install via kickstart (using PXE if you can).
It might take a little longer to set up, but you'll make up the time very fast. And if you do not have exactly the same hardware (same components & firmware versions), it's way safer.
Or you could do the setup you said, just use dd to copy the stuff over. Just boot it off a cdrom.
Don't use dd for this. If you have different size hard drive, partition or something else down the line, it can really mess you up. I would recommend a simple tar or rsync command to copy them over.
This is if you wish to do copy machine to machine. Otherwise a custom kickstart file with pxe is a lot easier if done right.
On Thu, 27.01.2005 at 05:56, Christopher Hotchkiss wrote:
On Wed, 26 Jan 2005 23:40:15 +0100, Kyrre Ness Sjobak kyrre@solution-forge.net wrote:
On Wed, 26.01.2005 at 21:19, Kyrre Ness Sjobak wrote:
On Wednesday, January 26, 2005 at 11:04 -0800, cfk wrote:
Gentlemen:
I have a situation where I need to make a number of identical computers all with the same fedoraCore2.
I have tried putting a second (hdb) disk, doing a "df" on hda and based on the number of 1024 blocks going:
dd if=/dev/hda of=/dev/hdb bs=1024 count=<TheNumberDfShows>
and it blinks the light a long time and then croaks.
I know I did something similar to this a couple of years ago on a different project. Can someone tell me where I am going awry and perhaps educate me a little bit more.
If you can set up an NFS or FTP/HTTP server somewhere, the fastest method is a network install via kickstart (using PXE if you can).
It might take a little longer to set up, but you'll make up the time very fast. And if you do not have exactly the same hardware (same components & firmware versions), it's way safer.
Or you could do the setup you said, just use dd to copy the stuff over. Just boot it off a cdrom.
Don't use dd for this. If you have different size hard drive, partition or something else down the line, it can really mess you up. I would recommend a simple tar or rsync command to copy them over.
This is if you wish to do copy machine to machine. Otherwise a custom kickstart file with pxe is a lot easier if done right.
He said they were identical. Anyway, as long as it is properly copied to disk, shouldn't kudzu handle any hardware diffs and reconfigure when the disk is booted?
I wrote this about 2 years ago. I think it is still relevant. One of the ways I would improve it is to use a USB-IDE enclosure for the target drive as you wouldn't have to use Linux rescue nor reboot.
FWIW :
I just spent about 2 days figuring out how to easily clone a Linux HD. I'm writing this to hopefully save someone else the grief that I've gone through.
I'm a relative newbie to Linux. I'm sure the experienced Linux people can enhance/correct/improve what I've written here, but at least this works.
My setup is a PC clone with multiple HD bays and a CDROM player. My source drive is a 6.4 GB IDE hard drive with 3 partitions: boot, swap and root. It is nearly full. I'm running RH8.0 using kernel 2.4.18.something.
The new drive, herein called the "clone", is a 60 GB IDE hard drive.
The major steps to cloning a HD are as follows:
1) partition the clone drive
2) format each partition on the clone drive
3) copy files from the old drive to the clone drive
4) set up grub on the clone drive
5) boot it.
The whole process sounds intimidating, but it really isn't any different or harder than cloning a DOS drive. Some people say that one should just install a fresh copy of Linux on the new drive, making it a new install rather than a clone install, but that is MUCH more work than just making a clone. My old drive has numerous patches, updates, installs and data installed just about everywhere. It works exactly as I want it to. I wouldn't want to repeat the work that it took to achieve that, so I'll clone it rather than start afresh with a new drive.
We use removable hard drives on our PCs. For absolute data safety, when partitioning and formatting hard drives, I remove all drives except the one that I am working on and run from a boot disk. One could easily perform all this work with the new drive mounted alongside an existing drive, but I don't. Thus, all my commands are based on the clone drive being installed as /hda and running from linux rescue. If you are performing your clone with the clone drive installed alongside an existing drive, your commands will be based on /dev/hdb or /dev/hdc, for example.
I also assume that the reader knows how to set up his/her BIOS so that it has the proper boot order when needed, ie boot from CDROM first so that linux rescue can be run.
In detail:
1) Partition the clone drive.
I partitioned the clone drive by running fdisk from linux rescue booted from the install CDROM. When running linux rescue, I DO NOT let the OS mount the hard drive. I always do that manually from the command line when I am doing system maintenance.
Once at the command prompt in linux rescue:
#/sbin/fdisk /dev/hda
Obviously you will change this to suit your partitions and partition types. REMEMBER THAT I INSTALL THE CLONE DRIVE AS PRIMARY MASTER IE: /dev/hda. IF YOU DON'T DO THIS, YOU NEED TO CHANGE THE DRIVE SPECIFICATION
This will start fdisk. From here you are on your own. I created 3 linux partitions: boot, swap and root.
I made boot (/dev/hda1) to be 100 MB in size and used ext3 for the file system.
I made the swap partition (/dev/hda2) twice as big as my motherboard RAM, or 512MB x 2 = 1 GB. It used the Linux swap file type.
I made the root (/dev/hda3) partition consume the rest of the drive and gave it an ext3 file type as well.
NOTE: One should set the label of the partitions while using fdisk. I didn't and had to later. The root drive (/dev/hda3), for example, should be given a label of "/" to properly work with grub as we'll see later on.
2) Format the partitions
If you are following my drive layout, these are the commands you will use.
a) Format the boot partition as ext3: #/sbin/mkfs.ext3 /dev/hda1
b) Format the swap partition as linux swap: #/sbin/mkswap /dev/hda2
c) Format the root partition as ext3: #/sbin/mkfs.ext3 /dev/hda3
Obviously you will change this to suit your partitions and partition types. REMEMBER THAT I INSTALL THE CLONE DRIVE AS PRIMARY MASTER IE: /dev/hda. IF YOU DON'T DO THIS, YOU NEED TO CHANGE THE DRIVE SPECIFICATION
3) Copy the data from the old drive to the clone drive.
To do this, I reboot my computer with both the source and the clone drive in the bays. I usually boot with the source drive in /hda (ie primary master) and the clone drive in /hdc (ie secondary master). My reasoning for doing this is that sometimes new drives (ie the clone) have much newer technology than the old drive and the two will not operate properly on the same ide channel. By putting one on the primary channel and the other on the secondary channel, I can clone really old drives (1995 vintage 1 GB, for example) onto new drives (latest 120 GB) without any problems.
There are two partitions that we need to copy data from: the boot partition and the root partition. I prefer to do these copies as two distinct mounts and copies. One could mount the source drive hierarchically and also the clone drive and then issue one copy command, but for only 2 partitions I prefer to do 2 separate copy commands.
Again I boot linux rescue from the CDROM and bypass any automatic mounting of filesystems.
First I mount the old boot partition as /oldboot
# cd /
# mkdir oldboot
# mount -t ext3 /dev/hda1 /oldboot
# cd oldboot
# ls
<you should get a listing of your old boot partition here>
Next, I mount the new boot partition as /newboot
# cd /
# mkdir newboot
# mount -t ext3 /dev/hdc1 /newboot
# cd newboot
# ls
<you should get a listing of nothing here because it should be empty>
Now, I copy the data from the old boot partition to the new boot partition:
# cp -aR /oldboot/* /newboot 2>/newboot/error.log
The -a attribute tells cp to keep all of the current attributes. The -R attribute tells cp to do a recursive copy. optional: if you want to watch the filenames stream by while they are being copied, add a "-v" to the attributes, ie "-avR"
The "2>error.log" tells the OS to redirect the stderror output of cp into the error.log file in /newboot.
The boot partition is fairly small, so this shouldn't take very long. When the copy is done, I do the following:
# cd /newboot
# ls
<a display of the new partition should occur. It shouldn't be empty>
# more error.log
<a display of the error.log file should appear. It should be empty or nearly so.>
Note: one could write a script that compared the attributes of every file in /newboot to every file in /oldboot, but I haven't found this to be necessary. As long as there are no alarming errors in error.log, the new partition should have all the files that the source drive had.
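(Editorial note: the comparison script mentioned above could be roughly sketched as below. This version only compares file names, not attributes, and it excludes the error.log we created on the copy; paths are examples.)

```shell
# Sketch: verify that two trees contain the same file names.
compare_trees() {
    a="$1"; b="$2"
    (cd "$a" && find . | sort) > /tmp/old.lst
    # prune the error.log that exists only on the copy side
    (cd "$b" && find . -name error.log -prune -o -print | sort) > /tmp/new.lst
    diff /tmp/old.lst /tmp/new.lst     # exits nonzero on any mismatch
}
# e.g. compare_trees /oldboot /newboot
```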
Now I repeat that copy process for the root partition:
# cd /
# mkdir newroot
# mount -t ext3 /dev/hdc3 /newroot
# ls /newroot
<should be empty>
# mkdir oldroot
# mount -t ext3 /dev/hda3 /oldroot
# ls /oldroot
<should contain root stuff>
# cp -aR /oldroot/* /newroot 2>/newroot/error.log
# cd newroot
# ls
<a display of the new root should be here>
# more error.log
If both error.log files are clean, consider your copy successful. Although the partitions are formatted and the data has been copied, the clone drive is still not bootable.
BTW: many experienced linux people will tell you to use dd or tar or several other copy processes together with pipes and redirection. They might be faster, but I had trouble with most of them in one or more situations. cp is easy to understand and I've yet to find an instance where it doesn't work.
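(Editorial note: for completeness, the tar-pipe variant mentioned above would look something like this sketch. Unlike "cp /dir/*", it also picks up dot files; paths are examples.)

```shell
# Tar-pipe copy sketch: preserves permissions and, unlike "cp /dir/*",
# copies dot files too.
tar_copy() {
    src="$1"; dst="$2"
    (cd "$src" && tar cf - .) | (cd "$dst" && tar xpf -)
}
# e.g. tar_copy /oldroot /newroot
```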
4) Set up grub on the new drive.
Here is where I got hung up and where most people get stuck.
First, we need to install grub into the boot sector of the boot drive/partition. This does NOT occur automatically when formatting the boot partition, nor when copying the root files. This is akin to performing a "sys c:" in DOS. To install grub on the boot sector, we need to run something called grub-install from the command line, but first we must mount our filesystem and set up a new root directory using our new filesystem. Here is how I do that:
a) boot linux rescue with the clone drive installed as /hda ie primary master. (It could actually be done with the drive mounted anywhere.)
b) mount the new cloned filesystem as it will be in the computer when running, but mount it under /new. NOTE: some people will tell you to mount it under /mnt, which already exists in linux rescue. Don't do this. Why ? Because the command shell for linux rescue appears to be running under /mnt and when/if you mount the new filesystem under /mnt, you will no longer have a shell to run under ! Ie, you won't have access to ls, mount, chroot, etc. So, here are my commands:
# mkdir new
# mount -t ext3 /dev/hda3 /new
# ls /new
<listing should occur here>
# mount -t ext3 /dev/hda1 /new/boot
# ls /new/boot
<listing of /boot should occur here>
We've now got our cloned filesystem mounted exactly as it will be in our new system, but under /new. The boot partition (/dev/hda1) is mounted in /boot just like it will be when we run the drive. We didn't really need to do the root hierarchical mounting, but I like it that way.
Now, we change the root directory of our filesystem and start a new bash shell. We need to do this because grub-install is not present on the linux rescue shell:
# chroot /new /bin/bash
This command will make / be what /new/ was. It will also start /bin/bash, which was /new/bin/bash, which is the bash shell on our cloned drive. We can now go to work using all the tools present on our cloned drive, ie all the tools that were present on the old drive. We can actually start X if we need to, although I never have.
We need to install grub onto the boot sector of the boot drive, so:
# /sbin/grub-install /dev/hda1
BTW: if we want to learn about grub, we can use man grub or info grub now. info grub-install is interesting to read. Neither man nor info are available in linux rescue mode.
Grub is now installed on the boot sector, but sometimes it needs to be configured to find grub.conf. Grub.conf is the configuration file that grub runs to know what kernels to offer for booting, where the images are, etc. Grub config is located in hda1/grub/, which is really /boot/grub. (Remember that we mounted hda1 as /new/boot/, but our chroot changed that to /boot/.)
"info grub" in the FAQ part contains this nice little gem:
<paste begins> I have a separate boot partition and GRUB doesn't recognize it. This is often reported as a "bug", but this is not a bug really. This is a feature.
Because GRUB is a boot loader and it normally runs under no operating system, it doesn't know where a partition is mounted under your operating systems. So, if you have the partition `/boot' and you install GRUB images into the directory `/boot/grub', GRUB recognizes that the images lie under the directory `/grub' but not `/boot/grub'. That's fine, since there is no guarantee that all of your operating systems mount the same partition as `/boot'.
There are several solutions for this situation.
1. Install GRUB into the directory `/boot/boot/grub' instead of `/boot/grub'. This may sound ugly but should work fine.
2. Create a symbolic link before installing GRUB, like `cd /boot && ln -s . boot'. This works only if the filesystem of the boot partition supports symbolic links and GRUB supports the feature as well.
3. Install GRUB with the command `install', to specify the paths of GRUB images explicitly. Here is an example:
grub> root (hd0,1)
grub> install /grub/stage1 d (hd0) /grub/stage2 p /grub/grub.conf
<paste ends>
I like option number 3, so that is what I run. However, my boot drive is /dev/hda1, which is (hd0,0). (See info grub for info on how grub specifies drives. Remember that all drive numbers in grub are base 0, not base 1). I thus run the following commands:
# grub
grub> root (hd0,0)
grub> install /grub/stage1 d (hd0) /grub/stage2 p /grub/grub.conf
<if you've got the directories and drive number right, there won't be an error. If one is wrong, there will be a file not found error>
grub> exit
Grub itself is now set up in the boot sector and it knows where grub.conf is. If grub.conf worked well in the source drive, it should work well on the cloned drive.
At this point, your drive is bootable, although it probably won't boot. Here is the rub: many of the entries in grub, automount and init files refer to drives by their LABEL. Yes, there are now 3 ways to refer to drives in Linux:
1) by the device, ie: /dev/hda1, hda2, etc. See man mount for more on this.
2) by the grub specification, ie: (hd0,0). See info grub for more on this.
3) by the drive label.
It is handy for certain linux processes to be able to refer to drives by their function (ie root, boot, etc.) rather than have them hardcoded into the /dev/ spec. Thus, certain processes refer to drives by their label, which is chosen to indicate their function. It turns out that certain processes in the boot process refer to the root drive by its label, which they expect to be "/". In order for our clone drive to boot properly, we need to set its label to "/". Issue the following command to do this:
# e2label /dev/hda3 "/"
There might be other ways to do this, like during fdisk or mkfs, but I am a newbie, remember. I think that labels can be checked with hdparm, but I can't remember.
Interestingly enough the clone drive boots with only the label for root changed. I did not have to change the label for the boot partition. However, I'm sure that somewhere there is an application or process that will refer to the boot partition by its label, thus I would set its label to /boot as well:
# e2label /dev/hda1 "/boot"
Your clone drive should now be an exact image of the source drive and bootable.
Using Clones
I keep a small, outdated drive with nothing but a clean, up-to-date copy of Linux on it. Anytime we need to build a new drive for a new computer or a new process (in an existing computer via installing a new drive), I just clone the "clean" drive. This works really well with new users (friends) because I know how everything is set up and they don't have to muddle through the installation process.
The Beauty of Linux
This whole process could actually be done as one script, with the grub commands put into a separate grub script file. I haven't done it yet, but the whole cloning process would just be a script that could be run at some period of convenience, like just before leaving work for home or just before going to bed.
I removed the source drive while formatting and partitioning and rebooted /remounted when setting up grub, but that wouldn't have to be done that way.
One more thing: the cloning script could run the new drive and add a few default users as well as initiate the email client.
Enjoy !!!
possible changes: use mirrordir, but it isn't available on most machines without installing an RPM
Change the cp attributes. cp /dir/* won't copy the dot files in /dir/
and this:
<paste> You are a bit wrong with the cp arguments: the -a argument archives a whole directory structure, which means:
- preserves uid/gid/timestamps,
- archives recursively,
- doesn't go through symbolic links (it just copies them as well)
From the cp manpage, -a is the same as -dpR
So, to mirror your harddisk, just: cp -avx / /mnt/newdisk
-v: To view what is being done.
-x: To stay on the current filesystem.
<end of paste>
Lastly, mkdir /proc (since -x stays on one filesystem, the /proc mountpoint is not copied and must be recreated by hand)
On Mon, 2005-01-24 at 09:47 -0500, Charles R. Anderson wrote:
On Mon, Jan 24, 2005 at 12:59:57PM +0100, Aurelien Bompard wrote:
Jeff Johnson wrote:
Try with rpm -i.
Yeah OK. How about something that would be understood by depsolvers then ?
installonly packages are already well understood by depsolvers. Look at the kernel packages.
yum.conf:
installonlypkgs=
I'd recommend finding a way to specify this in the packaging information, though.
otherwise you'll be editing that line all the time. :(
-sv
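(Editorial note: for the libkexif case from the start of the thread, the yum.conf fragment might look like the sketch below. The kernel entries are the usual defaults of that era; libkexif is added purely as a hypothetical example of a package to keep installonly.)

```text
[main]
installonlypkgs=kernel kernel-smp kernel-bigmem libkexif
```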
Jeff Johnson wrote :
%post
chattr +i `rpm -ql name`
should make the package non-upgradeable no matter what.
Nice one, "bulldozer style". Never thought of it before :-)
Matthias
Matthias Saou wrote:
Jeff Johnson wrote :
%post
chattr +i `rpm -ql name`
should make the package non-upgradeable no matter what.
Nice one, "bulldozer style". Never thought of it before :-)
You miss the point.
There is simply no way for rpm (or any rpmlib based tool) to guarantee package non-upgradeability reliably.
There are side effects, not only from opaque scripts, but also from system administrators, and from selinux policy, and more, that are not represented in any metadata that rpm has access to, that are necessary to make a package -- and all the package contents -- non-upgradeable.
Meanwhile, it's kinda pointless to attempt to mark a package non-upgradeable imho *without* a bulldozer and more to provide the strongest possible guarantee reliably.
Sure, can be done, but is trivially subverted. In fact, there's almost certainly gonna have to be Yet Another Option to rpm to disable (or otherwise manage) packaging mistakes from an advisory Autoupgrade: no marker in packaging.
I question whether it's worth the complexity cost in rpm.
I hope that clarifies.
73 de Jeff
On Fri, Jan 28, 2005 at 07:57:09PM +0100, Matthias Saou wrote:
%post
chattr +i `rpm -ql name`
should make the package non-upgradeable no matter what.
Nice one, "bulldozer style". Never thought of it before :-)
It does tend to make a nasty mess of update tools, however. If you want to do this for the normal situation, just bump the epoch of your private rebuilds.
Hi.
Jeff Johnson n3npq@nc.rr.com wrote:
Try with rpm -i.
Try updating that.
Ralf Ertzinger wrote:
Hi.
Jeff Johnson n3npq@nc.rr.com wrote:
Try with rpm -i.
Try updating that.
rpm -e N-V-R && rpm -i N-V-R.A.rpm
is exactly an update.
73 de Jeff
On Jan 24, 2005, Jeff Johnson n3npq@nc.rr.com wrote:
rpm -e N-V-R && rpm -i N-V-R.A.rpm
is exactly an update.
You missed the --nodeps after -e. And it doesn't quite work for rpm itself :-)
On Mon, 24 Jan 2005, Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname in package libraries ? For example, I have a libkexif package, which provides libkexif.so.0, and at least 3 applications depend on it. Now there is an update to libkexif, which provides libkexif.so.1. I can't update libkexif without updating the applications depending on it.
In this particular case, my suggestion would be to bite the bullet and upgrade libkexif and all apps that depend on it.
While we're on the subject of exif, any reason fedora-devel hasn't upgraded to libexif-0.6.x yet? libkexif-0.2.1's configure scriptlet strongly recommends it.
-- Rex
Rex Dieter wrote:
In this particular case, my suggestion would be to bite the bullet and upgrade libkexif and all apps that depend on it.
Well, that's what I'm going to do, because by chance I also maintain the apps depending on it. But in a community-open distribution, it does not have to be the case. My very own particular problem is not significant, I'd just like to raise the general issue so that we could work out a general solution.
Aurélien
On Mon, 24 Jan 2005, Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname in package libraries ? For example, I have a libkexif package, which
If it *must* be done, you need to follow (approximately at least) Mandrake's style, and build libkexif0 libkexif1 ... etc
With each (ideally) including Provides: libkexif = %{version}
Also, doing it this way, you can't ever have a *real* libkexif rpm pkg anymore due to a recent rpm bug... err... feature: http://bugzilla.redhat.com/bugzilla/130352 http://bugzilla.redhat.com/bugzilla/111071
-- Rex
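(Editorial note: Rex's scheme could be sketched as a spec-file fragment like the one below. The name, version, and file list are illustrative only, not a real package; the Provides line follows his suggestion.)

```spec
Name:           libkexif1
Version:        0.2.1
Release:        1
Summary:        Library for reading EXIF data (soname 1)
Provides:       libkexif = %{version}-%{release}

%files
%{_libdir}/libkexif.so.1
%{_libdir}/libkexif.so.1.*
```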
On Mon, 24 Jan 2005 06:39:55 -0600 (CST), Rex Dieter wrote:
On Mon, 24 Jan 2005, Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname in package libraries ? For example, I have a libkexif package, which
If it *must* be done, you need to follow (approximately at least) Mandrake's style, and build libkexif0 libkexif1 ... etc
With each (ideally) including Provides: libkexif = %{version}
Also, doing it this way, you can't ever have a *real* libkexif rpm pkg anymore due to a recent rpm bug... err... feature: http://bugzilla.redhat.com/bugzilla/130352 http://bugzilla.redhat.com/bugzilla/111071
Add this one to the list: https://bugzilla.redhat.com/145091#c6
On Mon, Jan 24, 2005 at 11:34:03AM +0100, Aurelien Bompard wrote:
Hi all
A question to packagers: what would you think of a policy to add the library soname in package libraries ?
Yes, please. This is very important for smooth upgrades, i.e. no pressure to rebuild all dependent packages in one atomic step. This becomes even more important for a community with multiple repos, which is what Fedora Core is evolving into.
For example, I have a libkexif package, which provides libkexif.so.0, and at least 3 applications depend on it. Now there is an update to libkexif, which provides libkexif.so.1. I can't update libkexif without updating the applications depending on it. OK, this is probably something that you know much better than me, and that you've run into several times before, so you probably already know the solution. I've searched a bit, and it seems that Mandrake and Debian both have a policy to include the library soname in the package name : http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora ? Is it the best solution to this problem ?
I think this is an area where other distributions have found nice solutions, and Fedora/Red Hat should simply use one of the existing policies (instead of creating one from scratch). I don't think there are big differences between Mandrake's and Debian's policies.
BTW ATrpms has been converting library packages to use this scheme for quite some time in order to have good gluing components between the various repos.
On Mon, 2005-01-24 at 20:06 +0100, Axel Thimm wrote:
On Mon, Jan 24, 2005 at 11:34:03AM +0100, Aurelien Bompard wrote:
http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora ? Is it the best solution to this problem ?
I think this is an area, where other distributions have found nice solutions and Fedora/Red Hat should simply use one of the existing policies (instead of creating one from scratch).
I sort of agree, but shipping such packages should be done only if absolutely necessary in FC/FE. Carrying backwards compatibility baggage is not something that aligns well with the project's objectives IMO. See eg. points 5 and 7 in Objectives (and maybe 3 in Non-Objectives) at http://fedora.redhat.com/about/objectives.html
Of course, there are other distributions with other kinds of focuses than the Fedora one where doing this soname-in-name thing might actually be a good general rule of thumb rather than an ugly exception.
On Mon, Jan 24, 2005 at 10:22:13PM +0200, Ville Skyttä wrote:
On Mon, 2005-01-24 at 20:06 +0100, Axel Thimm wrote:
On Mon, Jan 24, 2005 at 11:34:03AM +0100, Aurelien Bompard wrote:
http://qa.mandrakesoft.com/twiki/bin/view/Main/RpmHowToAdvanced#Library_poli... http://www.debian.org/doc/debian-policy/ch-sharedlibs.html
How about a similar policy for Fedora ? Is it the best solution to this problem ?
I think this is an area, where other distributions have found nice solutions and Fedora/Red Hat should simply use one of the existing policies (instead of creating one from scratch).
I sort of agree, but shipping such packages should be done only if absolutely necessary in FC/FE. Carrying backwards compatibility baggage is not something that aligns well with the project's objectives IMO.
On the contrary, it is not one-directional compatibility: you get a scheme for both forward and backward compatibility packages, so that third parties (as well as Red Hat as a vendor itself) can easily move forward to the next bleeding-edge software release.
See eg. points 5 and 7 in Objectives (and maybe 3 in Non-Objectives) at http://fedora.redhat.com/about/objectives.html
In the sense of the above I would even see them as promoting arguments.
Currently, if Red Hat ships libfoo.so.2 and you want to have libfoo.so.3, you need to go through a couple of hoops, replace core packages with compatibility libs (or even worse, not care about dependencies on libfoo.so.2), and finally get called a heretic. Been there, seen that ...
If this idiom already existed, one would simply package the new library into libfoo3 and no clashes/replacements with libfoo2 would occur.
(But note: the above is slightly simplified; there are a couple of controllable issues that must be taken care of.)
Of course, there are other distributions with other kinds of focuses than the Fedora one where doing this soname-in-name thing might actually be a good general rule of thumb rather than an ugly exception.
If we are talking about naming schemes we cannot consider only Fedora, as its bigger brother RHEL will share the same. But I believe that the soname-in-rpmname approach is suitable both for a stable ABI like RHEL's and for a fast-paced one like Fedora's. In the end it only adds flexibility, at the cost of some required garbage collection that can easily be handled.
And note that the frame objectives of the two distributions already deploying this scheme are indeed very close to RHEL (infinite release cycles of Debian ;) and Fedora (Mandrake's latest and greatest).
On Mon, 2005-01-24 at 22:55 +0100, Axel Thimm wrote:
On Mon, Jan 24, 2005 at 10:22:13PM +0200, Ville Skyttä wrote:
I sort of agree, but shipping such packages should be done only if absolutely necessary in FC/FE. Carrying backwards compatibility baggage is not something that aligns well with the project's objectives IMO.
On the contrary, it is not one-directional compatibility: you get a scheme for both forward and backward compatibility packages, so that third parties (as well as Red Hat as a vendor itself) can easily move forward to the next bleeding-edge software release.
Well, I guess it can be seen in many ways. Note that I don't object to a common consistent naming scheme at all, on the contrary.
My concern is that such a scheme _when applied as a standard procedure for all library packages_ would probably lower the barrier for including backwards compatibility cruft for which there will probably be no interested parties to clean it up or maintain it. And once something is in, it's always hard to drop it; there will always be someone yelling "don't remove fooX.Y.Z, I need it for my ancient baz package". I don't think that stuff belongs in Fedora.
On Tue, Jan 25, 2005 at 12:34:27AM +0200, Ville Skyttä wrote:
On Mon, 2005-01-24 at 22:55 +0100, Axel Thimm wrote:
On Mon, Jan 24, 2005 at 10:22:13PM +0200, Ville Skyttä wrote:
I sort of agree, but shipping such packages should be done only if absolutely necessary in FC/FE. Carrying backwards compatibility baggage is not something that aligns well with the project's objectives IMO.
But on the contrary it is not directed compatibility, but rather a unilateral, you have a scheme for both forward and backward compatibility packages, so that third parties (as well as Red Hat as a vendor itslef) can easily move forward to the next bleeding edge softwrae release.
Well, I guess it can be seen in many ways. Note that I don't object to a common consistent naming scheme at all, on the contrary.
My concern is that such a scheme _when applied as a standard procedure for all library packages_ would probably lower the barrier for including backwards compatibility cruft for which there will probably be no interested parties to clean it up or maintain it.
There's no need to, these packages can be easily marked (see my other replies in this thread) and removed even in cron-jobs.
And once something is in, it's always hard to drop it; there will always be someone yelling "don't remove fooX.Y.Z, I need it for my ancient baz package". I don't think that stuff belongs in Fedora.
I don't want any backward compatibility _content_ in Fedora, but a proper forward/backward compatibility _mechanism_. Fedora Core (rawhide) should continue its aggressive schedule; this only allows for flexibility when needed.
It even allows one to skip worrying about creating compatibility libs for selected bits of Fedora Core N-1 and N-2 in Fedora Core N, since the libs from the previous releases can effectively just be copied over, with no renames to compat-this and compat-that, and most importantly no new QA; you know they worked for any ISV.
So I guess we are on the same side ;)
On Mon, 2005-01-24 at 23:48 +0100, Axel Thimm wrote:
On Tue, Jan 25, 2005 at 12:34:27AM +0200, Ville Skyttä wrote:
My concern is that such a scheme _when applied as a standard procedure for all library packages_ would probably lower the barrier for including backwards compatibility cruft for which there will probably be no interested parties to clean up or maintain.
There's no need to, these packages can be easily marked (see my other replies in this thread) and removed even in cron-jobs.
My concern is not about that, but about what's included in the distro, as in DVDs, download.fedora.redhat.com etc.
On Tue, Jan 25, 2005 at 01:19:46AM +0200, Ville Skyttä wrote:
On Mon, 2005-01-24 at 23:48 +0100, Axel Thimm wrote:
On Tue, Jan 25, 2005 at 12:34:27AM +0200, Ville Skyttä wrote:
My concern is that such a scheme _when applied as a standard procedure for all library packages_ would probably lower the barrier for including backwards compatibility cruft for which there will probably be no interested parties to clean up or maintain.
There's no need to, these packages can be easily marked (see my other replies in this thread) and removed even in cron-jobs.
My concern is not about that, but about what's included in the distro, as in DVDs, download.fedora.redhat.com etc.
There isn't anything wrong with having these packages follow the soname-in-rpmname idiom. Even if there were no further need for concurrent libs, it would solve the problem of leftover libs from previous Fedora Core/Red Hat Linux installations.
I only see added value at no real cost: the required simple garbage collector pays off immediately for not having to obsolete old forward compatibility packages (like the gcc34 example, not a library, but the same packaging issues apply here).
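For concreteness, the soname-in-rpmname idiom under discussion can be sketched as a tiny shell helper that derives a package name from a library soname, in the style of the Mandrake/Debian policies linked at the start of the thread. The exact convention here (major version appended with no separator) is an illustrative assumption, not an agreed Fedora policy.

```shell
# soname_pkgname: derive a soname-based package name from a shared
# library soname, e.g. libkexif.so.1 -> libkexif1.  Uses only POSIX
# parameter expansion; the naming convention is assumed for the sketch.
soname_pkgname() {
    soname=$1                  # e.g. libkexif.so.1
    base=${soname%%.so.*}      # strip ".so.<version>" suffix -> libkexif
    major=${soname##*.so.}     # keep only the major version   -> 1
    echo "${base}${major}"
}

soname_pkgname libkexif.so.1   # prints: libkexif1
```

With such a scheme, libkexif0 and libkexif1 are distinct package names and can be installed in parallel, which is exactly what the original libkexif.so.0 vs. libkexif.so.1 upgrade problem calls for.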
On Tuesday 25 January 2005 08:10, Axel Thimm wrote:
I only see added value at no real cost
Quick reminder: if Bug 130352 isn't addressed, none of the above discussion is very useful:
http://bugzilla.redhat.com/bugzilla/130352
I think the bug should be reformulated into an RFE based on Axel's Provides(noupdate): foo syntax. Or maybe Provides(virtual):, or something.
But, this is definitely the first hurdle to jump over, then a more well-rounded discussion on a method for soname rpms can be more effective.
take care,
On Tue, Jan 25, 2005 at 06:11:25PM +0800, Jeff Pitman wrote:
On Tuesday 25 January 2005 08:10, Axel Thimm wrote:
I only see added value at no real cost
Quick reminder: if Bug 130352 isn't addressed, none of the above discussion is very useful:
I agree that this bug needs to be fixed somehow, but in the case of soname-in-rpmname packaging you don't run into the problem, as the libfooN packages don't share any common provides, real or virtual.
Otherwise I would have noticed: multiple libfooN packages have been in ATrpms for ages now, not to mention Mandrake. :)
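The no-shared-provides claim can be checked mechanically: sort the provides of two library packages and intersect them with comm(1). The provides lists below are hypothetical stand-ins for real `rpm -qp --provides` output for libkexif0 and libkexif1 packages.

```shell
# Hypothetical provides lists; real ones would come from
# `rpm -qp --provides <pkg>.rpm` on the two library packages.
printf 'libkexif.so.0\nlibkexif0 = 0.1.1-1\n' | sort > /tmp/prov0
printf 'libkexif.so.1\nlibkexif1 = 0.2.1-1\n' | sort > /tmp/prov1

# comm -12 prints only lines common to both sorted files.  Empty
# output means the packages share no provides, so rpm never treats
# one as an update of the other.
comm -12 /tmp/prov0 /tmp/prov1
# prints nothing
```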
I think the Bug should be re-formulated into an RFE based on Axel's Provides(noupdate): foo syntax. Or, maybe Provides(virtual):, or something.
But, this is definitely the first hurdle to jump over, then a more well-rounded discussion on a method for soname rpms can be more effective.
take care,
On Tue, 25 Jan 2005 18:11:25 +0800, Jeff Pitman wrote:
On Tuesday 25 January 2005 08:10, Axel Thimm wrote:
I only see added value at no real cost
Quick reminder: if Bug 130352 isn't addressed, none of the above discussion is very useful:
It looks like https://bugzilla.redhat.com/111071
On Tuesday 25 January 2005 19:19, Michael Schwendt wrote:
On Tuesday 25 January 2005 08:10, Axel Thimm wrote:
I only see added value at no real cost
Quick reminder: if Bug 130352 isn't addressed, none of the above discussion is very useful:
It looks like https://bugzilla.redhat.com/111071
Uh, yeah. 130352 was submitted after discussion between Rex, Axel, and me when 111071 couldn't be found. 111071 was eventually found, but we realized it to be a dead-end discussion. 130352 has the trimmings of an RFE rather than a bug report. Maybe the report should be modified and reworked as such so we can make progress on this.
devel@lists.stg.fedoraproject.org