I took the following quote from the recent thread "Fedora 25 not booting after update" - didn't want to hijack that thread and therefore started this new one:
On Tue, 2017-09-05 at 01:56 -0700, Samuel Sieb wrote:
[ ... ]
I use Gnome and I still do my updates directly online with dnf. It's your choice how you do them.
Ditto here: Gnome here (plus KDE installed, but rarely used) and I also update via dnf only, i.e. I log out of Gnome, log in to a tty, run "dnf upgrade", and reboot - did you, or anyone else, find a way to upgrade safely without the need to reboot? On Gnome?
More info, and what got me to upgrade always with a reboot following: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/7ULAG243UNGTOSL6URGNG23GC4B6X5GB/
I'm relatively new to Fedora, and I'm astonished we seem to be on Linux in a situation now that I had on MS Windows. In previous times, on a Debian system, I rebooted the machine maybe once or twice a year (not kidding ..) and it worked - provided I didn't mess up dependencies with the package manager (I *did* mess it up .. :) .. )
TIA Wolfgang
On Thu, 2017-09-07 at 14:16 +0200, Wolfgang Pfeiffer wrote:
did you, or anyone else, find a way to upgrade safely without the need to reboot? On Gnome?
Please note: I'm not talking about a full version upgrade from let's say F25 to F26 - just about the usual upgrades inside a single Fedora version ..
Den 2017-09-07 kl. 14:16, skrev Wolfgang Pfeiffer:
I took the following quote from the recent thread "Fedora 25 not booting after update" - didn't want to hijack that thread and therefore started this new one:
On Tue, 2017-09-05 at 01:56 -0700, Samuel Sieb wrote:
[ ... ]
I use Gnome and I still do my updates directly online with dnf. It's your choice how you do them.
Ditto here: Gnome here (plus KDE installed, but rarely used) and I also update via dnf only, i.e. I log out of Gnome, log in to a tty, run "dnf upgrade", and reboot - did you, or anyone else, find a way to upgrade safely without the need to reboot? On Gnome?
You only need to reboot if the kernel is updated.
Please note: I'm not talking about a full version upgrade from let's say F25 to F26 - just about the usual upgrades inside a single Fedora version ..
OK, then you need to reboot. See the following link to learn how to do a system upgrade:
https://fedoraproject.org/wiki/DNF_system_upgrade
TIA Wolfgang
On Thu, 2017-09-07 at 14:52 +0200, Jon Ingason wrote:
Den 2017-09-07 kl. 14:16, skrev Wolfgang Pfeiffer:
Please note: I'm not talking about a full version upgrade from let's say F25 to F26 - just about the usual upgrades inside a single Fedora version ..
OK, then you need to reboot. See the following link to learn how to do a system upgrade:
No: that's about a complete system upgrade from one Fedora version to another. It's clear one has to reboot for that.
But I'm trying to avoid reboots after package updates in one single Fedora version, the ones that are showering in every few hours ....
And I'm talking about this, from Adam Williamson, a Fedora guy:
"The 'STANDARD FEDORA SOLUTION' for Workstation is offline updates with GNOME Software." ... which I understand as a need to reboot even after simple package updates.
Link again: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/...
Or here, again Adam Williamson: "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism. If you use GNOME, this is how updates work if you just wait for the notifications to appear, the ones that tell you you can reboot to install updates now."
https://www.happyassassin.net/2016/10/04/x-crash-during-fedora-update-when-s...
See? "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism."
That's why I started the thread. Again: I use dnf, not the GNOME update mechanism, but from how I understand A. Williamson, this might also apply to package updates via dnf and the reboot following on that ...
TIA Wolfgang
Den 2017-09-07 kl. 15:32, skrev Wolfgang Pfeiffer:
On Thu, 2017-09-07 at 14:52 +0200, Jon Ingason wrote:
Den 2017-09-07 kl. 14:16, skrev Wolfgang Pfeiffer:
Please note: I'm not talking about a full version upgrade from let's say F25 to F26 - just about the usual upgrades inside a single Fedora version ..
OK, then you need to reboot. See the following link to learn how to do a system upgrade:
No: that's about a complete system upgrade from one Fedora version to another. It's clear one has to reboot for that.
Sorry Wolfgang, I missed the "not" :-(
But you can ignore the Gnome update and use dnf instead regularly and exclude the kernel update if you don't want to reboot.
The kernel is updated more rapidly in Fedora than in Debian, so you are bound to update the kernel more often and reboot to get the latest kernel security updates.
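For a one-off run that would be e.g. `dnf upgrade --exclude='kernel*'`; to make the hold-back permanent, the exclude can go into dnf's config instead. A minimal sketch (the `[main]` section already exists in a stock /etc/dnf/dnf.conf; remember you are then also skipping kernel security fixes):

```ini
# /etc/dnf/dnf.conf
[main]
# Never offer kernel packages in updates; remove this line (or run
# dnf with --disableexcludes=main) when you do want a new kernel.
exclude=kernel*
```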
But I'm trying to avoid reboots after package updates in one single Fedora version, the ones that are showering in every few hours ....
And I'm talking about this, from Adam Williamson, a Fedora guy:
"The 'STANDARD FEDORA SOLUTION' for Workstation is offline updates with GNOME Software." ... which I understand as a need to reboot even after simple package updates.
Link again: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/...
Or here, again Adam Williamson: "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism. If you use GNOME, this is how updates work if you just wait for the notifications to appear, the ones that tell you you can reboot to install updates now."
https://www.happyassassin.net/2016/10/04/x-crash-during-fedora-update-when-s...
See? "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism."
That's why I started the thread. Again: I use dnf, not the GNOME update mechanism, but from how I understand A. Williamson, this might also apply to package updates via dnf and the reboot following on that ...
TIA Wolfgang
On Thu, 2017-09-07 at 15:48 +0200, Jon Ingason wrote:
Den 2017-09-07 kl. 15:32, skrev Wolfgang Pfeiffer:
On Thu, 2017-09-07 at 14:52 +0200, Jon Ingason wrote:
Den 2017-09-07 kl. 14:16, skrev Wolfgang Pfeiffer:
Please note: I'm not talking about a full version upgrade from let's say F25 to F26 - just about the usual upgrades inside a single Fedora version ..
OK, then you need to reboot. See the following link to learn how to do a system upgrade:
No: that's about a complete system upgrade from one Fedora version to another. It's clear one has to reboot for that.
Sorry Wolfgang, I missed the "not" :-(
Not a problem at all ... :)
Thanks, and Regards Wolfgang
On 09/07/2017 09:32 PM, Wolfgang Pfeiffer wrote:
See? "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism."
That's why I started the thread. Again: I use dnf, not the GNOME update mechanism, but from how I understand A. Williamson, this might also apply to package updates via dnf and the reboot following on that ...
Yes, that is the "safest" path.
The best thing to do after an update via dnf is to run "dnf needs-restarting". This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
Depending on what you find, you may just need to logout/login to restart the processes. Or, you may need to restart some daemon using "systemctl restart whatever".
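The check such tools perform boils down to spotting processes that still map files which have since been replaced on disk. A rough, Linux-only approximation in plain shell (the real plugin also compares process start times against package install times; this sketch only greps /proc, and the variable name `stale` is just illustrative):

```shell
# Scan /proc for processes whose memory maps still reference a file
# that was deleted/replaced on disk (the kernel marks such entries
# with a "(deleted)" suffix in /proc/PID/maps).
stale=""
for m in /proc/[0-9]*/maps; do
    pid=${m#/proc/}; pid=${pid%/maps}
    # maps of other users' processes are unreadable; skip them silently
    if grep -q '(deleted)' "$m" 2>/dev/null; then
        stale="$stale $pid"
    fi
done
echo "processes mapping deleted files:$stale"
```

After a library update, any PID listed here is a candidate for a restart via logout/login or `systemctl restart`.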
There is no definitive answer to the question. I've the habit of not rebooting or logging out. Most of the time to no ill effect. Other times, after a while, I may find some odd behaviors. So, I'll logout/login and if all is OK, continue. If the oddities continue, I'll reboot.
I've never run into a situation where a corruption occurred causing a permanent damage to my system.
On Thu, 2017-09-07 at 21:53 +0800, Ed Greshko wrote:
On 09/07/2017 09:32 PM, Wolfgang Pfeiffer wrote:
See? "The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism."
That's why I started the thread. Again: I use dnf, not the GNOME update mechanism, but from how I understand A. Williamson, this might also apply to package updates via dnf and the reboot following on that ...
Yes, that is the "safest" path.
The best thing to do after an update via dnf is to run "dnf needs-restarting".
Wow, that looks like a really powerful tool: I didn't even know about it. Thanks a lot for letting me know: might save me quite some time ...
I really will need to have a look at the rest of the installed dnf.plugin.* tools ...
This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
Depending on what you find, you may just need to logout/login to restart the processes. Or, you may need to restart some daemon using "systemctl restart whatever".
There is no definitive answer to the question.
That's actually what I was thinking, too. But I simply don't know too much about Fedora so far to be sure about it ...
I've the habit of not rebooting or logging out. Most of the time to no ill effect. Other times, after a while, I may find some odd behaviors. So, I'll logout/login and if all is OK, continue. If the oddities continue, I'll reboot.
Makes sense, yes ...
I've never run into a situation where a corruption occurred causing a permanent damage to my system.
Sounds good ... :) As I said: for updates I now always log out of the X environment and then start the upgrade process on a VT - I think this is a good idea.
Thanks again for mentioning the "needs-restarting" plugin ...
Regards Wolfgang
On Thu, 2017-09-07 at 21:53 +0800, Ed Greshko wrote:
The best thing to do after an update via dnf is to run "dnf needs-restarting". This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
AFAIK this has now been deprecated in favour of 'tracer' (standalone command). Also the dnf.plugin.tracer plugin.
poc
On Thu, Sep 07, 2017 at 04:46:26PM +0100, Patrick O'Callaghan wrote:
The best thing to do after an update via dnf is to run "dnf needs-restarting". This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
AFAIK this has now been deprecated in favour of 'tracer' (standalone command). Also the dnf.plugin.tracer plugin.
I ran with dnf.plugin.tracer installed for a bit, but ultimately disabled it because it's rather slow. I'd rather reboot. :)
On 09/07/2017 11:46 PM, Patrick O'Callaghan wrote:
On Thu, 2017-09-07 at 21:53 +0800, Ed Greshko wrote:
The best thing to do after an update via dnf is to run "dnf needs-restarting". This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
AFAIK this has now been deprecated in favour of 'tracer' (standalone command). Also the dnf.plugin.tracer plugin.
I rarely use needs-restarting. On the issue of tracer I actually tried it and hate it more than needs-restarting.
As Matthew pointed out, needs-restarting is rather slow. But, at least you can elect to run it. With the tracer plugin it runs after every successful dnf run and it is no faster than needs-restarting. Additionally, I found it interfered with the akmod process update of nVidia drivers when the kernel was updated.
On Fri, 2017-09-08 at 07:14 +0800, Ed Greshko wrote:
On 09/07/2017 11:46 PM, Patrick O'Callaghan wrote:
On Thu, 2017-09-07 at 21:53 +0800, Ed Greshko wrote:
The best thing to do after an update via dnf is to run "dnf needs-restarting". This will give you a list of processes that could potentially be impacted by the last update by, for example, some libraries being updated.
AFAIK this has now been deprecated in favour of 'tracer' (standalone command). Also the dnf.plugin.tracer plugin.
I rarely use needs-restarting. On the issue of tracer I actually tried it and hate it more than needs-restarting.
As Matthew pointed out, needs-restarting is rather slow. But, at least you can elect to run it. With the tracer plugin it runs after every successful dnf run and it is no faster than needs-restarting. Additionally, I found it interfered with the akmod process update of nVidia drivers when the kernel was updated.
True that it's no faster, but it does have options that can give more information and hints about what to do. Not in all cases though. It will often say "restart foo manually" and you have to investigate how to do that because it doesn't know, which can be a challenge when foo is some daemon you aren't familiar with and was originally started at boot time.
poc
On Fri, 2017-09-08 at 01:16 +0100, Patrick O'Callaghan wrote:
On Fri, 2017-09-08 at 07:14 +0800, Ed Greshko wrote:
[ ... ]
I rarely use needs-restarting. On the issue of tracer I actually tried it and hate it more than needs-restarting.
As Matthew pointed out, needs-restarting is rather slow. But, at least you can elect to run it. With the tracer plugin it runs after every successful dnf run and it is no faster than needs-restarting. Additionally, I found it interfered with the akmod process update of nVidia drivers when the kernel was updated.
True that it's no faster, but it does have options that can give more information and hints about what to do. Not in all cases though. It will often say "restart foo manually" and you have to investigate how to do that because it doesn't know, which can be a challenge when foo is some daemon you aren't familiar with and was originally started at boot time.
Exactly: It can be difficult to see which services need to be restarted, and how they need to be restarted properly (order of restarting might even be relevant) ...
I'm more and more wondering why Fedora users after an upgrade are supposed to test by **themselves** via the various plugins whether there are services that need to be restarted in the running system, or whether there is even a full reboot needed.
That whole testing of services and whether their restart/reload is needed, then actually restarting them is something the dnf installer might be able to do by itself: Inform the user - maybe at the end of or during an upgrade - which services need a restart: dnf: "Shall we restart foo now: Yes or No, and if No: here's how you can do it manually ...." Or if a reboot is required: tell the users ... That whole procedure looks actually like a no-brainer ...
What did I miss? ...
Wolfgang
On 09/08/2017 08:12 PM, Wolfgang Pfeiffer wrote:
On Fri, 2017-09-08 at 01:16 +0100, Patrick O'Callaghan wrote:
On Fri, 2017-09-08 at 07:14 +0800, Ed Greshko wrote:
[ ... ]
I rarely use needs-restarting. On the issue of tracer I actually tried it and hate it more than needs-restarting.
As Matthew pointed out, needs-restarting is rather slow. But, at least you can elect to run it. With the tracer plugin it runs after every successful dnf run and it is no faster than needs-restarting. Additionally, I found it interfered with the akmod process update of nVidia drivers when the kernel was updated.
True that it's no faster, but it does have options that can give more information and hints about what to do. Not in all cases though. It will often say "restart foo manually" and you have to investigate how to do that because it doesn't know, which can be a challenge when foo is some daemon you aren't familiar with and was originally started at boot time.
Exactly: It can be difficult to see which services need to be restarted, and how they need to be restarted properly (order of restarting might even be relevant) ...
I'm more and more wondering why Fedora users after an upgrade are supposed to test by **themselves** via the various plugins whether there are services that need to be restarted in the running system, or whether there is even a full reboot needed.
That whole testing of services and whether their restart/reload is needed, then actually restarting them is something the dnf installer might be able to do by itself: Inform the user - maybe at the end of or during an upgrade - which services need a restart: dnf: "Shall we restart foo now: Yes or No, and if No: here's how you can do it manually ...." Or if a reboot is required: tell the users ... That whole procedure looks actually like a no-brainer ...
What did I miss? ...
IMHO, it should be changed from "needs" to "should". It is often the case that processes which are already running will continue to run just fine even though they "should" be restarted to make use of the updated libraries.
It isn't as cut and dried as you may think. It probably isn't a good idea to restart some processes after an update, as a user may be accessing the process and restarting it in the middle may make for a bad user experience. A connection to a socket may be broken, for example.
GNOME is trying to make updates more "user friendly" by doing them during the reboot phase. If you're averse to doing reboots then you need to understand the risks, or problems, with doing things without rebooting.
You are sure to find plenty of people that will object to the "Windows" philosophy that a reboot is required after every update. At least you can take comfort in that with Linux you're not exposed to updates depending on previous updates. So you won't have the "Update, reboot, update more, reboot" cycle you often see in the Windows environment.
On Fri, 2017-09-08 at 23:14 +0800, Ed Greshko wrote:
That whole testing of services and whether their restart/reload is needed, then actually restarting them is something the dnf installer might be able to do by itself: Inform the user - maybe at the end of or during an upgrade - which services need a restart: dnf: "Shall we restart foo now: Yes or No, and if No: here's how you can do it manually ...." Or if a reboot is required: tell the users ... That whole procedure looks actually like a no-brainer ...
What did I miss? ...
IMHO, it should be changed from "needs" to "should". It is often the case that processes which are already running will continue to run just fine even though they "should" be restarted to make use of the updated libraries.
It isn't as cut and dried as you may think. It probably isn't a good idea to restart some processes after an update, as a user may be accessing the process and restarting it in the middle may make for a bad user experience. A connection to a socket may be broken, for example.
Yes, the situation can be complex and I wouldn't advocate dnf just restarting stuff without asking first. I wasn't trying to understate the difficulty. Nevertheless, key services should be restartable by the user without having to poke around in documentation, which is often incomplete or even non-existent. Core services descend from systemd, but in some cases there is no corresponding target or unit file because the execution was done via something else. If tracer is smart enough to know what processes are using obsolete libraries, I presume it could be made smart enough to read the journal and trace how the process was originally run, but of course this is mere speculation.
poc
On 09/08/2017 08:25 AM, Patrick O'Callaghan wrote:
On Fri, 2017-09-08 at 23:14 +0800, Ed Greshko wrote:
That whole testing of services and whether their restart/reload is needed, then actually restarting them is something the dnf installer might be able to do by itself: Inform the user - maybe at the end of or during an upgrade - which services need a restart: dnf: "Shall we restart foo now: Yes or No, and if No: here's how you can do it manually ...." Or if a reboot is required: tell the users ... That whole procedure looks actually like a no-brainer ...
What did I miss? ...
IMHO, it should be changed from "needs" to "should". It is often the case that processes which are already running will continue to run just fine even though they "should" be restarted to make use of the updated libraries.
It isn't as cut and dried as you may think. It probably isn't a good idea to restart some processes after an update, as a user may be accessing the process and restarting it in the middle may make for a bad user experience. A connection to a socket may be broken, for example.
Yes, the situation can be complex and I wouldn't advocate dnf just restarting stuff without asking first. I wasn't trying to understate the difficulty. Nevertheless, key services should be restartable by the user without having to poke around in documentation, which is often incomplete or even non-existent. Core services descend from systemd, but in some cases there is no corresponding target or unit file because the execution was done via something else. If tracer is smart enough to know what processes are using obsolete libraries, I presume it could be made smart enough to read the journal and trace how the process was originally run, but of course this is mere speculation.
IIRC, on process startup ld checks to see if the desired _shared_ library is already present in RAM and only loads it from disk if no copy already exists in memory (that's the whole point of shared libraries--only one copy of the _code_ section is needed). So even if a library was updated, the new version won't be used unless _all_ processes currently using the old version shut down and a new process is launched that needs that library. The only way to ensure you're using the latest and greatest version of any given library is to do a reboot to kill all the existing processes. Whether to run a new kernel at that reboot is up to you.
Now, should the package manager of choice alert you to potential changes? Unless the update to the library is security-related or prevents some potential catastrophic meltdown, I see no particular reason to. Others feel differently and may install the tracer plugin to be alerted automatically (at the cost of a slower update cycle) or they run "dnf needs-restarting" should they feel like it. The choice is theirs.
I don't run automatic updates. I run them interactively and look at what's being updated. That way I can determine what to do. Being the oddball I am, I do periodic reboots (generally the first thing every Monday morning) so I'm running the newest kernel and using the latest libraries. But (as everyone familiar with this list knows) I'm simply cranky, obnoxious and "weird".
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, AllDigital    ricks@alldigital.com -
- AIM/Skype: therps2        ICQ: 226437340         Yahoo: origrps2   -
-                                                                    -
-   If Windows isn't a virus, then it sure as hell is a carrier!     -
----------------------------------------------------------------------
On Fri, 2017-09-08 at 10:26 -0700, Rick Stevens wrote:
IIRC, on process startup ld checks to see if the desired _shared_ library is already present in RAM and only loads it from disk if no copy already exists in memory (that's the whole point of shared libraries--only one copy of the _code_ section is needed). So even if a library was updated, the new version won't be used unless _all_ processes currently using the old version shut down and a new process is launched that needs that library. The only way to ensure you're using the latest and greatest version of any given library is to do a reboot to kill all the existing processes. Whether to run a new kernel at that reboot is up to you.
Yes and no. AFAIK if some process is using libfoo.so.3.1 and an update installs libfoo.so.3.2, subsequent processes will use libfoo.so.3.2 so there will be two versions of the library in memory because strictly speaking they are different binaries. That's why you want to kill (or equivalently re-exec) processes using libfoo.so.3.1, which is what tracer is telling you to do. However not doing it will not block use of the new version, it will just be more expensive.
Now, should the package manager of choice alert you to potential changes? Unless the update to the library is security-related or prevents some potential catastrophic meltdown, I see no particular reason to. Others feel differently and may install the tracer plugin to be alerted automatically (at the cost of a slower update cycle) or they run "dnf needs-restarting" should they feel like it. The choice is theirs.
I don't run automatic updates. I run them interactively and look at what's being updated. That way I can determine what to do.
It's the "determining what to do" that we're talking about. That's the hard part.
poc
Rick Stevens writes:
IIRC, on process startup ld checks to see if the desired _shared_ library is already present in RAM and only loads it from disk if no copy already exists in memory (that's the whole point of shared libraries--only one copy of the _code_ section is needed). So even if a
Actually, it's not the runtime loader that explicitly does this, itself. All that the runtime loader does is open the shared library and mmap it into the process space. It's actually the kernel that notices that the same inode is already mmaped, and just links the already-mmaped pages to the new process, marking them copy-on-write (which will only make any difference with the data sections' pages, since the code sections are readonly and would never get copied).
It's the kernel's job to keep track of these things, not userspace's.
library was updated, the new version won't be used unless _all_ processes currently using the old version shut down and a new process is launched that needs that library. The only way to ensure you're using the latest and greatest version of any given library is to do a reboot to kill all the existing processes. Whether to run a new kernel at that reboot is up to you.
I am 100% confident that this is not true. I'm so confident that I don't even want to bother building a simple demonstration, with a helloworld() sample shared library, that will trivially show that this is not true.
I build and install my shared libraries, with an existing running daemon still having the old, uninstalled version mmaped in its process space. Sometimes I even go through a build/upgrade cycle more than once, before restarting the daemon. I have no issues, whatsoever, with testing new code that links to the new version, and still have the old daemon putter along, until I restart the new version. If this were actually true, I would not be able to build and link with the new C++ library, and its changed ABI, and I would get immediate runtime segfaults after linking with the new library, but still loading the old version at runtime because the existing daemon still has the old shared library loaded. That would be a rather rude, and impolite thing to do.
This is not a novel concept, and Unix worked this way long before Linux ever existed. You could open a filehandle, replace the file, and have the existing process continue using the file without any issues; while all new processes get the new one.
This basic concept of how Unix handled inodes has been in common knowledge for many decades. The kernel does not physically delete the file after its inode reference count goes down to 0 until all existing open file descriptors are also closed, if there are any for the same inode. Until that happens, there is no noteworthy difference between this open file descriptor and some other one. If some other process happens to create a new file with the same name, purely by luck of the draw, the kernel will hardly notice, or care, and will then gladly offer its services to access the contents of the new file to any other process that has the requisite permissions to open it. Perhaps even the same process that still has the deleted file opened via another file descriptor – it can open() the same filename and get the new file instead.
Let's do a quick experiment. Let's open two terminal windows and execute the following, in /tmp (or /var/tmp, if you like that directory better), in the first window:
[mrsam@octopus tmp]$ cat >foo
The quick
Brown fox
Jumped over
The lazy dog's
Tail
<<<<CTRL-D>>>>
[mrsam@octopus tmp]$ exec 3<foo
[mrsam@octopus tmp]$ while read bar
do
	read foobar <&3
	echo $foobar
done
<<<<ENTER>>>>
The quick
<<<<ENTER>>>>
Brown fox
Now, leave this terminal window, for just a teensy-weensy moment, and switch to the second one. There, we'll execute the following:
[mrsam@octopus tmp]$ rm foo    # Buh-bye!
[mrsam@octopus tmp]$ cat >foo
Mary had a little lamb
its fleece was white as snow
and everywhere mary went
the lamb was sure to go.
<<<<CTRL-D>>>>
[mrsam@octopus tmp]$ cat foo
Mary had a little lamb
its fleece was white as snow
and everywhere mary went
the lamb was sure to go.
[mrsam@octopus tmp]$
We will now return to the first terminal, and drop the mic:
<<<<ENTER>>>>
Jumped over
<<<<ENTER>>>>
The lazy dog's
^C
Heavens to Betsy! One can delete a file, replace it, use it, and still have some other existing process have no issues, whatsoever, screwing around with the deleted file. We just witnessed something amazing, for just a brief moment: two processes having the same filename open, with one reading the new file, and the other one keeping its tenuous grasp on the old file, and was able to continue reading it afterwards.
( Feel free to repeat this experiment by creating "foo.new", then renaming it to "foo", like how dnf/rpm does it; the results will be the same )
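That rename variant can be scripted non-interactively as well. A small sketch of the rpm-style replace (atomic rename over the old name while a reader keeps the old inode open; variable names are just illustrative):

```shell
cd "$(mktemp -d)"
printf 'old inode\n' > foo
exec 3<foo                      # hold a descriptor on the original inode
printf 'new inode\n' > foo.new
mv foo.new foo                  # atomic rename, the way rpm/dnf installs files
read -r from_fd <&3             # still sees the replaced file's contents
read -r from_path < foo         # the name now resolves to the new inode
echo "fd3: $from_fd / path: $from_path"   # prints: fd3: old inode / path: new inode
exec 3<&-                       # last reference dropped; old inode is freed
```

The open descriptor and the directory entry go their separate ways at the rename, which is exactly why a running daemon keeps working on the old library while new processes pick up the new one.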
There is no valid technical reason why a live 'dnf upgrade' should not work. Period. Full stop. There's nothing else to discuss. The subject matter is closed. Any actual problems that happen must be solely due to crappy software, somewhere. The only undetermined piece of information is the precise identification of the crappy software in question (which would be responsible for the aforementioned problems) but it won't be the upgrade process per se, and there will not be any valid, solid, technical excuse for that, in total.
Whether the unidentified crappy software in question could actually be dnf, or some crappy GUI wrapper for dnf which flips out when its own shared libraries get replaced (and it will REALLY have to go out of its way, almost be intentionally crappy, in order to even realize that its own shared libraries were replaced, see above) or if it's whatever's actually upgraded – that's something that someone else can figure out.
I'm not disputing that a live dnf upgrade might be problematic in some cases. It's just that there is no valid, technical, fundamental reason why it must be a problem that cannot be avoided. Everything but the kernel itself – including the C library itself, and even including the unmentionable abomination for an init process – should be flawlessly[1] upgradable. After all, this is Linux and not Microsoft Windows.
[1] flawlessly, n.: at the minimum, "dnf upgrade -y && reboot" should finish upgrading all packages, and successfully reboot the system; and typically the system is expected to remain perfectly stable without rebooting, or at least stable enough to reboot manually.
On Fri, 2017-09-08 at 18:06 -0400, Sam Varshavchik wrote:
One can delete a file, replace it, use it, and still have some other existing process have no issues, whatsoever, screwing around with the deleted file. We just witnessed something amazing, for just a brief moment: two processes having the same filename open, with one reading the new file, and the other one keeping its tenuous grasp on the old file, and was able to continue reading it afterwards.
Of course. This is so fundamental to how Unix (and hence Linux) works that IMHO it's the most radical difference with Windows. The fact that you can delete or replace a file without affecting any process that has already opened it is a direct consequence of the separation between directory entries and inodes, which DOS-based systems and their successors do not have.
poc
On Fri, 2017-09-08 at 18:06 -0400, Sam Varshavchik wrote:
Sam: Thanks for all your effort: The following paragraph seems to be a crucial part of what you are trying to explain - but there are a few instances where I don't understand what is what.
What you were writing about seems to be one of the core concepts on Unix/Linux - so I'd like to understand them. So please allow me ask:
I build and install my shared libraries, with an existing running daemon
"shared libraries" = the new libraries, new versions of them, right? And the "running daemon" is the new version of it, not the "uninstalled" old one ..?
still having the old, uninstalled version mmaped in its process space.
"uninstalled version" of the daemon. Right?
Sometimes I even go through a build/upgrade cycle more than once,
"build/upgrade cycle": the "build/upgrade cycle" of new libraries, not the daemon - right?
before restarting the daemon.
Which one: the old one, still on the system - or the newly installed one?
I have no issues, whatsoever, with testing new code
"new code", again you mean the libraries?
that links to the new version, and still have the old daemon putter along,
the "new version" now is the newly installed daemon. So we have two versions of that daemon on the system?, the old one and the newly installed one, right?
until I restart the new version.
the "new version" of the daemon, right?
If this were actually true, I would not be able to build and link with the new C++ library, and its changed ABI, and I would get immediate runtime segfaults after linking with the new library, but still loading the old version at runtime because the existing daemon still has the old shared library loaded. That would be a rather rude, and impolite thing to do.
Are you saying that we can have two different versions of a daemon on the system, and each one with its own specific and different versions, old and new, of libraries attached to it?
I surely know one needs to have lots of patience to dig through these questions. So I fully understand if you simply ignore them. After all this is a mailing list, not a classroom. I'm just curious ...
Whatever: If you made it already until here: Thanks a lot for your patience, in anticipation!
Wolfgang
On Sat, 2017-09-09 at 14:16 +0200, Wolfgang Pfeiffer wrote:
If this were actually true, I would not be able to build and link with the new C++ library, and its changed ABI, and I would get immediate runtime segfaults after linking with the new library, but still loading the old version at runtime because the existing daemon still has the old shared library loaded. That would be a rather rude, and impolite thing to do.
Are you saying that we can have two different versions of a daemon on the system, and each one with its own specific and different versions, old and new, of libraries attached to it?
Short answer, yes, but the same applies to any program, not just daemons, and in fact to any file, not just executables and libraries. As Sam said, if you replace a file while it is still in use, the old version will continue to be used by any process which had already opened it. However, once you execute the program again (or link to the new library, or re-open the file), you'll get the new version. Both can be in use at the same time, and this normally doesn't matter.
NB: "replace" means "unlink the old one and create the new one with the same name", e.g. using "mv". Overwriting the old file with new data is different and in the case of executables or libraries will almost certainly cause problems.
poc
Patrick O'Callaghan writes:
NB: "replace" means "unlink the old one and create the new one with the same name", e.g. using "mv". Overwriting the old file with new data is different and in the case of executables or libraries will almost certainly cause problems.
Which is why rpm does not install each file from each package simply by creating the file and writing it out. rpm always creates the file under a temporary name, and renames it once it has finished writing and closing it.
In short, there is no valid, technical, fundamental reason why a dnf upgrade should leave the system in an unstable state, or somehow interfere with any running daemon; a running daemon has to intentionally go out of its way to frak things up if it was upgraded while it was running.
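The temporary-name-then-rename trick Sam describes can be sketched in a few lines of Python (an illustration of the technique, not rpm's actual code; the function name is made up):

```python
import os
import tempfile

def install_atomically(path, data):
    """Write 'data' to a temporary file in the target directory, then
    rename it into place. The name 'path' always refers to either the
    complete old file or the complete new one, never a half-written
    file - and processes holding the old file open are unaffected."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)   # same filesystem as target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # make sure the bytes are on disk first
        os.rename(tmp, path)          # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp)                # clean up the temporary on failure
        raise
```

The temporary file must live in the same directory (and thus the same filesystem) as the target, because rename(2) is only atomic within one filesystem.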
On Sat, Sep 09, 2017 at 09:34:00AM -0400, Sam Varshavchik wrote:
In short, there is no valid, technical, fundamental reason why a dnf upgrade should leave the system in an unstable state, or somehow interfere with any running daemon; a running daemon has to intentionally go out of its way to frak things up if it was upgraded while it was running.
It's not just the opened-at-initial-run libraries and files. Many programs just aren't written to deal properly with this and make assumptions they shouldn't. It'd be awesome if that weren't the case, but it's not. (See the Firefox example I posted for a very clear one.)
Additionally, of course, in the event of a security update, you may *think* you've applied patches, but without restarts, you haven't.
On Sat, 2017-09-09 at 09:34 -0400, Sam Varshavchik wrote:
Patrick O'Callaghan writes:
NB: "replace" means "unlink the old one and create the new one with the same name", e.g. using "mv". Overwriting the old file with new data is different and in the case of executables or libraries will almost certainly cause problems.
Which is why rpm does not install each file from each package simply by creating the file and writing it out. rpm always creates the file under a temporary name, and renames it once it has finished writing and closing it.
In short, there is no valid, technical, fundamental reason why a dnf upgrade should leave the system in an unstable state, or somehow interfere with any running daemon; a running daemon has to intentionally go out of its way to frak things up if it was upgraded while it was running.
In case it wasn't clear, I'm agreeing with you. However, I still would like to know how to restart those services which don't come with a unit file, other than by rebooting of course. The problem is that each case is different and many are not reliably documented.
poc
Patrick O'Callaghan writes:
In case it wasn't clear, I'm agreeing with you. However, I still would like to know how to restart those services which don't come with a unit file, other than by rebooting of course. The problem is that each case is different and many are not reliably documented.
Right. I don't think there could possibly be a cookie-cutter recipe that will work with everything under the sun. Each package has its own peculiarities and hangups.
Allegedly, on or about 8 September 2017, Ed Greshko sent:
GNOME is trying to make updates more "user friendly" by doing them during the reboot phase. If you're averse to doing reboots then you need to understand the risks, or problems, with doing things without rebooting.
You are sure to find plenty of people that will object to the "Windows" philosophy that a reboot is required after every update.
I certainly do, especially considering that rebooting has rarely been necessary after updates (beyond kernel updates), ever since I started using Linux (before Fedora existed). It just smacks of half-arsed programming to suffer this retrograde behaviour.
The Windows-methodology of rebooting after/while updating is a major pain in the arse, for many reasons. It means you can't update while using your computer, because it's going to interrupt what you actually want to do.
It's a major timewaster. You end up having to either do updates before *you* start doing things with your computer, delaying what you actually wanted to do, often by extraordinary amounts of time (by way of horrible example - with Windows Vista, I once watched the entire movie of Dr Zhivago during one of those updates, and that's a bloody long film). Or, you wait until you've finished what you wanted to do, then let it do updates when you really wanted to shut down and do something else. You often have to babysit, or waste even more time debugging update failures. That, or leave it running overnight. Only to find that when you next want to use the computer, you have to go through that debugging procedure.
With the way Linux is getting worse and worse about this kind of thing, I wonder if we're getting more and more programmers coming over from the Windows world, where they just don't understand what's wrong with that philosophy of computing. That, or it's sabotage.
On 09/11/2017 08:34 PM, Tim wrote:
With the way Linux is getting worse and worse about this kind of thing, I wonder if we're getting more and more programmers coming over from the Windows world, where they just don't understand what's wrong with that philosophy of computing. That, or it's sabotage.
Why are you being so difficult about this? Are you deliberately trying to not understand? You *don't* have to reboot to do updates. You are completely free to run dnf yourself and do live updates. That's what I do most of the time, but I also tend to reboot after the updates are finished. Just don't complain if something goes wrong when you do live updates instead of doing them offline. As has been explained many times here, it is very difficult to make live updates completely safe, unless maybe you're using something like ostree. There are very few people who think the time and effort to make live updates safe is worth it, or even possible. If you're so disturbed by this, then find another OS that will let you do that, if such a thing even exists. I can't think of one. Windows, Mac OS X, and Android all reboot to do updates, except maybe for some minor ones.
On Mon, 2017-09-11 at 23:11 -0700, Samuel Sieb wrote:
On 09/11/2017 08:34 PM, Tim wrote:
With the way Linux is getting worse and worse about this kind of thing, I wonder if we're getting more and more programmers coming over from the Windows world, where they just don't understand what's wrong with that philosophy of computing. That, or it's sabotage.
Why are you being so difficult about this? Are you deliberately trying to not understand? You *don't* have to reboot to do updates.
Nothing in what Tim wrote suggests he doesn't see this as well ..
You are completely free to run dnf yourself and do live updates. That's what I do most of the time, but I also tend to reboot after the updates are finished. Just don't complain if something goes wrong when you do live updates instead of doing them offline.
That's exactly the point: You don't have to reboot, as you say a few lines earlier, but it's simply safer to do just that. That's his - and others' - whole point.
Apart from kernel updates there wasn't a need, IIRC, to reboot in earlier times. Read Tim:
" ...rebooting has rarely been necessary after updates (beyond kernel updates), ever since I started using Linux (before Fedora existed)."
Is he right or not?
We have a choice: let Linux - or Fedora Linux at least - keep being what it is, namely a system (with what some see as an annoyance) that you had better reboot after updates, or we try to find a way back to the status quo ante, where there was no need to reboot.
Our choice: I definitely won't complain, no matter what the Linux coders' or the Fedora management's choices will be. But let us at least get the fact straight:
Linux, as I understand the results of this debate, is now *better off* being rebooted after updates. This wasn't necessary in earlier times.
Would you agree on these last two sentences, Sam, or anyone else?
Regards Wolfgang
On Tue, 2017-09-12 at 14:36 +0200, Wolfgang Pfeiffer wrote:
We have a choice: let Linux - or Fedora Linux at least - keep being what it is, namely a system (with what some see as an annoyance) that you better reboot after updates,
The annoyance, meaning: the need to reboot, not Fedora as a whole ...
Wolfgang
On 09/12/2017 05:36 AM, Wolfgang Pfeiffer wrote:
Linux, as I understand the results of this debate, is now *better off* being rebooted after updates. This wasn't necessary in earlier times.
Would you agree on these last two sentences, Sam, or anyone else?
I don't know if Linux needs to be rebooted after every update, and I certainly don't. I do know that more and more users think that it has to be rebooted.
On 12 September 2017 at 19:02, Joe Zeff joe@zeff.us wrote:
This wasn't necessary in earlier times.
Two things:
* It was, but people mostly pretended that Linux was better than Windows because the latter enforced it and the former didn't...
* When it wasn't, it was because everything was less complicated, in a "you need to use a command line to mount a USB disk" kind of way.
I don't know if Linux needs to be rebooted after every update
If you use a distribution method such as flatpak, one that's entirely designed to be safely live-updatable then rebooting is basically not required. Deploying a graphical application using a system-wide RPM package just doesn't make for an awesome user experience.
Richard.
On 09/12/2017 12:09 PM, Richard Hughes wrote:
On 12 September 2017 at 19:02, Joe Zeff joe@zeff.us wrote:
This wasn't necessary in earlier times.
Two things:
- It was, but people mostly pretended that Linux was better than Windows because the latter enforced it and the former didn't...
Why, then, did sysadmins brag about the long uptimes of their fully updated systems? (Remember, back then you only downloaded, compiled and installed a new kernel if it patched something that was giving you trouble, or provided something you needed. You didn't upgrade it just because you could.)
On 09/12/2017 12:39 PM, Joe Zeff wrote:
Why, then, did sysadmins brag about the long uptimes of their fully updated systems?
Because back then a host was probably an installation on bare metal that was painstakingly set up by hand and preserved for as long as possible.
If you're building your systems with CI/CD pipelines as containers or VMs and constantly replacing them, uptime is not a meaningful statistic.
On 09/12/2017 05:36 AM, Wolfgang Pfeiffer wrote:
Apart from kernel updates there wasn't a need, IIRC, to reboot in earlier times. Read Tim:
" ...rebooting has rarely been necessary after updates (beyond kernel updates), ever since I started using Linux (before Fedora existed)."
Is he right or not?
We have a choice: let Linux - or Fedora Linux at least - keep being what it is, namely a system (with what some see as an annoyance) that you had better reboot after updates, or we try to find a way back to the status quo ante, where there was no need to reboot.
Our choice: I definitely won't complain, no matter what the Linux coders' or the Fedora management's choices will be. But let us at least get the fact straight:
Linux, as I understand the results of this debate, is now *better off* being rebooted after updates. This wasn't necessary in earlier times.
Would you agree on these last two sentences, Sam, or anyone else?
I don't agree. I really don't think anything has changed. It has always been somewhat risky doing live upgrades, although it's possible that now more programs are doing runtime dynamic loading (plugins, etc.) which makes it extra risky. Either way, some users have been getting caught by these problems without knowing why, so there is now a way for those not so technical users to safely update their systems. The default method is safe. However, nothing has changed for those that believe they know what they are doing. There is no obstacle for them to continue doing live updates. I really don't see why this is such a big deal. I am happy that there is a safe way for my users (teachers, elementary students, my mom) to safely upgrade their computers without having to call me in. I usually do the release upgrades for them so I can keep most of them on the same release, but some of them have done that themselves as well.
On Tue, 2017-09-12 at 12:02 -0700, Samuel Sieb wrote:
On 09/12/2017 05:36 AM, Wolfgang Pfeiffer wrote:
[ ... ] Linux, as I understand the results of this debate, is now *better off* being rebooted after updates. This wasn't necessary in earlier times.
Would you agree on these last two sentences, Sam, or anyone else?
I don't agree. I really don't think anything has changed. It has always been somewhat risky doing live upgrades, although it's possible that now more programs are doing runtime dynamic loading (plugins, etc.) which makes it extra risky. Either way, some users have been getting caught by these problems without knowing why, so there is now a way for those not so technical users to safely update their systems. The default method is safe. However, nothing has changed for those that believe they know what they are doing. There is no obstacle for them to continue doing live updates. I really don't see why this is such a big deal. I am happy that there is a safe way for my users (teachers, elementary students, my mom) to safely upgrade their computers without having to call me in.
That looks like a considerable change for the better on Linux - and "elementary students", from what I gather via Google, are at most 14 or 15 years old. I wasn't aware of that ..
Thanks.
Regards Wolfgang
I usually do the release upgrades for them so I can keep most of them on the same release, but some of them have done that themselves as well.
Tim:
With the way Linux is getting worse and worse about this kind of thing, I wonder if we're getting more and more programmers coming over from the Windows world, where they just don't understand what's wrong with that philosophy of computing. That, or it's sabotage.
Samuel Sieb:
Why are you being so difficult about this? Are you deliberately trying to not understand?
I'm not. Why are people trying to degrade Linux? Why are people defending that? Why are people saying it's necessary? It wasn't before. Why should it, *NOW*, become so?
For the last decade, or so, we've rarely had to reboot. But now it's becoming the norm. You have programmers who are claiming that they need to do a reboot, yet previously this was not needed. Heck, I could even continue to use running software in the middle of it being updated. It's showing a trend, that over time more and more programmers will go down that path.
Why is that when users point out flaws and bad practices other people go into "shoot the messenger" mode, instead of acknowledging those flaws?
If you're so disturbed by this, then find another OS that will let you do that, if such a thing even exists. I can't think of one. Windows, Mac OS X, and Android all reboot to do updates, except maybe for some minor ones.
Why should I have to find another OS? The (pre)existing OS was already that way, and is moving away from it.
Stop copying the worst aspects of other OSs that drove us away from using them, over to using Linux, instead. Those other OSs are bad examples of practices, not good examples.
On 09/12/17 21:19, Tim wrote:
I'm not. Why are people trying to degrade Linux? Why are people defending that? Why are people saying it's necessary? It wasn't before. Why should it, *NOW*, become so?
I get the feeling that if it were possible to go back to previous versions of, say, "fedora" and compile "tracer", it would come up with similar suggestions as to what should be "restarted". The stand-alone "tracer" program is supplied by python3-tracer, whose description is...
Description: Tracer determines which applications use outdated files and prints them. For special kinds of applications, such as services or daemons, it suggests a standard command to restart them. Detecting whether a file is outdated or not is based on a simple idea: if an application has loaded into memory any version of a file which is provided by any package updated since the system was booted, tracer considers this application outdated.
I'm pretty sure the same would have been true "way back when".
You also said "For the last decade, or so, we've rarely had to reboot." Well, guess what? You *still* don't "HAVE TO". I certainly don't!
This is pretty much a ridiculous thread.
You are not obligated to apply updates when they are released.
If you don't like it that GNOME downloads the updates for you to be applied when you reboot, disable that feature.
If you don't like it that updates are applied by GNOME when you reboot, then run dnf manually.
If you want to run dnf manually and not reboot, then don't reboot. It isn't *required*, just *suggested*. I can't imagine that my systems are so special that I am the only one who chooses not to reboot most of the time after doing an update and rarely runs into difficulties.
Feel that you want to follow the suggestions of rebooting but can't stand waiting the few minutes it takes? Schedule it while you sleep. And pause that VM you "must" have running before you sleep.
And, if none of the above works for you and you *must* have a system which never "should" be rebooted or processes never restarted due to updates, then create a "working group", put together standards that need to be followed to accomplish your goal, and get people to buy in. Because that is the only way it will get done. Beating a dead horse on the users list is not going to get it done.
On 7 September 2017 at 14:53, Ed Greshko ed.greshko@greshko.com wrote:
I've never run into a situation where a corruption occurred causing a permanent damage to my system.
Ohh, it must never happen then. From the person who used to triage the bugs from when it did, please trust me that offline updates have reduced the number of trashed systems by three orders of magnitude.
Richard.
On 09/08/2017 12:21 PM, Richard Hughes wrote:
On 7 September 2017 at 14:53, Ed Greshko ed.greshko@greshko.com wrote:
I've never run into a situation where a corruption occurred causing a permanent damage to my system.
Ohh, it must never happen then. From the person who used to triage the bugs from when it did, please trust me that offline updates have reduced the number of trashed systems by three orders of magnitude.
I too have updated my desktop many times over the years with no trouble. However, it's rare for me to do a system upgrade that doesn't get munged, leaving me to clean up dupes from a CLI and use my laptop (which doesn't have trouble upgrading) to stay online. Having spent years doing tech support for an ISP, I'm not going to argue with your experience, but I'd like to point out that you only heard about the cases where it went wrong, so there might be a bit of selection bias going on.
On 09/09/2017 03:21 AM, Richard Hughes wrote:
On 7 September 2017 at 14:53, Ed Greshko ed.greshko@greshko.com wrote:
I've never run into a situation where a corruption occurred causing a permanent damage to my system.
Ohh, it must never happen then. From the person who used to triage the bugs from when it did, please trust me that offline updates have reduced the number of trashed systems by three orders of magnitude.
Please do not take my quote out of context. I never stated, nor implied, that something could not happen.
Also, my quote had nothing to do with the actual update process itself.
My statement in the original, unedited, un-snipped email in summary said....
"My system was not damaged by not immediately doing a reboot after an update that would have indicated processes needed rebooting".
And when I say "my" system I do mean "my" system. That is not to be taken as an indication that something can never happen. It is simply a statement of fact.
On 2017-09-07 at 14:16, Wolfgang Pfeiffer wrote:
You only need to reboot if the kernel is updated.
Supposedly there are ways to get around even this, but I can't find a way to make it work.
kSplice smoothly switches the running system to the new kernel, but Oracle acquired the project and made it proprietary, so the only way to use it is with an enterprise subscription.
Red Hat then announced a free/gratis and open source alternative called kpatch. You could try it, but as of today, it's still not considered stable enough for production use -- https://github.com/dynup/kpatch
Then apparently Red Hat and openSUSE merged parts of both of their competing implementations into the kernel itself? -- http://www.zdnet.com/article/no-reboot-patching-comes-to-linux-4-0/ -- But I can't find anything about how to *use* this feature.
On 09/07/2017 05:20 PM, Andrew Toskin wrote:
On 2017-09-07 at 14:16, Wolfgang Pfeiffer wrote:
You only need to reboot if the kernel is updated.
Supposedly there are ways to get around even this, but I can't find a way to make it work.
kSplice smoothly switches the running system to the new kernel, but Oracle acquired the project and made it proprietary, so the only way to use it is with an enterprise subscription.
Red Hat then announced a free/gratis and open source alternative called kpatch. You could try it, but as of today, it's still not considered stable enough for production use -- https://github.com/dynup/kpatch
Then apparently Red Hat and openSUSE merged parts of both of their competing implementations into the kernel itself? -- http://www.zdnet.com/article/no-reboot-patching-comes-to-linux-4-0/ -- But I can't find anything about how to *use* this feature.
I'd be very suspicious of trying to use a new kernel without actually _booting_ the new kernel. There were abortive attempts to partially restart the kernel with the Mach kernel. It sorta worked... in fairly rare circumstances... but never reliably, and often ended with the system crashing in fairly ugly ways. Apple, Carnegie Mellon and DEC all gave it the old college try.
My basic rule of thumb is: if the kernel was updated, reboot. If glibc was updated, you could restart your apps, but it's safest to reboot.
I mean, do you do an engine swap on your car when you're tooling along at 80 MPH? I sure don't!
--
Rick Stevens, Systems Engineer, AllDigital - ricks@alldigital.com
On Thu, Sep 07, 2017 at 02:16:23PM +0200, Wolfgang Pfeiffer wrote:
Ditto here: Gnome here (plus KDE installed, but rarely used) and I also update via dnf only, i.e. I log out of Gnome, log in to a tty, run "dnf upgrade", and reboot - did you, or anyone else, find a way to upgrade safely without the need to reboot? On Gnome?
You can certainly do this. I highly recommend using tmux or screen, because that will shield you from some of the potential problems in case an update of the running GUI breaks that GUI.
Additionally, do be aware of issues like https://forums.fedoraforum.org/showthread.php?t=308371, which is caused by upgrading firefox when firefox is running. (The flash problem is just a very visible symptom.)
Note also that in the event of security problems, you may still have stuff running with the old versions in memory. Run `sudo dnf needs-restarting` to see what needs to restart; often a reboot is just _easier_.
On Thu, 2017-09-07 at 11:00 -0400, Matthew Miller wrote:
[ ... ] Note also that in the event of security problems, you may still have stuff running with the old versions in memory. Run `sudo dnf needs-restarting` to see what needs to restart; often a reboot is just _easier_.
Easier, yes, and probably simply the only option in instances where services needing a restart simply cannot be restarted after an upgrade, except by a full reboot.
That's what just happened here, an hour ago:
I did an upgrade in a tmux session, on a tty - quite a few packages got upgraded, glibc among them. After the upgrade, after restarting gdm (something like "systemctl restart gdm", IIRC), and after logging back in to Gnome, I still got this from dnf needs-restarting:
=================================
1 : /usr/lib/systemd/systemd --system --deserialize 17
3720 : /usr/lib/systemd/systemd-udevd
3916 : /sbin/auditd
3946 : /usr/libexec/accounts-daemon
3957 : /usr/lib/systemd/systemd-logind
3958 : /usr/sbin/abrtd -d -s
3972 : /usr/sbin/gssproxy -D
3999 : /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid
4010 : /usr/lib/polkit-1/polkitd --no-debug
4030 : /usr/sbin/NetworkManager --no-daemon
4047 : /usr/sbin/libvirtd
4227 : /usr/libexec/bluetooth/bluetoothd
4295 : /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
4375 : /usr/bin/abrt-dump-journal-xorg -fxtD
4376 : /usr/bin/abrt-dump-journal-oops -fxtD
4377 : /usr/bin/abrt-dump-journal-core -D -T -f -e
4411 : /usr/libexec/upowerd
4447 : /usr/bin/pulseaudio --start --log-target=syslog
4495 : /usr/libexec/packagekitd
4578 : /usr/libexec/colord
5084 : /usr/libexec/udisks2/udisksd
5436 : /usr/sbin/cupsd -l
5650 : /usr/libexec/fwupd/fwupd
7493 : /sbin/agetty --noclear tty3 linux
18667 : /usr/sbin/crond -n
19027 : /usr/bin/python3 -Es /usr/sbin/setroubleshootd -f
27522 : /sbin/agetty --noclear tty5 linux
27538 : /sbin/agetty --noclear tty6 linux
==================================================
Please note systemd-udevd, 3720, near the top of the list: it seems to be the kernel device manager, so I think that was good reason for a reboot - because how do you restart a "static" service like systemd-udevd?
After a full reboot then, 'dnf needs-restarting' now yields nothing.
I can't say I'm happy with this situation, but at least - I think - I now know how to handle it.
Thanks a lot to everyone in this thread: you definitely helped a lot!
Best Regards Wolfgang
On 09/07/2017 03:39 PM, Wolfgang Pfeiffer wrote:
After a full reboot then, 'dnf needs-restarting' now yields nothing.
Of course it does. Running dnf needs-restarting gives you a list of programs that either have been updated more recently than their last start or are using shared resources that have just been upgraded. As a reboot closes and restarts *everything*, what else would you expect?
On Thu, 2017-09-07 at 15:47 -0700, Joe Zeff wrote:
On 09/07/2017 03:39 PM, Wolfgang Pfeiffer wrote:
After a full reboot then, 'dnf needs-restarting' now yields nothing.
Of course it does. Running dnf needs-restarting gives you a list of programs that either have been updated more recently than their last start or are using shared resources that have just been upgraded. As a reboot closes and restarts *everything*, what else would you expect?
Just what I got: but on computers (as with men in general ..:)) I'm always ready for a surprise .... I just reported the result to let you know it worked ...
Wolfgang
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-leave@lists.fedoraproject.org
On Thu, 07 Sep 2017 11:00:06 -0400 Matthew Miller wrote:
I highly recommend using tmux or screen,
Or simply use nohup and "dnf -y".
For example like this:
nohup dnf -y upgrade >& /var/tmp/dnf-upgrade.log &
On Thu, 2017-09-07 at 14:16 +0200, Wolfgang Pfeiffer wrote:
In previous times, on a Debian system, I rebooted the machine maybe once or twice a year (not kidding ..) and it worked
Addendum: I just remembered that, at least in the last years I ran that system, I didn't update it at all (it was impossible - messed-up dependencies). So in those last years there obviously wasn't any need for a reboot ...
So please take the quoted previous comment with its necessarily limited value.
I *think* tho' (not being sure ...) that the years preceding these mentioned last years I also rarely rebooted ...
Wolfgang
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44
[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
Bill
On 9/8/2017 4:33 PM, Wolfgang Pfeiffer wrote:
On Thu, 2017-09-07 at 14:16 +0200, Wolfgang Pfeiffer wrote:
In previous times, on a Debian system, I rebooted the machine maybe once or twice a year (not kidding ..) and it worked
Addendum: I just remembered that, at least in the last years I ran that system, I didn't update it at all (it was impossible - messed-up dependencies). So in those last years there obviously wasn't any need for a reboot ...
So please take the quoted previous comment with its necessarily limited value.
I *think* tho' (not being sure ...) that the years preceding these mentioned last years I also rarely rebooted ...
Wolfgang
On 09/10/17 15:21, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
Well, since those versions are EOL and no updates are being released for them, that is to be expected. So I'm not sure what value that adds to anything.
On Sun, 2017-09-10 at 03:21 -0400, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
All that means is that you're running out-of-date systems on your servers. Not usually a good idea, and not what Fedora is intended for.
poc
On Sun, 2017-09-10 at 11:37 +0100, Patrick O'Callaghan wrote:
On Sun, 2017-09-10 at 03:21 -0400, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
All that means is that you're running out-of-date systems on your servers.
It's also a strong hint that it's possible to have machines up and running for such a long time.
That's what this whole debate is basically about: less maintenance work and more usage of the machines - and to reach that, I (and probably quite a few people besides me) need at least fewer reboots. It's doable, see Bill Shirley's machines, and yes: it might take quite some work to reach that target - the question remains: does anyone care? ... :)
Have all a nice Sunday!
Regards Wolfgang
On 09/10/17 21:03, Wolfgang Pfeiffer wrote:
On Sun, 2017-09-10 at 11:37 +0100, Patrick O'Callaghan wrote:
On Sun, 2017-09-10 at 03:21 -0400, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
All that means is that you're running out-of-date systems on your servers.
It's also a strong hint that it's possible to have machines up and running for such a long time.
Sure..... If you never do any updates!
As we've pointed out, the versions shown above are EOL, as in End Of Life. They are not getting ANY updates. They are not getting a new kernel. They are not getting security updates. They are not getting "maintained".
That's what this whole debate is basically about: less maintenance work and more usage of the machines - and to reach that, I (and probably quite a few people besides me) need at least fewer reboots. It's doable, see Bill Shirley's machines, and yes: it might take quite some work to reach that target - the question remains: does anyone care? ... :)
Seriously, all you have to do is not update until you want to and you can reduce the number of reboots that take all of a few minutes for most reasonably powered systems. Do updates once a month if you want. Maybe you want to keep an eye out for serious security updates that pop up from time to time. But if everything is working for you then you aren't obligated to apply updates simply because they are available.
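For the "keep an eye out for serious security updates" part, dnf can filter on advisory type, so a monthly routine could still apply just the security fixes in between full update runs - a sketch using documented dnf options:

```shell
# List security advisories that apply to installed packages:
dnf updateinfo list security

# Apply only the packages that carry a security advisory:
sudo dnf upgrade --security
```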
Have all a nice Sunday!
Already just about over here.
On Sun, 2017-09-10 at 21:26 +0800, Ed Greshko wrote:
On 09/10/17 21:03, Wolfgang Pfeiffer wrote:
On Sun, 2017-09-10 at 11:37 +0100, Patrick O'Callaghan wrote:
On Sun, 2017-09-10 at 03:21 -0400, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
All that means is that you're running out-of-date systems on your servers.
It's also a strong hint that it's possible to have machines up and running for such a long time.
Sure..... If you never do any updates!
I wouldn't recommend that. What I wanted to say was: give us the updates, make sure they're safely applied to a running system *and* remove the need to reboot. And yes, I know this is stuff from a still-distant future ...
Regards Wolfgang
On 09/10/17 21:55, Wolfgang Pfeiffer wrote:
On Sun, 2017-09-10 at 21:26 +0800, Ed Greshko wrote:
On 09/10/17 21:03, Wolfgang Pfeiffer wrote:
On Sun, 2017-09-10 at 11:37 +0100, Patrick O'Callaghan wrote:
On Sun, 2017-09-10 at 03:21 -0400, Bill Shirley wrote:
Just a couple of my servers:

[0:root@apinetstore2 ~]$ cat /etc/redhat-release
Fedora release 21 (Twenty One)
[0:root@apinetstore2 ~]$ uptime
02:18:00 up 949 days, 17:08, 1 user, load average: 0.21, 0.41, 0.44

[0:root@elvis ~]$ cat /etc/redhat-release
Fedora release 16 (Verne)
[0:root@elvis ~]$ uptime
02:19:02 up 553 days, 16:00, 4 users, load average: 0.20, 0.16, 0.14
It's usually a disk wearing out that forces a reboot.
All that means is that you're running out-of-date systems on your servers.
It's also a strong hint that it's possible to have machines up and running for such a long time.
Sure..... If you never do any updates!
I wouldn't recommend that. What I wanted to say was: give us the updates, make sure they're safely applied to a running system *and* remove the need to reboot. And yes, I know this is stuff from a still-distant future ...
IMO, you're making a mountain out of a mole hill.
Don't want to do reboots "too" often (with "too often" being subjective) then don't update "too" often.
Your system is up 24/7 and you are concerned about the few minutes of downtime while the rebooting is happening? Schedule the reboot while you're sleeping. You do sleep, yes? Or while you eat lunch. You must eat.
You have systems providing vital services 24hrs/day to people outside of your local network and have service level agreements? Look into load balancing and/or fail-over systems so you can update one system while not affecting the service.
Of course you do understand that the software provided by Linux distributions is open-source and written by a vast number of people with no central control. I mean there isn't a central authority that can demand and enforce the edict "remove the need to reboot".
So, simply define your goals for how you want to maintain your system and develop your procedures to meet these goals.
are not getting ANY updates. They are not getting a new kernel. They are not getting security updates. They are not getting "maintained".
They ARE being maintained. It's possible to update some rpms without updating the release: https://www.spinics.net/linux/fedora/fedora-users/msg477184.html
dnf updates can go wrong: https://www.spinics.net/linux/fedora/fedora-users/msg476574.html I had that happen to the server sitting behind me. It would have been much harder to recover from if the server, my eyes, and my hands were across the country.
I once 'talked' someone through recovering from both drives failing in a md mirror. We replaced one drive and then the other failed. It was probably a heat problem since one or more case fans had failed. I actually never spoke to him (he has a thick accent which is hard to understand); we just conversed with SMS messages and screen shots.
Don't take it that I'm recommending not to update. (Kids, don't try this at home). You just have to be cautious what you do to a server that has to be up 25 hours a day, 8 1/2 days a week, and 365 days a year which is hundreds of miles away. :-)
I had one server that had 1100+ days uptime until the operator rebooted the wrong server in the cluster. This was a couple of years ago so, add about 730 days to that. We finally had to reboot last month because of a failing hard drive. I know you'll think I'm lying, but it was a Seagate Barracuda. =-O
Bill
On Sun, 2017-09-10 at 12:22 -0400, Bill Shirley wrote:
Don't take it that I'm recommending not to update. (Kids, don't try this at home). You just have to be cautious what you do to a server that has to be up 25 hours a day, 8 1/2 days a week, and 365 days a year which is hundreds of miles away. :-)
Once again, only you know your situation, but this is simply not a case in which I would be using Fedora.
I had one server that had 1100+ days uptime until the operator rebooted the wrong server in the cluster. This was a couple of years ago so, add about 730 days to that. We finally had to reboot last month because of a failing hard drive. I know you'll think I'm lying, but it was a Seagate Barracuda. =-O
I can believe that. I have a NAS that came with 2 of those. Luckily I had them in a RAID-1 (mirror) configuration, because first one of them failed and a few months later so did the other one.
poc
On Sun, 2017-09-10 at 15:03 +0200, Wolfgang Pfeiffer wrote:
All that means is that you're running out-of-date systems on your servers.
It's also a strong hint that it's possible to have machines up and running for such a long time.
That isn't news. Anyone who has used or administered Unix/Linux for the last 4 decades or so, as I have, knows this. The question is whether you want to actually maintain your system in a stable and secure condition, or just try for some meaningless uptime record. If it's the former, you'll update it when it's prudent to do so, which of course depends on your specific situation.
That's what this whole debate basically is about: less maintenance work and more usage of the machines - and to reach that I (and probably quite a few more than just me) need at least less reboots. It's doable, see Bill Shirley's machines, and yes: it might need quite some work to reach that target - question remains: does anyone care? ... :)
Speaking personally, no I don't care. I have never used the Gnome update system and cannot imagine why I ever would, but it no doubt works for some people. OTOH I do reboot my personal machine quite often as I update it every morning using dnf. That's my choice. The reboot generally takes about 30 seconds, unless I'm running a Windows VM in which case I usually try to shut it down properly, which can take a long time. If I were administering a mail and web service with several thousand users, as I once did, I simply wouldn't be using Fedora but CentOS or some other LTS distro. And I would still reboot it when necessary, after a judicious advisory period.
poc
On 09/10/2017 07:07 AM, Patrick O'Callaghan wrote:
as I update it every morning using dnf. That's my choice. The reboot generally takes about 30 seconds, unless I'm running a Windows VM in which case I usually try to shut it down properly, which can take a long time. If I were administering a mail and web service with several
If you're using KVM/QEMU, you don't need to shut down the VM. It will be paused for the reboot (memory saved) and then resumed when the server comes back up. It is a very nice feature and I think it's the default now, but obviously you should verify that before trying. The VM has a higher uptime than the host. :-)
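The suspend/resume behaviour described here is handled by the libvirt-guests service; a sketch of the relevant configuration (file path and option names as documented for libvirt - verify the defaults on your own system before relying on it, as Samuel says):

```shell
# /etc/sysconfig/libvirt-guests
ON_SHUTDOWN=suspend   # managed-save guest memory to disk on host shutdown
ON_BOOT=start         # restore the saved guests when the host comes back up

# The service has to be enabled for this to take effect:
# systemctl enable --now libvirt-guests.service
```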
On Sun, 2017-09-10 at 14:07 -0700, Samuel Sieb wrote:
On 09/10/2017 07:07 AM, Patrick O'Callaghan wrote:
as I update it every morning using dnf. That's my choice. The reboot generally takes about 30 seconds, unless I'm running a Windows VM in which case I usually try to shut it down properly, which can take a long time. If I were administering a mail and web service with several
If you're using KVM/QEMU, you don't need to shut down the VM. It will be paused for the reboot (memory saved) and then resumed when the server comes back up. It is a very nice feature and I think it's the default now, but obviously you should verify that before trying. The VM has a higher uptime than the host. :-)
Would that it were so simple :-) The VM is running VFIO passthrough for a second GPU which I use for gaming. The state of the GPU will not be saved by freezing the VM, even when a game is not actually running. Windows doesn't have a "hibernate" feature except for laptops, and there doesn't appear to be a way of convincing it that the VM is a laptop (the GPU drivers are a dead giveaway). Thus killing the libvirtd process is equivalent to a system reset AFAIK.
poc