http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w...
What is your comment?
On Tue, Feb 17, 2009 at 2:19 PM, Valent Turkovic valent.turkovic@gmail.com wrote:
http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w...
What is your comment?
I think if someone proposed a patch which tweaked some kernel parameters as part of the desktop kickstart, it'd be reasonable to consider. I'd definitely agree with him that the default desktop installation should be tuned for responsiveness over throughput.
Colin Walters (walters@verbum.org) said:
I think if someone proposed a patch which tweaked some kernel parameters as part of the desktop kickstart, it'd be reasonable to consider. I'd definitely agree with him that the default desktop installation should be tuned for responsiveness over throughput.
Well, we could just turn off swap entirely, which obviates the issue (at the expense of other issues.)
Bill
On Tue, Feb 17, 2009 at 2:38 PM, Bill Nottingham notting@redhat.com wrote:
Well, we could just turn off swap entirely, which obviates the issue (at the expense of other issues.)
On the face of it, that seems a bit more radical to me than tuning some kernel parameters, though I won't claim deep knowledge about the kernel parameters in question. This reminds me, if someone were to propose this patch I'd suggest it require signoff from the Fedora kernel people, even if it is just for the desktop image.
On Tue, Feb 17, 2009 at 8:38 PM, Bill Nottingham notting@redhat.com wrote:
Well, we could just turn off swap entirely, which obviates the issue (at the expense of other issues.)
merge the swap prefetch patch (upstream)
Bill Nottingham notting@redhat.com writes:
Well, we could just turn off swap entirely, which obviates the issue (at the expense of other issues.)
Systems where I have done that have generally been thrashing themselves to death. When there is no swap at all, the kernel can't move any malloc'ed or otherwise anonymous data to disk, which means that file-backed data will be competing for fewer and fewer pages as the amount of available RAM shrinks.
Basically, no swap at all means that a simple memory leak can completely kill the system.
Soren
On Tue, Feb 17, 2009 at 2:19 PM, Valent Turkovic valent.turkovic@gmail.com wrote:
http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w...
What is your comment?
...this article is wrong in enough factual areas that it's hard to comment (just because an explanation feels right doesn't make it true, it just makes it 'truthy'). But one thing is correct: Linux desktop performance has gotten sluggish. It's not due to, e.g., swap. I have no swap on any of my machines and haven't for years.
It's because a) virtually everything is backed by a db today (firefox is a pig because every keypress fires off multiple database queries), b) filesystems have finally turned on barriers to avoid most cases of 'I lost a lot of data after a power outage', and c) the 'Wings Fall Off' buffercache serialization bug that showed up sometime after 2.6.15. You think sluggish is bad? Try 'my buffercache filled up while rendering video and the machine wouldn't even ping for a week'.
These problems all feed each other.
Monty
On Tue, Feb 17, 2009 at 2:38 PM, Christopher Montgomery xiphmont@gmail.com wrote:
It's because a) virtually everything is backed by a db today (firefox is a pig because every keypress fires off multiple database queries)
I believe there's some work in Firefox 3.1 to make these queries asynchronous. The primary issue is the amount of I/O they're doing in the mainloop thread, which is always a big user experience mistake in desktop applications.
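The mainloop-I/O point is worth a concrete illustration. A toolkit-agnostic sketch (the names are mine; nothing here is Firefox's actual code) of pushing blocking queries onto a worker thread so the UI loop never stalls:

```python
import queue
import threading

def start_io_worker():
    """Run blocking I/O on a background thread so the mainloop
    never stalls waiting on the disk."""
    requests = queue.Queue()
    results = queue.Queue()

    def worker():
        while True:
            job = requests.get()
            if job is None:            # sentinel: shut the worker down
                break
            func, args = job
            results.put(func(*args))   # the blocking work happens here

    threading.Thread(target=worker, daemon=True).start()
    return requests, results

# The mainloop enqueues the query and keeps painting; it collects
# the answer later (via polling or an idle callback) rather than
# blocking on it.
requests, results = start_io_worker()
requests.put((sum, ([1, 2, 3],)))      # stand-in for a slow DB query
print(results.get())                   # prints 6
requests.put(None)
```

In a real toolkit the `results.get()` would of course be an idle/event callback, not a blocking read; the point is only that the slow call never runs on the thread that paints.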
On Tue, Feb 17, 2009 at 8:38 PM, Christopher Montgomery xiphmont@gmail.com wrote:
The author asked to be corrected, so it would be helpful to point out where he is wrong. But please also share whatever you know about making the Linux desktop more responsive.
Cheers, Valent.
On Tue, 2009-02-17 at 20:19 +0100, Valent Turkovic wrote:
http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w...
What is your comment?
If we really thought this was true, it would be straightforward enough to bump the mlock limits for users and get some of the high-touch apps to lock their text sections. I can add this to the X server tomorrow trivially even without that (the joys of being root).
I'm not _that_ convinced. I mean, the way to measure this is to look at the io trace hooks and see what you end up reading in. I'd be mildly surprised if it was text sections.
- ajax
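The pinning mechanism ajax refers to is mlockall(2)/mlock(2). A minimal sketch of the "lock it in RAM" half, via ctypes on Linux; without root or a raised RLIMIT_MEMLOCK the kernel will simply refuse, which the sketch reports rather than hides:

```python
import ctypes
import os

MCL_CURRENT = 1   # lock every page already mapped
MCL_FUTURE = 2    # and every page mapped from now on

def lock_all_memory():
    """Try to pin this process's pages in RAM via mlockall(2).
    Returns True on success, False when the kernel refuses -- the
    usual outcome for unprivileged processes, whose RLIMIT_MEMLOCK
    defaults to a few tens of KiB."""
    try:
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
    except OSError:
        return False   # no glibc: not a Linux system
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0:
        return True
    print("mlockall failed:", os.strerror(ctypes.get_errno()))
    return False

print("locked:", lock_all_memory())
```

This is why bumping the mlock limits for users would be a prerequisite: with the default limit, the call above fails for ordinary desktop apps.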
2009/2/17 Adam Jackson ajax@redhat.com:
If we really thought this was true, it would be straightforward enough to bump the mlock limits for users and get some of the high-touch apps to lock their text sections. I can add this to the X server tomorrow trivially even without that (the joys of being root).
I'm not _that_ convinced. I mean, the way to measure this is to look at the io trace hooks and see what you end up reading in. I'd be mildly surprised if it was text sections.
Ok, I only skimmed his article initially, I thought his argument was basically that it's better for interactivity to have a smaller buffer cache than to (preemptively or not) page out application sections (be that text, or stack/heap).
Certainly in the default configuration, the heap can be paged out, no? I think by "Prioritize code." he really means "whatever the app needs to respond to user input".
This is apparently not a new debate: http://kerneltrap.org/node/3000
Though big picture if you're swapping very much you've basically lost. So the biggest wins here definitely involve fixing applications (like federico's work on image caching and jemalloc in Firefox, alex's recent blog about tracking down extra nautilus heap usage).
On Tue, 2009-02-17 at 16:15 -0500, Colin Walters wrote:
Though big picture if you're swapping very much you've basically lost. So the biggest wins here definitely involve fixing applications (like federico's work on image caching and jemalloc in Firefox, alex's recent blog about tracking down extra nautilus heap usage).
There have been a couple of other ideas along these lines that I've been kicking around for a while. I'm not taking credit, certainly these aren't revolutionary, but I do think they'd be worthwhile.
* Memory pressure signal from the kernel. If the kernel gets within (say) 5% of needing to evict something from memory to satisfy an allocation, it could mark some fd readable, and then apps could voluntarily shrink their caches. If the time to recreate from source is less than the cost of swapping, this wins; think JPEG to pixmap expansion cost here.
* Casual pixmaps in X. Normally we have to hold on to pixel data come hell or high water. Firefox could reasonably create its pixmaps through some other channel if it knew that it had the source data still to work with; this would give X the ability to respond to the above pressure signal sanely.
* Compressed image transport in X. We did have this at one point but it wasn't a big performance win in terms of raw drawing speed. But for memory pressure? Maybe worth it.
- ajax
Colin Walters wrote:
Ok, I only skimmed his article initially, I thought his argument was basically that it's better for interactivity to have a smaller buffer cache than to (preemptively or not) page out application sections (be that text, or stack/heap).
The down-side, of course, is that less buffering will slow down whatever is trying to do I/O, which can cause the very responsiveness issues you're trying to fix.
Certainly in the default configuration, the heap can be paged out, no? I think by "Prioritize code." he really means "whatever the app needs to respond to user input".
I think the default configuration is to reserve 40% of memory for buffering, and the rest for application memory (there is a kernel parameter to tune it, I forget what though).
Hmm... will quantum memory let us store both buffer AND app memory in RAM, such that the system chooses which is actually read (thereby "destroying" the other)? Because that's what we really need... otherwise you don't know if it's better to keep that file you just read, or the app memory that hasn't been touched in 30 minutes.
If you just read in a .cpp in a mass build (say, something the size of KDE), chances are you don't need it again... especially when the user goes back to writing that letter he stopped working on 30 minutes ago. Or maybe the user won't work on the letter and that file is the database the user is currently working with. The point is, there isn't a way to /know/, so the kernel has to just guess, and it favors (in its current configuration) new things.
Though big picture if you're swapping very much you've basically lost.
Yes, but for someone like me, you need a HUGE amount of RAM to avoid swapping. I build KDE and do digital photography. The former needs probably a few GB of ram, at least (when you account for file buffering, especially in massively-parallel builds). The latter also needs a few GB of memory, especially if working on multiple images. I'd say 16 GB is a good number, but not so many desktops have that much (not yet at least). (Netbooks certainly don't, but then, you probably shouldn't be doing that sort of workload on a netbook in the first place.) Even hard-core web browsing can eat upwards of 1 GB (lots of sites open, especially graphics-heavy ones).
IOW, planning how to swap /well/ is still important, IMO.
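The "kernel favors new things" guess described above is essentially recency-based (LRU-style) eviction. A toy model — not the kernel's real two-list active/inactive LRU — showing how a build streaming through source files evicts the long-idle letter:

```python
from collections import OrderedDict

class ToyPageCache:
    """Least-recently-used eviction: every access moves a page to
    the 'young' end; when the frames are full, the coldest page is
    dropped."""
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()

    def touch(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # recently used again
        else:
            if len(self.pages) >= self.frames:
                self.pages.popitem(last=False)  # evict the coldest page
            self.pages[page] = True

cache = ToyPageCache(frames=3)
cache.touch("letter.odt")            # the letter, then 30 min of silence
for src in ("a.cpp", "b.cpp", "c.cpp"):
    cache.touch(src)                 # the mass build streams through
print("letter.odt" in cache.pages)   # prints False: the letter lost
```

The toy makes the complaint concrete: with pure recency, a once-read .cpp the build will never touch again still outranks the letter the user is about to come back to.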
Matthew Woehlke wrote:
I may be totally "out in the weeds" with this comment, but here goes. Is it possible to set up a small app that would maintain a record of the swap/buffer usage patterns and set up a "sliding scale" that would move the swap priority based on the usage pattern of the logged-in user? I say this because different people tend to use their computers in different ways, as seen above. This would also allow a "starting point" for system tuning based on the amount of RAM and paging ratios. In the past I have had to do system tuning for Oracle DBs and know that different DB architectures require different tuning. It is a very technical art, generally beyond a nominal user. A usage-tracking app may go a long way toward "auto-tuning" based on the usage patterns of particular users.
Roy Bynum wrote:
I may be totally "out in the weeds" with this comment, but here goes. Is it possible to set up a small app that would maintain a record of the swap/buffer usage patterns and set up a "sliding scale" that would move the swap priority based on the usage pattern of the logged-in user?
Good question. I don't know enough to say whether it can track usage patterns, but my guess is it could. (At least if running as root; if not, I think it could only read the memory of processes belonging to the effective user, but since you say it should track that user's stuff anyway, that's a non-issue.) AFAIK the ratio is adjustable in real time. (It might need to be root to tweak the ratio, or else have an suid helper program. Although really, it's probably better to make the whole thing run as root so it is system-wide: on single-user systems it will mostly track the logged-in user anyway, but also account for system daemons; on multi-user systems, presumably you don't want to treat one user preferentially; and surely you don't want multiple instances contending over what to set the ratio to.)
Short answer: I think it's possible.
Usage patterns are a function of user /and time/. I assume such a program could be tuned to handle varying usage patterns as well.
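The "sliding scale" could work in exactly this way: sample swap traffic from /proc/vmstat, then (as root) write a new value to /proc/sys/vm/swappiness. A dry-run sketch — the mapping from swap rate to swappiness value is invented purely for illustration:

```python
def swap_activity(vmstat_path="/proc/vmstat"):
    """Total pages swapped in + out since boot, or None off-Linux."""
    counters = {}
    try:
        with open(vmstat_path) as f:
            for line in f:
                key, _, value = line.partition(" ")
                counters[key] = int(value)
    except (OSError, ValueError):
        return None
    return counters.get("pswpin", 0) + counters.get("pswpout", 0)

def pick_swappiness(pages_per_minute):
    """Map recent swap traffic to a vm.swappiness value (0-100).
    Heavier swapping -> bias the kernel toward keeping anonymous
    (application) pages.  The breakpoints are invented for the sketch."""
    if pages_per_minute > 1000:
        return 10     # thrashing: protect application memory hard
    if pages_per_minute > 100:
        return 30
    return 60         # the longtime kernel default

# A real daemon would loop: sample swap_activity(), diff against the
# previous sample, then (as root) write the chosen value into
# /proc/sys/vm/swappiness.  Here we only print the decision.
print("would set vm.swappiness =", pick_swappiness(0))
```

Tracking per-user patterns over time would mean keeping these samples keyed by logged-in user and hour, but the kernel-facing side stays this simple.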
Matthew Woehlke wrote:
Desktop systems tend to be single user and usage centric, which can change, while multiuser systems tend to be set up for a dedicated usage which does not change. The tuning application would be optional in both cases, with at least two different modes of operation. The single user would more likely use it in a transparent auto-tuning mode, while the administrator of the multiuser system would use it as a support tool in a non-auto-tuning, reporting-only mode.
One of the things that I have learned over the years is that what I don't know exceeds what I do know. I may know the utilization of my own systems and of those that I have supported, but there are probably quite a few usage patterns that I don't know about. If single-user systems were given the option of sending feedback to a development repository, and a "usefulness" reporting site provided feedback, that data could be used for making adjustments to the auto-tuning parameters. Then, in addition to the nominal testing done during development, other usage and utilization patterns could be accounted for.
This type of application would be useful for a broad range of implementations, and might help reduce some of the "art" in system tuning. Additionally, it could have a positive impact on "perceived" desktop performance over a broad range of environments.
Roy Bynum wrote:
Desktop systems tend to be single user
This seems to be changing, in my experience. Linux especially encourages multiple users.
My home system deals with two non-daemon users on a regular basis and occasionally three... and I'm the only human using it. Family computers will sometimes (and should /always/, TBH*) have different user accounts for each family member.
(* not just for security reasons, it's also practical; each user gets their own personalizations)
and usage centric, which can change, while multiuser systems tend to be set up for a dedicated usage which does not change.
You clearly haven't met some of the systems I use, which get used for anything and everything :-)... running IDEs, builds, stress testing... Even a "single-purpose" box for software QA can easily run the gamut of usage patterns.
The tuning application would be optional in both cases, with at least two different modes of operation. The single user would more likely use it in a transparent auto-tuning mode, while the administrator of the multiuser system would use it as a support tool in a non-auto-tuning, reporting-only mode.
Sure, but if it's well written, I don't see why you shouldn't be able to use it to auto-tune on a multi-user system too. Even on a "true" single-user system, you could use it as a "fire and forget" way to improve performance; you probably won't get the maximum benefit that way, but unless the program really sucks, it should still beat leaving the default settings.
At any rate, my previous point was mainly that it should be able to monitor the entire system (which likely requires elevated privileges). Since you mentioned monitoring at the system-level, we seem to agree on this.
One of the things that I have learned over the years is that what I don't know exceeds what I do know. I may know the utilization of my own systems and of those that I have supported, but there are probably quite a few usage patterns that I don't know about. If single-user systems were given the option of sending feedback to a development repository, and a "usefulness" reporting site provided feedback, that data could be used for making adjustments to the auto-tuning parameters.
That sounds like an interesting idea.
On Tue, Feb 17, 2009 at 8:19 PM, Valent Turkovic valent.turkovic@gmail.com wrote:
http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w...
What is your comment?
-- http://kernelreloaded.blog385.com/ linux, blog, anime, spirituality, windsurf, wireless registered as user #367004 with the Linux Counter, http://counter.li.org. ICQ: 2125241, Skype: valent.turkovic
As a long-time Linux desktop user and Linux enthusiast I want a bloody screaming-fast desktop :) There are situations where I just want to pull my hair out, watching desktop performance crawl to a halt :(
When I read articles like Tales from responsivenessland[1], I really don't get why there aren't bells ringing in the heads of the people who can actually make a difference to Linux desktop performance.
I was also really sad when I read the interview with Con Kolivas[2] and the reasons why he quit kernel development[3].
I hope kernel developers will wake up and realise that we desktop users exist too, and that what we need and want are responsive desktops.
Will Fedora be the first Linux distro to have sane desktop defaults (vm.swappiness=1 and vm.vfs_cache_pressure=50)? The current Fedora slogan is "Features. Freedom. Friends. First."; I hope to see "Desktop performance" as part of it soon ;)
[1] http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-w... [2] http://apcmag.com/interview_with_con_kolivas_part_1_computing_is_boring.htm [3] http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm
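For anyone who wants to try the proposed values without waiting for a distro default, both knobs are plain files under /proc/sys. A sketch that applies them at runtime (root required to actually write; otherwise it only reports). The values are Valent's proposal from the mail above, not an endorsed default:

```python
import os

# Valent's proposed desktop defaults (from the mail above):
DESKTOP_TUNING = {
    "vm/swappiness": "1",           # all-but-refuse to swap out app memory
    "vm/vfs_cache_pressure": "50",  # reclaim dentry/inode caches more gently
}

def apply_tuning(settings=DESKTOP_TUNING, root="/proc/sys", dry_run=True):
    """Write each knob to its file under /proc/sys (root required);
    with dry_run=True, just report what would change."""
    for knob, value in settings.items():
        name = knob.replace("/", ".")
        if dry_run:
            print("would set", name, "=", value)
            continue
        try:
            with open(os.path.join(root, knob), "w") as f:
                f.write(value)
            print("set", name, "=", value)
        except OSError as err:
            print("could not set", name, ":", err)

apply_tuning()   # dry run; as root, call apply_tuning(dry_run=False)
```

Settings made this way last until reboot; making them stick would mean a sysctl.conf entry, which is exactly the kickstart tweak discussed at the top of the thread.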
Valent Turkovic wrote:
Valent may have partially pointed at the issue of performance vs. features. As Microsoft users have discovered, the more active processes there are running, and the more pipes that interactive data such as email and internet traffic go through, the slower a system will run. As newer, more complex (read: more code required to be functional) applications and updates are applied, perceived performance continues to degrade. The load on a desktop system has expanded at a staggering rate; virtualization adds its own load to active desktops as well. And because of additional security-monitoring processes, older hardware should not be expected to perform at the level that it has in the past.
New hardware technology, such as higher-speed multi-core processors and 2- and 3-channel memory, is becoming more common and tends to be better able to handle the expanded processing and I/O load. But this does nothing for the majority of Linux users who are used to being able to use older hardware, yet want the features of the newer applications and functions.
Desktop performance comes down to a trade-off between perceived performance and the number of active features/processes, i.e. the amount of code to be executed, on common hardware. The proposed "auto-tuning" I/O manager may provide some assistance, but it also adds processing load of its own to the desktop.
Has anyone done any benchmarking on the amount of code, the granularity of the code, and processing performance? Has anyone benchmarked applications and versions in a way that might give insight into code-processing vs. hardware-performance issues?
desktop@lists.stg.fedoraproject.org