Hi,
How hard would it be to assign sub-application-granularity "blame" for all the memory used by a full desktop (GNOME+Evolution+Mozilla+OO.org)?
Something like:
  5M  sum of per-app icon theme caching
  5M  sum of per-app base gtk_init() overhead
 10M  sum of per-email data in Evolution
  7M  base evo overhead with no mail loaded
 30M  sum of all executable pages (libraries and binaries)
 ...
i.e. try to get an idea of where focused optimization could have the most impact on the desktop overall - what percentage of TOTAL memory usage for the whole desktop can be blamed on each optimizable item, with sufficient granularity to be useful.
Havoc
Havoc Pennington wrote:
> Hi,
>
> How hard would it be to assign sub-application-granularity "blame" for all the memory used by a full desktop (GNOME+Evolution+Mozilla+OO.org)?
>
> Something like:
>
>   5M  sum of per-app icon theme caching
>   5M  sum of per-app base gtk_init() overhead
>  10M  sum of per-email data in Evolution
>   7M  base evo overhead with no mail loaded
>  30M  sum of all executable pages (libraries and binaries)
>  ...
>
> i.e. try to get an idea of where focused optimization could have the most impact on the desktop overall - what percentage of TOTAL memory usage for the whole desktop can be blamed on each optimizable item, with sufficient granularity to be useful.
>
> Havoc
You could use "pmap <pid>" to get a coarse view of the memory used by libraries on a per-process basis, but this doesn't tell you which functions allocated the space. You could combine the pmap data from different applications to get an overall view of space usage.
I have noticed that the "-x" option does not seem to work in the version of pmap that FC2 ships, so it doesn't provide RSS, Anon, or Locked information.
-Will
On Tue, 2004-06-01 at 18:05, Havoc Pennington wrote:
> Hi,
>
> How hard would it be to assign sub-application-granularity "blame" for all the memory used by a full desktop (GNOME+Evolution+Mozilla+OO.org)?
>
> Something like:
>
>   5M  sum of per-app icon theme caching
>   5M  sum of per-app base gtk_init() overhead
>  10M  sum of per-email data in Evolution
>   7M  base evo overhead with no mail loaded
>  30M  sum of all executable pages (libraries and binaries)
>  ...
>
> i.e. try to get an idea of where focused optimization could have the most impact on the desktop overall - what percentage of TOTAL memory usage for the whole desktop can be blamed on each optimizable item, with sufficient granularity to be useful.
I was once thinking about something like this. If the allocations made in each source file were bundled together, we'd get a pretty decent approximation of the memory use for a certain class of objects, since each GObject is typically implemented in one source file. If we also count the number of live objects of each GType and match the GType to its implementing source file, we'd have pretty good data about things like average memory use for each object type, total memory use for certain object types, and the number of live objects of each type.
This should be implementable by turning g_malloc into a macro that expands to something with __FILE__ in it, although it would require rebuilding the whole stack with that macro defined.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Alexander Larsson                      Red Hat, Inc
alexl@redhat.com   alla@lysator.liu.se
He's a globe-trotting guitar-strumming astronaut gone bad. She's a
bloodthirsty paranoid pearl diver married to the Mob. They fight crime!
On Tue, 2004-06-01 at 18:05, Havoc Pennington wrote:
> Something like:
>
>   5M  sum of per-app icon theme caching
>   5M  sum of per-app base gtk_init() overhead
>  10M  sum of per-email data in Evolution
>   7M  base evo overhead with no mail loaded
>  30M  sum of all executable pages (libraries and binaries)
>  ...
I don't think people agree with me, but in my opinion it is important to measure the working set. A program can malloc() 500 MB and then just sit in poll(), never touching that memory.
An approach that could give you something close to what you are after is to LD_PRELOAD a new malloc() for the entire desktop and have that new malloc() report a stack trace to another application that could then process the data:
- Calculate the total amount of memory used by applications:
    sum of all mmap()ed pages in physical RAM
  + sum of all anonymous pages in physical RAM

where "sum of all anonymous pages in physical RAM" is calculated, for each application, by subtracting the number of mapped pages in RAM from the RSS.
- Report like memprof does now, the amount of memory allocated by a function and its children. Divide all numbers by the total amount of memory used.
The amount of memory used by mmap()ed files is easy to measure:
- scan /proc and build a list of mmap()ed files
- mmap() those files
- use mincore() to find out how many pages of those files are actually in RAM
(I have a program that does this somewhere).
> i.e. try to get an idea of where focused optimization could have the most impact on the desktop overall - what percentage of TOTAL memory usage for the whole desktop can be blamed on each optimizable item, with sufficient granularity to be useful.
The above might give you something like what you are after. It would be possible to report at a filename granularity instead of function granularity, which might be interesting as Alex suggested.
Soeren
desktop@lists.stg.fedoraproject.org