Sounds really interesting. The "coroner" part is something we've been
planning for ABRT for some time, and it would be great if we could share
the code. I'm a bit worried about the Perl part; that's something we
definitely don't want in the "agents", but I think it shouldn't be a
problem for the coroner part.
The coroner provides a web
interface that allows data mining of the incidents and allows owners of
applications to configure actions to take (e.g. "email me a daily digest of
any incidents", "send the incident data to a morgue for archival", "delete
all core dumps", "delete all but 3 core-dumps", etc.). The web interface
also shows which applications core-dump the most, and other boring statistics.
This is something I like a lot.
The system understands different kinds of core dumps, Linux and Solaris
crash dumps, and custom incident data (e.g. we have a tool we use when a
process is hung that straces and attaches gdb and does a variety of other
probes - that data can be collected in the same way as a coredump).
This also sounds really interesting; ABRT is currently only able to collect
post-mortem data, and it detects only crashes.
Obviously much of this makes sense within an organisation, but may not make
sense in a distributed world such as the typical use-case for crash-catcher.
However, it's possible that there is sufficient overlap that we could
share APIs or something along those lines.
I think it makes sense even for a distro. If we had the "coroner" part
somewhere in our infrastructure for reporting bugs, then the reports could
go through this "coroner" before they get to Bugzilla - so the coroner could
store the backtraces (or coredumps, if the user is brave enough to send
them :)) and then filter dupes based on analysis against the stored
backtraces... something like http://crash-stats.mozilla.com/
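To make the dedup idea concrete, here is a minimal sketch of one way it
could work, assuming backtraces have already been parsed into ordered lists
of function names (innermost frame first). The function names and the
"top-5-frames" heuristic are purely illustrative assumptions, not how the
coroner or crash-stats actually implements it:

```python
import hashlib

def dedup_key(frames, depth=5):
    """Build a duplicate-detection key from the top stack frames.

    Hashing only the top few frames tolerates noise deeper in the
    stack, so two crashes at the same spot collapse to one key.
    """
    normalized = "|".join(frames[:depth])
    return hashlib.sha1(normalized.encode()).hexdigest()

def is_duplicate(frames, seen_keys):
    """Check a new backtrace against keys of already-stored reports."""
    return dedup_key(frames) in seen_keys

# Hypothetical example: a second crash with the same top frames
# is flagged as a duplicate of the stored one.
seen = {dedup_key(["g_free", "gtk_widget_destroy", "main"])}
print(is_duplicate(["g_free", "gtk_widget_destroy", "main"], seen))  # True
```

A real implementation would also need to normalize things like inlined or
unresolved frames before hashing, but the basic "signature the top of the
stack" approach is the same idea crash-stats uses.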
Is there interest in exploring some kind of collaboration?
Sure, but we are now close to an important deadline, so we won't have
time for any deeper reading or learning about this for about 2 weeks.
Have a nice day,
Crash-catcher mailing list