Do we have any comments on the sound server topic from a broader perspective than just GNOME?
A competing proposal I've heard is to just use ALSA directly.
I've got some notes on how to get hardware-accelerated/network-transparent mixing sorted out with minimal effort, but unfortunately I don't have them here. I could hopefully post them in a few days.
Basically:
- Write an alsa-lib plugin that by default just forwards the audio to the slave device (i.e. it's a no-op), but that, when a magic environment variable or similar is set, also takes the audio and writes it to a socket created somewhere, with the name == the pid of the process, e.g.
/tmp/audio-mike/12345
The alsa-lib configuration format allows passing environment variables as plugin parameters, i.e. it's all quite flexible with minimal policy hardcoded (see the config sketch below the pipeline).
Typically, therefore, you'd have the following alsa-lib plugin pipeline:
default -> as-yet-unnamed-plugin -> [dmix] -> plughw:0,0
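To make that concrete, a minimal ~/.asoundrc sketch might look something like the following. The plugin name "audio_tee" and its "socket_dir" option are invented here (the plugin doesn't exist yet); as far as I know the @func getenv construct is the standard alsa-lib way of pulling an environment variable into a plugin parameter:

    pcm.!default {
        type plug
        slave.pcm "audio_tee"
    }

    # "audio_tee" == the as-yet-unnamed-plugin; name and options are hypothetical
    pcm.audio_tee {
        type audio_tee
        socket_dir {
            @func getenv
            vars [ MAGIC_AUDIO_DIR ]
            default ""          # empty => plain pass-through, no socket
        }
        slave.pcm "dmix"        # dmix only needed on cards without hw mixing
    }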
- Have a simple GStreamer app that monitors $MAGIC_DIRECTORY and, when a new readable socket appears, connects to it and starts pulling the audio straight from the app. You can then feed it through any GStreamer pipeline you like, e.g.:
fdsrc ! vorbisenc ! oggmux ! tcpsink
(I don't know the exact plugin names.) The net result is that sound-server policy is very flexible: you can have a dedicated "connect to me" style server, you can stream the audio via SSH or via X, or you can forward it on a case-by-case basis.
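To sketch what that could look like in practice (gst-launch syntax; the element names, caps and host/port are plausible guesses only, not tested):

    socat UNIX-CONNECT:/tmp/audio-mike/12345 - | \
      gst-launch fdsrc fd=0 \
        ! audio/x-raw-int,rate=44100,channels=2,width=16,depth=16,signed=true,endianness=1234 \
        ! audioconvert ! vorbisenc ! oggmux \
        ! tcpclientsink host=remotebox port=4953

The socat step is just one easy way to turn the per-app socket into a file descriptor gst-launch can read from stdin; a real monitoring app would of course connect to the socket itself and build the pipeline programmatically.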
- The key point is that in the common case of no network transparency you can take advantage of dmix mixing (so you get instant interop with all other ALSA-supporting apps, and with OSS apps via the ALSA LD_PRELOAD wrapper), and if you have a sound card that doesn't suck you can use hardware-accelerated mixing/resampling, as ALSA supports that natively.
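(The LD_PRELOAD wrapper I mean is the one shipped in alsa-oss; as far as I recall the usual invocation is something like:

    aoss some-oss-only-app      # roughly: LD_PRELOAD=libaoss.so some-oss-only-app

where some-oss-only-app is any legacy app that talks to /dev/dsp directly.)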
- One issue I haven't considered is syncing with X, because I don't know anything about that. How often do people play movies over a remote X connection anyway?
The main catch is that it doesn't exist; I just made it up. Somebody would have to learn ALSA plugin programming and then write as-yet-unnamed-plugin. I don't have time, sorry :(
The other major catch is that obviously it's not portable. To be frank I don't care about this whatsoever: IMHO audio mixing/resampling/network transparency is *not* a desktop-level problem, it's an operating-system-level problem. If the sound systems of other operating systems don't support automatic software mixing or network transparency, then too bad. However, I suspect a proposal based on leveraging ALSA would go down like a ton of bricks in most portable desktop projects. There's nothing inherently unportable about the ALSA API, but people would still object.
Final catch: in Fedora Core 2 at least, ALSA support still seems a bit buggy. I found that running XMMS for long periods with the ALSA plugin caused a huge memory leak, and after playing music for a while, noise and corruption started leaking into the audio.
Here is the ALSA dmix bug:
http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=130593
thanks -mike