On Sat, Dec 10, 2011 at 01:10:38PM +0100, Philipp Überbacher wrote:
> > 2. Players like VLC have a normalize function. I don't know if it's
Huh? I cannot find anything on this website claiming that replaygain
performs normalization. What it does say is:
"Although music is encoded to a digital format with a clearly defined
maximum peak amplitude, and although most recordings are normalized
to utilize this peak amplitude, not all recordings sound equally loud.
This is because once this peak amplitude is reached, perceived loudness
can be further increased through signal-processing techniques such as
dynamic range compression and equalization"
I have to admit I'm not too familiar with the details of replaygain, but
I'm not aware that it actually does compression or equalization. On the
contrary, they explicitly state:
"The player reads the corresponding gain metadata value from the file
and scales the audio data as appropriate. Scaling the audio data simply
means multiplying each sample value by a constant value."
That's gain, nothing more, nothing less.
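In code, that "scaling" is nothing but a dB-to-linear conversion and a
per-sample multiply. A minimal sketch (function names are mine, not from
any actual replaygain library):

```python
def gain_to_scale(gain_db):
    """Convert a gain in dB to a linear amplitude factor."""
    return 10.0 ** (gain_db / 20.0)

def apply_gain(samples, gain_db):
    """Multiply every sample by the one constant derived from gain_db."""
    scale = gain_to_scale(gain_db)
    return [s * scale for s in samples]
```

No filtering, no dynamics: one constant per track, applied to every
sample.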
According to the wiki, they store four values:
1. Peak track amplitude
2. Peak album amplitude
3. Track replay gain
4. Album replay gain
We can directly forget about the album values, since we're talking
about individual tracks here.
Let's assume we're brave and normalize to 0 dBFS: then 0 dBFS is,
obviously, the resulting peak track amplitude, and the required scaling
factor is simply the inverse of the stored peak.
Asking for additional replay gain would simply cause distortion.
Long story short: only the peak track amplitude is useful information
here, if you don't want to apply automatic gain control or read the
entire file on every playback just to determine the correct gain for
normalization.
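Putting that together: computing the normalization gain from the stored
peak, and capping any requested replay gain so the scaled peak cannot
exceed full scale. A sketch with hypothetical names (this is not any
actual tag format):

```python
import math

def normalization_gain_db(peak_amplitude, target_dbfs=0.0, eps=1e-12):
    """Gain (dB) that scales the stored peak up to the target level.

    peak_amplitude: linear peak from the tag, where 1.0 == 0 dBFS.
    """
    peak_dbfs = 20.0 * math.log10(max(peak_amplitude, eps))
    return target_dbfs - peak_dbfs

def clamp_replay_gain(replay_gain_db, peak_amplitude):
    """Limit a requested replay gain so the peak stays at or below 0 dBFS."""
    return min(replay_gain_db, normalization_gain_db(peak_amplitude))
```

E.g. a track with a stored peak of 0.5 gets about +6 dB; asking for more
than that is exactly the "additional replay gain causes distortion" case.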
> > There are special tools. If nothing helps, vlc and mplayer can do this.
Sure, libavformat from ffmpeg or libav, whatever you prefer.
> > > After that, the scanning process should work as with any audio file.
It's probably not that simple. If the container doesn't provide the
possibility to add sane metadata, you'd be lost. Likewise, there might
be multiple audio streams (stereo, multichannel, different languages).
You'd need a way to relate to those substreams from within your global
metadata.
It could be easier to work on the audio streams directly, but this would
require re-muxing the file, causing subtle problems like A-V sync drift
or the necessity to write arbitrary containers (MPEG, MPEG-TS, MP4,
ogv...).
ffmpeg/libav might help.
Either way, it's complex. ;)
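If you end up scripting this, shelling out to ffprobe/ffmpeg is the path
of least resistance for the substream part at least. A rough sketch
(file names and the stream index are placeholders; both tools must be
installed):

```python
import json
import subprocess

def ffprobe_cmd(path):
    """Command line asking ffprobe for the audio substreams as JSON."""
    return ["ffprobe", "-v", "quiet", "-print_format", "json",
            "-show_streams", "-select_streams", "a", path]

def extract_cmd(path, index, dest):
    """Command line copying audio substream `index`, no re-encoding."""
    return ["ffmpeg", "-y", "-i", path, "-map", f"0:a:{index}",
            "-c:a", "copy", dest]

def list_audio_streams(path):
    """Run ffprobe and return its parsed list of audio streams."""
    out = subprocess.run(ffprobe_cmd(path), capture_output=True,
                         text=True, check=True).stdout
    return json.loads(out)["streams"]
```

The `-c:a copy` avoids the re-encode, but note this is exactly the
re-muxing case discussed above, with all its caveats.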
[jackd connection handling]
> If you have a good idea, please tell me. I'll have to find a team on
I'm still looking for somebody to rewrite hdspmixer. ;) While we're at
it, how about an HTML-based approach: you fire up your browser, either
have a matrix mixer for everything or can select individual output
buses, and then have the input/playback faders for this destination
only. (I have a couple of details, if you want to go this route)
Another idea: a P1722 streamer; no idea if Christoph Kuhr is still
working on that.
Or ask Paul if he needs some help with jack3. ;)
Linux-audio-dev mailing list