On Mon, Mar 03, 2014 at 11:28:49PM +0100, "Jeremia Bär" wrote:
> I'm not sure I get the difference here. As I understand, optimization includes
The difference is that my (2) only requires programming skills,
while (1) would require familiarity with the application domain.
A trivial example: say you have a vector of 1024 floating-point
values and you need to compute log10 of all of them. In a general-purpose
routine you have no choice but to test each value x for
x > 0.0, and that is something we like to avoid inside a loop or
in vector code.
But if you know the application domain, you may know that all values
will be > -0.001, and that adding 0.002 to them won't affect the
result in any way that matters in practice. So you can just compute
log10 (x + 0.002) and remove the test.
In real-world cases such changes may be much more invasive.
For example, when designing a demodulator/decoder for a telecom
system, you will have a 'degradation budget', say 0.1 dB. This
means that your algorithm is allowed to perform as if the S/N
ratio of the input signal was 0.1 dB less than it really is.
You're free to spend that 0.1 dB wherever you want. But you
can do this only if you understand the consequences of e.g.
using a less accurate computation at some point, and are able to
demonstrate (by analysis) that you remain within the budget by
doing so. This requires understanding the algorithm at a much
deeper level than would be required to code it given a detailed
> We will have to submit code to a university-internal repository and it will run
No, but I can't reveal the actual way it will be used.
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Linux-audio-dev mailing list