On Sat, 2010-02-06 at 21:36 +0100, Emanuel Rumpf wrote:
I can only say what I have done, and I must point out _very_ _carefully_
that I know _absolutely_ _nothing_ about RT performance on any other system.
The box is a T3-P5945GC:
... with an Intel 945GC chipset, AD1988B HDA audio and an E1200 (a 1.2 GHz
"Conroe"-style dual-core Celeron).
The GPU is a pretty quiet, slightly overclocked Nvidia GT220 with DDR3.
(And apparently the GT240 model with GDDR5 is a lemon. Avoid!)
Kernel is the 184.108.40.206-rt19, configured as per the Mandriva repositories
(optimized for Core2 as the only change)
> Where is the switch that would tell ladspa/dssi to use the GPU for processing ?
That "switch" ideally would come from somewhere here on LAD. The problem is
mostly stirring up enough momentum to get something going. There might be
some fear of jumping into the unknown, but look: here is one more
programmer who has had it with SSE and is not planning on upgrading to a
four-way $2K server any time soon:
A realtime Mandelbrot zoomer in SSE assembly and CUDA
In which of those two codepaths would you rather spend your spare time?
Which one looks more civilized? Just wondering ...
A reasonable expectation for RT performance is about 1/3 of what Nvidia
quotes as peak, so 60 - 75 GFlops in the case at hand. That translates to
roughly $1 per GFlop (including a healthy chunk of fast memory). Hard to
beat!
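To make the arithmetic explicit: the 1/3-of-peak rule of thumb and the
$1/GFlop figure are the claims above; the implied peak and card price
below simply follow from them (they are not quoted specs or prices):

```python
# Sanity check of the GFlop numbers above. Only the 1/3-of-peak
# rule and the 60-75 GFlop sustained range come from the post;
# the rest is derived arithmetic.

def sustained_gflops(peak_gflops, fraction=1.0 / 3.0):
    """Expected sustained RT throughput as a fraction of quoted peak."""
    return peak_gflops * fraction

# Working backwards: 60-75 GFlops sustained implies a quoted peak
# of roughly 180-225 GFlops for this class of card.
low, high = 60.0, 75.0
implied_peak = (low * 3, high * 3)
print(implied_peak)

# At $1 per sustained GFlop, the card would have cost about $60-75.
print(low * 1.0, high * 1.0)
```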
The 0.3 msec turnaround time depends on working from the console rather
than under the X server. This might have been different with dedicated
cards, one for audio and one for video, but - unfortunately - I have only
one PCIe slot, so there is nothing I can really know or tell about that ...
Working from within Gnome on a single card, 3 × 1.3 (+1.3) msec works;
double that to be on the safe side.
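For reference, a 1.3 msec period matches a common JACK configuration; a
quick check (the 48 kHz sample rate and 64-frame period size are my
assumptions to reproduce that figure, not values stated in the post):

```python
# Sanity check on the latency figures above. The 48 kHz rate and
# 64-frame period are assumed values that yield the ~1.3 msec
# period time mentioned; they are not from the post itself.

def period_ms(frames, rate_hz):
    """Duration of one audio period in milliseconds."""
    return frames / rate_hz * 1000.0

p = period_ms(64, 48000)   # ~1.33 msec per period
print(round(p, 2))

# "3 x 1.3 (+1.3) msec": three periods of buffering plus one more,
# i.e. roughly 5.3 msec of round-trip latency on a single card.
print(round(3 * p + p, 2))
```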
[OK... That might be enough CUDA advocacy for tonight? :-D]
Linux-audio-dev mailing list