Are there any reviews or benchmarks on the performance of this technology? I'm rather curious, because enabling it in CoreAVC actually caused my videos to stutter (they play smoothly without it), which goes against the whole "hardware acceleration is always faster than software" belief.
-
I know that people use it to program GPUs to accelerate research code, sometimes by a couple of orders of magnitude in the best cases, but I don't know of any consumer applications.
-
Those people probably have a GTX 295 in quad SLI or something. Hardly representative of the average person, much less the average notebook.
But still, this is quite surprising. The technology isn't exactly new, yet CoreAVC is the one and only consumer-level program that uses it? -
Peter Bazooka, Notebook Evangelist
Tom's Hardware had an article about CUDA performance. Not sure if it's what you're looking for, but it's worth a read anyway.
http://www.tomshardware.com/reviews/nvidia-cuda-gpgpu,2299.html -
CUDA on: fps: 299.3, dfps: 64.7
CUDA off: fps: 666.5, dfps: 71.6
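For scale, a quick back-of-the-envelope check on those numbers (the fps figures are from the run above; the 24 and 29.97 FPS targets are standard playback frame rates):

```python
# Decode throughput reported for the same clip with CUDA on/off.
cuda_on_fps = 299.3   # GPU-assisted decode
cuda_off_fps = 666.5  # CPU-only decode

# The CPU path is more than twice as fast here...
print(f"CPU/GPU ratio: {cuda_off_fps / cuda_on_fps:.2f}x")  # 2.23x

# ...but both comfortably exceed real-time playback targets,
# so either path plays the video smoothly.
for target_fps in (24.0, 29.97):
    assert cuda_on_fps > target_fps and cuda_off_fps > target_fps
print("both paths exceed real-time playback")
```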
The GPU is a GeForce 9600M GT and the CPU is a P8600. From the results, the CPU can process the video more than twice as fast as the GPU, but it doesn't really matter, as most videos are encoded at 24 or 29.97 FPS. -
CUDA enables you to run general-purpose parallel processing tasks on the GPU. If you have a task that scales linearly with core count (the optimal case), a CUDA implementation will easily outperform a multicore CPU implementation, simply because almost all CUDA cards pack more raw punch than even a quad-core CPU (the number of FLOPS involved in rendering - or heck, even merely shading - one frame of Crysis is an order of magnitude beyond what the top CPUs can offer). What people choose to do with CUDA, and how they implement it, determines the actual boost.
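A toy Python sketch (hypothetical, just to illustrate the "scales linearly" idea): an embarrassingly parallel workload is one where each output element depends only on its own input, so the work splits cleanly across CPU cores - or across thousands of CUDA threads.

```python
# Stand-in for per-element work (think: shading one pixel).
def shade(x):
    return (x * x) % 255

data = range(1_000_000)

# Serial version. On a GPU, one CUDA thread would handle each element;
# with no dependencies between elements, throughput scales with the
# number of processing units you can throw at it.
result = [shade(x) for x in data]
print(result[:5])  # [0, 1, 4, 9, 16]
```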
-
You have a much more powerful graphics card than I do (8400M GS here), which is probably why you don't have any trouble with CUDA. By the way, what's dfps?
-
I don't know what dfps stands for, but I believe fps is the number of frames per second the processor can decode, while dfps is the number of frames per second actually delivered to the screen.
I'm using 'timecodec.exe'. You can search for it, but I haven't found any documentation. -
The difference you see will depend mostly on the graphics card you have and how well the application you're using is coded. That said, it makes a huge difference when I start cracking hashes of random words I put into a generator:
Brute force on the CPU (dual-core 2.8 GHz): about 10 million hashes per second
Brute force on the GPU (Nvidia GTX 260M): about 2 billion hashes per second
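Those rates work out to roughly a 200x speedup. A quick illustration of what that means in practice (the 8-character lowercase keyspace is my assumption, not from the post):

```python
cpu_rate = 10_000_000     # ~10 million hashes/sec (dual-core 2.8 GHz)
gpu_rate = 2_000_000_000  # ~2 billion hashes/sec (GTX 260M)

print(f"GPU speedup: {gpu_rate // cpu_rate}x")  # 200x

# Hypothetical keyspace: all 8-character lowercase passwords,
# 26**8 candidates in total.
keyspace = 26 ** 8
print(f"CPU: {keyspace / cpu_rate / 3600:.1f} hours")  # ~5.8 hours
print(f"GPU: {keyspace / gpu_rate / 60:.1f} minutes")  # ~1.7 minutes
```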
Yeah, really -
I don't know about you guys, but for me on an 8800M GTX, decoding 1080p video for playback via CUDA and CoreAVC is absolutely amazing. Zero stuttering. Not that there was any on the CPU, but hey... it's still neat! And it's even cooler to see a 1080p video playing while the CPU isn't doing anything (0%-2% CPU usage).
-
I use Badaboom to encode 720p/1080p videos for my WDTV.
The results are amazing. For instance, I can encode a 720p movie using reasonably high settings in about an hour.
Nvidia Cuda
Discussion in 'Gaming (Software and Graphics Cards)' started by Peon, Sep 6, 2009.