The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Nvidia Cuda

    Discussion in 'Gaming (Software and Graphics Cards)' started by Peon, Sep 6, 2009.

  1. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    Are there any reviews or benchmarks of the performance of this technology? I'm rather curious because enabling it in CoreAVC actually caused my videos to stutter (whereas they play smoothly without it), which goes against the common belief that hardware acceleration is always faster than software.
     
  2. Dire NTropy

    Dire NTropy Notebook Deity

    Reputations:
    297
    Messages:
    720
    Likes Received:
    0
    Trophy Points:
    30
    I know that people use it to program GPUs to accelerate research code by a couple of orders of magnitude (in the best case), but I don't know of any consumer applications.
     
  3. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    Those people probably have a GTX 295 in quad SLI or something. Hardly representative of the average person, much less the average notebook.

    But still, this is quite surprising. The technology isn't exactly new, yet CoreAVC is the one and only consumer-level program that uses it?
     
  4. Peter Bazooka

    Peter Bazooka Notebook Evangelist

    Reputations:
    109
    Messages:
    642
    Likes Received:
    1
    Trophy Points:
    31
  5. namaiki

    namaiki "basically rocks" Super Moderator

    Reputations:
    3,905
    Messages:
    6,116
    Likes Received:
    89
    Trophy Points:
    216
    CUDA on: fps: 299.3, dfps: 64.7
    CUDA off: fps: 666.5, dfps: 71.6

    The GPU is a GeForce 9600M GT and the CPU is a P8600. From the results, the CPU can decode the video more than twice as fast as the GPU, but it doesn't really matter in practice, since most videos are encoded at 24 or 29.97 FPS and both are decoding far faster than that.
     
  6. Starfox

    Starfox Notebook Evangelist

    Reputations:
    81
    Messages:
    302
    Likes Received:
    0
    Trophy Points:
    30
    CUDA enables you to run general-purpose parallel processing tasks on the GPU. If you have a task that scales linearly with the number of cores (the optimal case), a CUDA implementation will handily beat a multicore CPU implementation, simply because almost all CUDA cards pack more punch than even a quad-core CPU (the number of FLOPS involved in rendering - or heck, even merely shading - one frame of Crysis is an order of magnitude beyond what the top CPUs can offer). What people choose to do with CUDA and how they implement it determines the actual boost.
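
    As an illustrative sketch of what such a linearly scaling task looks like in CUDA (hypothetical code - the kernel name, array size, and launch configuration are all made up for the example):

    // Sketch of a linearly scaling CUDA task: y = a*x + y (SAXPY),
    // one GPU thread per array element. Hypothetical example, not from
    // any particular application discussed in this thread.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                       // last block may run past n
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;           // 1M elements
        size_t bytes = n * sizeof(float);

        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // 256 threads per block, enough blocks to cover all n elements.
        // Every one of the ~1M multiply-adds is independent of the others.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);    // expect 5.0 (3*1 + 2)

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

    A quad core would chew through those million elements a few at a time in a loop; the GPU spreads them across all of its multiprocessors at once, which is where the order-of-magnitude gap comes from.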
     
  7. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    Thanks for the figures - it's nice to know I'm not seeing things :) You have a much more powerful graphics card than I do (8400M GS here), which is probably why you don't have any trouble with CUDA. By the way, what's dfps?
     
  8. namaiki

    namaiki "basically rocks" Super Moderator

    Reputations:
    3,905
    Messages:
    6,116
    Likes Received:
    89
    Trophy Points:
    216
    I don't know what dfps stands for, but I believe it is the number of frames that can actually be delivered for display.

    So fps might be the number of frames per second the processor can decode, and dfps might be the number that actually make it to the screen.

    I'm using 'timecodec.exe'. You can search for it, but I haven't found any documentation.
     
  9. Cblaze

    Cblaze Notebook Enthusiast

    Reputations:
    0
    Messages:
    24
    Likes Received:
    0
    Trophy Points:
    5
    The difference you'll see depends mostly on the graphics card you have and how well the application you're using is coded. That said, it makes a huge difference when I start cracking hashes of random words I put into a generator:

    Brute force on the CPU (dual core, 2.8GHz) - about 10 million hashes per second
    Brute force on the GPU (Nvidia GTX 260M) - about 2 billion hashes per second

    Yeah, really
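
    A rough sketch of why the gap is so big - brute forcing is embarrassingly parallel, so every GPU thread can test its own candidate independently. (Hypothetical code; a trivial FNV-1a hash stands in for the MD5/SHA-1 a real cracker would implement on the GPU.)

    // Toy CUDA brute-force sketch: each thread hashes one candidate index
    // and compares it against the target. FNV-1a is only a stand-in for a
    // real digest like MD5; purely illustrative.
    #include <cstdio>
    #include <cuda_runtime.h>

    __host__ __device__ unsigned int fnv1a(unsigned long long candidate) {
        unsigned int h = 2166136261u;            // FNV offset basis
        for (int b = 0; b < 8; ++b) {            // hash the 8 bytes of the index
            h ^= (unsigned int)(candidate >> (b * 8)) & 0xffu;
            h *= 16777619u;                      // FNV prime
        }
        return h;
    }

    __global__ void crack(unsigned long long start, unsigned int target,
                          unsigned long long *found) {
        unsigned long long c = start + blockIdx.x * blockDim.x + threadIdx.x;
        if (fnv1a(c) == target)
            *found = c;                          // report the matching candidate
    }

    int main() {
        unsigned long long *found;
        cudaMalloc(&found, sizeof(*found));
        cudaMemset(found, 0xff, sizeof(*found)); // sentinel: all bits set

        // Pretend candidate 123456 is the "password" whose hash we captured.
        unsigned int target = fnv1a(123456ULL);

        // One launch tests 4096 * 256 = ~1M candidates simultaneously;
        // a CPU loop has to walk through them one (or a few) at a time.
        crack<<<4096, 256>>>(0ULL, target, found);

        unsigned long long hit;
        cudaMemcpy(&hit, found, sizeof(hit), cudaMemcpyDeviceToHost);
        if (hit != ~0ULL) printf("candidate %llu matches\n", hit);
        cudaFree(found);
        return 0;
    }

    With thousands of threads in flight at once, each doing nothing but hash-and-compare, the 200x gap over a dual core stops looking crazy.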
     
  10. ettornio

    ettornio Notebook Deity

    Reputations:
    331
    Messages:
    945
    Likes Received:
    0
    Trophy Points:
    30
    I don't know about you guys, but for me on an 8800M GTX, decoding 1080p video for playback via CUDA and CoreAVC is absolutely amazing. 0 stuttering. Not that there was any on the CPU, but hey... it's still neat! And it's even cooler to see a 1080p video playing and the CPU isn't doing anything (0%-2% CPU usage).
     
  11. namaiki

    namaiki "basically rocks" Super Moderator

    Reputations:
    3,905
    Messages:
    6,116
    Likes Received:
    89
    Trophy Points:
    216
    lol yes, the amusement factor is still there for me too.
     
  12. Shadows1990

    Shadows1990 Notebook Evangelist

    Reputations:
    81
    Messages:
    353
    Likes Received:
    0
    Trophy Points:
    30
    I use Badaboom to encode 720p/1080p videos for my WDTV.

    The results are amazing. For instance, I can encode a 720p movie using reasonably high settings in about an hour.