The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Should physics be offloaded to CPU instead of GPU

    Discussion in 'Alienware M11x' started by mcham, Aug 13, 2010.

  1. mcham

    mcham Notebook Guru

    Reputations:
    12
    Messages:
    55
    Likes Received:
    4
    Trophy Points:
    16
    There is an option in the Nvidia control panel to offload game physics calculations onto the CPU instead of the GPU.

    Since the GPU is generally the bottleneck for gaming performance, should this extra physics task be delegated to the CPU, especially if you have an i7 CPU?

    What are your thoughts on this?
    Has anyone compared benchmarks for these two options?

    -Matt
     
  2. outspaced

    outspaced Notebook Consultant

    Reputations:
    16
    Messages:
    136
    Likes Received:
    0
    Trophy Points:
    30
    Good point! I might try it for a bit but will have to work out which games to use.
     
  3. Arak-Nafein

    Arak-Nafein Notebook Consultant

    Reputations:
    118
    Messages:
    155
    Likes Received:
    0
    Trophy Points:
    30
    The option in the Nvidia control panel is for PhysX specifically, not physics in general.



    PhysX is a technology that Nvidia bought from Ageia. Ageia used to make dedicated Physics Processing Units, or PPUs, but when Nvidia bought them, it reworked PhysX to run on Nvidia GPUs, rendering the PPUs obsolete.


    This only works in certain games, and only if the developer built PhysX support in. Some recent examples would be:

    Mirror's Edge
    Batman AA
    Mafia II
    Cryostasis
    Metro 2033


    You can google a list if you want to know them all.



    As for what PhysX actually does, it's mainly stuff like cloth simulation, fluid simulation, some destructible environments, and I think they can do fire with it too. Mostly cosmetic stuff.
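    For a rough idea of what one of those effects boils down to computationally, here's a minimal sketch of a Verlet integration step, the kind of update cloth simulators are commonly built on. This is my own illustration, not actual PhysX code, and the Node struct and function names are made up:

    Code:
    #include <vector>

    // One cloth node: current and previous position (hypothetical layout,
    // just to show the kind of math a cloth effect runs for every node).
    struct Node {
        float x, y, z;    // current position
        float px, py, pz; // position on the previous frame
    };

    // One Verlet step under gravity: the new position is extrapolated
    // from where the node is now and where it was last frame.
    void step(std::vector<Node>& cloth, float dt) {
        const float g = -9.8f; // gravity along the y axis
        for (Node& n : cloth) {
            float nx = 2.0f * n.x - n.px;
            float ny = 2.0f * n.y - n.py + g * dt * dt;
            float nz = 2.0f * n.z - n.pz;
            n.px = n.x; n.py = n.y; n.pz = n.z;
            n.x = nx; n.y = ny; n.z = nz;
        }
    }

    Real cloth also enforces distance constraints between neighboring nodes, but the per-node math stays this small and repetitive, which is why it scales up (or gets dropped) so easily as eye candy.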




    As for whether you should run it on the GPU or CPU, I'd say GPU, because the CPU path in PhysX is flawed. Nvidia has left some old code in there and it's gimped when running on a CPU (i.e. it's inefficient and you'll see a MASSIVE framerate drop; people seem to think Nvidia left it this way intentionally to sell more GPUs).

    Having said that, the little GT 335M can take a HARD hit from PhysX, so you're probably better off just disabling it entirely. Which is okay, because it usually just falls into the "eye candy" category.


    Some examples showing off what PhysX can do:

    http://www.youtube.com/watch?v=w0xRJt8rcmY
    http://www.youtube.com/watch?v=g_11T0jficE
    http://www.youtube.com/watch?v=6GyKCM-Bpuw
     
  4. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    Short answer: no to CPU-based physics calculations. Physics should be handled by the GPU whenever possible.


    Long answer: Physics calculations are by nature a series of small arithmetic operations that can be performed in a highly parallel fashion. And a GPU, with its ridiculous number of stream processors, handles these types of calculations incredibly well. A GPU will perform FAR better on these specific types of short, highly parallel computations than a general-purpose CPU.

    If you offloaded the physics calculations onto the CPU, it is true that you would free up resources on the GPU to compute other things. But it would actually severely decrease your framerate, because your CPU is going to run those physics calculations at a much slower rate than your GPU could.
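    To make the "highly parallel" point concrete, here's a minimal sketch (my own, not PhysX code) of a per-particle update. No particle reads another particle's data, so every iteration is independent; that's exactly the shape of work a GPU can spread across thousands of stream processors. The CPU-side sketch below uses C++17 parallel algorithms just to show that independence:

    Code:
    #include <algorithm>
    #include <execution>
    #include <vector>

    struct Particle {
        float pos[3];
        float vel[3];
    };

    // Advance every particle one timestep. Each iteration touches only
    // its own particle, so the runtime is free to execute them all in
    // parallel -- the same property a GPU exploits on a massive scale.
    void integrate(std::vector<Particle>& ps, float dt) {
        std::for_each(std::execution::par_unseq, ps.begin(), ps.end(),
                      [dt](Particle& p) {
                          p.vel[1] += -9.8f * dt; // gravity
                          for (int i = 0; i < 3; ++i)
                              p.pos[i] += p.vel[i] * dt;
                      });
    }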

    Benchmarks showing the effects of CPU vs. PPU (Ageia PhysX card) vs. GPU-computed physics:
    PhysX Performance Update: GPU vs. PPU vs. CPU
     
  5. Arak-Nafein

    Arak-Nafein Notebook Consultant

    Reputations:
    118
    Messages:
    155
    Likes Received:
    0
    Trophy Points:
    30
    Indeed.


    But I wonder how those benches would look if PhysX weren't crippled on CPUs. It's not so much that PhysX is better suited to parallel processing; it's that Nvidia still uses the x87 instruction set when running PhysX on the CPU, and x87 is absolutely ancient.

    Many people say this is intentional, to make PhysX appear to run much better on GPUs and thus increase sales. Nvidia has no plans to change the code to make it run better on CPUs. :p Havok runs just fine on CPUs. :D
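    For anyone curious what that difference actually looks like, here's a toy sketch (mine, not code from PhysX): the scalar loop is the one-value-at-a-time style that old x87-targeted builds are stuck with, while the SSE version does the same work four floats per instruction:

    Code:
    #include <xmmintrin.h> // SSE intrinsics

    // Scalar version: one float per operation, roughly the throughput
    // you get from code still compiled for the ancient x87 FPU.
    void scale_scalar(float* v, float s, int n) {
        for (int i = 0; i < n; ++i)
            v[i] *= s;
    }

    // SSE version: the same multiply applied to four floats at a time.
    void scale_sse(float* v, float s, int n) {
        __m128 factor = _mm_set1_ps(s);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 x = _mm_loadu_ps(v + i);
            _mm_storeu_ps(v + i, _mm_mul_ps(x, factor));
        }
        for (; i < n; ++i) // leftover tail elements
            v[i] *= s;
    }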


    What I'm REALLY waiting for is OpenCL & open-source physics computing. Screw all this proprietary, brand-specific bull... it only hurts consumers. ATI & Havok were pushing this at one point, but I haven't kept up to date on it in a while.
     
  6. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    Well, I think "cripple" is a strong word. It implies that nVidia intentionally and nefariously took steps to make PhysX run worse on Intel CPUs. A more accurate way to say it is that nVidia is focusing enhancements of PhysX in ways that benefit people who own nVidia GPUs. They have no interest in enhancing PhysX performance on CPUs.

    As it stands, we really don't have a comparison of a single physics engine that is optimized for both CPUs and GPUs. So we can't really run any kind of test to see whether the CPU or the GPU is truly "superior" as a physics processor.

    Again, I do not think that nVidia nefariously "crippled" PhysX. nVidia owns PhysX because they bought Ageia. They bought Ageia because PhysX ran well on nVidia GPUs. This was right around the time that nVidia was promoting GPGPU computing, to show that nVidia GPUs can be used for other purposes besides just graphics.

    nVidia engineers were not sitting in a lab, twirling their mustaches, trying to find ways to "trick" the public into thinking that Intel sucks. They found a company (Ageia) that had a product (the PhysX engine) that worked well with products they already sold. It happens all the time when one company buys another.

    Of course they have no plans to change the code to run better on CPUs. They own PhysX. nVidia is a company that sells GPUs. PhysX helps them do that. Why would they ever do something that would result in them selling FEWER GPUs?

    If people wanted a physics engine that worked better on a wider variety of hardware, they could either write their own, or buy a company that had one.

    Havok was working on a project called HavokFX that would allow the GPU to be used to process physics. They announced this near the end of 2006, when people were starting to wonder whether the GPU can be used for other purposes besides just graphics. Intel bought Havok in 2007, and the HavokFX project never saw the light of day again. Using your logic, you could say that Intel is "crippling" Havok so that it doesn't work well on GPUs.


    The problem is that the physics engine is a commodity. There's no money in writing a new physics engine to compete with nVidia and Intel, so nobody will do it.

    The only feasible way that I can see to eliminate the brand-specific ties of the different physics engines or GPGPU engines is for Microsoft to integrate it all into DirectX. They are the only company with enough weight and enough reach to push a standard that will be universally adopted across all PCs.
     
  7. Alienowner

    Alienowner Notebook Geek

    Reputations:
    0
    Messages:
    95
    Likes Received:
    0
    Trophy Points:
    15
    Could you please tell me how to disable it entirely? Because the only options provided by the Nvidia control panel are CPU or GPU...
     
  8. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    The easiest way is to disable it in-game.

    Technically, you could go to Add/Remove Programs and uninstall "NVIDIA PhysX", but that would be a pain to do. There may be parts of the driver package that get angry when you do that, and it would re-install itself every time you installed a new driver. I think it's easier to just disable it in-game.
     
  9. Alienowner

    Alienowner Notebook Geek

    Reputations:
    0
    Messages:
    95
    Likes Received:
    0
    Trophy Points:
    15
    Gotcha, thanks!
     
  10. stevenxowens792

    stevenxowens792 Notebook Virtuoso

    Reputations:
    952
    Messages:
    2,040
    Likes Received:
    0
    Trophy Points:
    0
    @OP - PhysX should be disabled in-game and set to Auto in the Nvidia profile. If you run a game that supports PhysX and can verify that your GPU is NOT running at 99 percent utilization, then try setting the Nvidia profile to ON and changing the in-game setting to ENABLED. Our video card is a tier-2 (medium/middle-power) card and may be able to handle PhysX in some games (for example, Metro 2033).

    Good Luck and Good Original Post/Question...

    StevenX
     
  11. Arak-Nafein

    Arak-Nafein Notebook Consultant

    Reputations:
    118
    Messages:
    155
    Likes Received:
    0
    Trophy Points:
    30
    I didn't mean to come across as if Nvidia were scheming to cripple PhysX on the CPU... I just meant they knowingly neglect it. (Understandable from a business perspective, but still aggravating.) I just want more love for accelerated physics, regardless of the brand of GPU you choose. (I prefer Nvidia, so it's not an issue for me.)


    So, did OpenCL go bye-bye too? I was really stoked for that; the ATI tech demo looked awesome.
     
  12. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    Yeah, that makes sense. I agree, nVidia probably knowingly neglects CPU support in PhysX.

    As for OpenCL: last I checked, Apple created it and owns the trademark. Developers can use it at no cost, but it is still something that originated with Apple. It is very tough for a third party like AMD/ATI to promote a standard owned by someone else, unless they have specific partnership agreements in place that cover it.

    So I don't think OpenCL is going anywhere. If anything, I think that the best shot for a universal physics standard is going to be if Microsoft can get it into DirectX.
     
  13. Arak-Nafein

    Arak-Nafein Notebook Consultant

    Reputations:
    118
    Messages:
    155
    Likes Received:
    0
    Trophy Points:
    30
    Alright, thanks for the info. +Rep.
     
  14. mcham

    mcham Notebook Guru

    Reputations:
    12
    Messages:
    55
    Likes Received:
    4
    Trophy Points:
    16
    Thanks for all the informative replies. +rep to you all!
     
  15. ebondefender

    ebondefender Notebook Evangelist

    Reputations:
    25
    Messages:
    385
    Likes Received:
    0
    Trophy Points:
    30
    Granted, PPUs are obsolete, but if I could hook my old Ageia processor card up to my m11x, I'd love to see the benchmarks/difference, if there was any. I suppose it would be unrealistic and hideously expensive: I'd need a device such as Magma's PCI enclosures or Avid's just to hook such a card up. And the m11x has no ExpressCard slot...

    Sure would be a cool "science project" though. I've hooked my obsolete Roland SC-55 up to it via USB-MIDI.