There is an option in the Nvidia control panel to offload the game's physics calculations onto the CPU instead of the GPU.
Since the GPU is generally the bottleneck for gaming performance, should this extra physics task be delegated to the CPU, especially if you have an i7 CPU?
What are your thoughts on this?
Has anyone compared benchmarks for these two options?
-Matt
-
Good point! I might try it for a bit but will have to work out which games to use.
-
The option in the Nvidia control panel is for PhysX, not physics in general.
PhysX is a technology that Nvidia bought from Ageia. Ageia used to make dedicated Physics Processing Units, or PPUs.....but when Nvidia bought them, it reworked PhysX to run on its GPUs, rendering the PPUs obsolete.
This only works in certain games, and only if the developer has built PhysX in. Some recent examples would be:
Mirror's Edge
Batman AA
Mafia II
Cryostasis
Metro 2033
You can google a list if you want to know them all.
As for what PhysX actually does, it's mainly stuff like cloth simulation, fluid simulation, some destructible environment stuff, and I think they can do fire with it, too....mostly cosmetic stuff.
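To give a rough idea of what "cloth simulation" means per frame, here's a toy sketch of the usual approach (Verlet integration plus distance constraints between neighbouring cloth points) - completely made-up illustration code, nothing taken from the actual PhysX SDK:

// Toy cloth step: move each cloth point by Verlet integration, then pull
// neighbouring points back toward their rest distance. A real engine does
// something along these lines (plus collisions) on far more points, every frame.
#include <cmath>
#include <vector>

struct Point  { float x, y, z, ox, oy, oz; };   // current and previous position
struct Spring { int a, b; float rest; };        // constraint between two points

void clothStep(std::vector<Point>& pts, const std::vector<Spring>& springs, float dt) {
    for (Point& p : pts) {                      // Verlet integration under gravity
        float nx = 2*p.x - p.ox;
        float ny = 2*p.y - p.oy - 9.81f*dt*dt;
        float nz = 2*p.z - p.oz;
        p.ox = p.x; p.oy = p.y; p.oz = p.z;
        p.x = nx;   p.y = ny;   p.z = nz;
    }
    for (const Spring& s : springs) {           // one relaxation pass over the constraints
        Point& a = pts[s.a];
        Point& b = pts[s.b];
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (d < 1e-6f) continue;
        float k  = 0.5f * (d - s.rest) / d;     // split the correction between both ends
        a.x += k*dx; a.y += k*dy; a.z += k*dz;
        b.x -= k*dx; b.y -= k*dy; b.z -= k*dz;
    }
}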
As for whether you should run it on the GPU or CPU...I'd say GPU....because the CPU path in PhysX is flawed. Nvidia has left some old code in there, so it's gimped when running on the CPU (i.e. it's inefficient & you'll see a MASSIVE framerate drop when running it on the CPU. People seem to think Nvidia left it this way intentionally to sell more GPUs).
Having said that, the little GT 335M takes a HARD hit from PhysX, so you're probably better off just disabling it entirely. Which is okay, because it usually just falls into the "eye candy" category.
Some examples showing off what PhysX can do:
http://www.youtube.com/watch?v=w0xRJt8rcmY
http://www.youtube.com/watch?v=g_11T0jficE
http://www.youtube.com/watch?v=6GyKCM-Bpuw -
Short answer: leave it on the GPU. Long answer: physics calculations are by nature a series of small arithmetic operations that can be computed in a highly parallel fashion, and a GPU, with its ridiculous number of stream processors, handles exactly that kind of workload incredibly well. A GPU will perform FAR better on these short, highly parallel computations than a general-purpose CPU.
If you offloaded the physics calculations onto the CPU, it's true that you would free up resources on the GPU to compute other things. But it would actually severely decrease your framerate, because the CPU will run those physics calculations at a much slower rate than your GPU can.
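To make that concrete, here's a toy sketch of one physics integration step (made-up code, not from any real engine). Notice that no particle reads any other particle's data, which is exactly why a GPU can hand one particle to each of its hundreds of stream processors, while a CPU only has a few cores to throw at the same loop:

#include <cstddef>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };   // position + velocity

void physicsStep(std::vector<Particle>& particles, float dt) {
    const float gravity = -9.81f;
    for (std::size_t i = 0; i < particles.size(); ++i) {
        Particle& p = particles[i];   // each iteration touches only its own particle...
        p.vy += gravity * dt;         // ...and does just a handful of multiply-adds...
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;            // ...repeated for tens of thousands of objects per frame
    }
}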
Benchmarks showing the effects of CPU vs. PPU (Ageia PhysX card) vs. GPU computed physics:
PhysX Performance Update: GPU vs. PPU vs. CPU -
But I wonder how those benches would look if PhysX weren't crippled on CPUs? It's not that PhysX is so much better suited to parallel processing.....it's that Nvidia still uses the ancient x87 instruction set when PhysX runs on the CPU.
Many people say this is intentional, to make PhysX appear to run much better on GPUs and thus increase sales, and Nvidia has no plans to change the code to make it run better on CPUs. Havok runs just fine on CPUs.
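Rough illustration of the difference (hand-written example, not actual PhysX code): an x87-style build grinds through one float at a time, while SSE, which every CPU from the last several years supports, processes four packed floats per instruction:

#include <xmmintrin.h>   // SSE intrinsics

// Scalar loop - roughly what an old x87 code path boils down to:
// one floating-point multiply per element.
void scaleScalar(float* v, float s, int n) {
    for (int i = 0; i < n; ++i)
        v[i] *= s;
}

// Same work with SSE: four floats handled per multiply instruction.
void scaleSSE(float* v, float s, int n) {
    __m128 sv = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(v + i, _mm_mul_ps(_mm_loadu_ps(v + i), sv));
    for (; i < n; ++i)               // leftover elements
        v[i] *= s;
}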
What I'm REALLY waiting for is OpenCL & open-source physics computing. Screw all this proprietary brand-specific bull....it only hurts consumers. ATI & Havok were pushing this at one point....but I haven't kept up to date on it in a while. -
As it stands, we really don't have a single physics engine that is optimized for both CPUs and GPUs, so we can't really run any kind of test to see whether the CPU or the GPU is truly "superior" as a physics processor.
nVidia engineers were not sitting in a lab, twirling their mustaches, trying to find ways to "trick" the public into thinking that Intel sucks. They found a company (Ageia) that had a product (the PhysX engine) that worked well with products they already sold. It happens all the time when one company buys another.
If people wanted a physics engine that worked better on a wider variety of hardware, they could either write their own, or buy a company that had one.
Havok was working on a project called HavokFX that would allow the GPU to be used to process physics. They announced this near the end of 2006, when people were starting to wonder whether the GPU could be used for purposes other than graphics. Intel bought Havok in 2007, and the HavokFX project never saw the light of day again. Using your logic, you could say that Intel is "crippling" Havok so that it doesn't work well on GPUs.
The only feasible way I can see of eliminating the brand-specific ties to different physics engines or GPGPU engines is for Microsoft to integrate it all into DirectX. They are the only company with enough weight and enough reach to push a standard that will be universally adopted across all PCs. -
-
The easiest way is to disable it in-game.
Technically, you could go to Add/Remove Programs and uninstall "NVIDIA PhysX", but that would be a pain. Parts of the driver package may get angry when you do that, and it would reinstall itself every time you installed a new driver. I think it's easier to just disable it in-game. -
Gotcha, thanks!
-
@OP - PhysX should be Disabled in-game and set to Auto in the Nvidia profile. If you run a game that supports PhysX and can verify that your GPU is NOT running at 99 percent utilization, then try setting the Nvidia profile to On and changing the in-game setting to Enabled. Our video card is a tier-2 (middle-of-the-pack) card and may be able to handle PhysX in some games (Metro 2033, for example).
Good Luck and Good Original Post/Question...
StevenX -
So, did OpenCL go bye-bye too? I was really stoked for that, the ATI tech demo looked awesome. -
Yeah, that makes sense. I agree, nVidia probably knowingly neglects CPU support in PhysX.
As for OpenCL - last I checked, Apple owned the license for it. Developers can use it at no cost, but it is something owned by Apple. It is very tough for a third party like AMD/ATI to promote a standard owned by someone else, unless they have specific partnership agreements in place that cover it.
So I don't think OpenCL is going anywhere. If anything, I think that the best shot for a universal physics standard is going to be if Microsoft can get it into DirectX. -
-
Thanks for all the informative replies. +rep to you all!
-
Granted, PPUs are obsolete, but if I could hook my old Ageia processor card up to my m11x, I'd love to see the benchmarks/difference, if there were any. I suppose it would be unrealistic and hideously expensive - I'd need a device such as Magma's or Avid's PCI enclosures just to hook such a card up, and the m11x has no ExpressCard slot...
Sure would be a cool "science project" though. I've hooked my obsolete Roland SC-55 up to it via USB MIDI.