Does anybody know if ATI or Nvidia have plans to release GPUs that have extra processing components for handling physics? That would be really nice for games like Crysis.
-
usapatriot Notebook Nobel Laureate
There are already separate physics-handling cards.
-
Yes, but they are not from Nvidia or ATI. They are separate cards that only enhance the physics effects, and they are only available for desktops at the moment.
-
usapatriot Notebook Nobel Laureate
That's what I meant. -
Nvidia and ATI are graphics card companies, not physics card companies. They won't be making a physics card. A graphics card doesn't even have an effect on in-game physics. That's pretty much your CPU (or the physics card, if supported and present).
-
Both Nvidia and ATI are working on some physics support, yes. But it'll be limited. As said above, the graphics card is really bad at affecting gameplay, so it'll most likely just be physics-based eye candy they support.
-
A graphics card doesn't even have an effect on in-game physics.
First off, that statement is completely false:
" In a system with multiple NVIDIA GPUs, one GPU can be dedicated to physics. In fact there are no technical impediments to multiple cards or even unmatched configurations. All of these configurations work already on NVIDIA: Single GPU, SLI, Mixed GPUs (i.e., not SLI), Multi-GPU Cards, Multi-GPU card plus unmatched GPU for physics.
"
http://enthusiast.hardocp.com/article.html?art=MTA5NywxLCxoZW50aHVzaWFzdA==
Both ATI and Nvidia have run tests in which they were able to dedicate the extra GPUs in CrossFire and SLI systems to physics processing.
Second, I was just wondering if it is at all possible/beneficial to add a second processing component to a single GPU card to help out with physics. -
usapatriot Notebook Nobel Laureate
owned........ -
Possible, yes. Beneficial? Not really.
The reason they're working on GPU physics is that the GPU is already pretty well suited for it. That's the only reason. It's a "free ride", so to speak.
But it's not worth designing logic specifically for this. If you do that, then it'd be more efficient to offload it to a separate chip/card, so it doesn't have to share bus and memory access with the GPU. -
Actually, your statement isn't entirely correct - this is a work in progress; currently it cannot be done (neither ATI nor Nvidia have released their physics-handling protocols yet). So for the time being, GPUs do not have any effect on physics. Besides, a much more useful way of handling physics would be to let the second CPU core do it. -
Not necessarily. CPUs are hopelessly slow at this stuff; a GPU can run circles around them in these sorts of tasks. If more performance is needed, either the GPU or a dedicated physics chip will have to step in.
The CPU does have the advantage that it's, well, the CPU, so it's easier to exchange data with the rest of the game, which is a weak point of the GPU. That's why I said above that GPU physics will probably just end up as mostly eye candy. Add physics to stuff you can't interact with, so it doesn't affect gameplay, just looks nice. -
Meaker@Sager Company Representative
Not really; a CPU is much better at implementing longer-term, matrix-based physics. The reason we are seeing physics on the GPU is that the chip has only recently become highly programmable, allowing you to change its function.
-
And because the GPU can perform ~20 times as much work on such parallel data as a CPU can.
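To illustrate why (a rough sketch in NumPy, purely illustrative and not actual GPU code): eye-candy physics means thousands of independent particles, and the same update can be applied to all of them in one data-parallel operation, which is exactly the kind of work a GPU's many shader units chew through:

```python
import numpy as np

def step_particles(pos, vel, dt, gravity=-9.81):
    """Advance every particle one frame, with the same update applied
    across the whole array at once (simple Euler integration)."""
    vel = vel.copy()
    vel[:, 1] += gravity * dt      # gravity pulls every y-velocity down
    pos = pos + vel * dt           # integrate all positions in one step
    return pos, vel

# 100,000 debris particles, all updated by one data-parallel call
pos = np.zeros((100_000, 3))
vel = np.ones((100_000, 3))
pos, vel = step_particles(pos, vel, dt=0.016)
```

Each particle's update is independent of the others, so the work splits trivially across parallel hardware. Physics that feeds back into gameplay (collisions with the player, say) is much harder to parallelize this way, which is why it tends to stay on the CPU.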
-
I hope the second core of the new CPUs will be used more efficiently for such things as physics -- i.e. perhaps Havok will enhance their product to utilize the second core better. I know it's not as good as a dedicated physics component, but it sure is an economical way to get more enhanced gameplay.
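As a rough sketch of the idea (hypothetical Python using only the standard library; a real engine like Havok would do this in C++ with its own job system), the per-body physics work could be farmed out to a second core while the main process keeps rendering:

```python
from multiprocessing import Pool

def physics_step(body, dt=0.016):
    """Integrate one rigid body for a single ~16 ms frame.
    Simplified to position += velocity * dt; a real engine would
    also do collision detection and constraint solving here."""
    pos, vel = body
    return (pos + vel * dt, vel)

if __name__ == "__main__":
    bodies = [(float(i), 1.0) for i in range(1_000)]
    # Hand the physics work to a worker process (i.e. the other
    # core) while the main process would normally keep rendering.
    with Pool(processes=1) as pool:
        bodies = pool.map(physics_step, bodies)
```

The catch, as mentioned above, is the cost of shipping the game state to the worker and back each frame; a real engine keeps that exchange as small as possible.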
-
Go and read the new report on physics at Tom's Hardware. It will show you that GPUs are good at basic physics, but aren't as flexible or as powerful as a CPU at working out complicated physics.
-
What is complicated physics vs. simple physics? These are games, not science projects. We don't exactly need Big Blue here.
-
Ah, you've got to love it when people use an article to back up a claim without linking to the friggin' thing...
Which just means one thing: at the moment, GPUs aren't as flexible as CPUs. We knew that. But they're a lot faster at what they *can* do, and they're only going to get more flexible over time. In a few years, they'll be able to handle much more complex physics.
Anyway, I found the article, and I assume you're talking about these quotes:
I don't know what he's been smoking, but the CPU is not the "master of matrix operations" *at all* (it's quite inefficient at them), and he seems to believe that dual-core allows the CPU to close the performance gap with the GPU. Well, it won't. Once we get 30-core CPUs, they'll have *roughly* the same raw performance as *current* GPUs.
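The back-of-the-envelope arithmetic behind that (with rough, assumed order-of-magnitude figures for 2006-era hardware, not exact specs):

```python
# Very rough peak-throughput figures, assumed for illustration only:
gpu_gflops = 300.0       # a high-end 2006 GPU on parallel shader work
cpu_core_gflops = 10.0   # one fast x86 core of the same era

cores_to_match_gpu = gpu_gflops / cpu_core_gflops
print(cores_to_match_gpu)   # on the order of 30 cores
```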
But yes, CPUs are a lot more flexible, and his point of view concerns scientific simulations, where performance is less of an issue and some problems are hard or impossible to solve on current GPUs. But moving them to the CPU isn't a viable solution for gaming, where you actually need decent performance.
Also, LOL, one of the guys they interviewed in the article (Kenny Erleben) is a professor at my university. Followed one of his classes last semester... Cool guy.
GPU's that help with physics???
Discussion in 'Gaming (Software and Graphics Cards)' started by londez, Aug 4, 2006.