Ok, so NVIDIA has been going around lately with the whole "We've got PhysX, we've got PhysX" thing, so I was wondering: what would happen IF you enabled hardware PhysX, let's say in UT3, while having an ATI card and a quad-core CPU? Any ideas?
-
RainMotorsports Formerly ClutchX2
I actually want to rewrite all that.... Bottom line: the PhysX software implementation the game is coded on will only run on the CPU, unless you have a supporting NVIDIA GPU or the old add-in card from AGEIA.
On the software end, PhysX, like Havok and the other engines, is just a physics engine that runs on the CPU. AGEIA actually came out with an add-in card to take over the calculations, reducing CPU load and increasing the capabilities. NVIDIA acquired AGEIA and put PhysX to good use.
Games that use the PhysX SDK for their physics can additionally support hardware acceleration if the developer so chooses. That way someone with a powerful enough CPU can run it without a problem, but it also lets someone with GPU power to spare and not enough CPU power play the game without a problem (see the sketch below for how a game picks between the two paths).
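A minimal sketch of that choice, assuming the old AGEIA/NVIDIA 2.x-era NxPhysicsSDK API (names recalled from the 2.x SDK, so treat them as approximate rather than exact):

// Request hardware simulation when a PPU or supported NVIDIA GPU is
// present; otherwise fall back to the CPU path. PhysX 2.x-era API.
#include <NxPhysics.h>

NxScene* createPhysicsScene(NxPhysicsSDK* sdk, bool preferHardware)
{
    NxSceneDesc desc;
    desc.gravity = NxVec3(0.0f, -9.81f, 0.0f);

    // No AGEIA PPU and no CUDA-capable GPU means software simulation only.
    bool hwAvailable = (sdk->getHWVersion() != NX_HW_VERSION_NONE);
    desc.simType = (preferHardware && hwAvailable) ? NX_SIMULATION_HW
                                                   : NX_SIMULATION_SW;

    // The rest of the game code is identical either way; only the backend
    // running the solver changes.
    return sdk->createScene(desc);
}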
Other companies could potentially add to the problem, the war: AMD/ATI could team up with Havok to extend the Havok APIs to use ATI cards for acceleration. But so far, nothing yet.
EDIT - Speaking of Havok and ATi....
-
masterchef341 The guy from The Notebook
Unless you have 3-way SLI or some other monster setup, you aren't running PhysX on your GPU in games.
PhysX, in all of its implementations in PC games, is meant to run on the CPU. In rare cases Nvidia GPUs are additionally supported for some performance benefit, given insane hardware. Very rare. It won't run on ATI cards at all. CPU for you. -
RainMotorsports Formerly ClutchX2
Correct me if I am wrong, but take GPU-intensive games like Crysis and newer: there isn't much room for the GPU to help out, at least on a single-GPU setup.
EDIT - Then again, with a 280M and a better screen, some of you would rather have the highest possible resolution than save money on your CPU. As a console gamer I am used to seeing 720p on 1080p monitors; 1280x720 is not half bad on a 1366x768 screen lol.
-
masterchef341 The guy from The Notebook
...could have, would have...
-
spradhan01 Notebook Virtuoso
For me, PhysX is just something that amounts to nothing.
-
RainMotorsports Formerly ClutchX2
Has nothing to do with PhysX hardware if you don't have the hardware. -
RainMotorsports Formerly ClutchX2
I agree there is no room for proprietary technology that just leaves everyone out. The sad fact is, everything is this way and has been since the beginning. We're lucky DirectX and OpenGL came along; otherwise you would still need one GPU for one game and another for the next. Anyone remember those days?
DirectX is proprietary as well, and I doubt compute shader support will come to Cedega overnight. There is nothing wrong with the hardware end of PhysX except that support isn't available for ATI. One could wish that NVIDIA had never bought it out and that AGEIA had come up with the idea of using GPUs in the first place, but it didn't go down like that.
However, DirectX and OpenGL are supported by both GPU vendors and available to a good majority of gamers, you know, those still on Windows. I hope to see non-proprietary acceleration; a great use of a second GPU if you ask me. -
masterchef341 The guy from The Notebook
It already exists. It is called OpenCL.
-
Howitzer225 Death Company Dreadnought
Hmmm. Just thought about what would be ATI's equivalent of Nvidia's CUDA.
-
masterchef341 The guy from The Notebook
I just meant that OpenCL is an open platform for using that stream-processing GPU power, like Nvidia's CUDA. It is supported by Intel, *Nvidia*, and AMD/ATI and will run on all modern, already-available graphics cards.
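As a minimal sketch of what that vendor neutrality means in code, this host-side program lists every GPU in the machine through the one OpenCL API, whether the driver comes from NVIDIA, AMD/ATI, or Intel (error handling trimmed for brevity):

// Enumerate all OpenCL platforms and their GPU devices. The same source
// builds against NVIDIA, AMD/ATI, and Intel drivers -- that is the whole
// appeal over CUDA or the Stream SDK.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);
    if (numPlatforms > 8) numPlatforms = 8;

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &numDevices) != CL_SUCCESS)
            continue; // this platform exposes no GPU devices
        if (numDevices > 8) numDevices = 8;
        for (cl_uint d = 0; d < numDevices; ++d) {
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            printf("  GPU: %s\n", name);
        }
    }
    return 0;
}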
It obviously doesn't exist in any games yet; it's brand-new tech. But it is *official*, and all the big names are already throwing money at it. That matters, because the main factor that kills good things like this early is insufficient funding. -
The problem with all this, of course, is that in 90% of games and systems your GPU is working at 100% and your CPU is taking the scraps... so adding more work for the GPU doesn't make much sense, unless, like masterchef said, you're running some monster GPU...
-
RainMotorsports Formerly ClutchX2
I'm apparently just one of those people who can't see the difference, yet I can see the scan lines on a tube and hear the whine of the tube, and it drives me insane (both of them). LCD for the win lol. -
masterchef341 The guy from The Notebook
Several points.
1. There are a lot of factors that control how fluid motion appears. It is not easy to peg a number. 60 frames per second can be choppy. 25 frames per second can be plenty smooth. Anyone can be made to see the difference, given the proper scenario. Most games achieve apparent fluid motion somewhere between 30 and 60 frames per second.
2. Stream processing and GPGPU are not ATI technologies. These are generic technologies common to Nvidia and ATI.
3. CUDA is Nvidia's software implementation of GPGPU (general purpose GPU processing). It is a software engine for developers to write software for the GPU. ATI also has their own software intended for the same purpose, once called CTM (close to metal) and now called the Stream SDK. OpenCL is a third implementation with exactly the same purpose, but is supported by both ATI and Nvidia (and Intel). -
RainMotorsports Formerly ClutchX2
I've never seen 60 frames per second look choppy where the frame rate genuinely never dropped below 25. Remember, all your programs measure the average, not the minimum, and the rate can drop fast enough that a dip actually occurs without ever showing on the counter. The only thing I've experienced at or above 60 fps with either of my laptops is tearing, which is solved by vsync.
If someone's getting 30 frames per second on average, there is a good chance they drop below 24 from time to time (or a lot lol).
I would, however, not mind a demonstration sometime in the future. The usual problem is controlling said situation and actually having two of the same piece of hardware side by side to give a fair comparison. For the past year I have been wanting to put two TVs that are identical except in so-called refresh rate side by side and get someone to show me how the screen itself, not the video processor, is going to take 24 fps film footage and make it look any better at 120 Hz or 240 Hz than at 60 Hz. Post-processing is one thing, interpolating frames is another. That's a whole different subject, not related to what we're talking about.
I would like to see a frame cap option in games, for me lol. World in Conflict was the first one I've seen that has something other than vsync (a toy version of such a cap follows below). -
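A toy sketch of such a frame cap, using modern C++ std::chrono; updateAndRender() is a made-up stand-in for the real game tick, and the loop also tracks the worst-case frame time, since average counters hide the dips:

// Hold the loop at ~30 fps and record both average and worst frame times.
#include <chrono>
#include <thread>
#include <algorithm>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(33333); // ~30 fps cap

    double worstMs = 0.0, totalMs = 0.0;
    int frames = 0;

    auto next = clock::now();
    for (; frames < 300; ++frames) {              // ~10 seconds, then report
        auto start = clock::now();
        // updateAndRender();                     // real game work goes here
        double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();

        worstMs = std::max(worstMs, ms);
        totalMs += ms;

        next += frameBudget;                      // sleep off the unused budget
        std::this_thread::sleep_until(next);
    }
    printf("avg %.2f ms/frame, worst %.2f ms\n", totalMs / frames, worstMs);
    return 0;
}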
masterchef341 The guy from The Notebook
ClutchX2,
You're wrong, end of story.
There has yet to be a game that proves PhysX increases performance versus a system that does not have PhysX support. It's a complete dead end. You're talking complete nonsense. -
RainMotorsports Formerly ClutchX2
No one's going to write a game that only takes advantage of PhysX on the GPU; it wouldn't profit. A game that uses less than half its GPU power plus GPU acceleration is more capable of physics calculations than any current CPU. That's undeniable, even if no game on the market takes advantage of it. Several games on the market SUFFER from CPU load problems and generate low framerates; they could take advantage of it, but don't.
It's not a dead end, as every user talking about OpenCL in this thread knows. While proprietary implementations are a dead end, anyone advocating OpenCL can tell you the actual technology behind it is spot on. The end user buying inexpensive hardware will never take advantage of it, but physics on the GPU and GPGPU are not going to disappear; hopefully just the proprietary implementations will. -
RainMotorsports Formerly ClutchX2
Additionally, the PS3 uses multiple vector processors to do physics math, and they do it well. Supercomputers have used vector processors for the past three decades for this type of math, which general-purpose processors are not suited for. While the PS3 will fall short in graphics in comparison to whatever replaces the 360, its physics capabilities will still be superior; they're barely even utilized to their full capability.
AGEIA came out with basically a lesser version of a vector processor to plug into the computer and take over the calculations. The concept of doing it on the GPU is valid, but proprietary implementations don't help anyone. -
-
RainMotorsports Formerly ClutchX2
I'm not sure what the opinion is on World in Conflict, but we will all agree that Saints Row 2 and GTA 4 are bad console ports. Both run fine on a 3.2 GHz triple-core PowerPC processor, and needless to say on a CPU with 7 vector cores as well, yet go on to basically need a quad core on PC, though they are playable on 3-4 GHz dual cores with frame drops.
Even if I had the source code to their physics and AI it would be overwhelming, so I can't tell you how poorly they are written. What we can say is that a 3 GHz dual core isn't enough to keep the minimum from dropping below 24 at any given point. I can tell you that if the game were limited to 30 fps, a vector processor or a substitute for one (CUDA) could take a significant load off the CPU. Do you feel a 260M at half power is too loaded for the task? I haven't done any CUDA programming, though our friend StarFox has; it's not my area of work, nor is there an SDK for my language.
So what is the opinion on World in Conflict? The graphics test hit upwards of 100 frames per second but dropped into the 10s as well. I played the tutorial frame-capped at 30 fps and it ran a solid 30 FPS because there was almost no AI being executed, but even at 3.22 GHz, the moment the real game started I was below 20 fps. I need to get my desktop built so I have something to run it against.
Make no mistake, games in the open-world and FPS genres are going to start needing as much or more in the CPU arena as they have historically needed in the GPU arena. Demand for realistic physics and smart AI with masses of enemies/characters is what it's about now. We can only hope that companies write more efficient engines and update them as they find better ways to do it. -
-
Cryostasis runs on only one core, so no, it doesn't count. GTA IV has poor framerates even if you're staring at a wall, so that also doesn't count, because it's not physics calculations that are hurting your framerate.
My point is that the reason physics brings down framerates now is not because we don't have PhysX; it's because developers are lazy-*** bums who won't properly code for a CPU. Even in TF2 with multi-core support, it only stresses 2 cores on my quad core. Is it CPU limited? Yes, but that's the developer's fault, not the CPU's.
Today's CPUs are far more powerful than what games are currently taking advantage of. Spend less time on PhysX crap and spend more time properly coding for your customers (a sketch of what that threading can look like follows below).
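For what it's worth, a sketch of fanning a physics step across every core with plain std::thread; the Body struct and integrate() are invented for the example:

// Split the bodies into one contiguous chunk per hardware thread and
// integrate each chunk in parallel. No shared writes between chunks.
#include <thread>
#include <vector>
#include <algorithm>
#include <cstddef>

struct Body { float x, y, z, vx, vy, vz; };

void integrate(Body& b, float dt)
{
    b.vy -= 9.81f * dt;                           // gravity
    b.x += b.vx * dt; b.y += b.vy * dt; b.z += b.vz * dt;
}

void stepAll(std::vector<Body>& bodies, float dt)
{
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (bodies.size() + cores - 1) / cores;
    std::vector<std::thread> pool;

    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&bodies, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)
                integrate(bodies[i], dt);
        });
    }
    for (auto& t : pool) t.join();
}

int main()
{
    std::vector<Body> bodies(10000, Body{0, 100, 0, 1, 0, 0});
    stepAll(bodies, 1.0f / 60.0f);                // one 60 Hz physics tick
    return 0;
}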
Perhaps you also weren't aware of the GTS 250 launch debacle: NVidia would not supply reviewers with GTS 250 cards unless they promised to give glowing praise to CUDA and PhysX. Some very respectable review sites were completely cut out because they refused to kiss NVidia's butt on a card that was rebranded 3 times. -
RainMotorsports Formerly ClutchX2
GPU progress has pretty much stalled, in my opinion. It pretty much reflects what ATI and Nvidia have been spitting out, though.
PhysX is proprietary and that sucks. The add-in card had potential, but NVIDIA has taken it, made it work with CUDA, and turned it into another competition; then again, even with AGEIA, only games using the PhysX SDK for their physics would have worked anyway.
Read up more on OpenCL today. Just as DirectX and OpenGL bridged the gap between GPU hardware and games (sans OpenGL's ability to substitute the CPU for the GPU), hopefully it will do what two competing technologies can't. With it, at least, even different SDKs can have access to the same advantage.
Btw, if the PS3's Cell does anything for Folding@Home, then it would run better on CUDA or FireStream. -
I'm not opposed to hardware physics acceleration; I'm opposed to proprietary technology that is getting more development work than it should and stealing time away from what programmers should be doing: writing better code for the CPU, which is currently wildly underutilized.
-
RainMotorsports Formerly ClutchX2
The only defense I can give them about poorly written code is to remember the push to get these games out in X amount of time, "or we can find another studio to do it." Some titles have resorted to using two studios at a time to write the game and its sequel simultaneously, releasing every other year; Call of Duty is the one I know of. They do it partly for the launch schedule and partly because every other release is ported to console.
On the flip side, they are using prewritten SDKs; those libraries should be pretty damn optimized by now....
One thing I would like to see is realistic minimum requirements. The minimum requirements for GTA 4 would net about 6 FPS.... MAX. The recommended specs are on the bottom of the box, as the Games for Windows layout puts only the minimum requirements on the packaging. Face it, if it barely cuts it at 720p on an Xbox 360, a 1.8 GHz C2D ain't gonna run it with a... actually, a 7900 or X1900 might be okay on a quad... Anyway, I think Crysis had pretty accurate minimum requirements. -
I believe that if 3D physics calculation is very important for the 3D game industry, Microsoft will add a framework to DirectX 12 or thereabouts, which would render PhysX and Havok obsolete, just like in the early 3D graphics era.
-
masterchef341 The guy from The Notebook
I ran GTA 4 just fine on a 2.4 GHz / 8600M GT.
Not insane hardware by any standards.