Those scores don't look too good. Assuming the desktop i7-920 is about equal to a 920XM or 940XM, then looking at the CPU score in 3DMark06 it's not much of a gain over the 5870s, unless they overclock better.
-
-
The i7-920 is faster than the i7-940XM; 2.667-2.8GHz quad vs 2.133-2.4GHz quad.
Vantage prefers Evergreen to Fermi as compared to real game benchmarks, while 3DMark06 is rather out of date now.
What we need to see is more in-game benchmarks of both cards in the same system from a reliable source. -
The i7-920 being faster doesn't really help the 480M.
But you are right, we haven't seen them in the same system. Alienware will most likely release them over the summer, so only time will tell.
-
Oh, I know that. I just like to be accurate.
There are some benchmarks of a 5870 in a D900F here, which is basically the same system used for some of the 480M tests. However, I'd like to see a lot more in-game benchmarks than can be found there.
Basically, what I would really like to see is something like this, but for the 480M vs the 5870. They don't even have to be nice graphs like in that link; it's more the variety of games I'm referring to. -
I wonder how much the GPU score in Vantage is affected by PhysX being enabled. My guess would be ~0.5-1k (just the GPU score).
-
Why would the GPU score be affected by PhysX? I don't think the GPU tests make any use of it.
-
That's a good question. I just looked at the benches done by my friend on his gaming desktop rig (HD 5870 CF + GT 240). With PhysX disabled, the GPU-only score dropped by 1.5k (CF disabled).
Dunno, I guess we need more tests to prove or disprove that statement. -
If Nvidia admits PhysX affects the GPU score in Vantage, then I would assume it does. Of course, to a lesser extent than the CPU score is influenced by PhysX.
NVIDIA Responds To GPU PhysX Cheating Allegation - HotHardware -
PhysX doesn't affect the GPU scores. I just tested it on my notebook.
ON = 5063
OFF = 5086
23 points is well within the margin of running it back to back. -
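A quick sanity check for that claim can be sketched in a few lines of Python. The ~1% run-to-run variance threshold here is an assumption for illustration, not an official Futuremark figure:

```python
# Sketch: decide whether two back-to-back 3DMark Vantage GPU scores
# differ by more than an assumed run-to-run variance threshold.
# The 1% threshold is a hypothetical choice, not an official figure.

def within_run_variance(score_a: int, score_b: int, threshold: float = 0.01) -> bool:
    """Return True if the relative difference is below the threshold."""
    return abs(score_a - score_b) / max(score_a, score_b) < threshold

# Scores from the post above: PhysX ON vs OFF on a notebook.
# 23 points on a ~5k score is roughly 0.45%, well under 1%.
print(within_run_variance(5063, 5086))
```

By this measure the 23-point gap is noise, while a gap of 1.5-2k on a ~16k score (~10%) would clearly not be.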
Maybe it's just more pronounced in desktop GPUs? It would counter what they said in the link...
-
I don't see where it says the GPU score is affected.
-
From the link: "It has been said that the tests results look different on the screen when running with PhysX enabled on the GPU. And of course this is true, just as the screen results look different when you test on a dual-core CPU versus a quad-core. This isn't a graphics test; it's a physics test. 3DMark Vantage specifically scales more complexity into a scene to take advantage of additional physics compute resources, which of course is why it looks so different/better on a test run with PhysX processing on our GPUs. This is by design in the benchmark and if the folks accusing us took the time to run it, they would know that."
Maybe I'm reading that wrong; it's a pretty convoluted quote. -
That, and also the 260M is way outdated compared to the Fermi cards and even the older desktop GT 240.
I still say there's gonna be a noticeable difference.
My friend has just confirmed the 1.5-2k difference on his desktop again. -
1.5k huh...
can he tell exactly where the change takes place? -
That still seems to be referring to the CPU test, as far as I can tell.
I'd like to see some more specific data. -
His exact config:
C2Q 9650 (stock), ASUS P5Q Deluxe, OCZ 8GB 800MHz, HD 5870 @ stock (CF disabled for these tests), GTS 250 512MB (for PhysX).
Physx ON: GPU Vantage : 16,431
Physx OFF: GPU Vantage : 16,499
Will provide the scores in a few minutes, he's gonna run the tests again. -
He might as well give us the CPU score and the overall score as well.
-
Sure, will be in a few minutes.
-
i didn't need the scores, really...
just have him enable PhysX, watch the first 20 seconds or so, then stop it.
then do the same without PhysX.
this is only for GPU tests 1 and 2.
what he's watching is the total frames rendered.
side note:
doing a test right now; if it completes, i'll post mine as well.
mine takes a 300-point loss between the two.
-
You are right, Johnksss. He had to run several tests to confirm that the initial statement is wrong:
there's actually a loss in the GPU score if PhysX is enabled.
-
-
Well, that's interesting. So PhysX does have an effect on the GPU, but not in the way we theorized initially. On an abstract guesstimate level, maybe PhysX diverts some of the GPU's cache resources toward the CPU for rendering purposes?
Or maybe it's just time to move on and talk about other things.
-
It's probably because with the GTS 250 and dual 5870s, PCI Express dropped to 8x instead of 16x, but dunno...
Anyway - sorry for the confusion. -
i did that test with a 5970 plus a GTX 295 for PhysX, versus two GTX 295s.
the 5970 setup had better fps, while the 295 setup had way better PhysX. -
It seems to me that the score difference is generally so small that it doesn't really matter anyway.
-
techPowerUp has a very nice comparison of video cards based on their performance per watt : techPowerUp :: ZOTAC GeForce GTX 460 1 GB Review :: Page 32 / 35
The GTX 460 is a very good card in this segment so I would expect the next mobile card to use the GF104 architecture. Maybe this time Nvidia will get it right. -
They could just make a mobile card out of the GTX 460. It would consume less energy than the GTX 480M and could therefore be clocked considerably higher. It would be a true improvement over the Mobility Radeon HD 5870. The GTX 460 is Nvidia's best product in years and even beats ATI in their performance-per-watt domain.
-
GF104 has lower idles but ATI still beats it in performance per watt...Vancouver will look to make the Evergreen cores even more efficient.
-
The GTX 460 with the 192-bit bus beats the Radeon HD 5830 in performance, price, and temps/power consumption.
-
Because ATI gets the 5830 from the 5870 junk pile.
Just about every other 5000 series chip has a better performance per watt ratio than the GF104...that includes the RV840 that makes up the mobile 5800 cards. -
You are right about that, but the difference between ATI's performance per watt and the GF104's is much smaller (10-15%) than it was between Evergreen and GF100 (30-40%). I would also argue that Nvidia is better at optimizing cards for low power consumption, i.e. for notebooks. So my guess is that Nvidia could now release a card with the same TDP as the MR 5870 and at the same performance level, if not slightly better.
I am sure ATI will further increase their performance per watt with the 6000 series, but Nvidia isn't going to sit around idle either. Apparently one of the main drawbacks that made the GF100 so power hungry was the extra instructions for DirectCompute. In the GF104 they dropped most of them, and the performance per watt increased significantly. With that being said, I am still looking forward to the 6000 series from ATI.
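The performance-per-watt comparison everyone keeps citing is simple arithmetic, and can be sketched like this. Note the relative-performance scores and board-power figures below are hypothetical placeholders, not measured values from the techPowerUp review:

```python
# Sketch: ranking GPUs by performance per watt.
# The performance scores and wattage figures are illustrative
# assumptions, not real measurements.

def perf_per_watt(relative_perf: float, board_power_w: float) -> float:
    """Performance-per-watt metric: higher is better."""
    return relative_perf / board_power_w

cards = {
    "GTX 460 (hypothetical figures)": perf_per_watt(100.0, 160.0),
    "HD 5830 (hypothetical figures)": perf_per_watt(95.0, 175.0),
}

# Print the cards from most to least efficient.
for name, ppw in sorted(cards.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {ppw:.3f} perf/W")
```

With these made-up inputs the GTX 460 comes out ahead; swap in the review's actual numbers to reproduce its chart.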
Pic of GTX 480M + 3DM06 Test
Discussion in 'Sager and Clevo' started by kaltmond, May 28, 2010.

