Today I was curious how much GPU performance has advanced in the 2.5 years since I got my 280M. To find out, I looked at the GTX 570M, because it is also a 75W TDP card, just like my 280M and the 285M.
Note that my GPU is running at higher clocks than a stock 280M or 285M, so I used the 285M for the comparison. It would be fair to say, then, that I am comparing cards separated by roughly 2 years of technology.
For the numbers I used the benchmarks from notebookcheck, taking the individual game benchmarks rather than their aggregated scores (which would be misleading).
The CPUs used are roughly similar in performance: the 820QM for the 285M and the 2630QM for the 570M.
So this is how it looks:
Note that the biggest gains are actually in Vantage, which in my view calls the validity of that benchmark into question.
Game                    | 285M | 570M  | Gain
Vantage P Score (no Ph) | 6438 | 10678 | +66%
Vantage GPU Score       | 5500 | 9445  | +72%
Starcraft Low           | 184  | 257.2 | +40%
Starcraft Ultra         | 32.8 | 42.2  | +29%
Metro 2033 Medium       | 40.8 | 65    | +59%
Metro 2033 High         | 21.4 | 32.4  | +51%
Bad Company High        | 47.2 | 78.1  | +65%
Bad Company Ultra       | 26   | 40.1  | +54%
Modern Warfare 2 High   | 61.2 | 89.9  | +47%
Modern Warfare 2 Ultra  | 43.3 | 59.8  | +38%
Conclusion
On average the 570M manages to provide 48% more in-game performance (52.1% if you factor in Vantage, but I would prefer to leave it out). This is to be expected, as the die shrink from the 285M to the 570M means we should get at least 37% more out of the same silicon. It also suggests that the improvements due to architecture are actually minimal, close to 10%. This supports my previous impression that for notebooks, the largest jumps in performance come from die shrinks and not from improved architectures. Under such circumstances, I would expect the 670M, which should be made on the 28nm node, to be about 40% faster than the current 570M (roughly twice as fast as the 285M at the same TDP).
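For anyone who wants to re-run the arithmetic, here is a rough Python sketch using the FPS values from the table above; the 1.40 factor for the hypothetical 670M die shrink is my assumption, not a measured number:

```python
# Rough sketch of the averaging and the 670M projection above.
# FPS values are copied from the table; the 1.40 die-shrink factor is an assumption.
fps_285m = {"SC2 Low": 184, "SC2 Ultra": 32.8, "Metro Med": 40.8, "Metro High": 21.4,
            "BC High": 47.2, "BC Ultra": 26, "MW2 High": 61.2, "MW2 Ultra": 43.3}
fps_570m = {"SC2 Low": 257.2, "SC2 Ultra": 42.2, "Metro Med": 65, "Metro High": 32.4,
            "BC High": 78.1, "BC Ultra": 40.1, "MW2 High": 89.9, "MW2 Ultra": 59.8}

gains = [fps_570m[g] / fps_285m[g] - 1 for g in fps_285m]
avg_gain = sum(gains) / len(gains)
print(f"average in-game gain: {avg_gain:.0%}")            # ~48%

die_shrink = 1.40                                          # assumed gain from the 28nm shrink
print(f"projected 670M vs 570M: +{die_shrink - 1:.0%}")    # +40%
print(f"projected 670M vs 285M: +{(1 + avg_gain) * die_shrink - 1:.0%}")  # ~+107%, i.e. about 2x
```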
-
Nice overview! I'm especially interested in this as I am debating whether to get a 6970M/570M now or hold out till the 28nm GPUs.
So the real question is: Will the 670M/660M work in my beloved w860cu?
I know that they will physically fit, but I guess time will tell...
I also have to point out that you are not running clocks equivalent to the 285M, as it is clocked at 600/1500/1000.
I am currently overclocking my 285M to the edge. Last night I bumped my clocks from 700/1750/1100 to 720/1800/1150 (God help me!) in BF3 and it did not crash.
Another quick question now that BF3 is out and it is DX11 only: how much gain is there in having a DX11 card? The 285M is DX10 and fairly similar in performance to the 460M, yet I wonder if the 460M gets better frames because it is DX11. -
Yes, the 460M will get slightly better frames in DX11 as far as I know, but nothing spectacular.
Yes, I am running a bit better than the 285M, but the point is to compare cards at stock settings and the stock 285M gets close to my OC 280M. -
So in short, the 670M is going to be able to max out BF3 in 1080p and will be worth the money?
-
YES! 10 char.
-
Also I'd like to point out that this is a very vague comparison of the actual architectures. You compared 2 cards here, not 2 architectures.
I don't know how you really want to think about it, but overall you should be comparing against a 580M or 480M, with your card heavily overclocked, or the others undervolted to 75W, if not all of the above.
It's obvious there were more architectural improvements than 10%. You're being silly here. Shall we say the architectural improvements of the GT215 over the G92 are -25%? Those cards were 40nm, BTW.
Anyway, this is well appreciated and kind of interesting. But there is no science to these numbers. You have to choose SOMETHING more concrete than TDP (die size, ROPs, anything really) to accurately compare two entire architectures the way a review of desktop cards would. -
Bear with me here...
This is a comparison with the GTX 580M. Note that the GTX 580M in these tests uses a better CPU, which on average should give it 5-7% more FPS.
Note again that the biggest gains are in Vantage, which is unjustifiably high.
The GTX 580M has a TDP of 100W and thus should be 33% faster than the GTX 570M.
Game                    | 285M | 580M  | Gain
Vantage P Score (no Ph) | 6438 | 14281 | +122%
Vantage GPU Score       | 5500 | 13234 | +141%
Starcraft Low           | 184  | 265.3 | +44%
Starcraft Ultra         | 32.8 | 60    | +83%
Metro 2033 Medium       | 40.8 | 96    | +135%
Metro 2033 High         | 21.4 | 52    | +143%
Bad Company High        | 47.2 | 95.2  | +102%
Bad Company Ultra       | 26   | 54    | +108%
Modern Warfare 2 High   | 61.2 | 108.7 | +78%
Modern Warfare 2 Ultra  | 43.3 | 77.8  | +80%
Without the Vantage scores and the Starcraft 2 (Low) FPS, the average is a 104% gain.
According to logic and the numbers from the previous post it should be 1.48 x 1.33, or roughly a 97% gain, and if I factor in the better CPU (x1.05) it comes to about 107%. Spot on!
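Here is the same check as a small Python snippet; the 1.33 TDP factor and the 1.05 CPU factor are assumptions carried over from above, not measured values:

```python
# Sketch of the sanity check above: measured average vs. the "expected" chained value.
gains_580_vs_285 = [0.83, 1.35, 1.43, 1.02, 1.08, 0.78, 0.80]  # table values, minus Vantage and SC2 Low
measured = sum(gains_580_vs_285) / len(gains_580_vs_285)

expected = 1.48 * 1.33 * 1.05 - 1   # 570M gain x assumed TDP headroom x assumed CPU advantage
print(f"measured: {measured:.0%}, expected: {expected:.0%}")   # ~104% vs ~107%
```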
And this is just for fun:
Game                    | 570M  | 580M  | Gain
Vantage P Score (no Ph) | 10678 | 14281 | +34%
Vantage GPU Score       | 9445  | 13234 | +40%
Starcraft Low           | 257.2 | 265.3 | +3%
Starcraft Ultra         | 42.2  | 60    | +42%
Metro 2033 Medium       | 65    | 96    | +48%
Metro 2033 High         | 32.4  | 52    | +60%
Bad Company High        | 78.1  | 95.2  | +22%
Bad Company Ultra       | 40.1  | 54    | +35%
Modern Warfare 2 High   | 89.9  | 108.7 | +21%
Modern Warfare 2 Ultra  | 59.8  | 77.8  | +30%
Without Starcraft 2 (Low) you get a gain of 37% on average, and removing Vantage doesn't change the number. This is exactly in line with what is expected given the bigger die and the better CPU.
If GPU architecture doesn't improve by much, next year's 680M should be almost 90% faster than the GTX 570M and roughly 2.75x the performance of the 285M. If you factor in the better CPUs it might get close to 2.9x.
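And the same kind of sketch for the last table plus the 680M guess; again, the 1.40 die-shrink and 1.33 TDP factors are assumptions rather than measurements:

```python
# Sketch of the 580M-over-570M average and the hypothetical 680M projection above.
gains_580_vs_570 = [0.42, 0.48, 0.60, 0.22, 0.35, 0.21, 0.30]  # table values, minus Vantage and SC2 Low
print(f"measured 580M over 570M: {sum(gains_580_vs_570) / len(gains_580_vs_570):.0%}")  # ~37%

die_shrink = 1.40       # assumed 40nm -> 28nm gain at the same TDP
tdp_headroom = 1.33     # assumed 100W part vs. the 75W 570M
vs_570m = die_shrink * tdp_headroom     # ~1.86x the 570M
vs_285m = vs_570m * 1.48                # ~2.76x the 285M
print(f"680M: {vs_570m:.2f}x the 570M, {vs_285m:.2f}x the 285M "
      f"({vs_285m * 1.05:.2f}x with a faster CPU)")
```
-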
And the findings are not silly. It is a known fact that the GF100 architecture had slightly lower performance per watt than the G92 in DX9 games, but it did very well in DX11 games. The GF114 addressed some of these issues, but it didn't bring anything radically new to the table.
ATI/AMD has done a much better job in this regard. -
-
I max out BF3 in 1080p with no AA and get 30-40 fps. If I overclock my 6970M to 800-1000 it goes up to 40-50 and sometimes 60! I'm very happy with this card!
-
As you can see, a die shrink can offer some performance gains: http://www.fudzilla.com/graphics/item/25248-radeon-hd-7970-is-around-30-percent-faster
-
Next year as in 2012. Now, since we are only talking about the mobile versions, how do you think the desktop versions would perform?
I would assume the desktop versions would do some amazing stuff, am I right? As far as I understand, the mobile versions are considerably smaller (obviously) and harder to manage, which explains their clocks and architecture.
Do correct me if I am wrong; I'm trying to follow you as well as I can. -
I'm actually curious about the planned release dates for all the 6XX series cards, or 7XX according to some sources. Some people are suggesting that the 6XX cards will be like the 300 series, which had no performance or high-end cards, and that the 7XX will be the full series of GPUs. But that's beside the point.
I'm looking to put the highest-end Kepler GPU I can find into the P150EM, since I'm waiting until April or later to buy my new system. What's going to be out by then? The GTX 680M? Or only the GTX 660M? The codenames (N13E-GTX, etc.) are leaving me kind of confused since no official names have come out yet. -
@jaug1337
It's hard to tell what will happen with desktop GPUs, because they are far less constrained by power requirements and heat limitations. Suffice it to say that what happens in the mobile GPU world should be mirrored on the desktop; however, because the constraints on desktops are far more relaxed, the increase in performance at the same price point might be higher.
@AlphaMagnum
You are correct. If I look at the current Vantage scores and compare them with those leaked slides, it looks like the 600M series is a rebadge. However, that is assuming the leaked slides include PhysX; if they do not, then the 600M is a die shrink and you should expect between 25% and 35% better performance. I am inclined to believe the slides don't include PhysX, because PhysX is no longer enabled by default in 3DMark Vantage.
GPU architecture performance: 280M vs 570M
Discussion in 'Sager and Clevo' started by Blacky, Oct 28, 2011.