Here are the newest specs of the GTX 670, i.e. our GTX 680M. Looks legit too, and fits the thermals compared to the GTX 680. It's from a reliable source, Sweclockers
http://translate.google.no/translate?sl=sv&tl=en&js=n&prev=_t&hl=no&ie=UTF-8&layout=2&eotf=1&u=http%3A%2F%2Fwww.sweclockers.com%2Fnyhet%2F15382-fler-bilder-pa-geforce-gtx-670-litet-kretskort-och-overdimensionerad-kylare
-
-
hmmm looks like the 670 is gonna be like 15% slower than the 680. subtract another 15% to fit the 100W thermal envelope and you could make a first guess at the performance of the 680M. what do you think, guys?
sooo... let's see... desktop 680 minus 30% would be pretty much equal to a desktop 7870 according to this overview, thus approximately 5-10% faster than a 7970M at stock clocks. would make sense if nvidia manages to pimp the 680M as well as their mobile cards in previous years ^^
cheers -
Who told you that the 680M is based on the GTX 670?
And your calculation is surely incorrect. The GTX 670 has about 14% fewer cores. In addition, it has a roughly 10% lower GPU clock and a 20% lower memory clock.
Roughly estimated, it will be at least 25% weaker than the GTX 680. Its TDP is 30% lower. Judging that mid-range cards have a slightly better performance-per-watt ratio, I think GTX 680 -25% is quite an accurate prediction for the GTX 670. -
no one, still pure speculation at this point ^^ it would just make sense TDP-wise (if the above-mentioned specs hold firm) and perf-wise as well, since a 680M based on a desktop 660 would be too far behind the 7970M in my book
nvidia surely wouldn't allow AMD to stay the mobile gpu king this year, they would do their damnedest to at least be on par with them!
some further confirmation on the specs and a few pics of the desktop 670: (use google translate if needed)
http://www.computerbase.de/news/2012-05/bilder-einer-msi-geforce-gtx-670-aufgetaucht/
cheers -
The GTX 580M was based on the GTX 560 Ti, a 170W GPU. They managed to shave away 70W to fit the mobile version. The 7970M is based on the 7870, which is a 144W part. Same story here as well, except now Nvidia doesn't have to sacrifice as much with Kepler since it's more efficient than Fermi: only shave away 50W instead of 70W, if they don't go all-in and make an 80W GPU instead. I guess that depends on how good the GTX 670 is compared to the 7870.
The GTX 680M will be 256-bit like the GTX 580M; the GTX 660 Ti has a 192-bit bus, so it doesn't fit. -
Ok. Let's say it's true and the GTX 680M will be based on the GTX 670.
Some math.
Predictably, the reference GTX 670 will be around the same TDP as the AMD HD 7870. Or at least, even if the numbers don't say so right now, I'm fairly sure those cards' power consumption is in the same ballpark.
We know that HD7870 clocks are 1000/1200 and HD7970M is 850/1200.
In 3DM11, the 7970M is about 12% slower than HD7870.
Now, assuming Nvidia does as good a job as AMD and gets about the same numbers, we could predict the GTX 680M performance like this:
GTX 680 -25% -12%.
Taking the base numbers from the overclockersclub reviews of the HD 7870 and GTX 680, I predict that the GTX 680M will score around P6200 in 3DM11 (rough sketch below).
That would be a fair amount higher than the HD 7970M.
Hopefully we see some more leaks soon. -
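For reference, here is that estimate as a quick Python sketch. The GTX 680 base score below is just an assumed ballpark chosen so the arithmetic lands near the P6200 figure quoted above; it is not a quoted review number, and the percentage cuts are the thread's speculation.

```python
# Rough 3DMark11 estimate for a hypothetical GTX 680M, following the thread's
# reasoning: GTX 680 minus ~25% (670 cut) minus ~12% (mobile downclock).
# The base score is an assumed ballpark, NOT a quoted review result.

gtx680_desktop_score = 9400   # assumed desktop GTX 680 3DMark11 GPU score
cut_to_670 = 0.25             # thread estimate: GTX 670 ~25% behind GTX 680
cut_to_mobile = 0.12          # thread estimate: mobile downclock costs ~12% (7870 -> 7970M analogy)

estimate_680m = gtx680_desktop_score * (1 - cut_to_670) * (1 - cut_to_mobile)
print(f"Estimated GTX 680M score: ~P{estimate_680m:.0f}")   # ~P6200 with these inputs
```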
^^
edit: funny, we pretty much arrived at the same conclusion with different assumptions -
No, you can't calculate it like this.
Perhaps you can like this:
14% fewer cores should mean at least a -10% performance loss. This is my rough estimate based on many desktop cards.
However, the performance loss from the 10% lower core clock and 20% lower memory clock can be calculated like this: (10 + 20) / 2 = 15%.
This should be fairly accurate, since a -20% memory clock costs roughly half as much performance as the same cut to GPU frequency.
That is -25% overall, like I said (sketched below).
I wish we knew the exact power consumption of the GT 660M. It would be nice to do some math to compare against this theory. -
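A minimal sketch of that weighting in Python, with the rumored deltas from this thread as the only inputs (all of them speculation, not confirmed specs):

```python
# Weighted performance-loss estimate for the rumored GTX 670 vs GTX 680,
# following the reasoning above. Every input is thread speculation.

loss_from_cores = 0.10        # ~14% fewer cores, assumed to cost roughly 10% performance
core_clock_delta = 0.10       # rumored ~10% lower core clock
mem_clock_delta = 0.20        # rumored ~20% lower memory clock

# The thread's formula simply averages the two clock deltas, on the argument
# that a memory-clock cut hurts about half as much as a core-clock cut:
# (10 + 20) / 2 = 15%.
loss_from_clocks = (core_clock_delta + mem_clock_delta) / 2

total_loss = loss_from_cores + loss_from_clocks
print(f"Estimated overall deficit vs GTX 680: ~{total_loss:.0%}")   # ~25%
```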
yeah well, you pretty much picked it up where I left off, just with a different weighting of the GPU components ^^
-
masterchef341 The guy from The Notebook
you guys are arguing over nothing. you should know that.
you can't assign a single performance impact percentage based on removal or reduction of particular processing elements. the impact will vary dramatically based on usage case.
arguing over a few percentage points to try and find "the number" is absolutely foolish. -
King of Interns Simply a laptop enthusiast
lets wait and see
-
No. Not arguing at all. Just peacefully trying to predict the performance of this upcoming card. That's what the forum is for.
And it's not foolish to make these kinds of predictions. I believe it's fairly accurate. Mark my words. We shall see. -
3DMark11 P6200 for GTX 680M? Do you confirm Supranium?
-
@masterchef: easy tiger ^^ we're not arguing, just playing around with numbers to get a feeling for where we could place the 680M. of course it's all still pure speculation at this point, nobody said otherwise
cheers -
I have my doubts too.
1. GT650M and GT660M are weak in my opinion.
2. Pitcairn has better performance per watt than Kepler. -
Unrelated, but interesting none the less
-
Meaker@Sager Company Representative
-
Well, considering that even an overclocked HD 7970M seems to consume less power than the 580M, it is very possible for a 680M to be based on the GTX 670.
This could lead to an OC'd version of the 7970M renamed HD 7990M, considering how easy it is to overclock. It would be the only way to remain competitive against a GPU as powerful as the GTX 670. -
-
-
Yeah I did, unintentionally
since I just reached that part of the whole review
-
We all know what games "matter":
BF3
Crysis + Warhead
Crysis 2
Metro 2033
Skyrim
The Witcher 2
2. AMD will not use a card with an identical core count for the 7990M, if it is to exist. It doesn't fit the company's m.o. -
That is, if they even care about competing with a higher end 680m, but at this point I doubt it.
-
We all know that the GTX 680 came out close to 4 months later. Nvidia had enough time to adjust the clocks as much as possible.
We all know that HD 7970 reference cards are extremely underclocked and GTX 680 cards are not.
There is no doubt that both are great cards. One is no better than the other. Both have their strengths and weaknesses.
Keep this in mind in your claims
Back to topic.
Having looked at some GT 660M power consumption posts, I'm afraid the GTX 680M will not be as good as hoped. With a 75W TDP, the 660M is truly weak.
Hopefully the GTX 680M, with a 256-bit bus and more than triple the CUDA core count, will be way more efficient. -
Meaker@Sager Company Representative
Holy hell, the 690 reached higher memory clocks than the 680, freaking 7 GHz! That's a real clock of 1750 MHz! Silly.
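For anyone wondering how the 7 GHz figure maps to a 1750 MHz real clock, here's a one-line conversion, assuming the usual quad-pumped GDDR5 signalling:

```python
# GDDR5 transfers data four times per command-clock cycle, so the marketed
# "effective" speed is roughly 4x the real clock mentioned above.
effective_rate_mhz = 7000
real_clock_mhz = effective_rate_mhz / 4
print(f"Real memory clock: ~{real_clock_mhz:.0f} MHz")   # ~1750 MHz
```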
-
He's saying the 7970M equipped system draws 40-50W less than the exact same one with the 580M installed. Really?
I have to question why anyone would believe that is completely accurate. -
Meaker@Sager Company Representative
Because he is a user (less chance of bias) who used a power meter of +/- 3% accuracy lol.
I'll be able to double check for you when I get my card and compare it to the 570M when overclocked if you like. -
Also, I am an AMD/ATi Fan, so I prefer their products over nvidia most of the time. That doesn't mean I can't see how both offer great products and I use both vendors, for both notebook and desktop offerings. I tend to favor AMD when their performance is similar.
And I think the GCN architecture is fantastic. Kepler is great, but it didn't have much of an impact on me because it's late, hardly available at the moment, and barely faster on reference boards. At least it brought prices down for the competition haha.
Back on topic: yeah, the 660M is quite weak at a 75W TDP. Besides the shader cut for the GTX 670, were there any other cut-downs? ROPs? Texture units? I just hope they don't end up pairing a very high core count with very low core clocks just to match a TDP limit. -
-
Who knows? The 580M and HD 6990 weren't exactly the pinnacle of power consumption efficiency. -
@long2905:
You forgot to post these results
-
And for everyone's attention, the Dell TDP spec for the GTX 660M IS WRONG. The 660M is actually a roughly 45-50W GPU and a great design by Nvidia
There is a guy in this forum who owns the G75 with the 660M and has benchmarked the hell out of it.
The power draw running 3DMark Vantage is 86 watts. What's drawing power in his notebook at that time: A) his Ivy Bridge quad-core CPU at 35W (he says it's a Best Buy model, and the only model they have there has the 3610QM, which is a 45W quad, but I can't get that to fit with the first picture?), B) his 660M, and C) RAM, screen, fans etc.
The power draw during non-GPU-intensive tasks is 36.3W
Here are the pictures as proof
-
Karamazovmm Overthinking? Always!
power draw != TDP
-
Haha, those Batman and Elder Scrolls runs obviously had bugs. It's not uncommon. There is also a bug with Crysis 2 at 2560x1600 in DX9 for the CrossFire setup which crashes, but it can run at 5K res with no problems.
For example, in Elder Scrolls we also have this:
And well, the power consumption is completely true. The CrossFire setup consumes much more power, and it also has better idle consumption. -
-
And here we go again.
Could either of you provide any evidence that TDP does not equal power consumption with notebook GPUs?
Are you saying that the GTX 660M draws 45W but magically takes an additional 30W from thin air and outputs that as heat?
@Ryzeki: Bugs? You want me to post more results where the 690 beats the 7970 CF? There aren't any bugs. Check around various reviews and you can see more games where the 690 pushes ahead with just as much juice
Here is a summary if you don`t believe me
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/53901-nvidia-geforce-gtx-690-review-27.html -
Karamazovmm Overthinking? Always!
It's in the definition of TDP; there is no need to prove anything.
-
Tell me this: how can a GPU output more heat than it draws from the system? Is it teleported from the electric outlet?
I will comment on this: it's the other way around. Power consumption is HIGHER than the TDP, at least with desktop GPUs. Unless I missed something very vital when I went to school -
-
Meaker@Sager Company Representative
I always thought the 75W figure was off base.
However, do remember that when you run Vantage the IGP won't be working at full pelt, it will just be outputting the frames from the buffer, so it's likely at its lowest clock, and the CPU won't be drawing anywhere near 35W.
Also, even at 45W you don't want to make a comparison to the 7970M.... -
Hahaha! Yes, bugs! You know, such as when an SLI/CrossFire game crashes but not on a single card. CrossFire has some bugs with some games in some specific, reproducible scenarios.
If you look at the graph, when CrossFire performs exactly the same as a single card, or worse yet, worse than a single card, then CrossFire isn't working. Depending on the drivers and the review site, those results vary. I showed you a run of Skyrim where all cards perform better and aren't constrained.
Also TDP is Thermal Design Power. It's typically a design figure for a "worst case scenario" for how much power the component must dissipate. As for actual power consumption, it will be on average much less than the TDP, and sometimes even more for short bursts of time.
Regardless of a chip's power consumption, the physical chip has a temperature limit set by its components, hence a max TDP. For laptops it's much lower due to size constraints.
I assume the 660M is being touted as 75W merely as a mistake, or because people are stupid and assume a higher TDP automatically means better performance? Also, the 660M might have a more aggressive turbo boost. And finally, depending on the game, the 660M might not even break a sweat.
Also, you have a bunch of GPUs like the 6970M, 6990M, 480M, 485M, 580M and 670M, all rated at 100W TDP, but each consumes a different amount of power. Applying a very, uhm, general and not so specific logic example: would you assume an overclocked 675M consumes more power than a stock 580M? Both have the same TDP. -
The guy who tested the G75VW with the 3610QM and GTX 660M got a power consumption reading of 83.4W from the Kill A Watt.
The 2820QM, which is a 45W TDP Sandy Bridge, peaks at 56W when all cores are in full use. Read desktop GPU reviews and you will find power consumption greater than the TDP. Here is an example: the GTX 580's GPU draw is 280W, while the TDP of the GTX 580 is 244W. The 6970M was a 75W TDP part, but it drew up to 100W from the AC/DC adapter. Power consumption IS higher than TDP (!!!) because an electric component does not output every watt it uses as HEAT. That would be a very bad design.
That is my point. And since the G75 AVERAGES 83.4W, and the Ivy Bridge probably draws, let's say, 50W thanks to the 22nm benefit, and like Meaker says isn't fully utilized by running Vantage, so maybe 30W, the GPU at full speed draws around 53W. The 75W TDP does not fit, in my opinion. It should be lower, maybe around 45W
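Here is that subtraction as a quick sketch; both inputs are rough estimates from this post, not measured component figures:

```python
# Back-of-the-envelope GPU power estimate from the wall-meter reading above.
# Both inputs are rough thread estimates, not measured per-component figures.

wall_draw_vantage = 83.4   # W, Kill A Watt reading while running 3DMark Vantage (G75VW)
cpu_estimate = 30.0        # W, assumed CPU package draw (not fully loaded in Vantage)

gpu_and_platform = wall_draw_vantage - cpu_estimate
print(f"Rough GTX 660M + rest-of-system draw: ~{gpu_and_platform:.0f} W")   # ~53 W
# Note: this lumps in the screen, RAM, fans and adapter losses, so the GPU
# alone is likely somewhat lower than this number.
```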
-
Desktop components are a bit looser with TDP and are allowed much more headroom in both cooling and power draw. Unless you use a very small case, you should look at power draw for desktop cards instead.
As far as I know, the 75W TDP was made up by Dell, right? When was it officially stated? It would be weird for the 650M to be 40W or something and the 660M to magically have a much higher TDP when it's the exact same card with almost the same clocks and config. -
Meaker@Sager Company Representative
What else is a chip giving out apart from heat? Noise? Light?
No, power = heat, everything else is negligible. Please don't start THAT debate again, it's been sealed shut and needs to stay that way. -
Yes, but if a GPU/CPU or whatever is drawing 100W max, does it output 100W as heat? The answer is "yes" according to many sites that say TDP = power consumption for mobile GPUs. And since Vantage loads the GPU fully and I assume the GPU's power consumption is around 53W, the TDP should be too. That is what I meant, Ryzeki. There is no downclocking or SpeedStep involved with Vantage, is there? I understand that in a certain scenario, let's say browsing the internet, the GPU downclocks to preserve battery and reduce heat, and then we will see lower power consumption than TDP. That I totally agree on.
Yes, the 75W TDP from Dell is very strange and I find it odd.
Whatever. I'm going to find out what the POWER CONSUMPTION of the GTX 560M is, compare it with the 660M and see if they improved it. That's what really matters anyway. Screw TDP. But first, a movie -
I just laughed out loud at his test. Who the hell tests like this??? I mean, come on!
The only way to test the max power draw of a GPU is to run a stress test (like LinX) on the CPU, write down the power draw, then start a GPU stress test on top and subtract the CPU-only figure from the combined figure (see the sketch below this post).
3DMarks are the last thing you should use to measure power draw. Lol.
Besides, TDP calculations are not identical between Nvidia and AMD.
Nvidia cards always go above their rated TDP at peak consumption. AMD cards never do, because AMD cards' TDP is calculated from the peak. AMD PowerTune only allows the card to go up to its rated TDP, never above.
Manual overclocking is a different story of course. -
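A sketch of that procedure; the meter readings here are made-up placeholders, you'd substitute your own Kill A Watt values:

```python
# Isolating GPU power draw with a wall meter, per the method described above:
# note the wall draw with only the CPU stressed, then with CPU + GPU stressed,
# and take the difference. The readings below are placeholders.

cpu_only_stress_draw = 70.0    # W, wall reading with a CPU stress test (e.g. LinX) running
cpu_plus_gpu_draw = 130.0      # W, wall reading after adding a GPU stress test on top

gpu_draw = cpu_plus_gpu_draw - cpu_only_stress_draw
print(f"Approximate GPU draw under load: ~{gpu_draw:.0f} W")
# Caveat: adapter efficiency and shared power rails add noise, so treat the
# result as a ballpark rather than an exact TDP-style figure.
```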
It's not a professional reviewer, duh. It was more of a quick test just to see.
Vantage is still pretty demanding, since we use it to rank GPUs. Not as precise as your way, but a little interesting nonetheless. We got these debates out of it, so it was totally worth it
Now to my movie
-
I just rep'ed you, Ryzeki, Supranium and Meaker (you will have to wait since I've given away too much today) for participating in this discussion. As a pat on the shoulder for having the energy to debate against this stubborn man. After all, I'm just a clueless moron
-
I think the whole deal between Nvidia's and AMD's TDP was that one of them... calculated the TDP of the complete board + RAM, while the other did it without RAM, or some nonsense like that. I forgot haha.