Guys,
I don't know if this has been posted already, but just in case, here are some benchmarks of the GTX675M and GTX680M in the M17xR4:
[screenshots: 3DMark benchmark results for the GTX 675M and GTX 680M in the M17x R4]
Apparently the GTX680M would have 768 shaders, 32 ROPs, 4GB of GDDR5 on a 256-bit bus, and would be at least as powerful as a GTX 560 Ti.
The full article can be read here (Chinese, title translated): "DELL's new Alienware M17x 2012: spy shots, performance, benchmarks and configuration of the unreleased next-generation Alienware M17x-R4 leaked" (Dell Alienware forum).
Not bad I'd say...
-
Why are there two scores for the GTX 680M? It's expected to be 70-75W, so that's a great performance increase over a 580M (virtually 50%) while drawing 25W less.
Kepler looks like a 75-100% performance-per-watt improvement over Fermi in most benchmarks.
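A quick sanity check on that claim, using only the rumoured numbers from this thread (nothing here is measured):

```python
# Implied perf/watt gain if the 680M is ~50% faster than the 100W 580M
# while drawing ~25W less. All inputs are rumoured figures from this thread.
perf_ratio = 1.50        # rumoured 680M performance vs the 580M
power_ratio = 75 / 100   # rumoured ~75W vs the 580M's 100W TDP
gain = perf_ratio / power_ratio - 1
print(f"implied perf/watt gain: {gain:.0%}")  # -> 100%
```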
edit: Maybe it's just a 50% improvement at the same power consumption as the 580M. -
'The GTX680M's measured wattage was up to 100W, a huge gap from the HD7970M's 65W; we don't know whether the final card will be the same.'
Is this for real? Maybe AMD has been tricking us all if a 7970M really is 65W. I wonder what the performance is, though. -
Google Translate
Interesting, it shows 3DMark 11 scores for the AMD 7000M and GT 600M series. -
-
Please let it be true. Especially the Radeon 7950M/7970M; that would be awesome.
-
It probably is, as I expect AMD to be beaten badly this time, but if it isn't and the power consumption is low, I'm getting an AMD card. Maybe the notebook GPUs got better revisions.
-
This is going to be one hell of an expensive card Q.Q
-
I'd be pretty happy if the 7970M had 580M performance at 65 watts. That's a massive increase in perf/watt. It probably has great OC potential too.
-
That wouldn't be too surprising to me, since AMD's Southern Islands is apparently more efficient than Kepler, if we compare Pitcairn to GK104 at least (Tahiti should remain unaccounted for, since it has all sorts of GPGPU hardware that is missing from Nvidia's chip). And efficiency is what matters most in the mobile market.
But 55W for the Radeon 7950M (!!), and it has 1280 cores, not 1024 like many believed! If that's correct, the 7850M/7870M might have 1024 cores and a TDP low enough to be integrated into some multimedia notebooks.
Looks too good to be true, I say. Let's wait. -
My current 9600M GT is a 23W card, so I'm hoping for a 25-27.5W card this time, probably something called a '7770M'. By the logic that the 7950M scores 4200 in 3DMark11 at around 55W, a half-size card like that could score around 2100.
Another line of reasoning: the 7970M performs the same in 3DMark11 as the 680M at 65-70W vs 100W, a P4500-P4700 score. We can probably expect a ~42% improvement in performance per watt, though that depends on whether the TDP reflects actual power consumption.
So GTX 660M performance (45W, scoring let's say 2300 in 3DMark11) with a 42% improvement would take about 30W. Now that would be great for gaming.
I can see a 3612QM plus this estimated '7770M' drawing 82W total, a 50-60% improvement over AMD's current power-efficient cards.
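A minimal sketch of that arithmetic (every input is a speculative figure from this thread, not a measurement):

```python
# Perf/watt comparison: rumoured 7970M (~P4500 at ~70W) vs 680M (~P4500 at ~100W).
gain = (4500 / 70) / (4500 / 100) - 1
print(f"7970M vs 680M perf/watt: +{gain:.0%}")  # ~ +43%, close to the ~42% above

# Hypothetical '7770M': GTX 660M-class performance (say 2300 pts at 45W),
# shrunk by that ~42% perf/watt improvement.
watts_needed = 45 / 1.42
print(f"watts for 660M-level performance: {watts_needed:.1f}W")  # ~31.7W
```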
Review HP Pavilion dv7-6101eg Notebook - Notebookcheck.net Reviews -
The 7950M won't score 4.2k in 3DMark11.
-
Why not? I hope all this news is true.
-
Wait... so I'm a little confused. Were there any real benchmarks done, or is all this speculation?
-
Meaker@Sager Company Representative
Well, let's see: an HD7870 scores 6500 points in 3DMark11 with a TDP of 140W (175W board power minus the 20% overdrive overhead), and an HD7850 scores 5200 with a TDP of 104W (130W board power minus the 20% PowerTune overhead).
So really I think those scores are too low.
If we get a 7850 core then I expect a score of at least 5000, with a 7870 core I would expect 5500.
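Here is a toy sketch of where such numbers could come from. It assumes performance falls only with the cube root of the power cut (dynamic power scales roughly with f·V², and voltage tracks frequency), and the 75W mobile envelope is my assumption, not a confirmed spec:

```python
# Toy model: dynamic power ~ f * V^2 and V roughly tracks f, so power ~ f^3.
# Under that assumption, performance falls only as the cube root of the power cut.
desktop_parts = [("7870 core", 6500, 140), ("7850 core", 5200, 104)]
mobile_watts = 75  # assumed HD7970M power envelope

for name, score, tdp_watts in desktop_parts:
    estimate = score * (mobile_watts / tdp_watts) ** (1 / 3)
    print(f"{name} at {mobile_watts}W -> ~{estimate:.0f} 3DMark11 points")
# ~5280 for the 7870 core, ~4660 for the 7850 core: the same ballpark as above
```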
There is no excuse for the 680M and HD7970M not to TOAST even an overclocked 580M/675M. I can reach over 4000 with my 570M when overclocked :/ -
The benchmarks screenshotted here for Nvidia cards are likely true, and I venture to guess they will get even better as Nvidia drivers optimize more for them.
The AMD stats, numbers, comparisons, and scores are most likely false or just guesses.
AMD's 7xxx series is a step and a half slower than Nvidia's Kepler, and the desktop Kepler is even better at performance/watt than AMD's offerings.
GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation : GeForce GTX 680: The Card And Cooling
Unless Kepler is just too power-hungry for a laptop and Nvidia must make staggering cuts to fit the laptop form factor, the laptop power ranking will be similar to the desktop's.
Note, AMD can still win the "best value choice" in a laptop with some aggressive pricing.
Until some real benchmarks for the AMD 7xxx mobile series are published, nobody knows the whole story.
Funny thing, this happened last time... AMD fanatics were talking about how badly the 6990M would beat the 580M... the real result was that the 580M was still better in almost every case. The 6990M was still a great GPU because it won on performance/cost easily. -
Meaker@Sager Company Representative
-
Things might turn around with Kepler. The GTX 680 is cheaper than the 7970, while the 6970 was $140-ish cheaper than the 580 when they were released. This may carry over to notebooks this time around. After all, Kepler is able to offer the same performance with a smaller die.
-
Meaker@Sager Company Representative
-
Wishful thinking, but it might happen. -
The 580M scores 3500 at stock and 4500 easily when overclocked. So if a 680M scores 4500 at stock, it will score 5500 easily when overclocked... How is that a joke?
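A quick sketch of that overclock scaling (thread numbers only; whether the gain is additive or proportional is an assumption either way):

```python
# 580M: ~3500 stock, ~4500 overclocked (figures quoted in this thread).
stock_580m, oc_580m = 3500, 4500
stock_680m = 4500

additive = stock_680m + (oc_580m - stock_580m)       # same +1000 pts -> 5500
proportional = stock_680m * (oc_580m / stock_580m)   # same +29%     -> ~5786
print(additive, round(proportional))
```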
Your 570M is basically a 580M with some CUDA cores disabled :/ -
The GTX 675M was tested with a faster CPU, so the score comparison is flawed without GPU sub-scores.
Are you guys even paying attention? The fact that the 7970M is listed as having 1536 shaders means you should disregard this link. Unless you're going to try and sell me on it coming from the oft-fabled AMD 7890. -
One might assume Tahiti loses to GK104 in die area and perf/watt notably because (1) it's a compute-oriented chip while its competitor is purely gaming-oriented (I've seen numbers showing GK104 struggling against even a Pitcairn in certain compute tasks), and (2) its 384-bit memory bus. On pure performance it doesn't even lose by a huge margin.
And by the way, Pitcairn is the king of performance/watt. You simply can't draw conclusions about both companies' architectures and their efficiency from such a simple argument.
There's nothing in this comparison to show that AMD couldn't have pulled off a chip that was basically a Pitcairn scaled up to ~300mm², clocked at >=1GHz, and won on every metric, or at least matched their competitor, had they chosen to do so.
-
The 7970M being tied with the 580M and 6990M would be an absolute joke. It's technologically absurd to go from 40nm to 28nm and merely maintain the same performance. AMD would have to cut the 7850 down by something like 40% to achieve that feat of failure. And why would they do that to a card that is already barely 100W? -
That's almost too good to be true: the 7970M getting 680M performance at 30-35 watts less.
That means they could make a 2000-shader 7990M. Doesn't sound right to me. It has to be 100W, or else 1408 shaders and a 4100-ish 3DMark11 score.
And I already knew the 7970M would beat the 675M easily; I've been saying this for weeks. -
Meaker@Sager Company Representative
Combine that with the fact that it will already be overclocking itself to get that score, thanks to their new boost tech, and it would just be sad.
Also please don't point out the painfully obvious tech details about a card I own. -
If your scenario happens, it would mean a GPU 50% faster than the 6970M with 10W lower power consumption.
That would be insane. -
These scores are basically what I expected: a small but evolutionary jump in performance. What is a bit worrying is that if those scores already rely on the built-in overclocking, they could turn out pretty disappointing.
Several people have assumed that the 680m will OC into the 5000+ range, but what if these scores are already at near the max stable OC because of the new automatic overclocking? -
OK, so I'm pretty much a newb at all of this, so forgive me if I'm wrong in thinking this, but it seems like a lot of what's posted is hearsay or biased opinion. How can we truly judge the winner without real-world benchmarks (i.e. fps, load times, etc.)? It seems like any Joe Schmoe can polish a turd, take a picture of it, edit it in Photoshop, and post it somewhere advertising it as a fancy chocolate truffle.
Also, how do synthetic benchmark scores (like the 3DMark scores here) actually translate to real-world performance? Are these scores just made-up units, or are they real, physical units of measure like a watt or joule?
I'm just trying to separate what is real, or at least speculation based on sound research, from opinionated guesses as I prepare to buy a new system. -
Seriously, here is the 30% performance jump you guys were looking for (definitely not from your miracle 7970M, which was supposed to be the cheapest, lowest-TDP, yet craziest performer on the planet). Just lower your impossible expectations; tech doesn't make the jumps you suggested. 30% is already good for a first outing: they didn't start their business with the Kepler architecture, right? They will perfect it in time. Give our beloved tech companies a break, people; if you are so disappointed, go try to do better than them.
-
Meaker@Sager Company Representative
Why should a 30% performance increase be acceptable when the new process is offering 50% increases at the same power consumption?
Seriously, you don't need to defend these companies, you know; they can look after themselves. -
Because evolution is slowing down.
Three or four years ago every new generation brought something like 60% more performance; now it's more like 30%.
I think it's logical: they have come to a point where they cannot raise TDP any higher than 100W. (Three years ago, 100W in a mobile computer was unthinkable.)
Anyway, I'm looking for a good-performing GPU around 30-35W.
The GTX 660M seems interesting. (I still want a thin-and-light notebook.) -
In this thread: a lot of know-it-alls bickering.
Sent from my HTC Desire using Tapatalk -
Meaker@Sager Company Representative
That's where I get my 50% from. This is also why I keep quoting AT THE SAME POWER CONSUMPTION, because I appreciate there is a mobile power ceiling. I already took that into account.
This is not like the 5xxx -> 6xxx generation; we have had a new process technology, and it has brought power consumption down.
-
Relax dude, it was a joke.
Sorry for posting from my phone though... -
Where are people getting 30% from? Kepler is easily 70-80% better in performance per watt, maybe even 2x in some things like 3DMark11, and the new AMD mobile 7000 series looks to be 100% better in performance per watt, if AMD's TDP figures are correct and power consumption is actually that low.
-
GPU            3DMark03   3DMark06
GTX Go 7950    21k        -
GTX 8800M      30k        9k
GTX 9800M      32k        10k
GTX 285M       37k        13k
GTX 480M       -          15.5k
GTX 485M       -          19k
GTX 580M       -          20.5k
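For reference, the generation-to-generation 3DMark06 jumps implied by that table (a quick sketch; scores exactly as listed above):

```python
# Generation-over-generation 3DMark06 jumps from the table above.
scores = [("8800M", 9.0), ("9800M", 10.0), ("285M", 13.0),
          ("480M", 15.5), ("485M", 19.0), ("580M", 20.5)]  # thousands of points
for (prev, a), (cur, b) in zip(scores, scores[1:]):
    print(f"{prev} -> {cur}: +{b / a - 1:.0%}")
# +11%, +30%, +19%, +23%, +8%; the 285M -> 485M leap combined is +46%
```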
Except for the jump from the 285M to the 485M (the Fermi transition, and not exactly a fair one, since the 485M is essentially a 580M; we should really compare against the 480M), there is NO jump of more than 30% (well, 40% for the Go 7950 to the 8800M, but that was a complete change in architecture where GPUs became much more parallelized, so that is expected). Please, let's be a little more realistic; and yeah, I think both Nvidia and AMD are doing one hell of a job. -
Meaker@Sager Company Representative
*Sigh* Really? Really really?
You're going to quote 3DMark at me? In the way you have?
Cut out the 480M and 485M; cut out the 8800M and 9800M. All of these are architecture rebrands, optical shrinks, or mongrel chips.
We would get something along the lines of:
6k -> 13k -> 20.5k
Thanks for proving my point. We have a new chip design and a new process; we should always see a large gain when the two are combined. -
Oh, BTW, FYI: the 480M/485M is NOT an architecture rebrand (neither is the 8800M); it was the first high-end Fermi.. *facepalm* -
Meaker@Sager Company Representative
Oh, when will people learn to actually read what I write carefully rather than assuming things? Read your quote properly.
Architecture rebrands, optical shrinks, or MONGREL CHIPS. The 480M was a travesty that should never have existed.
If it makes you happy, we could go with the first chips of each architecture; it matters not, so long as you stick with one:
6k -> 9k -> 19k
Which does expose the fact that mobile skipped an arch entirely, so maybe that is a better way of ordering it, I suppose.
But feel free to keep proving my point. -
-
Meaker@Sager Company Representative
Nvidia has finally woken up; they are no longer using the same arch over three "generations" of mobile chips. This is a new arch on a new process, and significant gains should be seen.
Feel free to ask any other questions about the past chips and find out why this is the case. I'd be happy to fill any gaps in your knowledge in this area. -
-
Meaker@Sager Company Representative
I'd rather believe that you were simply ignorant of the data than that you were wilfully twisting it to prove your point.
I mean, you can't seriously be saying that the performance jump from the 9800M to the 280M, an optical shrink of the same arch, has ANYTHING to do with the 580M to 680M going from 40nm to 28nm with a new arch. I gave you more credit than that. -
Anyway dude, I will say it one last time, because I saw your previous posts and thought you were a person with hardware knowledge, but then I am done.
You are contradicting yourself (mathematically speaking), because you are comparing the 7950 to the 8800, then the 8800 to the 485, then the 485 (which is essentially a 580) to the 680. First of all, if you want to compare the 8800 to the 485, then you must compare the 470/480 to the 680 (not the 485; it is basically a 580, with five months in between). Otherwise, if you want to compare the last high-end chip of the previous arch to the first high-end chip of the new arch, you won't find this 50% performance improvement (the closest you can come is the 7950 to the 8800, at above 40%). Anyway, I am also expecting good results from the new arch, BUT as soon as I heard TSMC was screwing things up last summer, I bought my machine; otherwise I would have kept waiting for Kepler. Right now it looks like we will see GTX 560 Ti performance (at stock) in a laptop, which is wonderful news, especially from a crippled 28nm production run, am I right? (I am sure nobody can argue against that.) Anyway, let's end this thing, it has gone on far too long. Sorry for calling your standards fascinating, but I thought they were a little too high.
-
Meaker@Sager Company Representative
The problem is that you are looking at the mobile market in isolation.
Nvidia created the desktop GTX 280, which could not be cut down into smaller chunks, so the mobile market never saw it; they then released the 480M because the chips were simply not yielding. Neither of those is the case now, so mobile progression is no longer stunted. -
Well, let's hope so (though I am still skeptical, with all the problems at TSMC). I do want a full-blown GK104 in a mobile environment, but we'll see in time.
BTW, I've read multiple sources citing the 7970M as 1536 shaders at 65W TDP. Is that even possible?
-
I think you guys need to chill out and wait for some real benchmarks before drawing any conclusions. AFAIK the 680M is coming out in June.
-
Seriously, in the absence of a good sampling of real-world benchmarks, you can't be sure of anything. If anything, Nvidia is notorious for marketing hype. Because of that, hard evidence is required.
-
I think everyone's agitated over the lack of information.