The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Buzz: GTX680M first benchmark in M17xR4!

    Discussion in 'Gaming (Software and Graphics Cards)' started by pau1ow, Apr 5, 2012.

  1. pau1ow

    pau1ow Notebook Deity

    Reputations:
    1,336
    Messages:
    1,181
    Likes Received:
    84
    Trophy Points:
    66
  2. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    Why are there two scores for the GTX 680M? At the expected 70-75 W it would be a great performance increase over the 580M, virtually 50% faster while drawing 25 W less.

    Kepler looks like a 75-100% performance-per-watt improvement over Fermi on most benchmarks.

    edit: Maybe it's just a 50% improvement at the same power consumption as the 580M.
     
  3. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    'The GTX 680M's measured wattage was up to 100 W, a huge gap from the HD 7970M's 65 W; whether the final retail card will match is unknown.'

    Is this for real? Maybe AMD were tricking us all if the 7970M really is 65 W. I wonder what the performance is, though.
     
  4. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    Google Translate

    Interesting, it shows 3DMark 11 scores for the AMD 7000M and GT 600M series.
     
  5. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    dude, this is about the most misleading speculation ever... I wanna bet that ALL of it is wrong.
     
  6. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    Please make it true. Especially Radeon 7950/7970M, it'd be awesome.
     
  7. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    It probably is, as I expect AMD to be beaten badly this time. But if it's not, and the power consumption really is that low, I'm getting an AMD card. Maybe the notebook GPUs got better revisions.
     
  8. awakeN

    awakeN Notebook Deity

    Reputations:
    616
    Messages:
    1,067
    Likes Received:
    4
    Trophy Points:
    56
    This is going to be one hell of an expensive card Q.Q
     
  9. R3d

    R3d Notebook Virtuoso

    Reputations:
    1,515
    Messages:
    2,382
    Likes Received:
    60
    Trophy Points:
    66
    I'd be pretty happy if the 7970m had 580m performance at 65 watt. That's a massive increase in perf/watt.. Probably has great OC potential too.
     
  10. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    According to the link I was replying to, AMD wins.

    That wouldn't be too surprising, since AMD's Southern Islands is apparently more efficient than Kepler, at least if we compare Pitcairn to GK104 (Tahiti should be left out, since it carries all sorts of GPGPU hardware that is missing from Nvidia's chip). And efficiency is what matters most in the mobile market.

    But 55 W for the Radeon 7950M (!!), and with 1280 cores, not the 1024 many believed! If that's correct, the 7850M/7870M might have 1024 cores and a TDP low enough to go into some multimedia notebooks.

    Looks too good to be true, I say. Let us wait.
     
  11. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    My current 9600M GT is a 23 W card, so if I can get a 25-27.5 W card this time, probably something called a 7770M, then by the logic that the 7950M scores 4200 in 3DMark11, this one could score around 2100.

    Another line of reasoning: the 7970M performs the same as the 680M in 3DMark11 (a P4500-P4700 score) at 65-70 W versus 100 W. That would be roughly a 42% performance-per-watt improvement, though it depends on whether the TDP reflects actual power consumption.

    So GTX 660M performance at 45 W, scoring let's say 2300 in 3DMark11, would take about 30 W with a 42% improvement. Now that would be great for gaming.

    I can see a notebook with a 3612QM and the estimated 7770M drawing 82 W, a 50-60% improvement over AMD's current power-efficient cards.

    Review HP Pavilion dv7-6101eg Notebook - Notebookcheck.net Reviews
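    The perf/watt extrapolation above can be sketched in a few lines. This is only an illustration of the poster's reasoning: the scores and wattages are forum speculation, not measurements.

```python
# Rough perf/watt comparison using the speculative figures above.
# All scores and wattages are the poster's guesses, not measurements.

def perf_per_watt(score: float, watts: float) -> float:
    """3DMark11 points per watt."""
    return score / watts

score = 4600                          # midpoint of the quoted P4500-P4700
gtx_680m = perf_per_watt(score, 100)  # 680M at an assumed 100 W
hd_7970m = perf_per_watt(score, 70)   # 7970M at an assumed 70 W

improvement = hd_7970m / gtx_680m - 1
print(f"Speculated 7970M perf/watt advantage: {improvement:.0%}")  # ~43%
```

    With equal scores the ratio reduces to 100 W / 70 W, i.e. the claimed ~42-43% figure follows entirely from the assumed wattages.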
     
  12. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    7950m won't score 4.2k in 3dmark11
     
  13. kolias

    kolias Notebook Evangelist

    Reputations:
    251
    Messages:
    629
    Likes Received:
    88
    Trophy Points:
    41
    Why not? I hope all this news is true.
     
  14. PaKii94

    PaKii94 Notebook Virtuoso

    Reputations:
    211
    Messages:
    2,376
    Likes Received:
    4
    Trophy Points:
    56
    Wait... so I'm a little confused. Were there any real benchmarks done, or is all this speculation?
     
  15. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Well, let's see: an HD 7870 scores 6500 points in 3DMark11 with a TDP of 140 W (175 W board power minus the 20% overdrive overhead), and an HD 7850 scores 5200 with a TDP of 104 W (130 W board power minus the 20% PowerTune overhead).

    So really, I think those scores are too low.

    If we get a 7850 core, then I expect a score of at least 5000; with a 7870 core I would expect 5500.

    There is no excuse for the 680M and HD 7970M not to TOAST even an overclocked 580M/675M. I can reach over 4000 with my 570M when overclocked :/
     
  16. KernalPanic

    KernalPanic White Knight

    Reputations:
    2,125
    Messages:
    1,934
    Likes Received:
    130
    Trophy Points:
    81
    The benchmarks screenshotted here for the Nvidia cards are likely true, and I'd venture they will get even better as Nvidia's drivers are optimized for them.

    The AMD stats, numbers, comparisons, and scores are most likely false, or just guesses.

    AMD's 7xxx series is a step and a half slower than Nvidia's Kepler, and desktop Kepler is even better in performance per watt than AMD's offerings.

    GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation : GeForce GTX 680: The Card And Cooling


    Unless Kepler is just too power-hungry for a laptop and Nvidia must make staggering cuts to fit the laptop form factor, the laptop power ranking will be similar to the desktop's.

    Note, AMD can still win the "best value choice" in a laptop with some aggressive pricing.

    Until some real benchmarks for the AMD mobile 7xxx series are published, nobody knows the whole story.

    Funny thing, this happened last time... AMD fanatics were talking about how badly the 6990M would beat the 580M, and the real result was that the 580M was still better in almost every case. The 6990M was still a great GPU because it easily won on performance per cost.
     
  17. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    If the 680M scores 4500 it will be a joke of a card.
     
  18. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Things might turn around with Kepler. The GTX 680 is cheaper than the 7970, whereas the 6970 was $140ish cheaper than the 580 at release. That may carry over to notebooks this time around. After all, Kepler is able to offer the same performance with a smaller die.
     
  19. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Why are you talking about the 7970? We won't get that core in a notebook.
     
  20. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Because Kepler may in general be cheaper than GCN, not just the 680. That could give us a competitive price war in notebook GPUs as well, and the whole "AMD offers almost the same performance for a lower price" argument would not hold this time around.

    Wishful thinking, but it might happen :)
     
  21. GeoCake

    GeoCake http://ted.ph

    Reputations:
    1,491
    Messages:
    1,232
    Likes Received:
    2
    Trophy Points:
    56
    Wut?

    The 580M scores 3500 at stock and 4500 easily when overclocked. So if a 680M scores 4500 at stock it will score 5500 easily when overclocked... How is that a joke?

    Your 570M is basically a 580M with some CUDA cores disabled :/
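    The 3500 → 4500 → 5500 extrapolation above is additive; scaling by the same overclock ratio instead gives a slightly higher number. A quick sketch, using only the forum estimates quoted in the post (not measured scores):

```python
# Apply the 580M's stock-to-overclock scaling to a hypothetical 680M
# stock score. All numbers are forum estimates, not measurements.
stock_580m, oc_580m = 3500, 4500
oc_ratio = oc_580m / stock_580m   # ~1.29x from overclocking

stock_680m = 4500
oc_680m = stock_680m * oc_ratio
print(f"680M at the same OC ratio: ~{oc_680m:.0f}")  # ~5786
```

    Either way of extrapolating, additive or proportional, lands the overclocked 680M estimate at 5500 or above.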
     
  22. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The GTX 675M was tested with a faster CPU, so the score comparison is flawed without the GPU sub-scores.
    Please don't post this junk in any more threads.

    Are you guys even paying attention? The fact that the 7970M is listed with 1536 shaders means you should disregard this link. Unless you're going to try and sell me on it coming from the oft-fabled AMD 7890.
     
  23. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    No it's not. Stop spilling crap like that without facts to back it up. You can't conclude that Kepler is more efficient than GCN based on the 7970 vs the 680.

    One might assume Tahiti loses to GK104 on die area and perf/watt notably because (1) it's a compute-oriented chip while its competitor is purely gaming-oriented (I've seen numbers showing it struggling against a Pitcairn in certain compute tasks), and (2) its 384-bit memory bus. On a pure performance scale it doesn't even lose by a huge margin.

    And by the way, Pitcairn is the king of performance per watt. You simply can't draw conclusions about both companies' architectures and their efficiency from such a simple argument.

    There are no grounds for claiming that AMD couldn't have pulled off a chip that was basically a Pitcairn scaled up to ~300 mm², clocked at >= 1 GHz, and won on every metric, or at least matched their competitor, had they chosen to do so.


    They're almost on par performance-wise, and the 6990M doesn't cause throttling issues, or at least isn't as prone to them. And like you said, it was much cheaper. The only "funny" thing is how many ignorant people still managed to convince themselves that they were making the right choice by going with the GTX 580M when they could have saved ~$200 for a similar piece of hardware minus some serious issues.
     
  24. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Seriously? You should just undervolt a 6970M or 6990M then.

    The 7970M merely tying the 580M and 6990M would be an absolute joke. It's technologically implausible to go from 40 nm to 28 nm and only maintain the same performance. AMD would have to cut the 7850 down by about 40% to achieve that feat of failure, and why would they do that to a card that is already just barely 100 W?
     
  25. Zero989

    Zero989 Notebook Virtuoso

    Reputations:
    910
    Messages:
    2,836
    Likes Received:
    583
    Trophy Points:
    131
    That's almost too good to be true: the 7970M getting 680M performance at 30-35 watts less.

    That would mean they could make a 2000-shader 7990M. Doesn't sound right to me. It has to be 100 W, or else 1408 shaders and a 4100-ish 3DMark 2011 score.

    And I already knew the 7970M would beat the 675M easily. I've said this for weeks. :p
     
  26. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    It's a joke because it would not count as a new generation if it offered a tiny step up in performance for the same power consumption.

    Combine that with the fact it will already be overclocking itself to get that score due to their new tech and it would just be sad.

    Also please don't point out the painfully obvious tech details about a card I own.
     
  27. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Rumors say 7970M = 675M though.
    If your scenario happens it would mean 50% faster GPU and 10W lower power consumption than 6970M.

    That would have been insane :D
     
  28. YodaGoneMad

    YodaGoneMad Notebook Deity

    Reputations:
    555
    Messages:
    1,382
    Likes Received:
    12
    Trophy Points:
    56
    These scores are basically what I expected, a small but evolutionary jump in performance. What is a bit worrying is if those scores are using the built-in overclocking, then that could make them pretty disappointing.

    Several people have assumed that the 680m will OC into the 5000+ range, but what if these scores are already at near the max stable OC because of the new automatic overclocking?
     
  29. misterhobbs

    misterhobbs Notebook Evangelist

    Reputations:
    715
    Messages:
    591
    Likes Received:
    9
    Trophy Points:
    31
    Ok, so I'm pretty much a newb at all of this, so forgive me if I'm wrong, but it seems like a lot of what's posted is hearsay or biased opinion. How can we truly judge the winner without real-world benchmarks (i.e. fps, load times, etc.)? It seems like any Joe Schmoe can polish a turd, take a picture of it, edit it in Photoshop, and post it somewhere advertising it as a fancy chocolate truffle.

    Also, how do synthetic benchmark scores (like the PCMark scores) actually translate to real-world performance? Are these scores just made-up units, or are they real, physical units of measure like a watt or joule?

    I'm just trying to separate what is real, or at least speculation based on sound research, from opinionated guesses as I prepare to buy a new system.
     
  30. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    You've got to be joking? :confused: Seriously, here is the 30% performance jump you guys were looking for (definitely not from your miracle 7970M, which was supposed to be the cheapest, lowest-TDP, yet craziest performer on the planet). Just lower your impossible expectations; tech doesn't make the jumps you suggested, and 30% is already good for a first attempt. They didn't start their business with the Kepler architecture, right? They will perfect it in time. Give our beloved tech companies a break, people; if you are so disappointed, go try to do better than them.
     
  31. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Why should a 30% performance increase be acceptable when the new process is offering 50% increases at the same power consumption?

    Seriously, you don't need to defend these companies you know, they can look after themselves.
     
  32. Devenox

    Devenox Notebook Evangelist

    Reputations:
    65
    Messages:
    417
    Likes Received:
    9
    Trophy Points:
    31
    Because evolution is slowing down.
    Three or four years ago every new generation brought something like 60% more performance; now it's more like 30%.
    I think it's logical: they have reached the point where they cannot raise the TDP any higher than 100 W. (Three years ago, 100 W in a mobile computer was unthinkable.)

    Anyway, I'm looking for a good-performing GPU around 30-35 W.
    The GTX 660M seems interesting. (I still want a thin and light notebook.)
     
  33. oan001

    oan001 Notebook Evangelist

    Reputations:
    256
    Messages:
    482
    Likes Received:
    0
    Trophy Points:
    30
    In this thread: a lot of people who know it all bickering.

    Sent from my HTC Desire using Tapatalk
     
  34. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    The 6870 and 7870 have very similar power consumption, the 7870 is 50% faster in GPU-limited titles, and these happen to be the cores the mobile chips will be based on.

    That's where I get my 50% from. It is also why I keep saying AT THE SAME POWER CONSUMPTION: I appreciate that there is a mobile power ceiling, and I already took that into account.

    This is not like the 5xxx -> 6xxx generation; we have had a new process technology, and it has brought power consumption down.

    Well, your post is not constructive in any way, shape, or form; congratulations. Also, posting on the go, because adding nothing to the thread is just that important.
     
  35. oan001

    oan001 Notebook Evangelist

    Reputations:
    256
    Messages:
    482
    Likes Received:
    0
    Trophy Points:
    30
    Relax dude, it was a joke.

    Sorry for posting from my phone though...
     
  36. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
    Where are people getting 30% from? Kepler easily looks like 70-80% better performance per watt, maybe 2x in some things like 3DMark11, and the new AMD mobile 7000 series looks to be 100% better performance per watt, if AMD's TDP numbers are correct and power consumption is actually that low.
     
  37. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Dude, please refer to the table below (they are 3DMark03 and 06 scores, unfortunately, but those are the only common benchmarks):

                      3DMark03 / 3DMark06
    GTX Go 7950  ->   21k      / -
    GTX 8800M    ->   30k      / 9k
    GTX 9800M    ->   32k      / 10k
    GTX 285M     ->   37k      / 13k
    GTX 480M     ->   -        / 15.5k
    GTX 485M     ->   -        / 19k
    GTX 580M     ->   -        / 20.5k

    Except for the jump from the 285M to the 485M (the Fermi difference, which is not exactly right either, because the 485M is equal to the 580M; we should really be comparing the 480M), there is NO jump of more than 30% (well, 40% for the Go 7950 to the 8800M, but that was a complete change in architecture where GPUs became much more parallelized, so that is expected). Please let's be a little more realistic. And yeah, I think both Nvidia and AMD are doing one hell of a job.
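    The generation-to-generation jumps being argued over can be computed directly from the rough 3DMark06 figures quoted above (a sketch only; the scores themselves are the poster's approximations):

```python
# Percentage jump between successive high-end mobile GPUs, using the
# rough 3DMark06 scores (in thousands) quoted in the table above.
scores = {
    "GTX 8800M": 9.0,
    "GTX 9800M": 10.0,
    "GTX 285M": 13.0,
    "GTX 480M": 15.5,
    "GTX 485M": 19.0,
    "GTX 580M": 20.5,
}

names = list(scores)
for prev, cur in zip(names, names[1:]):
    jump = scores[cur] / scores[prev] - 1
    print(f"{prev} -> {cur}: {jump:+.0%}")
# The largest single step in this list is 9800M -> 285M at +30%.
```

    On these numbers, no single generation step exceeds ~30%, which is the pattern the post is pointing at; the counterargument in the thread is about which chips count as genuine new generations.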
     
  38. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    *Sigh* Really? Really, really?

    You're going to quote 3DMark at me? In the way you have?

    Cut out the 480M and 485M, cut out the 8800M and 9800M. All of these are architecture rebrands, optical shrinks, or mongrel chips.

    We would get something along the lines of:

    6k -> 13k -> 20.5k

    Thanks for proving my point. We have a new chip design and a new process; we should always see a large gain when these two are combined.
     
  39. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    WHAT? THEN CUT OUT THE 680M, THE FIRST KEPLER? Dude, wow...

    Oh, btw, FYI: the 480M and 485M are NOT architecture rebrands (and neither is the 8800M); the 480M was the first high-end Fermi. *facepalm*
     
  40. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Oh, when will people learn to actually read what I write carefully rather than assuming things? Read your quote properly.

    Architecture rebrands, optical shrinks, or MONGREL CHIPS. The 480M was a travesty that should never have existed.

    If it makes you happy we could go with the first chip of each architecture; it matters not, so long as you stick with it:

    6k -> 9k -> 19k

    Which does expose the fact that we skipped an arch, so maybe that is a better way of ordering it, I suppose.

    But feel free to keep proving my point.
     
  41. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Dude, I am done arguing with you because you are full of contradictions. If you want to skip three years for a benchmark, then wait until 2014, since your lovely 19k 485M result was taken in January 2011. Don't worry, two years from now there will probably be a 150% improvement thanks to the success of Kepler (and Maxwell), on par with your fascinating standards...
     
  42. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    You're done because you finally see that you are wrong.

    Nvidia have finally woken up, they are no longer using the same arch over 3 "generations" of mobile chips. This is a new arch on a new process and significant gains should be seen.

    Feel free to ask any other questions over the past chips and find out why this is the case. I'd be happy to fill any gaps in your knowledge in this area.
     
  43. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    very mature, congratulations.
     
  44. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Lol, if that's all you can say then I was right, you were done.

    I'd rather believe that you were simply ignorant of the data there rather than wilfully trying to twist it to prove your point.

    I mean, you can't seriously be saying that the performance jump from the 9800M to the 280M, an optical shrink of the same arch, has ANYTHING to do with the 580M to 680M, which goes from 40 nm to 28 nm with a new arch? I gave you more credit than that.
     
  45. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Anyway dude, I will say it one last time, because I saw your previous posts and thought you were a person with hardware knowledge, but then I am done.

    You are in contradiction (mathematically speaking) because you are comparing the 7950 to the 8800, then the 8800 to the 485, then the 485 (which is essentially a 580) to the 680. First of all, if you want to compare the 8800 to the 485, then you must compare the 470/480 to the 680 (not the 485; it is basically a 580, with five months in between). Otherwise, if you want to compare the last high-end card of the previous arch to the first high-end card of the new arch, you won't find this 50% performance improvement (the closest you get is the 7950 to the 8800, at above 40%).

    Anyway, I am also expecting good results from the new arch, BUT as soon as I heard TSMC was screwing things up last summer, I bought my machine; otherwise I was waiting for Kepler to buy a new one. Right now it looks like we will see GTX 560 Ti performance (at stock :eek:) in a laptop, which is wonderful news, especially from a crippled 28 nm production run, am I right? (I am sure nobody can argue against that.) Anyway, let's end this thing; it went on far too long. Sorry for calling your standards fascinating, but I thought they were a little too high.
     
  46. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    The problem is that you are looking at the mobile market in isolation.

    Nvidia created the desktop GTX 280, which could not be cut down into smaller chunks, so the mobile market never saw it; they then released the 480M because the big chips were simply not yielding. Neither of these is the case now, so mobile progression is no longer stunted.
     
  47. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Well, let's hope so (though I am still skeptical given all the problems at TSMC). I do want a full-blown GK104 in a mobile environment, but we'll see in time :)

    Btw, I've read multiple sources citing the 7970M as 1536 shaders at a 65 W TDP :eek: Is that even possible?
     
  48. oan001

    oan001 Notebook Evangelist

    Reputations:
    256
    Messages:
    482
    Likes Received:
    0
    Trophy Points:
    30
    I think you guys need to chill out and wait for some real benchmarks before drawing any conclusions. AFAIK the 680M is coming out in June.
     
  49. 2.0

    2.0 Former NBR Macro-Mod®

    Reputations:
    13,368
    Messages:
    7,741
    Likes Received:
    1,022
    Trophy Points:
    331
    Seriously, in the absence of a good sampling of real world benchmarks, you can't be sure of anything. If anything, Nvidia is notorious for marketing hype. Because of that, hard evidence is required.
     
  50. TwinTurbo

    TwinTurbo Notebook Consultant

    Reputations:
    92
    Messages:
    148
    Likes Received:
    0
    Trophy Points:
    30
    I think everyone's agitated over the lack of information. :D
     