The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    nVidia 2015 mobile speculation thread

    Discussion in 'Gaming (Software and Graphics Cards)' started by Cloudfire, May 9, 2015.

  1. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Sorry to nitpick. 460 is not full GF104, only 485M is :p. But your point is taken.

    AMD rules the top (bottom) of that graph. Funny.
     
  2. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    ah well, all discussions will be fruitless anyway once that GPU is out and about :) we'll see what's up then....

    Sent from my Nexus 5 using Tapatalk
     
    Cloudfire likes this.
  3. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Then please explain how you understand what I said better than I do.

    Why is n=1 using Fermi? Because it was the last time Nvidia was stupid enough to cut down something past the knee of the perf/W curve. Nvidia learned its lesson. You're so caught up on context, rather than on logic or critical thinking, that you can only belittle and cry "flawed example" but can't explain why it's a bad example.

    Going where? Efficiency barely changes because performance and power consumption fall in line with each other. At the same time, 780 is not a huge cut from 780 Ti, certainly not to the degree a potential mobile GM200 would have to be.

    Even if the "640 less cores and 100+MHz lower clock" GM200 drops to ~200W, what about the 75W or so that still has to come off to make it a feasible mobile card? What, drop clocks to like 500 MHz and voltage accordingly? Oh wait, Nvidia already tried that with the 480M and it didn't work too well, did it :rolleyes:? It would perform worse than a fully enabled GM204, which plays once again into what n=1 said. Nvidia would be stupid to offer such a crippled GM200 chip as a mobile GPU; it would be a waste of big, expensive GM200 dies.
     
  4. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Yes your hypothetical 2432 core mobile GM200 clocked at 900 MHz core/5000 MHz memory could very well fit within 190W. But then what? How do you propose to bring that 190W down to 120W from there onwards?

    As for the GTX 680 vs 880M argument, you keep forgetting that the 880M is already heavily downclocked on the core and memory in order to reduce TDP from 195W to 120/125W, even using what are presumably the best-binned GK104 dies. In other words, most of the TDP-saving tricks used on the 880M have already been accounted for in your cut-down GM200 to bring it down to 190W. From 190W to 120W you have nothing to go by except "special binning". OK, sure, I'm not discounting that lower voltages will reduce power consumption, but you're talking about a 70W difference here, or in other words, a ~37% TDP reduction from binning alone.

    [Image: power consumption vs. voltage graph]

    The graph above is for illustrative purposes only, but drives home a key point: if you're already in the optimal zone on the efficiency curve, voltage has very little effect on power consumption. I mean, going from +50mV to about +140mV gave a whopping 20-30W increase in power consumption. That's 20-30W for 90mV of added voltage! On a 290X no less!
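As a sanity check on the binning argument, here is a rough back-of-the-envelope sketch (my own illustration, not from the post above), assuming dynamic power scales roughly as C·V²·f and an assumed ~1.05 V starting core voltage: how far would voltage alone have to drop to turn a ~190W part into a ~120W one at unchanged clocks?

```python
# Back-of-the-envelope sketch (illustrative only): dynamic GPU power is often
# approximated as P ~ C * V^2 * f. Holding capacitance and clocks fixed,
# estimate the voltage drop needed to go from 190W to 120W by undervolting alone.
P_HIGH = 190.0   # hypothetical cut-down GM200 board power (W)
P_LOW = 120.0    # mobile power target (W)
V_HIGH = 1.05    # assumed starting core voltage (V) -- an illustrative guess

v_low = V_HIGH * (P_LOW / P_HIGH) ** 0.5   # from P_low / P_high = (V_low / V_high)^2
print(f"Required core voltage: {v_low:.2f} V "
      f"(about {(1 - v_low / V_HIGH) * 100:.0f}% below {V_HIGH} V)")
# -> roughly 0.83 V, i.e. a ~20% voltage cut with no clock reduction at all,
# which is well beyond what binning headroom alone typically provides.
```

Under these assumptions the numbers back the point being made here: closing a 70W gap purely through voltage is a stretch.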
     
  5. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    My thinking is that the TDP value is a maximum allowed value; actual power consumption is going to be lower than that on average. From Guru3D's review of the 980 Ti: "That TDP is a maximum overall, and on average your GPU will not consume that amount of power." I think the fact that the Titan X and the 980 Ti have the same TDP does not mean that they consume the same power on average when gaming. By virtue of the fact that the 980 Ti has fewer cores than the Titan X, if they run at the same frequency & voltage then the 980 Ti has to consume less power. Cloudfire's previous deliberations on a cut-down GM200 seem possible to me, although I'm 50:50 on whether the 990M is going to be a full GM204 or a GM200.
     
    jaybee83 likes this.
  6. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Seriously, stop responding if you don't have a clue what you are talking about, mate. :p

    GTX 780 with 2304 cores runs HIGHER clocks than the 780 Ti with 2880 cores. I literally just showed you that the GTX 780 draws 50W less than the GTX 780 Ti.
    The power efficiency graph was a huge clue toward that as well: it keeps the same power efficiency as the GTX 780 Ti despite the lower core count, because power has gone down by a lot.
    The same power drop will be seen from a cut-down GM200. We are talking about a GM200 chip clocked lower than the GTX 980 Ti and with 640 fewer cores, which would mean a drop bigger than 50W.

    Which will put it below GTX 980 aka GM204.
    [Image: power consumption chart]

    Getting it down from 180W to 125W or something like that will be a piece of cake. It's done through lower voltage and binning. You can already see the GTX 680 in the chart above with similar power to what the cut-down GM200 would have, and that's the chip we got our GTX 880M from.

    I'm done wasting time discussing this. If you still don't believe it based on everything I've shown so far, then you just don't understand it.

    Power requirements are no problem for a mobile GPU.
     
  7. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Where did you get the idea that the 780 runs higher clocks than the 780 Ti? According to AnandTech (780, 780 Ti), of the two games they share in common, in BF3 the 780 ran 12 MHz faster, while in Hitman it actually ran 1 MHz slower. Also, the memory is clocked at 7 GHz on the 780 Ti vs 6 GHz on the 780, a pretty significant difference.

    Yeah we're just gonna have to agree to disagree then. If you think going from 180W to 125W can be achieved by undervolting alone, good luck.
     
  8. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    If I don't have a clue about what I'm talking about, why are you repeating what I just said? I said, and I quote, "efficiency barely changes [between 780 Ti and 780] because performance and power consumption fall in line with each other".

    http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/15

    http://www.anandtech.com/show/6973/nvidia-geforce-gtx-780-review/19

    Frequency drop (core and memory) and voltage drop (core and memory), that's where 780's power savings come from, not just the disabled units.

    Except the hypothetical GM200 has already been significantly cut down once, so to say "gee let's do it again it'll be a piece of cake" is a huge leap of faith. See, I don't have a problem with you saying 250W => 190W is possible. Binning, disabling units, dropping clocks and voltages. Sure. But you're telling me they're gonna do all that one more time (if it's even possible to squeeze any more) to go 190W => 130W and still end up with something which is better than a fully enabled GM204 and not some lobotomized 3-legged ass? Please...

    Oh sure, riding the high horse is always a good way to make a convincing argument. :rolleyes:
     
  9. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Battlefield 3
    GTX 780: 992 MHz
    780 Ti: 980 MHz

    Same story in Crysis.

    That resulted in a 50W drop because it has around 500 fewer cores.
    Now imagine the same core count reduction as those two again, just with GM200, but with 100MHz lower clock instead of 12MHz higher clock.

    Power will go down by a lot.

    It was literally done with the GTX 680 and GTX 880M, except that was from 195W down to 125W.
    Going from 180W to 125W is even easier. Or if they manage the same power reduction again, 180W down to 115W.

    Read what I'm writing, for god's sake. The whole concept of getting power and TDP down on mobile cards is using lower voltage. You should know that by now. It's not magic.
    It's logic.
     
    Last edited: Aug 14, 2015
    Robbo99999 likes this.
  10. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yep, to me GM200 is possible in mobile from the discussions we've had, it's just gonna come down to what makes the most commercial sense for NVidia as to whether it's gonna be GM200 or GM204. I'm still 50:50.
     
    jaybee83 and Cloudfire like this.
  11. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Me too mate.

    GM204 is smaller than GM200, so per die it might be cheaper for Nvidia to produce.
    However, they only offer 3072- and 2816-core GM200 chips. Between a full GM204 (2048 cores) and a full GM200 (3072 cores) there is a lot of room. What happens to the GM200 chips originally manufactured for the GTX 980 Ti and GTX Titan X if they have more damaged cores? Throw them away as a loss? Or find a product for them, like a GTX 990M?
    That might be financially smarter for them.

    We already covered that cooling-wise and overclocking-wise a GM200 would be the best for us gamers.
    Lets hope GM200 happens :)
     
    jaybee83 and Robbo99999 like this.
  12. O_and_N

    O_and_N Newbie

    Reputations:
    0
    Messages:
    2
    Likes Received:
    1
    Trophy Points:
    6
    Robbo99999 likes this.
  13. Cakefish

    Cakefish ¯\_(ツ)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    I don't care if it's 204 or 200. Either way it's a substantial upgrade. I'm much more concerned about pricing :/
     
  14. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    If you are looking for a workstation GPU it still beats the living crap out of a 980M though.

    [Image]
    [Image]

    As far as I know the GTX 980M doesn't have any FP64 cores disabled, so the fact that drivers can cause this huge an increase in performance is a bit shocking to me. Unless there are FP64 cores disabled in the GTX 980M but not the M5000 :confused:
     
  15. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Cloudfire is telling everyone to stop responding and asks us to speak logically... :D Well, here's some logic:

    Cloudfire is living in the clouds, ladies and gentlemen. :D He has surely lost his head in the GM200 world of irrelevance. Pascal is coming; you'd think he'd be more excited about that. Or is it out of pure stubbornness that he wastes time arguing over what is soon to be ancient technology?

    Why would anyone even buy a 990M? We know LESS about the 990M than we do about Pascal. We expect it to run HOTTER than Pascal. We also expect it to run SLOWER than Pascal. And odds are, it will be MORE EXPENSIVE than Pascal. Logic dictates you wait for Pascal.

    Stop wasting your energy fighting over this mysterious 990M.
     
    Last edited: Aug 14, 2015
  16. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Aww, come on now, we don't know much about Pascal yet, do we - at least in terms of mobile - so we may as well speculate on the next release, the Maxwell 990M.
     
  17. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    We'll have plenty of fun speculating over Pascal re-brands after the initial launch. If they can do wonders with 28nm (Kepler to Maxwell), I can't wait to see what is in store for Pascal. How is that not more exciting? Also, there has never been a mobile x90 series card before. And I can almost guarantee you there won't be one without a desktop variant. If you can't find anything about a desktop GTX 990, then odds are, there isn't going to be a GTX 990M, let alone a GM200 990M.
     
    Last edited: Aug 14, 2015
  18. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    That Pascal thread was made waaay too early :p
     
  19. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    There's like 2 or 3 of them. Make another!
     
  20. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    there's no written law that states all mobile GPUs have to have a desktop counterpart with an identical model number (or vice versa)

    Sent from my Nexus 5 using Tapatalk
     
    Cloudfire likes this.
  21. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Never said there was. That's just how it has been for years, so it's safe to assume the trend will continue.*

    *To you sticklers out there: Yes, there are a few exceptions. I am aware.
     
    Last edited: Aug 14, 2015
  22. moviemarketing

    moviemarketing Milk Drinker

    Reputations:
    1,036
    Messages:
    4,247
    Likes Received:
    881
    Trophy Points:
    181
    And most importantly, it will be available one year sooner.
     
    jaybee83 and Mr Najsman like this.
  23. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Imagine if it is soldered. The implications for the future of mobile GPUs will be huge.
     
  24. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    There's also power saved from clocking GDDR5 1GHz lower. Probably not much, but it's there. All the sites are measuring total board power, not just GPU TDP.

    Titan X to 980 Ti saved like what, 10W?

    [Image: power consumption chart]

    And for god's sake, you keep missing my point. Yes, it was done on the GTX 680 and 880M, because the 880M was heavily downclocked on the core and memory.

    Your cut-down GM200 would already need core and memory downclocking just to get to 180W, yet you seem to keep missing this fact. Unless you propose to go from 180W to 120W solely through undervolting, you'll have to downclock EVEN MORE to get to 120W. And that was my point: if you cut core speed in half just to get to 120W, your cut-down GM200 will perform worse than a full GM204.
     
  25. Cakefish

    Cakefish ¯\_(ツ)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    Depends on where you live. Finding a Maxwell MXM GPU today from a UK seller is like trying to find gold dust. Unless you import from abroad for stupidly expensive prices. So, here in the UK, soldered GPUs are already pretty much the only option.
     
  26. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Why are we even speculating about GM200 again? Why would we even want that instead of a GM204 with more SMMs unlocked? That's like going back in time and still choosing the GTX 480M over the GTX 485M, knowing everything we know now.

    And you guys are close to being too 'all in' with your passionate predictions. Always save yourself some leeway to be wrong, lest you lose future credibility.
     
    jaybee83 likes this.
  27. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Because the argument is you can create a GM200 with more SMMs than a full GM204 and still keep it within 120W, thus bringing better performance.
     
  28. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    See, you are not reading what I'm writing. Try to keep up.
    The GTX 980 Ti runs on average 60 MHz higher clocks than the GTX Titan X. The clocks made up for the core reduction. Please memorize that now. I'm getting tired of repeating myself.

    Nonsense.

    GTX 880M goes all the way up to 993 MHz.
    GTX 680 to 1058 MHz.

    That's not "heavily downclocked" lol

    GTX Titan X: 3072 cores at 1120 MHz on average. 243W peak power consumption.

    Cut-down GM200: 2432 cores at 1050 MHz. Easily down 70W, since the similarly clocked GTX 780 with a similar core drop (~500 fewer cores) saw a 50W reduction compared to the GTX 780 Ti. That means around 170-175W peak power consumption.
    That just happens to be identical to the GTX 680, which the GTX 880M was based on.

    GTX 990M: 2432 cores at 1000 MHz. The same clock reduction relative to the desktop chip it's based on as between the GTX 880M and the GTX 680.

    Sorry buddy, perfectly plausible with GM200 on mobile.


    How does GM204 have more SMMs? A full GM204 is 2048 cores. A theoretical GM200 would have 2432 cores. That's 3 SMMs more. If you have a big enough PSU, you can get way more performance out of 2432 lower-clocked cores than 2048 higher-clocked cores, since most of GM204's potential is already used up but GM200's isn't.

    Nobody is saying GM200 is 100% happening either. But the possibility is certainly there.
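For what it's worth, the arithmetic behind the estimate above can be laid out explicitly. This is a minimal first-order sketch (my own, not Cloudfire's calculation), scaling the quoted Titan X peak power linearly with core count and core clock and ignoring voltage, memory, and fixed board power:

```python
# First-order sketch (illustrative): scale peak board power linearly with
# core count and core clock, using the figures quoted in the post above.
titan_x   = {"cores": 3072, "clock_mhz": 1120, "peak_w": 243}  # as quoted above
cut_gm200 = {"cores": 2432, "clock_mhz": 1050}                 # hypothetical part

est_w = (titan_x["peak_w"]
         * cut_gm200["cores"] / titan_x["cores"]
         * cut_gm200["clock_mhz"] / titan_x["clock_mhz"])
print(f"Estimated peak power: {est_w:.0f} W")
# -> ~180 W, in the same ballpark as the 170-175W claim above, though a real
# figure would also depend on voltage, memory clocks, and fixed board power.
```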
     
    Last edited: Aug 14, 2015
    Robbo99999 likes this.
  29. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    880M max boost 993 MHz
    680 max boost 1111 MHz

    880M memory 1250 MHz
    680 memory 1500 MHz
     
  30. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Plausible: "Seeming reasonable or probable." Things that are plausible could easily happen. For example, a woman becoming President is very plausible.
    Possible: "Able to be done; within the power or capacity of someone or something." You can use possible to talk about anything that might happen.

    Let's not confuse the two. Pretty much anything is possible. Not everything is plausible.

    This is for the record, so that any articles referring to this thread as a source may have the correct meaning.
     
  31. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    On average the 980 Ti runs about 30 MHz faster than the Titan X across 10 games (1156.5 MHz vs 1126 MHz). Feel free to check my math here. The 60 MHz figure is a worst-case scenario.

    OK, so taking Crysis 3's example of a 64 MHz speed advantage for the 980 Ti, look at the power consumption:

    [Image: Crysis 3 power consumption chart]

    A measly 4W savings, big whoop. So what does this mean? A 64 MHz bump in clock speed is nearly enough to make up for the loss of 256 cores. So if we extrapolate, your 2432-core GM200 is equivalent to downclocking a 2816-core GM200 by 100 MHz. Do you honestly believe a 980 Ti that's downclocked by 100 MHz will suddenly become a 180W card?

    Yes, this is all before binning and voltage scaling, before you point out the obvious. But the point is, with GM200, whatever TDP you lose from cutting cores is easily gained back by a minor bump in clock speeds. In other words, losing cores doesn't seem to have as great an impact on TDP as it did with Kepler.

    Your 780 vs 780 Ti comparison is also flawed because the 780 Ti uses faster-clocked memory. Comparing the 780 against the original Titan is much better, as the nominal clocks are much closer. In fact, AnandTech found that in BF3 they run at the exact same core clock! Now let's look at the power consumption.

    [Image: power consumption chart]

    So the loss of 2 SMX, or 384 cores, in the 780 led to a 24W reduction in TDP. Coincidentally, your hypothetical 2432-core GM200 would also be 384 cores short of a 980 Ti. Obviously you can't directly compare the numbers, but I think this sufficiently shows that cutting cores doesn't lead to as dramatic a TDP decrease as you'd think.

    Why don't you ask Ethrem and some AW18 owners if their 880Ms could ever sustain full boost at 993 MHz lol. IIRC even the base 954 MHz was hard to maintain without overheating like crazy. And as octiceps pointed out, the 680 boosts all the way up to 1110 MHz (and has no issues keeping max boost, btw).

    So that's about a 200 MHz downclock on the core, plus another 1 GHz (effective) downclock on memory. Pretty significant, I'd argue.

    As for the rest of your post, I'm just going to be repeating myself again, so I won't bother.
     
    Last edited: Aug 14, 2015
  32. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Also keep in mind 880M uses 1.018V @ 993 MHz while 680 uses 1.175V @ 1111 MHz, also a significant cutback. At its base clock of 1006 MHz, 680 uses 1.062V which is still ~4 voltage bins higher than 880M at 993 MHz even though it is only one frequency bin (13 MHz) higher.
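To put rough numbers on how much that voltage and clock cutback buys, here is a quick sketch (my own estimate, not octiceps's), using the clocks and voltages quoted above, the 195W GTX 680 figure cited earlier in the thread, and the usual first-order approximation that dynamic power scales with V²·f:

```python
# Rough sketch (illustrative): how much of the GTX 680 -> 880M power reduction
# do the quoted voltage and clock cuts alone account for, assuming P ~ V^2 * f?
p_680 = 195.0                  # desktop GTX 680 TDP (W), as cited earlier in the thread
v_680, f_680 = 1.175, 1111.0   # volts, MHz at max boost (per the post above)
v_880m, f_880m = 1.018, 993.0  # volts, MHz at max boost (per the post above)

scale = (v_880m / v_680) ** 2 * (f_880m / f_680)
print(f"Scale factor {scale:.2f} -> about {p_680 * scale:.0f} W")
# -> ~0.67x, or roughly 130 W, which is already close to the 880M's 120-125W
# rating -- most of the saving falls out of the voltage and clock cut by itself.
```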
     
    Last edited: Aug 14, 2015
  33. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Worst-case scenario is what peak power consumption is about.

    What's your point here? I just showed you that we got a 50W drop just by going down ~500 cores at a similar clock (GTX 780 Ti > GTX 780). Dropping the same ~500 cores (GTX Titan X > cut-down GM200) but also going from 1215 MHz to 1050 MHz will be a much bigger drop, close to GTX 680 power consumption if not lower.


    I've read about many people here with Alienwares and the GTX 880M who had no problems cooling it and no throttling.

    Again, it's the worst-case scenario that is the basis for peak power consumption.
    1110 MHz down to 993 MHz is only a 117 MHz drop, which is not the massive downclock you claimed in your previous post.

    Even 500 fewer cores and a 115 MHz drop will be significant with GM200. It's a massive die with many cores.
     
  34. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I'm not saying "GM204 has more SMMs than GM200", I'm asking why we'd want a cut-down GM200 over a GM204 with more SMMs unlocked than the current GTX 980M has enabled.

    It makes no sense to jump from a 1536-core GM204 card to GM200 when you have so many GM204 cores left to exploit. Nvidia has literally never done this.

    I love tech discussion, but it's implausible enough to be a worthless discussion.
     
  35. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231

    Can you please stop using the 780 vs 780 Ti comparison? You have to account for power draw from the faster GDDR5 chips so it's not an apples to apples comparison. I just showed you right above in my post that 780 vs Titan at the exact same clockspeed differs by about 24W TDP, so that's how much you save by cutting out 384 Kepler cores.

    As for your 980 vs Titan X argument, again please calculate the average boost clock, it's 1119 MHz for Titan X vs 1194 MHz for 980, not 1215 vs 1050 as you claim. And again let's look at the graph here:

    [Image: power consumption chart]

    Titan X already runs 64 MHz slower than 980 in Crysis 3 according to the link you posted, yet simply due to having 1024 extra cores it uses 91W more power. Assuming linear scaling, this means each core contributes 0.0888W. Cutting out 640 cores to make a 2432 core GM200 that's already downclocked by 64 MHz relative to 980 will still use 34W more, or basically a 200W card give or take.

    Does that make it clear now? Your cut down GM200 that's clocked 64 MHz slower than a 980 will still be a 200W card. How much downclocking and voltage binning do you think you'll need to get to 120W?
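The extrapolation in the post above is easy to reproduce. A minimal sketch of the same arithmetic (illustrative, using the 91W gap, 1024-core difference, and 640-core cut stated above, and assuming strictly linear per-core scaling as the post does):

```python
# Reproduce the linear per-core extrapolation from the post above (illustrative).
extra_power_w = 91.0        # Titan X drew 91W more than the GTX 980 (per the post)
extra_cores = 3072 - 2048   # Titan X has 1024 more cores than a full GM204

w_per_core = extra_power_w / extra_cores   # assumes strictly linear scaling
cut_cores = 640                            # 3072 -> 2432 hypothetical GM200
remaining_gap_w = extra_power_w - cut_cores * w_per_core
print(f"{w_per_core:.4f} W per core; cut-down GM200 still ~{remaining_gap_w:.0f} W above a GTX 980")
# -> ~0.0889 W per core and ~34 W above the 980 in that test, which is where
# the "roughly a 200W card" figure above comes from.
```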
     
    Last edited: Aug 14, 2015
  36. JinKizuite

    JinKizuite Notebook Consultant

    Reputations:
    5
    Messages:
    123
    Likes Received:
    58
    Trophy Points:
    41
    I only hope that prices on the rest of the stuff, like the 970s and 980s, go down when the 990M releases.
     
  37. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    After all this, they release a statement saying they've decided to skip the 990 and 990M because Pascal is ahead of schedule.

    lol
     
    jaybee83 likes this.
  38. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Or they say Pascal is delayed, so in the meantime we'll get Titan XYZ on desktop, while mobile will get 985M with 1792 cores. :D
     
  39. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    no need for further cutdown, if the info we have is true @ variable TDP 100 - 185W ;)

    Sent from my Nexus 5 using Tapatalk
     
  40. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    So some 990Ms will be faster than others? Oh, I can totally see that working out well for Nvidia. First there was the 880M, then there was GTX 970 Tree Fiddy, and now we have the 990M "variable TDP". :rolleyes:

    God I swear this forum is slowly but surely going down the same rabbit hole of stupidity as most desktop forums...
     
  41. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    I almost wonder if that's a tongue-in-cheek way of saying it will throttle like crazy.

    I mean technically all mobile GPUs are variable TDP when they throttle trollololol
     
    J.Dre likes this.
  42. Cakefish

    Cakefish ¯\_(ツ)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    Seriously, I see this variable TDP as a marketing disaster. I find it far, far more unlikely than this whole GM200 debate (which, for the record, I am not really sure which side I'm on, though I tend to favour the idea of a 2048-core GM204).

    It makes absolutely no sense to give a product that consumes 100W the exact same name as a product that draws almost twice as much. None whatsoever.

    I'm calling it: that part of the rumour is false. Either there are several different products with distinct names to differentiate their performance classes, or it's false altogether and there will be only one GPU with a significantly smaller TDP range.
     
    Cloudfire and Robbo99999 like this.
  43. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yep, I think so too, it doesn't make sense to have one card with such a wide range of TDPs.
     
    Cloudfire likes this.
  44. Phase

    Phase Notebook Evangelist

    Reputations:
    7
    Messages:
    483
    Likes Received:
    100
    Trophy Points:
    56
    what's the big deal with dropping down in nm? Haswell to Broadwell wasn't that big of a leap. Heck, even Haswell to Skylake isn't that big of a leap. So what makes it so different in the GPU world vs the CPU world?
     
  45. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    This is the difference:

    [Image]

    [Image]

    Nvidia invests in die space for its GPUs, as does Intel for its GPUs, but Intel does not for its CPUs. Intel dies get smaller each shrink and furthermore relative die space allocated for the iGPU gets bigger each generation. It's a double whammy for the CPU portion of the chip.

    In contrast, Nvidia die sizes don't get smaller when the process shrinks, if anything they get bigger. That's a metric butt ton more transistors and functional units. As a result GPUs grow ever more powerful and complex.

    And because graphics workloads are infinitely parallelizable (we can always increase resolution/AA, do stereo rendering for 3D and VR, add fancier shaders, etc.), dropping a node to increase transistor budget on the same or bigger die size always translates into hefty performance increases.

    CPUs aren't so lucky. Because many workloads are still serialized (like gaming for example, thanks to DirectX), and because single-threaded performance increase has stalled due to hitting the frequency wall, just building bigger CPUs with more cores and cache doesn't necessarily translate to significant real-world benefits to justify the cost.
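The serial-bottleneck point can be made concrete with Amdahl's law (my framing, not the poster's): if a fixed fraction of the work is serial, adding cores quickly stops helping, whereas an almost perfectly parallel graphics-style workload keeps scaling.

```python
# Amdahl's law sketch (illustrative): speedup vs. core count for a workload
# that is half serial (a game's CPU-side work, say) versus one that is almost
# perfectly parallel (a graphics-style workload).
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 1024):
    cpu_like = amdahl_speedup(0.50, cores)   # assume 50% of the work is serial
    gpu_like = amdahl_speedup(0.999, cores)  # assume 0.1% of the work is serial
    print(f"{cores:>4} cores: CPU-like {cpu_like:5.2f}x, GPU-like {gpu_like:7.1f}x")
# -> the half-serial workload never gets past 2x no matter how many cores you
#    add, while the nearly parallel one keeps scaling to ~500x at 1024 cores.
```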
     
    Last edited: Aug 14, 2015
    Robbo99999 likes this.
  46. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Architecturally, the CPU is composed of only a few cores with a lot of cache memory, designed to handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously. That ability to process thousands of threads can accelerate some software by 100x over a CPU alone. What's more, the GPU achieves this acceleration while being more power-efficient and cost-efficient than a CPU. Architecture shrinks let them push this further. Some say that Moore's Law stopped at 28nm because 28nm provides the most transistors per dollar, and companies have no incentive to invest heavily beyond that. I don't necessarily believe that. :p

    That's pretty much "the big deal." Hope this helps you understand or see why. ;)
     
  47. W4rrioR

    W4rrioR Notebook Enthusiast

    Reputations:
    0
    Messages:
    11
    Likes Received:
    4
    Trophy Points:
    6
    Any sources or more info about the G751JY's successor? I would love to know more.
     
  48. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    don't shoot the messenger here. Besides, I for one am not hellbent on jumping down other members' throats over defending an opinion on unreleased/unspecced hardware :D

    Sent from my Nexus 5 using Tapatalk
     
    Mr Najsman and Robbo99999 like this.
  49. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    I am dismayed at NBR's credulity, that's all. I thought we knew better than this.
     
  50. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    in the end, we shouldn't forget that it's all just fun and games to speculate :) we're all on the same (mobile high-performance) side here, after all!

    Sent from my Nexus 5 using Tapatalk
     