The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's

    Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.

  1. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yeah, I mean I don't think we should get excited about bandwidth increases in themselves; we just need enough of an increase to support Pascal's greater core computational ability. Memory bandwidth is just a facilitator for the main party - as long as there is enough, that's all good.
     
    i_pk_pjers_i likes this.
  2. aqnb

    aqnb Notebook Evangelist

    Reputations:
    433
    Messages:
    578
    Likes Received:
    648
    Trophy Points:
    106
    Bandwidth *is* a bottleneck for a lot of cool graphics stuff that's still just in the R&D pipeline (the whole deferred pipeline is heavily bottlenecked by bandwidth, and virtually all modern AAA game engines are deferred).

    These days you basically have plenty of shader computation available but are always scraping for bandwidth. A lot of techniques for getting prettier graphics are known but not used due to poor performance caused by insufficient bandwidth. Extra bandwidth will allow better antialiasing, better global illumination, better volumetrics, better particle effects, better materials.

    The thing is, since current GPUs are pretty weak bandwidth-wise, nobody in their right mind would dare to release a game/engine that requires a lot of bandwidth. You always try to match rendering techniques to the hardware that's common among end users.

    But things will eventually change once HBM2 becomes more common (starting with Pascal and Arctic Islands). It will take time; it's not enough that some hardware is released, it needs to reach enough market penetration that it becomes worthwhile to deploy bandwidth-heavy rendering techniques.

    But once that happens, there will be a massive performance and visual quality difference between GPUs that do have crazy bandwidth (Pascal, Arctic Islands and newer) and ones that don't (Maxwell and older). Basically, even cheap new GPUs will be killing older flagships on new things (while the difference will be much less pronounced on old things).

    ------------

    TLDR: extra bandwidth is awesome and will make a big difference, but software will have to catch up. You can't use games and benchmarks that exist today to assess the impact of super-high-bandwidth future GPUs, as nobody is targeting these capabilities yet.
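
    As a rough illustration of why a deferred pipeline is bandwidth-bound, here is a back-of-envelope sketch; the G-buffer layout, overdraw factor and pass counts below are illustrative assumptions, not numbers from any particular engine:

    ```python
    # Rough, assumption-laden estimate of G-buffer traffic in a deferred renderer.
    # The layout (four RGBA8 targets), overdraw and read counts are illustrative only.
    WIDTH, HEIGHT = 1920, 1080
    GBUFFER_BYTES_PER_PIXEL = 4 * 4   # four 32-bit render targets (albedo, normals, material, depth)
    FPS = 60
    OVERDRAW = 1.5                    # geometry pass writes some pixels more than once
    LIGHTING_READS = 2                # lighting/post passes re-read the G-buffer

    pixels = WIDTH * HEIGHT
    write_bytes = pixels * GBUFFER_BYTES_PER_PIXEL * OVERDRAW
    read_bytes = pixels * GBUFFER_BYTES_PER_PIXEL * LIGHTING_READS
    per_second = (write_bytes + read_bytes) * FPS

    print(f"G-buffer traffic: ~{(write_bytes + read_bytes) / 2**20:.0f} MiB/frame, "
          f"~{per_second / 2**30:.1f} GiB/s at {FPS} fps")
    # ~111 MiB/frame and ~6.5 GiB/s for the G-buffer alone - before textures,
    # shadow maps, AA resolves or any of the "prettier" techniques mentioned above.
    ```

    Scale that up with wider G-buffers, higher resolutions and fancier lighting passes and it's easy to see why those techniques wait on bandwidth.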
     
    Last edited: Nov 4, 2015
  3. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    jaybee83 likes this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
  5. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Considering Nvidia is just now releasing the desktop 980 for notebooks, I wouldn't expect Pascal in mobile devices for at least six months. Why release the desktop 980 in notebooks at 150W-180W when Pascal will likely deliver the same performance at 100W?
     
    jaybee83 and hmscott like this.
  6. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I've already answered this a few times. :p Computex 2016 (May 31st). That's all you need to know. Desktop is launched first, and mobile a week or two later. Happens every time. ;) The Chinese always get their hands on cards a month or two before launch. We will see benchmarks before then.

    NVIDIA did launch the 965M at CES, so maybe we'll see something around then, too.
     
    hmscott likes this.
  7. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    The desktop 980 was put into one or two machines made for enthusiasts that make up like 3% of the market. It's not going to make them a lot of money, no matter how long they wait. They'll probably break even on the costs or see some minimal gains.

    Also, the desktop 980 performs like 30% better than the 980M? That's not much of an improvement. I'd expect the 1080M to perform at least 50% better than the 980M it's replacing, if not more - and we are expecting much more. Even at exactly +50% that would put it roughly 15% ahead of the desktop 980, and anything beyond that pushes it past 20%.

    I imagine they'll launch 1070M / 1080M around Computex, and lower-end cards a bit earlier.
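
    To make the chained percentages explicit (treating the figures above as hypotheticals), the arithmetic works out like this:

    ```python
    # Hypothetical relative-performance chaining using the figures quoted above.
    gtx980_vs_980m = 1.30    # desktop 980 assumed ~30% faster than the 980M
    gtx1080m_vs_980m = 1.50  # assume the 1080M lands 50% above the 980M

    ratio = gtx1080m_vs_980m / gtx980_vs_980m
    print(f"1080M vs desktop 980: {ratio:.2f}x (+{(ratio - 1) * 100:.0f}%)")
    # +50% over the 980M is only ~15% over the desktop 980; the 1080M would need
    # roughly +56% over the 980M to clear a 20% margin.
    ```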
     
    Last edited: Nov 5, 2015
    Robbo99999 likes this.
  8. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Well, that's my whole point. They will look dumb if they release a 180W behemoth and then Pascal arrives a month later running at half the TDP with better performance. If Pascal for mobile were imminent, they wouldn't have bothered with this desktop 980 nonsense.
     
    King of Interns likes this.
  9. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    I think the desktop 980 in notebooks was just a niche product, so I don't think they'd be worried about a low-powered Pascal card stealing its thunder. If the desktop 980 in mobile were as popular as, say, the 980M, then I'd agree with you, but otherwise I think everyone will be happy with a power-efficient & high-performance Pascal card being launched as soon as possible - it's been a while since the sensible, power-efficient & high-performance 980M was released!
     
    J.Dre likes this.
  10. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Definitely a niche product. It was also an experiment. If I were NVIDIA, I'd want to test the waters. Maybe it will become a profitable venture, maybe not.

    It doesn't cost much for them to throw a 980 in a couple machines. They probably invested very little in it, to be honest. The companies that sell these machines assume all liability. NVIDIA is paid before products launch (upon shipment via invoicing) - guaranteed payment, regardless, just for using their product.

    Pascal is the new breadwinner. They have invested a lot into Pascal, so I'm sure it will perform much better than any of us expect. But for now we don't know. It's only November, and driver support has arrived. They may plan to launch the first desktop cards as early as Q1 2016.*

    *When we saw hardware ID leaks and driver support for Maxwell, they were announced within two months.
     
    Last edited: Nov 5, 2015
  11. NuclearLizard

    NuclearLizard Notebook Deity

    Reputations:
    162
    Messages:
    939
    Likes Received:
    728
    Trophy Points:
    106
    I personally can't wait to see how it works out with the 980s. If it works, I'd hope we see a continuation of it with the more power-efficient Pascal series, or at the very least have them push up the power envelope of the mobile GPUs.

    From what I have been reading, they have got MSI, Asus, Clevo and the like investing tech and money in bigger/better cooling and more capable power supplies. It's my earnest hope that some of that development transfers back into the high-end laptop market and doesn't stagnate and die with the success or failure of the 980-for-notebooks project.

    Sent from my SGH-M919V using Tapatalk
     
    Robbo99999 likes this.
  12. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I'm with you, but I don't necessarily appreciate it. Why? In a way, it allows NVIDIA to make even more money through "tamed back" performance on the mobile platform. This means they'll likely dial back the 1080M so that it won't perform equally to the desktop 1080 (which they may end up profiting on twice by putting it in a laptop). That then opens up the option for them to re-brand the 1080M in 2017. My hope is that the re-branding practice dies back instead.

    And yeah, I know companies have to profit. But if you think about it, this behavior limits the advancement of technology and puts profiteering above innovation and economic growth. They are essentially slowing down the development of markets in everything and anything related to computing (e.g. software, games, etc.).

    But hey, who doesn't love to make a killing. :D I'd love to see it, regardless. The transition to mobile gaming by putting desktop performance into a mobile platform is something I support, especially since we pay much more per unit of performance than our desktop counterparts. ;)
     
    Last edited: Nov 5, 2015
  13. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Asking for efficiency is asking nVidia to nerf performance like they did with Maxwell. All of us with Maxwell GPUs can confirm that when we remove the power limit, they use just as much energy as their Kepler predecessors - and even worse, in some cases they actually draw more...
     
  14. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Yep I bios modded the power limit to 405W on my 980 Ti, and in intense areas in BF4 (single player, yeah shoot me), I've seen the GPU hit its power limit and beg for mercy.

    In most games I'm hovering around 85% power limit, so 344W. This is with a watercooled 980 Ti btw, so I'm already shaving off 20W just because the GPU runs so much cooler. Before when I was on air the GPU was constantly hovering around 90% power limit. And people thought the 290X was power hungry LOL. (obviously the 980 Ti has a lot more performance and perf/watt still kills the 290X, but just because it's efficient doesn't mean it doesn't suck down power like crazy)
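
    For anyone following along, the "power limit percent" readouts translate to watts by simple multiplication against the BIOS limit; a quick sketch using the figures quoted in this post and later in the thread:

    ```python
    # Power-limit percentage -> watts, using the figures quoted in the thread.
    modded_limit_w = 405      # BIOS-modded board power limit on this 980 Ti
    typical_pct = 0.85        # "hovering around 85% power limit" in most games
    print(f"Typical in-game draw: ~{modded_limit_w * typical_pct:.0f} W")  # ~344 W

    stock_limit_w = 275       # reference 980 Ti BIOS default
    max_slider = 1.09         # +9% power slider
    print(f"Stock max with slider: ~{stock_limit_w * max_slider:.0f} W")   # ~300 W
    ```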
     
    triturbo, TomJGX, Ethrem and 2 others like this.
  15. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Just gotta keep bragging huh? LOL

    And the perf/watt dynamic flips upside down when context switching is used on a 290X, so nVidia needs to keep paying devs to break AMD GPU performance...

    But at least you proved my point... what was the wattage before? 270 and throttle? (Drawn from the wall)
     
  16. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Had a 'debate' with a member not too long ago about power limitations and how GPUs can exceed their advertised TDP. Thank you for this. ;)
     
  17. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    You didn't know that Maxwell is literally Kepler with extremely aggressive power management? The GM204 desktop cards were so unstable that they HAD to be modded because nVidia didn't bin them and set voltage based on ASIC... And that's not even the real problem; it's the fact that Maxwell constantly changes its voltage based on load, which causes crashes.

    Why do you think GM200 is rated for the exact same TDP as the last gen? When you free the TDP on Maxwell and fire up AA and especially tessellation, it actually uses more power than Kepler did. Keep in mind, it's on the same node as Kepler... Power consumption can't just magically drop on the same node... Under the same load, Maxwell should be using the same amount of power that Kepler did; nVidia just used alternating voltages to accomplish what they did... It's nothing but a trick, and it actually uses MORE power when it's unleashed because of the new tessellation engine they force on devs, because they know AMD fails miserably with high tess.

    nVidia is unfortunately run by people who know how to skirt... The drivers aren't bad except in rare cases... The chips themselves are the problem. And yes, nVidia purposely broke Kepler, because Kepler can't do tess like Maxwell and it can't do context switching at all without a ridiculous penalty... nVidia knew that moving to 20nm wasn't smart... so they put out another 28nm chip and used dynamic voltage, like they tested with the 880M, to make it *seem* more efficient. Blame everyone looking for thin-and-light and the ability to play TW3 on max...
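
    To illustrate what "aggressive power management" means in practice, here is a toy model (entirely my own simplification, not NVIDIA's actual boost algorithm): the card walks down a voltage/frequency table until estimated power fits under the cap, so lifting the cap lets the same silicon draw Kepler-like power.

    ```python
    # Toy model of power-capped boost: step down the V/F table until power fits the cap.
    # The table and the P ~ f * V^2 estimate are made up for illustration.
    def settle_clock(vf_steps, power_at, tdp_w):
        for mhz, volts in sorted(vf_steps, reverse=True):  # highest clock first
            if power_at(mhz, volts) <= tdp_w:
                return mhz, volts
        return min(vf_steps)  # nothing fits: sit at the lowest state

    vf_table = [(1000, 0.95), (1100, 1.00), (1200, 1.06), (1300, 1.12), (1400, 1.20)]
    power = lambda mhz, v: 0.18 * mhz * v * v  # arbitrary scaling constant

    print(settle_clock(vf_table, power, tdp_w=250))  # capped: settles at a lower state
    print(settle_clock(vf_table, power, tdp_w=400))  # cap lifted: full clock, far more power
    ```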
     
    jaybee83, TBoneSan, TomJGX and 3 others like this.
  18. King of Interns

    King of Interns Simply a laptop enthusiast

    Reputations:
    1,329
    Messages:
    5,418
    Likes Received:
    1,096
    Trophy Points:
    331
    It is a shame that Nvidia feel they have to resort to these underhand tactics. Very cowardly indeed.

    Why not square up fairly against AMD and actually compete as they should, shoulder to shoulder?

    Too many tricks; the power management trick, the high-tess trick, etc..... I really don't want to buy their cards BUT AMD are nowhere to be seen!
     
  19. djembe

    djembe drum while you work

    Reputations:
    1,064
    Messages:
    1,455
    Likes Received:
    203
    Trophy Points:
    81
    You answered your own question there ;)
     
  20. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I knew. Of course I did, lol. The other guy I was debating with didn't.
     
  21. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Actually they are in San Jose... getting sued. Lel
     
  22. King of Interns

    King of Interns Simply a laptop enthusiast

    Reputations:
    1,329
    Messages:
    5,418
    Likes Received:
    1,096
    Trophy Points:
    331
    Lol! I guess people want AMD dead and buried. Then Nvidia can do completely as they please.

    Well even more so...
     
  23. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    It's AMD's own fault they almost committed suicide because of Bulldozer. Well actually what happened was they committed seppuku and then stitched themselves up before they completely bled out.
     
    TomJGX and King of Interns like this.
  24. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Gotta brag when you can :p Default bios gives 275W, and up to 300W with 109% power limit. I can say with confidence you NEED 109% or 300W to even maintain the factory OC (1328/7100) in some games, that's how ridiculous it is. I was genuinely shocked by how much power it could suck down, even with the knowledge that Titan X throttles at stock:

    [IMG]

    I think it's pretty clear nVidia pushed waaaaaay past the optimal point on the power/perf curve to get the performance we see out of GM200. I know we can make GK110 draw 400W+ too, but that's with extreme OC like running 1.35V+ and over 1400MHz core, which frankly you could only do on Classified/Kingpin cards anyway unless you wanted to literally set your card on fire. Here I'm simply using an MSi 980 Ti Gaming (not Lightning) and I'm not even running a crazy OC -- 1530/7800 @ 1.25V is pretty tame by any standards.

    IMO GM204 was at least more or less still in the optimal zone on the power/perf curve (if not a bit close to the edge), but me thinks to get GM200's performance there's simply no choice but to completely go outside that optimal zone, hence the power consumption.

    Most people outside of here and OCN don't have a clue what they're talking about. That being said even some 980s had a hardware lock on power limit, so it's possible some 980 Ti cards have that lock. I've read multiple reports that the Asus Strix and some Galax cards utilize a hard voltage lock on the VRMs as well.

    Yep have a whole thread dedicated to this subject. Although recently more info has surfaced as well.

    The throttling algorithm in Maxwell is extremely complex; apparently there's a hidden hard throttle at 65C baked into the bios. I say "hidden" because some voltage sliders were missing in the GM200 bios when looked at through MBT. I should've bookmarked a particular post, but basically the 2nd or 3rd slider is for "throttling voltage", and that's the one that controls what voltage the card will throttle down to once past 65C. That slider is extremely important because apparently it serves as an override switch, and without it any volt mods you do in the bios end up being useless.

    And just to make things worse, apparently there's a Gigabyte 980 Ti-specific throttling bug that occurs once you overvolt the card past a certain limit. I think it's pretty clear there's a very complex set of throttling rules for Maxwell, and imperfections in the implementation manifest themselves as the bugs mentioned above.

    Don't even get me started on the SLI voltage bug issue if you have cards with different ASICs (7-8% is enough to cause the bug). And the only way to fix the bug is to mod the bios and stop this voltage throttling nonsense. Of course.
     
    Last edited: Nov 8, 2015
  25. King of Interns

    King of Interns Simply a laptop enthusiast

    Reputations:
    1,329
    Messages:
    5,418
    Likes Received:
    1,096
    Trophy Points:
    331
    Totally true! Just hope they can survive the legal battle!

    I am not rooting for AMD; I only care that they survive, to entertain the slim hope that they might be able to keep Nvidia in check...
     
  26. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    I think AMD's problem is they're always making stuff "for the future" because "the future is X" rather than making products for the now. The Fury GPUs with HBM are also a good example, although there they might've needed the power savings from HBM and its memory controllers in order to make that ridiculous 4096-shader chip, so at least there's some rationale.
     
    i_pk_pjers_i likes this.
  27. Cakefish

    Cakefish ¯\_(?)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    Trouble is, by the time DX12 games become mainstream, the Fury X will have been succeeded by one, if not two, new generations of flagship GPU.

    Sent from my E5823 using Tapatalk
     
    TomJGX and i_pk_pjers_i like this.
  28. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    But AMD will also have at least a year or two of experience with HBM technology, which should let them use it better when it comes to implementation.
    Though Nvidia does have the financial resources AMD lacks, so it's possible they might be able to compensate... or not.
     
  29. deadsmiley

    deadsmiley Notebook Deity

    Reputations:
    1,147
    Messages:
    1,626
    Likes Received:
    702
    Trophy Points:
    131
    Fixed! :p

    /ducks
     
    TomJGX likes this.
  30. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    If the current reports about Arctic Islands are anything to go by, then your 'fix' seems inaccurate.
    Arctic Islands is supposed to sport an entirely new architecture, in addition to being twice as energy efficient compared to the current Fury line and on a new manufacturing process.
    So... no rebrands this time around (although one cannot really call the Fury line rebrands when you take into account that they managed to get very close to Maxwell in energy efficiency and performance).
     
  31. Raidriar

    Raidriar ლ(ಠ益ಠლ)

    Reputations:
    1,708
    Messages:
    5,820
    Likes Received:
    4,311
    Trophy Points:
    431
    I really hope they haven't pulled out of the mobile space. I had always been hoping for an MXM HBM AMD card, but I'm not holding my breath.
     
    Last edited: Nov 10, 2015
    TomJGX likes this.
  32. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,456
    Messages:
    8,707
    Likes Received:
    3,315
    Trophy Points:
    431
    Don't.... I really doubt we'll see one before end of 2016/Q1 2017...
     
    triturbo likes this.
  33. deadsmiley

    deadsmiley Notebook Deity

    Reputations:
    1,147
    Messages:
    1,626
    Likes Received:
    702
    Trophy Points:
    131
    It wasn't intended to be accurate. It was intended to be a poke at past shenanigans. A joke, good sir.

    Sent from my SPH-L720 using Tapatalk
     
  34. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Hexus released another article ( here) about Pascal, but it's just a summary of what we know.

    AMD's Greenland GPU (14nm) tapes out for 2016 to compete with Pascal. - Source
     
    TomJGX and PrimeTimeAction like this.
  35. PrimeTimeAction

    PrimeTimeAction Notebook Evangelist

    Reputations:
    250
    Messages:
    542
    Likes Received:
    1,138
    Trophy Points:
    156

    [IMG]
     
    jaybee83, DataShell and TomJGX like this.
  36. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
  37. NuclearLizard

    NuclearLizard Notebook Deity

    Reputations:
    162
    Messages:
    939
    Likes Received:
    728
    Trophy Points:
    106
    Why would it be the end? Doesn't mobile PCIe have the same or similar specs?

    Sent from my SGH-M919V using Tapatalk
     
  38. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    They're probably going to milk mobile and use GDDR5X for a year.

    MXM may stick around with GDDR5X. We don't know, yet.
     
    jaybee83 likes this.
  39. King of Interns

    King of Interns Simply a laptop enthusiast

    Reputations:
    1,329
    Messages:
    5,418
    Likes Received:
    1,096
    Trophy Points:
    331
    Mobile parts are always slower than desktop. MXM might stretch until Volta... or perhaps, like when the switch from MXM 2.1 to MXM 3.0 happened, Pascal and Greenland will come in two variants.

    Anyway, what are the limitations of MXM? For a single-card configuration at least, PCIe 2.0 x16 still has plenty of bandwidth, let alone PCIe 3.0. I read a review where a GTX 980 (single) only saw a 2-3% increase in performance due to the better latency of PCIe 3.0 over 2.0 - rough link numbers sketched below.
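
    A quick sketch of the raw link bandwidth; the per-lane rates are the published spec figures after encoding overhead, and the rest is just multiplication:

    ```python
    # Approximate one-directional bandwidth for an x16 PCIe link.
    LANE_BW_GBS = {
        "PCIe 2.0": 0.5,    # 5 GT/s per lane, 8b/10b encoding   -> ~0.5 GB/s
        "PCIe 3.0": 0.985,  # 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s
    }
    for gen, per_lane in LANE_BW_GBS.items():
        print(f"{gen} x16: ~{per_lane * 16:.1f} GB/s per direction")
    # ~8 GB/s vs ~15.8 GB/s - a single GPU rarely saturates either link, which
    # fits the 2-3% difference cited above.
    ```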
     
    jaybee83 likes this.
  40. NuclearLizard

    NuclearLizard Notebook Deity

    Reputations:
    162
    Messages:
    939
    Likes Received:
    728
    Trophy Points:
    106
    I can't see them just dropping MXM. As well, if I recall correctly (not 100%), I'm fairly sure Nvidia said all Pascal will be HBM2.

    Sent from my SGH-M919V using Tapatalk
     
  41. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    I agree with you actually. That's why I said "if" when talking about HBM2 and mobile. There's no doubt in my mind that nVidia will milk Pascal for a while.
     
  42. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    GDDR5X is more than enough for us gamers.

    4GB HBM1
    512GB/s

    8GB GDDR5X @ 1750MHz @ 256bit bus (Mobile Pascal)
    ~450GB/s

    8GB GDDR5 @ 1750MHz @ 256bit bus (Mobile GTX 980)
    ~220GB/s

    Twice the bandwidth we have now. Absolutely nothing to complain about and frankly an amazing increase.

    Nvidia might save HBM2 for GP100, meaning the cards oriented for computing and enterprise workloads. These users can never have enough bandwidth. HBM2 will be scarce anyway. Much smarter to save them for the cards that actually need HBM2.
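
    Those figures fall straight out of effective data rate × bus width; a small sketch reproducing them, with per-pin rates as implied by the clocks quoted above:

    ```python
    # Peak memory bandwidth = effective data rate (Gbit/s per pin) * bus width / 8.
    def bandwidth_gbs(gbps_per_pin, bus_width_bits):
        return gbps_per_pin * bus_width_bits / 8

    print(f"GDDR5   7 Gbps @ 256-bit:  ~{bandwidth_gbs(7, 256):.0f} GB/s")   # ~224
    print(f"GDDR5X 14 Gbps @ 256-bit:  ~{bandwidth_gbs(14, 256):.0f} GB/s")  # ~448
    print(f"HBM1    1 Gbps @ 4096-bit:  {bandwidth_gbs(1, 4096):.0f} GB/s")  # 512
    # A 1750 MHz base clock is ~7 Gbps effective on GDDR5 and ~14 Gbps on GDDR5X
    # (double the data rate per clock), which is where "twice the bandwidth" comes from.
    ```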
     
  43. King of Interns

    King of Interns Simply a laptop enthusiast

    Reputations:
    1,329
    Messages:
    5,418
    Likes Received:
    1,096
    Trophy Points:
    331
    Completely agree!

    Factor in the OCing usually possible with VRAM and it ends up at HBM1 bandwidth levels!
     
    Cloudfire likes this.
  44. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    We don't know how much headroom GDDR5X will have for overclocking, though - but yes, double the bandwidth is plenty.
     
  45. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    yupyup, oc headroom with hbm1 on the fury line is already pretty tight...

    Sent from my Nexus 5 using Tapatalk
     
  46. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I'm sure it'll be a nice improvement over Maxwell. It just won't be what we were all hoping for. They seem to be purposefully dialing back everything on the mobile front to maintain the "performance edge" desktops have always had, and are now introducing desktop parts into laptops, profiting twice from them.

    Money, money, money. Team green is green indeed. Shareholders are priority #1. Stock prices are the highest they've been in years. ;)
     
  47. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    wonder if they'll just release a "regular" mobile flagship / 980M successor positioned right in between the 980M and 980 (30-35% is a large enough gap to be filled with a GPU) and only later go all out with the real flagship as a 980 mobile successor...

    after all, that would be following their desktop lineup exactly, with the "mainstream" 980 flagship and the Ti/Titan "high-end" flagships

    Sent from my Nexus 5 using Tapatalk
     
  48. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    If they do, I'm done. New hobby for sure. Been waiting more than a year for the next laptop.

    Don't think they will. It would be hard to be less than 50% over Maxwell.
     
  49. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    um....when was the last time we had a larger than 50% jump in one GPU gen? o_O the closest that comes to mind was from 6970M/485M/580M to 7970M/680M and that was around 50% iirc...
     
  50. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Yeah but look what they achieved with Maxwell without even a die shrink. Pascal is both a die shrink and a new architecture.
     