The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was assembled by NBR forum users between January 20 and January 31, 2022, to make sure the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's

    Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.

  1. Zero989

    Zero989 Notebook Virtuoso

    Reputations:
    910
    Messages:
    2,836
    Likes Received:
    583
    Trophy Points:
    131
    It should be much better.
     
  2. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41
    I honestly don't believe in this much improvement over the 900M series, or it will be for 4K specifically. Why would someone need increased memory bandwidth if it is already sufficient?
    With Skylake CPUs on 14nm we barely see any improvement aside from temperature drops, so why should it be much different for GPUs?
    I'm most likely missing something, but my guess is that if we ever see a 100% jump or something, it will be for 4K; for FHD it will be less spectacular.
     
    jaybee83 likes this.
  3. Omnomberry

    Omnomberry Notebook Evangelist

    Reputations:
    45
    Messages:
    556
    Likes Received:
    497
    Trophy Points:
    76
    So by getting Pascal you create yourself a reason to get a 4K G-Sync monitor. My wallet is imploding while I write this.
     
  4. DataShell

    DataShell Notebook Deity

    Reputations:
    47
    Messages:
    777
    Likes Received:
    354
    Trophy Points:
    76
    Intel has zero competition on the CPU side and CPUs have not really been improving in terms of raw performance over the last five years. Contrast that to the GPU market, where AMD is still somewhat competitive on the desktop side (the only Nvidia card worth getting is a 980 Ti, and even that's only if you never plan on doing a dual-GPU config) and GPU technology is still moving forward. There's also the fact that most people don't really have use for a more powerful processor, whereas gamers are always demanding more as there is always use for more GPU horsepower in a game.
     
    jaybee83 and Omnomberry like this.
  5. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41
    Sure, as long as they do MXM versions it's a win-win situation in terms of performance. I just want to find an excuse to protect my wallet, it has already suffered enough with so much good stuff that came out this year (couldn't resist the power of Phoenix :D)
    But they will have to improve at some point, otherwise we will start seeing major CPU bottlenecks, won't we?
     
    jaybee83 and Omnomberry like this.
  6. DataShell

    DataShell Notebook Deity

    Reputations:
    47
    Messages:
    777
    Likes Received:
    354
    Trophy Points:
    76
    Yeah, but high-end CPUs don't even get close to doing that right now. I would bet that a 4690k would be fine for any 14nm or 16nm GPU that we get.
     
    Omnomberry likes this.
  7. Omnomberry

    Omnomberry Notebook Evangelist

    Reputations:
    45
    Messages:
    556
    Likes Received:
    497
    Trophy Points:
    76
    Exactly. Unless I'm running optimized software that maxes my CPU at 100% for work, I'm running my 4790K at 3GHz because even MMORPGs like GW2 don't need that much before my 970M becomes the bottleneck. Here's hoping Pascal will change that.

    I'm not sure about, say, 980 Tis in SLI? Maybe there the gap is already smaller?
     
  8. deadsmiley

    deadsmiley Notebook Deity

    Reputations:
    1,147
    Messages:
    1,626
    Likes Received:
    702
    Trophy Points:
    131
    If you are seeing 100% utilization then the CPU isn't strong enough. I am going to go out on a limb here and say that even if you are seeing 50% CPU utilization, your CPU isn't strong enough.

    In the case of my M18 R2, it came with an i7-3720QM. I bought an i7-3940XM and didn't install it for a couple of months because I didn't see 100% utilization of the CPU. When I finally did install the i7-3940XM I saw a 20 fps jump in Mechwarrior Online. I didn't do any formal testing of this as it wasn't expected. I simply dropped in the faster processor. Neither CPU has been overclocked.

    I have also noticed that when running multiple VMs and other memory-intensive applications with the CPU loaded up to about 70%, the machine gets sluggish. Keep in mind that at that time I had 32GB of RAM and 21GB was in use. My theory in regards to gaming: the idea that CPU performance isn't important because CPUs are fast enough simply isn't true.

    EDIT:
    Loading up the CPU with VMs was done with my Sager, not the Alienware.
     
    Last edited: Dec 2, 2015
  9. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    It's all in how they optimize the architecture. Intel chose to focus on power consumption, not improving overall performance.

    Look at the leap from the GTX 580M at 40nm to the GTX 680M at 28nm: it resulted in a solid 60-70% improvement at 1080p, and the architecture change at 28nm from the Kepler 880M to the Maxwell 980M resulted in another 50% improvement.
     
  10. Omnomberry

    Omnomberry Notebook Evangelist

    Reputations:
    45
    Messages:
    556
    Likes Received:
    497
    Trophy Points:
    76
    Interesting, is there perhaps something more to those two CPUs you had, as in the XM being built differently? Or is it really just more GHz?
    There's more cache in the XM, for example. I wonder if things like that influenced the difference.

    But I get your point, and deep inside I believe/hope you're right that even though a 4790 is not at 100%, a 4790K would improve performance. That's what I'd like to believe, but I'm not sure.
     
  11. deadsmiley

    deadsmiley Notebook Deity

    Reputations:
    1,147
    Messages:
    1,626
    Likes Received:
    702
    Trophy Points:
    131
    In regards to the Extreme processor, the cache is 8MB vs. 6MB. The base clock is 400MHz faster. The turbo boost is 300MHz faster. It is feeding two 680Ms, so maybe that is part of the reason? I really don't know. @Mr. Fox seems to think there is something to this as well. Perhaps he will be along to comment shortly?
     
  12. Omnomberry

    Omnomberry Notebook Evangelist

    Reputations:
    45
    Messages:
    556
    Likes Received:
    497
    Trophy Points:
    76
    Yeah, it is pretty complex if I think about it now; there are just so many layers from the CPU to the software using it that could make the difference.
    That said, as long as my 970M is bottlenecking GW2, I get the same FPS regardless of running the CPU at 4.2 or 3.0GHz.
     
    deadsmiley likes this.
  13. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41

    There is a noticeable fps drop for everything weaker than 6700k, and that's for a single Titan X. In my book, it's a bottleneck, and this means that any stronger GPU will be hindered even worse.
     
    deadsmiley likes this.
  14. Omnomberry

    Omnomberry Notebook Evangelist

    Reputations:
    45
    Messages:
    556
    Likes Received:
    497
    Trophy Points:
    76
    So a 4790K is already holding a Titan X back? :eek: Do you think all those CPUs are at 100% load?
     
  15. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41
    I guess it depends. In the later part of the video there are games (Shadow of Mordor, Assassin's Creed) that show almost the same fps for all 4 CPUs. I'm not sure, but The Witcher 3 and GTA 5 seem to be more CPU intensive and they do stress CPU enough to make up to 2 times difference between 2600k and 6700k OC.
     
    Last edited: Dec 1, 2015
    deadsmiley and Omnomberry like this.
  16. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    That's not a bottleneck in that video. Skylake holds clock speeds more consistently resulting in a higher overall FPS. In other words, the newer technology is better at performing tasks. But that's not a bottleneck. It's just a "missing feature."

    This is exactly what we expect to happen when new hardware comes out. That doesn't mean old hardware is a bottleneck. It just means new hardware is more efficient as Skylake communicates with the motherboard much faster because of the socket type.
     
    TomJGX and Omnomberry like this.
  17. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    The difference between CPUs and GPUs is that CPUs aren't increasing their number of cores or clock frequency from generation to generation, whereas GPUs are massively increasing their transistor and core counts from generation to generation. That's why we see large performance increases in GPUs vs. only small improvements in CPU power.
     
    jaybee83, TomJGX and Niaphim like this.
  18. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41
    If there are short dips in fps, it's true. But what is shown in this particular video is that there is a constant difference in performance between those CPUs. As a customer I don't care what the reason is. Whether it is lower clock speed or latency or anything else, it is still a decrease in performance because of CPU - a bottleneck.
     
  19. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    If Nvidia revamps their drivers with full DX12 features, then I won't upgrade to Pascal next year. But if they don't, then I'll probably get Pascal.

    Though I suspect they will keep Maxwell gimped on DX12 at software level to force users to upgrade to Pascal if they want DX12 gaming when DX12 games become more prevalent in 2016.
     
  20. deadsmiley

    deadsmiley Notebook Deity

    Reputations:
    1,147
    Messages:
    1,626
    Likes Received:
    702
    Trophy Points:
    131
    DX12 requires Win 10? If so, then it will be a while before I can move to DX12. None of my (very niche) work software will be up to running on Windows 10 by the time Pascal is here.

    Sent from my overpriced Galaxy S6 Edge +
     
  21. mizosafaga

    mizosafaga Notebook Geek

    Reputations:
    0
    Messages:
    86
    Likes Received:
    12
    Trophy Points:
    16
    OK guys, a simple question.
    This thread is 67 pages long, so I just need the answer to a couple of particular questions :D
    1 - I have the MSI GT70 Dominator, originally equipped with the 870M. I managed to upgrade it to the 980M 8GB. What is the probability of seeing Pascal mobile GPUs with an MXM slot and the same size as the 870M/970M/980M, so we would be able to upgrade our laptops? Because the new laptop version of the 980 is a ridiculous size.
    2 - What is the expected performance gain of a 1080M over a 980M?
     
  22. Cakefish

    Cakefish ¯\_(ツ)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    100% performance increase vs 980M.

    Sent from my SM-G925F using Tapatalk
     
  23. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I'm hoping for that much. That's what my earliest predictions were. But I fear NVIDIA will intentionally dial it back so it's much weaker than the desktop 1080, all so they can put the desktop 1080 in a laptop (again) and still outperform the 1080M by a good 30%.

    We'll see what happens.
     
  24. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    30%. I think people are expecting too much. HBM did not increase AMD's performance that much, and I don't expect it to for Nvidia either. Pascal and 14nm seem way overhyped to me. I don't trust those graphs from Nvidia at all. If it is 100%, then I'll upgrade next year, but I have my doubts.
     
    jaybee83, TomJGX and Niaphim like this.
  25. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    Pretty much. Just because GK104 exceeded all expectations doesn't mean GP104 will. Do we even have a tapeout on GP104 yet? A tapeout for GP100 really doesn't mean anything for the consumer market. We are probably not getting that card in Titan form until at least Q4 2016.
     
  26. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,456
    Messages:
    8,707
    Likes Received:
    3,315
    Trophy Points:
    431
    Oh, don't get me started on Intel... All they do is increase the size of the iGPU while the CPU package stays the same... The transistor increase from 22nm to 14nm is only due to higher transistor density in the same area, and hence the performance increase is limited...
     
    Niaphim, jaybee83 and Robbo99999 like this.
  27. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Also, Nvidia has done a great job of reducing TDP, while Intel's CPUs still consume a lot of power and run very hot. I'd like to see a 4790K successor with half the heat. It's why I have no interest in Skylake: the TDP improvements were minimal, never mind the performance boost.
     
    jaybee83 and Omnomberry like this.
  28. i_pk_pjers_i

    i_pk_pjers_i Even the ppl who never frown eventually break down

    Reputations:
    205
    Messages:
    1,033
    Likes Received:
    598
    Trophy Points:
    131
    I guess it's a good thing I'm not over-hyped at all and I'm perfectly happy with what I have currently. :D
     
    Niaphim and jaybee83 like this.
  29. DataShell

    DataShell Notebook Deity

    Reputations:
    47
    Messages:
    777
    Likes Received:
    354
    Trophy Points:
    76

    Unless they intentionally lower the performance (which they totally could), it is reasonable to expect large performance gains when jumping to a new manufacturing process. Look at the improvements we got going from 40nm to 28nm, 55nm to 40nm, 80nm to 55nm, etc.
     
  30. Getawayfrommelucas

    Getawayfrommelucas Notebook Evangelist

    Reputations:
    84
    Messages:
    636
    Likes Received:
    51
    Trophy Points:
    41
    So - at what point will these cards be doing 4k @ 60fps? Because after watching the below - I'm tempted to hold off on upgrading anything until I can do this

     
  31. Phase

    Phase Notebook Evangelist

    Reputations:
    7
    Messages:
    483
    Likes Received:
    100
    Trophy Points:
    56
    I just want 4K 60fps ultra...

    On a side note, at least Broadwell-E will be 10 cores.
     
  32. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    The performance jumps from 55nm to 40nm and from 80nm to 55nm are from one generation's big die to the next generation's big die. The jump from 40nm to 28nm was like 30% if we are looking at GK104 vs GF110. I know well enough that a GP100 should easily be 100% ahead of a 980 Ti. However, Nvidia wouldn't release GP100 until it has milked GP102 and GP104 completely dry. We are not comparing GM200 vs GP100. We are comparing GM200 vs GP104.

    Unless AMD brings out a 500-600mm2 die in Arctic Islands, there is almost no chance we will see GP100 outside of Tesla before year end. Nvidia is using mostly Kepler for their high-end Teslas. Maxwell was completely useless in terms of FP64; it's a pure gaming arch. The professional market pays a lot more than we do.
     
    Last edited: Dec 1, 2015
  33. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    You got a lot! ;-)
     
    i_pk_pjers_i and jaybee83 like this.
  34. Niaphim

    Niaphim Notebook Consultant

    Reputations:
    11
    Messages:
    196
    Likes Received:
    91
    Trophy Points:
    41
    Only one model seems to have 10 cores, but I still don't know why they do this except for testing purposes. Either it is some marketing BS at its finest or they did some magic, because there is a reason why we haven't seen 9+ core systems yet. As you increase the number of cores, the number of interactions between them also increases, and at some point adding new cores starts to decrease overall performance, unless you do some hardcore parallel computing, which... is not very common. Last time I researched this, that number was 9. So 10 and above don't give any benefits and, in particular, 10 cores is roughly equal to 8 cores.

    Edit: just found one example of what I mean:
    https://www.pugetsystems.com/labs/articles/Adobe-After-Effects-CC-2015-Multi-Core-Performance-714/
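    [Archive note: the diminishing-returns argument above can be made concrete with Amdahl's law. A minimal sketch, assuming for illustration a workload that is 90% parallelizable (an assumed figure, not a measured one):]

    ```python
    # Amdahl's law: speedup on n cores when a fraction p of the work
    # is parallelizable and the remaining (1 - p) stays serial.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Illustrative: a workload that is 90% parallel.
    p = 0.9
    for cores in (1, 4, 8, 10):
        print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")
    ```

    [With p = 0.9, going from 8 cores (~4.71x) to 10 cores (~5.26x) adds only about 12%, which is why extra cores stop paying off well before the core count itself is the limit.]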
     
  35. PrimeTimeAction

    PrimeTimeAction Notebook Evangelist

    Reputations:
    250
    Messages:
    542
    Likes Received:
    1,138
    Trophy Points:
    156
    But didn't Nvidia say that Pascal will have 10x the performance of Maxwell? ;)
    So if we are getting 120 FPS, shouldn't we get 1200 FPS? Too bad that's still too little. I want at least 9000. :D

    OK, serious question: do you guys think a Pascal laptop will be upgradable to Volta or even beyond that (2019-2020)?
     
    Last edited: Dec 2, 2015
    jaybee83 likes this.
  36. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Don't be so sure about huge performance gains yet.

    One of the reasons why Maxwell is better than Kepler, why the GTX 980M is faster than the GTX 880M, is that the Maxwell dies are huge. Nvidia built these monsters to get more room for more cores and to have additional products to sell, since we have been stuck at 28nm (as Kepler was) since 2012.

    These huge dies consist of a lot of silicon, which costs Nvidia money. So you can bet Nvidia will want to go down in size again with Pascal, especially since 16nm FinFET silicon costs more than 28nm silicon.

    So what happens when GP104 is reverted back to a 300mm2 die again? You lose a lot of silicon area where they would normally cram in more transistors and cores.
    GTX 980 vs. Pascal mobile might be 400mm2 vs. 300mm2, not 400mm2 vs. 400mm2.
    Previous generations have been 300mm2 vs. 300mm2.
     
    Last edited: Dec 2, 2015
    jaybee83 and Robbo99999 like this.
  37. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181

    I didn't say anything about mobile Pascal performance in my post. I said GP100 will be a 100% performance increase over the GTX 980 Ti.
     
  38. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    GDDR5X hasn't been confirmed to be the only memory available for mobile GPUs during 2016. That's what I'm most interested in knowing at this point. Hopefully CES will shed some light on GDDR5X. If they go with GDDR5X for 2016 instead of HBM, we'll most likely see smaller overall performance gains over Maxwell.
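    [Archive note: peak memory bandwidth follows directly from bus width and per-pin data rate, which is what the GDDR5X-vs-HBM comparison above comes down to. A rough sketch; the 980M figures are the commonly quoted specs, and the GDDR5X data rate is an assumed round number for illustration:]

    ```python
    # Peak bandwidth in GB/s = (bus width in bits / 8 bytes) * data rate in Gbps per pin.
    def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits / 8 * data_rate_gbps

    # GTX 980M: 256-bit GDDR5 at 5 Gbps per pin -> 160 GB/s
    print(bandwidth_gb_s(256, 5.0))
    # Hypothetical 256-bit GDDR5X card at 10 Gbps per pin -> 320 GB/s
    print(bandwidth_gb_s(256, 10.0))
    ```

    [So even without a wider bus, doubling the per-pin rate doubles peak bandwidth, which is why GDDR5X was seen as a stopgap before HBM.]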
     
  39. dumitrumitu24

    dumitrumitu24 Notebook Evangelist

    Reputations:
    24
    Messages:
    401
    Likes Received:
    40
    Trophy Points:
    41
    Any chance they release Pascal GPUs in March/April??? I want to gift myself a present for finishing college with a good rig, hahah :) Definitely looking to buy something with similar power to a GTX 970 or a bit better, with more VRAM (desktop).
     
  40. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Nobody knows. But very likely May-June time frame.
     
  41. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    The GTX 965M was released in January of this year. It's possible we'll see some low-end mobile GPU's early, but I wouldn't get your hopes up. Computex 2016 is most likely the official launch for mobile. They've reserved a big booth there and have listed multiple products for presentation.

    I'd say it's more likely we'll see a desktop GPU or two around CES or during Q1.
     
  42. aqnb

    aqnb Notebook Evangelist

    Reputations:
    433
    Messages:
    578
    Likes Received:
    648
    Trophy Points:
    106
    Seems in January 2016 we will get just a refresh of GTX 965M (basically just factory overclock):

    http://www.notebookcheck.com/Nvidia-Update-der-Geforce-GTX-965M-Ti-Anfang-2016.155377.0.html
    https://translate.google.com/transl...Geforce-GTX-965M-Ti-Anfang-2016.155377.0.html

    Performance should be between GTX 965M and GTX 970M.
     
    Cloudfire and HTWingNut like this.
  43. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Good ol' Nvidia, confusing things again. They really need some nomenclature to differentiate it from the previous model, like the "Ti" they mentioned in the article. It also begs the question: why doesn't Nvidia just unlock the overclock sliders and let us have at it?
     
    Kade Storm likes this.
  44. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Not surprised, and I don't care about those rebrands. This is a Pascal discussion thread.

    We want Pascal!
     
    jaybee83 and deadsmiley like this.
  45. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    And I was talking about all Pascal dies in general.

    If Nvidia can reduce silicon cost while still netting sales from customers, they will cut down on size.
    50-60% gain over GTX 980Ti is more than enough for them to wow the community, and that can be done with a smaller die.

    GTX Titan X (GM200) was 60% faster than GTX Titan (GK110).
    GTX Titan (GK110) was 60% faster than GTX 580 (GF110)
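    [Archive note: those per-generation figures compound multiplicatively, so a quick sanity check of what two chained ~60% jumps imply:]

    ```python
    # Chained generational speedups multiply rather than add.
    def compound(gains):
        total = 1.0
        for g in gains:
            total *= 1.0 + g
        return total

    # GF110 -> GK110 -> GM200, each step ~60% faster (figures from the post above):
    print(compound([0.6, 0.6]))  # ~2.56x, i.e. Titan X vs GTX 580
    ```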
     
  46. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    We had "new" Fermi cards around the same time as Kepler arrived as well. Lets not forget that :)
     
  47. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    50% over the 980Ti in the next flagship mobile GPU is perfectly fine with me. That's more than capable of playing any game at the highest settings.

    Should be a bit higher, though. More like 70%. They should be able to easily double the 980M's performance.
     
  48. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    I doubt we will even get GP100 on the first-gen Titans at this point. The 1080 will be GP104 and the Titan might be GP102. The performance of GP100 will be way ahead of the 980 Ti not because of gaming, but because of professional markets.
     
    jaybee83 likes this.
  49. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Haha, maybe they'll want to keep some spare overhead in the tank for the 1180M or 1180 (desktop) rehashed Pascal lines! (e.g. 680M to 780M)
     
  50. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    How about staying realistic here and expecting a +50% boost compared to the 980M? +50% compared to a desktop 980 Ti is just ridiculous for the mobile flagship :D Never gonna happen...

    Sent from my Nexus 5 using Tapatalk
     
    Phase likes this.