The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    GTX 860M 2GB Maxwell vs GTX 860M 4GB Kepler

    Discussion in 'Gaming (Software and Graphics Cards)' started by soccerplayer_20, Mar 25, 2014.

  1. soccerplayer_20

    soccerplayer_20 Newbie

    Reputations:
    0
    Messages:
    8
    Likes Received:
    1
    Trophy Points:
    5
    Hey guys,

    QUESTION 1:
    I am considering purchasing the NP7378, which comes with an onboard GTX 860M 2GB Maxwell video card with only a 45 watt TDP (NOT UPGRADEABLE as it is not MXM).
    Alternatively, my other option is the NP8278, which comes with the GTX 860M 4GB Kepler video card with a 75 watt TDP (MXM UPGRADEABLE).

    Does anyone have any input on the difference in performance between the 2GB version and the 4GB version? I have searched extensively but can't seem to find any direct comparisons. Would it really make much of a difference having 4GB as opposed to 2GB? I'm currently completely at a loss and torn on making this decision... Everything I read says the Maxwell is the way to go, but I don't feel like any of the reviews I have been reading have touched on the difference in RAM and how significant it would be, despite the 4GB card being a Kepler core.

    The price difference is only $100.


    QUESTION 2:
    The only other thing about the NP8278 is the fact that it has Dual Vents at the back, one for GPU and one for CPU. The NP7378 only has one cooling vent at the back. Does anyone have an opinion regarding this potentially being a problem or if the dual vent setup would have any significant advantage?
     
  2. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    The 2GB vs 4GB vRAM difference won't matter for that GPU, period.

    The 860M Maxwell will run so cool that it will be fine with a single vent. And obviously with the dual vents, the GPU and CPU will each have their own fan. The 7378 is lighter and will likely get a little better battery life. You will have the option to upgrade the GPU later in the 8278 as well.
     
  3. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    If you like the thought of upgrading, the 8278 will be able to go to a 780M in the future with a PSU upgrade too (and should get better battery life in general tasks, though less when gaming on battery).

    If not then the 7378 is smaller and easier to carry around.
     
  4. alamez

    alamez Newbie

    Reputations:
    0
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    5
    How easy would it be, and about how much would it cost, to upgrade to a better GPU like the 780M?
     
  5. SCARed

    SCARed Notebook Consultant

    Reputations:
    106
    Messages:
    265
    Likes Received:
    31
    Trophy Points:
    41
    well, the main problem is getting your hands on a GTX 780M. there are only VERY few shops selling those cards at all. and you do not want to see the prices of those things, believe me. :p

    plus you need the right BIOS for your notebook, one that supports the new GPU. good luck with that ...

    verdict: theoretically possible, but hard to do. notebooks are still not very upgrade-friendly. :(
     
  6. dkboy

    dkboy Newbie

    Reputations:
    0
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    5
    I don't think this opinion is right at all! If it didn't matter, why would Nvidia launch a model with double the memory? I'm sure it matters depending on what kind of applications you want to run. In the long run, I'm sure you will notice the difference!
     
  7. Idarzoid

    Idarzoid Guest

    Reputations:
    0
    Not really, it lacks the grunt to use all that memory effectively.

    Maybe it matters more when you're doing something like rendering or CAD but in that case you're better off with a Quadro card.
     
  8. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    What Idarzoid says. It's marketing; believe it or not, most people judge GPU power by the amount of RAM. Like the 880M having 8GB vRAM when desktop cards more powerful than it only have 3GB or 4GB. For the power of the card, and in most cases running at 1080p at best, 2GB is more than adequate.
     
  9. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Yeah, SLI 880M is still a hair behind a Titan Black, yet the 880M has 8GB vRAM while the Titan Black only has 6GB. It's all marketing hype.
     
  10. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    880M SLI isn't even a good example, because in SLI each card is only effectively rendering half the screen, or at half the FPS, and in that case even 4GB is more than overkill. The 880M is basically a desktop 770, which "only" has 2GB vRAM. At 1080p, 2GB should be more than sufficient, or at the very most 4GB. I'm sure there are some instances where a powerful GPU like the 880M can effectively use more than 2GB, but something like the 860M, not gonna happen.
     
  11. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    No, rendering half the screen is split-frame rendering (SFR). Multi-GPU for gaming has been alternate frame rendering (AFR) for years, because it allows the best performance scaling and compatibility with advanced render techniques. AFR is what the drivers and games are designed for. SFR doesn't work in modern games and introduces crazy visual artifacts and negative scaling. However, I think it benefits certain workstation applications such as visual simulation.
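
    A toy sketch of the two schemes in Python (hypothetical and grossly simplified; real drivers handle this far below the application level):

        # Toy illustration of how AFR and SFR distribute work across 2 GPUs.
        NUM_GPUS = 2

        def afr(frame):
            """AFR: each GPU renders whole frames, round-robin."""
            return f"frame {frame} -> GPU {frame % NUM_GPUS} (entire frame)"

        def sfr(frame):
            """SFR: every GPU renders a slice of every frame."""
            return [f"frame {frame}, slice {g} -> GPU {g}" for g in range(NUM_GPUS)]

        for f in range(4):
            print("AFR:", afr(f), "| SFR:", *sfr(f))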
     
  12. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    lol my point was that 880M in SLI isn't even as powerful as a Titan Black, yet a single 880M has 8GB vRAM while the Titan Black only has 6GB vRAM, so that 8GB is just pure marketing hype, because when you get to situations in which you need that 8GB of vRAM, you're probably getting 10-20 frames anyway.
     
    HTWingNut and Cloudfire like this.
  13. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Actually it's about on par with the Titan Black; when both are overclocked they are about even too.
     
  14. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    I doubt that.

    GTX 880M SLI has 3072 cores (2×1536), but because it's SLI, you get the typical 10-60% reduction, which varies from game to game.
    The Titan Black has 2880 cores (6% fewer) and no SLI reduction.

    GTX 880M SLI has a 256-bit bus.
    The Titan Black has a 384-bit bus.

    GTX 880M SLI has 32 ROPs.
    The Titan Black has 48 ROPs.

    Pretty sure the Titan Black has a decent lead over 880M SLI.
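
    As a rough back-of-the-envelope check in Python (core counts from this post; the flat scaling factors are illustrative assumptions, and clock speeds are ignored entirely):

        # Effective shader throughput of 880M SLI vs a single Titan Black.
        CORES_880M = 1536          # per GPU
        CORES_TITAN_BLACK = 2880

        def effective_cores(cores_per_gpu, num_gpus, scaling):
            """Core count once SLI overhead is applied to the extra GPU(s)."""
            return cores_per_gpu * (1 + (num_gpus - 1) * scaling)

        for scaling in (0.4, 0.7, 0.9):   # i.e. a 60%, 30%, or 10% SLI reduction
            sli = effective_cores(CORES_880M, 2, scaling)
            print(f"{scaling:.0%} scaling: 880M SLI ~{sli:.0f} cores vs {CORES_TITAN_BLACK}")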
     
  15. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    880M SLI is 2 × 32 ROPs, 2 × 256-bit memory bus, and 2 × 160 GB/s.

    SLI scaling isn't that bad.
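
    For anyone wondering where the 160 GB/s per card comes from: it's just bus width times effective memory transfer rate. A quick sketch (the 5 GT/s effective GDDR5 rate is an assumption that reproduces the quoted figure):

        # Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
        def bandwidth_gb_s(bus_width_bits, effective_rate_gt_s):
            return (bus_width_bits / 8) * effective_rate_gt_s

        per_card = bandwidth_gb_s(256, 5.0)   # 160.0 GB/s per 880M
        print(per_card, 2 * per_card)         # 160.0 320.0 (SLI aggregate)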
     
  16. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    You usually don't get 100% scaling (2x); usually the second card adds 70-90% of single-card performance.
     
  17. sponge_gto

    sponge_gto Notebook Deity

    Reputations:
    885
    Messages:
    872
    Likes Received:
    307
    Trophy Points:
    76
    HT, I thought you were the one who corrected my mistake of thinking SLI doubles memory bandwidth, and explained that it only almost doubles VRAM reads? Or was that Meaker? Damn, it's been a while.. :p
     
  18. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Which is still very respectable.
     
  19. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    But each GPU is still limited by a 256-bit bus.
     
  20. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    It's still an aggregate 512-bit bus.
     
  21. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    It was me; it's not double, but it is faster.

    Write speed is the same as a single bus (every write must be identical on both buses, hence memory amounts don't double), but each card gets to read whatever data it wants for the frame it is working on. You get duplicate reads, obviously, but otherwise read speed gets a hefty increase.

    I'm just speaking from benchmark experience. The Titan Black would be preferable as a single card, but since the performance is so high you would be hard pressed to tell the difference.
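
    A crude numerical sketch of that read/write asymmetry (the 160 GB/s per-card figure is from earlier in the thread; the 30% duplicated-read fraction is a made-up illustrative number):

        # SLI memory model: writes are mirrored to both cards (no aggregate
        # gain), while reads are mostly independent per frame.
        PER_CARD_BW = 160.0   # GB/s

        def effective_read_bw(per_card_bw, num_gpus, duplicated_fraction):
            unique = 1 - duplicated_fraction
            return per_card_bw * (1 + (num_gpus - 1) * unique)

        print("writes:", PER_CARD_BW, "GB/s (mirrored, same as a single card)")
        print("reads: ~%.0f GB/s effective" % effective_read_bw(PER_CARD_BW, 2, 0.3))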
     
    octiceps likes this.
  22. colonels

    colonels Notebook Consultant

    Reputations:
    147
    Messages:
    222
    Likes Received:
    18
    Trophy Points:
    31
    Silly question, but if the 860M Kepler has 2 processor thingies (SMX units) disabled, is it possible to re-enable them and get 780M speeds?
     
  23. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    The GTX 860M does indeed have 2 SMX units disabled, and there is no way you are able to re-enable them. They have probably cut away those 384 cores with a laser.
     
  24. colonels

    colonels Notebook Consultant

    Reputations:
    147
    Messages:
    222
    Likes Received:
    18
    Trophy Points:
    31
    so basically we are getting NVIDIA leftovers cut down to fit their current lineup?

    i mean it's brilliant from a sourcing/manufacturing POV but kinda lame from an end-user POV
     
  25. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I actually would like to point out that the amount of vRAM being used per game depends only on the game itself. MOST high-end games cap out at 2GB to 2.2GB of vRAM used. Battlefield 4 has been known (by me) to go to 2.5GB, but it is VERY rare and it usually drops back down to 2.2GB quickly enough.

    Some OTHER games use more. For example:
    - Arma 2 OA (and by extension its DayZ mod) uses up to 2.5GB of vRAM with the video memory setting set to "default". If you set it to "very high" it'll limit itself to 1.5GB for some weird reason.
    - Call of Duty: Ghosts uses all 4GB of my 780Ms. I guarantee you that it does not use it well, and it is likely bad coding, but I must also point out that when the game released I did *NOT* get the stutter and framedrops that people with even SLI 680s did before a few optimization patches came out. That being said, do not waste money buying Ghosts on PC. Or anywhere else.
    - Titanfall uses 3.6GB to 3.8GB of vRAM with "Insane" textures selected. I have seen it go up to 4GB before, and it likely only STOPPED at 4GB because that is what my video card has. I actually would quite like to see how much video memory it would use with a Titan Black or an 8GB 880M, just to see if it'd remove that limit.
    - I have never tested Crysis 3, but since Crysis 2 uses 2.2GB of video memory, Crysis 3 should at minimum match that. Especially considering CryEngine being CryEngine.
    - Skyrim, with some mods like I have it set up, uses 2.5GB or more of vRAM.

    Now please note that the games above may NOT be what the owner of the card actually plays, and that the video RAM requested can (except for Ghosts) be EASILY reduced by turning down the setting that causes it to skyrocket (like the texture resolution in Titanfall), so a 2GB card is still quite viable... I just wanted to point out that things using 3GB-4GB or more of video memory don't need to be running at 10-20fps, nor do they need to be exceedingly good looking (Arma 2).

    So yes, 2GB cards are perfectly viable and useful and everything these days... but 4GB cards are going to get more useful as people increasingly code for machines with large amounts of video RAM and PC-resembling hardware. It's not useless to have a 4GB card. It's not necessary either. It'll also likely get more useful in a couple years' time, but again... lowering settings should still work, and I think it'll be a long time before 3-4GB becomes the standard for high-end cards.

    -------------------------------------------------------------------------------------------------------------------------------------------

    This post was written because I am addicted to PlayClaw 5's overlays, which look something like this: https://hostr.co/file/T19VkZaQSU7K/Screenshot-951.jpg . I play a lot of games and always take notice of how much GPU usage/SLI scaling/GPU temps/vRAM usage/CPU temps/CPU usage each game shows, and thus I've noticed the trends and wish to share with everyone, because we all love pizza.
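
    (For anyone without PlayClaw, a quick-and-dirty way to watch the same vRAM numbers is to poll nvidia-smi; a minimal sketch, assuming an NVIDIA driver with nvidia-smi on the PATH:)

        import subprocess
        import time

        # Log VRAM usage of the first GPU once a second while a game runs.
        while True:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=memory.used,memory.total",
                 "--format=csv,noheader,nounits"],
                text=True,
            )
            used, total = (int(x) for x in out.strip().splitlines()[0].split(","))
            print(f"{used} / {total} MiB vRAM used")
            time.sleep(1)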
     
    Sandbag and nipsen like this.
  26. ChowMeow

    ChowMeow Notebook Consultant

    Reputations:
    67
    Messages:
    275
    Likes Received:
    147
    Trophy Points:
    56
    I think it depends on the chip, actually. In the Radeon 6000 series there was the 6950, which could be flashed to a 6970 via firmware. I'm not sure as far as mobile Kepler is concerned, but even if it were flashable it ain't practical.
     
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    ^ All of the above being said, unless you're going to get a 780M/880M/870M AND plan to overclock the card you get, I suggest grabbing the 860M Maxwell. Because from what I hear... it's quite a good card.

    EDIT: Blast, I missed, didn't expect someone to post again so quickly
     
  28. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Say they try to create a full GK104.
    Maybe 80% meet the voltage leakage, stability, silicon quality, etc. criteria to make their way into a final GTX 680 product.
    The rest, 20%, have most of the cores functioning normally, but not all. So Nvidia disables the damaged areas and sells the rest as a new, different GPU, instead of remelting the silicon or throwing it in the garbage.
    I don't think that this 20% bin is enough to feed an entire production line for a specific product, and I don't think any manufacturer would want to sell only chips that are damaged, so they will likely make a GK104 specifically without the 2 SMX. Or they just make full GK104s and intentionally cut away the 2 SMX anyway. I'm not sure. It may be that the GTX 680M I used to own originally had all 1536 cores fully functional, but Nvidia cut away 192 of them just to have different products to sell, although it wouldn't actually have cost them any more to sell me that full GK104 (GTX 780M).

    So the finished products could be both leftovers and intentionally cut-down chips. It's a bit sneaky, but what can you do...
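
    To put that hypothetical 80/20 split into numbers (the wafer count is made up purely for illustration):

        # Binning arithmetic for the hypothetical split above.
        CHIPS_PER_WAFER = 200                 # made-up illustrative number
        full = int(CHIPS_PER_WAFER * 0.80)    # fully working GK104 -> GTX 680 class
        cut = int(CHIPS_PER_WAFER * 0.20)     # defective SMX -> cut-down parts
        print(f"per wafer: {full} full GK104s, {cut} cut-down candidates")
        # If demand for the cut-down SKU exceeds those 40 chips, fully working
        # dies get fused down to fill the gap -- the "intentionally cut" case.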

    I don't think any Kepler or GCN cards were sold with just a neutered vBIOS, where the customer could simply reflash the card with the vBIOS belonging to the full-core GPU and have all the cores suddenly function.
    It's a thing of the past, unfortunately.

    A Quadro K5000M can be reflashed to a GTX 680M, but a GTX 680M can't be reflashed to a K5000M, although the two are the same chip.
    It's because Nvidia has cut away some crucial parts on the GTX 680M with a laser.
     
  29. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    I remember when select R9 290s could be BIOS-flashed to 290Xs for a limited time.
     
  30. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yeah, that's right; there was a thread at overclock.net that covered this. Most cards were locked, but some people actually were successful at unlocking a 290X.
    One of those rare opportunities.
     
  31. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    That's nice and all, but the 860M does not come close to my 780M at 1045/7000 at 1080p.
     
  32. Diversion

    Diversion Notebook Deity

    Reputations:
    171
    Messages:
    1,813
    Likes Received:
    1,343
    Trophy Points:
    181
    Good info.. You have to Pay to Play these days. Hey, that rhymed. Anywho, it takes hardly ANY labor/time to laser-cut (neuter) a GPU PCB, so yes, I completely agree that PCB makers are instructed to neuter these cards, knowing someone with a 680M would otherwise just reflash to a Quadro K5000M, saving about 2000 dollars.. They don't want to hurt profit margins. Therefore, laser cut! It sucks to know that we potentially have the top-end GPUs in our laptops, but they are neutered and can't be fixed via mods like in the old days. I would love to reflash my 860M to, say, some better card in the future, $$$ saved!
     
    Cloudfire likes this.
  33. Diversion

    Diversion Notebook Deity

    Reputations:
    171
    Messages:
    1,813
    Likes Received:
    1,343
    Trophy Points:
    181
    I have an 860M AND a 780M-powered iMac.. I agree, they are on wholly different levels of raw performance. Even an extremely overclocked 860M may bench high numbers in synthetics, but the difference is quite noticeable in very heavy, graphics-intensive games.. You can't compete with shader cores.. 640 vs 1536.
     
  34. sponge_gto

    sponge_gto Notebook Deity

    Reputations:
    885
    Messages:
    872
    Likes Received:
    307
    Trophy Points:
    76
    You're right, you can't compare shader cores.. Maxwell vs Kepler :p
     
    Cloudfire likes this.
  35. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    As good as they are, they are not 2-2.5x as powerful per shader, however.
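
    Rough math behind that (core counts from this thread; the clocks are assumed reference values, so treat it as a sketch):

        # How much faster per shader-clock the 860M Maxwell would need to be
        # to match a 780M. Clocks are assumed reference values in MHz.
        KEPLER_780M = (1536, 823)     # (cores, assumed clock)
        MAXWELL_860M = (640, 1029)    # (cores, assumed clock)

        def throughput(cores, clock_mhz):
            return cores * clock_mhz  # ignores architectural efficiency gains

        ratio = throughput(*KEPLER_780M) / throughput(*MAXWELL_860M)
        print(f"Maxwell would need ~{ratio:.1f}x per-shader-clock efficiency")  # ~1.9x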
     
  36. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    But that's... what I said. If you are not going to get a 780M/880M/870M & overclock (either now or in the future), and your choice is between the 860M Kepler & 860M Maxwell, go for the 860M Maxwell. If you do plan to upgrade & OC etc., get the MXM slot (I heard the Maxwell version is board-soldered).
     
  37. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    A legitimate first-hand production K5000M should come in under $1000 if you're talking about an MXM replacement module. Mobile workstation retail is a different story though.


    Speaking of RAM usage: it's not difficult to max out a GPU with a tiny benchmark using a few MBs of data or less, but for some tasks even the 16GB FirePro W9100 can hit the VRAM capacity wall. It really depends on the specific task you're doing.