The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Brace yourself: NEW MAXWELL CARDS INCOMING!

    Discussion in 'Gaming (Software and Graphics Cards)' started by Cloudfire, Jul 14, 2014.

  1. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,547
    Messages:
    6,410
    Likes Received:
    4,085
    Trophy Points:
    431
    Indeed, we need the option for 3K-4K or whatever on laptops. Even Windows tablets sport higher resolutions, and phones now use ridiculously high-density screens that are beyond silly at this point.

    3K should be a standard by now. We don't need to game at that res, but it helps with productivity and overall screen quality.
     
    GuniGuGu and Cloudfire like this.
  2. heibk201

    heibk201 Notebook Deity

    Reputations:
    505
    Messages:
    1,307
    Likes Received:
    341
    Trophy Points:
    101
    You mean 1440p should be the standard now; 3K is really at the high end for a mainstream mobile GPU.
     
    D2 Ultima likes this.
  3. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I want a 1440p 120Hz monitor with a 5ms or better black-to-black response time for laptops. It's not that hard to do.

    That being said, it would be EXCEEDINGLY stupid of nVidia to release a new flagship that doesn't beat the old single-GPU flagship. Even if it costs less, it's pretty much a cheaper sidegrade and will have immature drivers for a couple of months.

    If the 980M matches 780Ti performance, awesome; even more so if it's better. GK104's best card (680/770) was clearly proven to work without issue in the mobile market; I think GM104 can do the same, and GM104 should be the GTX 980. Unless mobile gets GM204 first... in which case it will definitely smoke anything previously existing, including desktop hardware (barring supreme gimping of the card releases). But yes, a 980 not beating the 780Ti would be VERY dumb for nVidia, unless there's some 980Ti out there that's like 1.5x Titan Blacks.
     
    Cloudfire likes this.
  4. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Exactly. You don't have to game at 3K. You can drop the resolution for the few games your rig can't handle at 3K.
    At least you have a display that can do 1200p, 1440p, 1600p, etc. You can't do any of those with a mediocre 1080p display.

    I picked up on a rumor earlier that the GTX 970 will be Nvidia's card against the R9 290X, beating it on both value and TDP. So if that card is positioned to battle the 290X, the GTX 980 should beat the GTX 780 Ti.

    It makes no sense to have a GM204 card that doesn't beat the previous architecture.
    The GTX 680 was like 25-30% faster than the GTX 580.
     
    GuniGuGu and D2 Ultima like this.
  5. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Well, let's just hope our 980M is a downclocked GTX 980, because we already know Gx104 can be handled on mobile =D. THEN we'll get some powah.
     
    Cloudfire likes this.
  6. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,547
    Messages:
    6,410
    Likes Received:
    4,085
    Trophy Points:
    431
    Nah, I do mean 3K. 2880x1620 is a very nice resolution bump over 1080p, and it can easily display 1080p and 1440p content and games without problems. Hell, even 4K would be fine; not for gaming on laptops, but for screen quality in general.
     
  7. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I thought 3K was 3200 x 1800?
     
  8. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    The 640-core 750 Ti is outperformed by the 768-core, lower-clocked 650 Ti Boost. So core-for-core, clock-for-clock, Maxwell performs about the same as Kepler; it's just more efficient. The 960-core Kepler is the 660, which is clearly in a higher performance tier.
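    A quick numbers check on that claim; a minimal sketch, assuming the commonly quoted reference base clocks (~1020MHz for the 750 Ti, ~980MHz for the 650 Ti Boost):

        # Naive raw-throughput proxy: shader cores x core clock.
        # Clock figures are assumptions (reference base clocks).
        cards = {
            "750 Ti (Maxwell, 640 cores)": (640, 1020),
            "650 Ti Boost (Kepler, 768 cores)": (768, 980),
        }
        for name, (cores, mhz) in cards.items():
            print(f"{name}: {cores * mhz:,} core-MHz")
        # 750 Ti:       652,800 core-MHz
        # 650 Ti Boost: 752,640 core-MHz -> ~15% more raw cores x clock,
        # consistent with the 650 Ti Boost winning if per-core throughput
        # is roughly a wash between the two architectures.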
     
  9. Ningyo

    Ningyo Notebook Evangelist

    Reputations:
    644
    Messages:
    489
    Likes Received:
    271
    Trophy Points:
    76
    Come on Alienware, make a 20" 16:10 (17" x 10.6") laptop with exactly the same footprint as the Alienware 18 but only 1.5" thick.

    That leaves a 13mm bezel on each side and 29mm on the top and bottom.

    Give us a 4K OLED at 120Hz with at least 80% sRGB.

    And of course drop in 2x or even 3x 980M SLI; after all, Maxwell runs cooler and draws less power. That should manage decent-to-great framerates at 4K, depending on the game.
     
  10. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Even 1080p on 17.3" gives me eye strain (admittedly, I don't have the best vision). 3K and 4K on smaller screens is just begging for eye problems unless MS fixes its scaling issues in Windows. Until then, I don't think 3K or 4K laptop panels are worth it.
     
  11. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    I think 1080p is fine for another generation, for gaming. 2016+ is when I'd expect to see 4K.
     
  12. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    That's odd. I was comparing the 860M to the 770M and such. The 650Ti Boost has FAR more memory bandwidth, though; almost double. I wonder if that's what's holding the 750Ti back? The 770M and 860M have a much smaller memory gap; compare the 770M's 4000MHz 192-bit bus to the 650Ti Boost's 6000MHz 192-bit one. And though the 770M has 1 extra SMX unit over the 650Ti Boost, its clock speed is lower. Hmmm, interesting. Lemme run some numbers and see if I can make a clock-for-clock, core-for-core match.
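    For reference, peak memory bandwidth falls straight out of effective memory clock and bus width; a minimal sketch using the figures quoted in this thread (the 750 Ti and 860M numbers are the commonly listed ones, taken as assumptions):

        def bandwidth_gbps(effective_mhz: float, bus_bits: int) -> float:
            # transfers per second x bytes moved per transfer
            return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

        print(bandwidth_gbps(4000, 192))  # 770M         ->  96.0 GB/s
        print(bandwidth_gbps(6000, 192))  # 650 Ti Boost -> 144.0 GB/s
        print(bandwidth_gbps(5400, 128))  # 750 Ti       ->  86.4 GB/s
        print(bandwidth_gbps(5000, 128))  # 860M         ->  80.0 GB/s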
     
  13. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    The comparison is void if you don't normalize the clock speeds, hence 860M vs. 770M is an inaccurate representation; that's why I used the desktop parts instead, which are much closer.

    I thought the much larger L2 cache on Maxwell reduces the memory bus/bandwidth needed?
     
  14. Ningyo

    Ningyo Notebook Evangelist

    Reputations:
    644
    Messages:
    489
    Likes Received:
    271
    Trophy Points:
    76
    The architectures are just very different, so directly comparing core counts and bus widths doesn't give an accurate picture of performance.

    That's why I based my estimate on the average 10% better FPS in real game benchmarks over the Kepler 860M. Combine that with it drawing 45W instead of 75W and you get the 1.85x performance per watt I estimated. Of course, the exact efficiency gain depends on the particular game.
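    The arithmetic behind that estimate, as a minimal sketch (the ~10% FPS gain and the 45W vs. 75W figures are the ones from the post; the result is sensitive to rounding):

        fps_ratio = 1.10           # Maxwell 860M ~10% faster in game benchmarks
        watts_maxwell, watts_kepler = 45.0, 75.0

        perf_per_watt_gain = fps_ratio * (watts_kepler / watts_maxwell)
        print(f"{perf_per_watt_gain:.2f}x")  # -> 1.83x, roughly the 1.85x estimate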
     
  15. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Yes, the arrangement of the shader blocks has been changed on Maxwell to save power, but I'm talking about how it compares to Kepler on an individual CUDA core basis; again, normalizing clock speed, setting aside power and memory bandwidth (where Maxwell is obviously more efficient in both areas), and focusing only on shader performance. It seems to me that Maxwell and Kepler perform the same at equal core counts, in contrast to Kepler vs. Fermi, where Kepler needed twice the core count to perform the same because it dropped the shader clock, which operated at 2x the core clock on Fermi.
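    To make the Fermi comparison concrete, normalize shader throughput as cores x shader clock; a minimal sketch (the GTX 580 figures are the commonly quoted reference specs, used here as assumptions):

        # Fermi shaders ran on a hot clock at 2x the core clock;
        # Kepler (and Maxwell) shaders run at the core clock itself.
        def shader_rate(cores, core_mhz, hot_clock_mult):
            return cores * core_mhz * hot_clock_mult

        fermi = shader_rate(512, 772, 2)    # GTX 580: 512 cores @ 772MHz core
        kepler = shader_rate(1024, 772, 1)  # hypothetical Kepler at equal clock

        print(fermi == kepler)  # True: Kepler needs 2x the cores to match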
     
  16. Amal77

    Amal77 Notebook Deity

    Reputations:
    484
    Messages:
    821
    Likes Received:
    73
    Trophy Points:
    41
  17. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Ha, that's ridiculous. Defeats the whole purpose of having a laptop.

    "Let me buy this thing that's supposed to be portable, and then buy a box that makes it unportable. Or wait, maybe I should just get a desktop?"
     
  18. Amal77

    Amal77 Notebook Deity

    Reputations:
    484
    Messages:
    821
    Likes Received:
    73
    Trophy Points:
    41
    Well, it depends.

    I'm more of a gaming-at-home kind of user who wants portability when I'm out, since I use the same laptop for work and gaming.
     
  19. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Then you're better off purchasing a slightly larger laptop that offers SLI.
     
  20. irfan wikaputra

    irfan wikaputra Notebook Consultant

    Reputations:
    80
    Messages:
    276
    Likes Received:
    94
    Trophy Points:
    41
    That was Nvidia's rule, as Meaker said.
    However, the rule is derived from a binary, powers-of-two perspective.
    About AMD, I'm not really sure (never had one, lol; not that I hate them xD).
     
  21. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,547
    Messages:
    6,410
    Likes Received:
    4,085
    Trophy Points:
    431
    I am sure an SLI machine will cost more and perform the same or perhaps even slightly better; it depends on what GPU you choose to run in it. The thing is, for people like me who tend to game on an external screen with an external mouse, keyboard, etc., this is perfect. I have great mobility and performance when on the go, but I can just hook it up back home for maximum gaming performance.

    'Tis a great alternative for me.
     
    LostCoast707 likes this.
  22. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Ever hear of an HDMI cable? SLI will offer better scaling and performance than a docked box.

    That technology is in its infancy. Assuming it kicks off, maybe one day it will be better.
     
  23. Amal77

    Amal77 Notebook Deity

    Reputations:
    484
    Messages:
    821
    Likes Received:
    73
    Trophy Points:
    41
    I'm done with carrying heavy laptops around (coming from a Sager 8150).
    Most of the time I'm out meeting clients.
    I'm also getting older. Lol
     
    LostCoast707 likes this.
  24. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Seems like the best combination for you would be a desktop at home for gaming and work, and a laptop specifically for work.
     
  25. irfan wikaputra

    irfan wikaputra Notebook Consultant

    Reputations:
    80
    Messages:
    276
    Likes Received:
    94
    Trophy Points:
    41
    Game-wise, SLI 870M might last me another 2-4 years staying on Ultra, but it will be a different case if DirectX 12 is released.
    Definitely, deadsmiley: an SLI of the GTX x70M beats the x80M on a cost/performance basis; the x70M is just a shy shadow behind the top dog.
     
  26. Ningyo

    Ningyo Notebook Evangelist

    Reputations:
    644
    Messages:
    489
    Likes Received:
    271
    Trophy Points:
    76
    You can probably build a better desktop for less than that docking station costs, and likely in a smaller form factor too. I can see the lure of those 1-2 inch docking stations some laptops have, but that thing is bigger than some desktops.
     
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    It doesn't make a HUGE difference. L2 cache should just help how quickly the GPU can find data to pull into the buffer; it shouldn't, say... destroy the need for higher bandwidth. A 128-bit 5400MHz mem bus isn't EVER going to fill 2GB of vRAM as fast as a 256-bit 6000MHz card will.
     
  28. Ningyo

    Ningyo Notebook Evangelist

    Reputations:
    644
    Messages:
    489
    Likes Received:
    271
    Trophy Points:
    76
    Never say never. There are ways it could theoretically be done (not likely with Maxwell, though). For instance, you could have a memory controller that only alters the bits of RAM that need to change, leaving ones that already hold the proper value alone, sort of like how a GIF will often store only the altered pixels between consecutive frames. I'm not saying this particular method would be worth using, but it's entirely possible that at some point someone will come up with a method to make an equal size/speed bus more efficient.
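    That GIF-style idea as a minimal sketch: a purely hypothetical controller that transfers only the words that changed between two buffers, instead of rewriting the whole thing (illustrative only; not how any shipping memory controller is claimed to work):

        def delta_writes(old, new):
            # Return (index, value) pairs for only the words that differ.
            return [(i, v) for i, (o, v) in enumerate(zip(old, new)) if o != v]

        frame_a = [0, 0, 7, 7, 0, 0]
        frame_b = [0, 0, 7, 9, 0, 0]

        writes = delta_writes(frame_a, frame_b)
        print(writes)                       # [(3, 9)] -> 1 write instead of 6
        print(len(writes) / len(frame_b))   # ~0.17 of the full-buffer traffic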
     
  29. Amal77

    Amal77 Notebook Deity

    Reputations:
    484
    Messages:
    821
    Likes Received:
    73
    Trophy Points:
    41
    I've thought about it over the last couple of years, but I still end up getting just a laptop instead. I love working and gaming on the same device.

    Or I should just wait for the Aorus X3 Plus with a GTX 970M, for the best of both worlds. Assuming the GTX 970M performs close to the GTX 770. Lol
     
  30. irfan wikaputra

    irfan wikaputra Notebook Consultant

    Reputations:
    80
    Messages:
    276
    Likes Received:
    94
    Trophy Points:
    41
    Then you might want to try the Aorus X7, the so-called "lightest SLI machine";
    just prepare some ice to cool it down the road, lol xD.
    To be honest, I'm almost in the same boat as you:
    I'm mobile, on the go, need to meet clients, etc.
    However, my company gave me a mediocre HP EliteBook,
    and I have a P377SM-A sitting on my table, which is still portable enough as long as you're not climbing a mountain with it on your back.
    If the company hadn't bought the EliteBook, I would've gotten myself your system, the Clevo W230ST, maybe with a higher-wattage adapter.
     
  31. Amal77

    Amal77 Notebook Deity

    Reputations:
    484
    Messages:
    821
    Likes Received:
    73
    Trophy Points:
    41
    The X7 is too big (17 inch) for me to carry around; it weighs the same as my previous 8150, and like you said, it runs toasty, lol.
    That Clevo of yours is already so powerful that I don't think you'd like having a W230ST for gaming, even for on the go.

    Let's wait and see how the new Maxwell performs on mobile.
     
  32. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    We'll see what happens with the 980, then; going by what you say, it could end up a bandwidth-starved shader monster.
     
  33. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    EDIT: Forget it. It was an estimate.
     
  34. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Memory bandwidth isn't THAT big of a thing, and the 980 is also possibly getting a 256-bit bus with 7000MHz+ memory. Memory bandwidth doesn't make a huge difference in most games, as evidenced by the 660Ti vs. 670 game benchmarks. Once it's above a certain threshold (which seems to be somewhere between 120GB/s and 160GB/s), you don't see much memory-based improvement unless you nearly double it. At least, that's what I've seen so far.

    Also, the AMD 7970, with its 3GB of vRAM and 384-bit 6000MHz bus (288GB/s), was easily beaten in a lot of games by the GTX 680 with its measly 2GB of vRAM and 256-bit 6000MHz bus (192GB/s). The 780Ti and Titan Black also seem to lose out at higher resolutions despite their memory bandwidth surpassing the R9 290X's, mainly because AMD's architecture supports higher resolutions better. That will probably change with Maxwell.
     
  35. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Oh, you're killing me! :rolleyes:
     
    Cloudfire likes this.
  36. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    I'm sorry, you just contradicted yourself so many times in this post I had to give a little chuckle. :)
     
  37. Ningyo

    Ningyo Notebook Evangelist

    Reputations:
    644
    Messages:
    489
    Likes Received:
    271
    Trophy Points:
    76
    As you say, memory bandwidth is more about being past a certain threshold, and that threshold depends on the particular game. However, it will probably get a lot higher somewhere between 2 and 5 years from now. Over the next two years, 3K or 4K resolution will likely become the norm. That alone won't change the required bandwidth much, but far higher resolution makes low-resolution textures and shadow maps far more obvious and lets you actually see the added detail in higher-resolution ones. So as 3K-4K screens become more common, games will focus on improving texture resolution. Figure at least 2-3 years of development for a AAA game, and you're looking at 2-5 years from now for the memory bus to become a major bottleneck.

    That said, it's hard to say exactly when, or how much larger a bus you will need; just that more will likely be needed in the future.

    Of course, that only really matters if you plan to use the same laptop for 4+ years.
     
  38. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Memory bandwidth does need to scale up with the core config, though, for obvious performance reasons.
     
  39. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    It's really bloody difficult to explain it all. I need to get copious amounts of sleep before I try that again.

    Basically, I'm saying that somewhere around 120GB/s+ is the point where memory bandwidth starts being less important for games, unless you REALLY need to fill/empty that framebuffer quickly (which rarely happens). Since the 750Ti and 860M are both well below that bandwidth (86.4GB/s and 80GB/s respectively), they can suffer more from it being low. The 650Ti Boost is above the threshold at 144GB/s, but the 770M is not (96GB/s), which is why the core count vs. clock speed relation comes out better there.

    Using core count x clock speed x shader hot-clock multiplier (nonexistent on Kepler and Maxwell), the Maxwell cards should lose, handily. The 770M ought to be ~19% faster, yet it loses in quite a few gaming scenarios and trades blows a lot in others, despite having stronger hardware overall. So core to core, Maxwell should trump Kepler. Above the 120-144GB/s bandwidth threshold, where the higher-end cards sit, we should see cleaner core-to-core, clock-to-clock comparisons.

    I probably did not clear up one bloody thing there. But I still need lots of sleep. XD
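    Running those numbers as a minimal sketch (the base clocks, ~797MHz for the 770M and ~1029MHz for the Maxwell 860M, are the commonly quoted figures with boost behavior ignored, so treat them as assumptions):

        # cores x clock x hot-clock multiplier (1x on both Kepler and Maxwell)
        gtx_770m = 960 * 797 * 1
        gtx_860m = 640 * 1029 * 1

        print(f"770M ahead on paper by {gtx_770m / gtx_860m - 1:.0%}")
        # -> ~16% on base clocks (closer to ~19% depending on the clocks used),
        # yet the 860M trades blows with it in games: Maxwell does more per core.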
     
    Cloudfire and LostCoast707 like this.
  40. trvelbug

    trvelbug Notebook Prophet

    Reputations:
    929
    Messages:
    4,007
    Likes Received:
    40
    Trophy Points:
    116
    So, you guys think we'll see specs of the actual cards before we hit 150 pages?
     
    Mr Najsman likes this.
  41. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    150 pages?! It's only page 48 for... oh nvm I set to view 30 posts per page :D
     
  42. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Never man. Never
     
  43. LostCoast707

    LostCoast707 Notebook Consultant

    Reputations:
    51
    Messages:
    167
    Likes Received:
    36
    Trophy Points:
    41
    You can do that? I tried a long time ago but couldn't figure out where the setting was. :eek:
     
  44. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Nope. We may see fake leaks, though.
     
  45. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    User CP --> General Settings then scroll down to Thread Display Options and it's there.
     
    LostCoast707 likes this.
  46. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,547
    Messages:
    6,410
    Likes Received:
    4,085
    Trophy Points:
    431
    Keep in mind that the 680 only really beat the 7970 when it was first released. After driver updates and later on, the 7970 and its rebrand, the 280X, spanked the 680 and rivaled its overclocked rebrand, the 770.

    Memory bandwidth is important only up to a certain point, as you say. Beyond that, you need so much in other resources... The R9 290X showed that even 300GB/s+ of memory bandwidth wasn't enough for 4K, and so did the 780 Ti.

    So for now, 1080p with these high-end laptop GPUs is enough. And although mobile variants with starved memory bandwidth exist, depending on the game it hardly makes a difference past 120+GB/s.
     
  47. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Yep, GCN didn't perform to its full potential during the first year due to drivers. Then Catalyst 12.11 and the awesome Never Settle game bundles came out and it was game over for Nvidia. :D
     
  48. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I'm fairly certain the amount of vRAM and memory bandwidth did not make the difference at 4K. I'm more inclined to believe it's the architecture. I remember the 280M/9800GTX/GTS250 etc. cards having some problems with 1080p, even though they weren't a whole lot weaker than, say, the GTX 260 (especially if they were OC'd), yet the 260 handled 1080p a lot more easily. I might be remembering it very wrong, though.

    Anyway, I think Maxwell on its own will be better suited to higher resolutions. The R9 290 and 290X are a newer architecture, I think, and AMD has them handling higher resolutions better than the Kepler chips. On paper, the 290X doesn't come close enough to a 780Ti (in both memory bandwidth and core performance; both cards assumed stock) to outperform it at higher resolutions.
     
  49. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Within the same architecture, mostly yes.
    Across different architectures, no.
    Look up the GTX 660 and GTX 580.
    One has a 192-bit bus, the other a 384-bit, and they perform the same.
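    Those two work out like this; a minimal sketch, assuming the commonly listed effective memory clocks (~6008MHz for the 660, ~4008MHz for the 580):

        def bandwidth_gbps(effective_mhz, bus_bits):
            return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

        print(bandwidth_gbps(6008, 192))  # GTX 660: ~144.2 GB/s on a 192-bit bus
        print(bandwidth_gbps(4008, 384))  # GTX 580: ~192.4 GB/s on a 384-bit bus
        # Different widths and different bandwidth, yet similar game performance:
        # bus width alone says little across architectures.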
     
  50. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Yes that was implied.
     