The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    How will Ampere scale on laptops?

    Discussion in 'Gaming (Software and Graphics Cards)' started by Kunal Shrivastava, Sep 6, 2020.

  1. joluke

    joluke Notebook Deity

    Reputations:
    1,040
    Messages:
    1,798
    Likes Received:
    1,217
    Trophy Points:
    181
    All I want is a 3080 that I can purchase on ebay or somewhere else that I can shove into my P775DM3-G and yes, the upgrade will be worth it.

    Doubt I'll see "upgrade kits" available any time soon like there are for the RTX 2080 (NOT the Super)
     
    Kunal Shrivastava and etern4l like this.
  2. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    The thing is that the desktop 3080 also comes with a 320-bit bus vs. the 256-bit bus on the mobile 3080, so the 19% faster GDDR6X combined with the 25% wider bus gives roughly 50% higher bandwidth in favor of the desktop 3080, and even without any performance analysis I can assure you that has a big impact on performance, especially at high resolutions. That's why at high resolutions the RTX 3000 series beats AMD's Big Navi: Big Navi has a bottleneck using GDDR6 with a 256-bit bus, Infinity Cache and all.
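    The two factors multiply rather than add; a quick check of the figures above (a minimal Python sketch):

    ```python
    # Memory speed and bus width multiply, they don't add.
    faster_memory = 1.19  # 19 Gbps GDDR6X vs. 16 Gbps GDDR6
    wider_bus = 1.25      # 320-bit vs. 256-bit bus

    print(f"combined bandwidth advantage: {faster_memory * wider_bus - 1:.0%}")  # ~49%
    ```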
     
    seanwee likes this.
  3. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    Why is the percentage difference in bus width additive with the percentage performance (throughput, presumably) improvement claimed by Micron regarding their product GDDR6X?
    Perhaps the wider bus is needed to fully utilize the faster VRAM, in which case the net performance would be closer to the minimum of the two numbers, in this case 19%, right?
     
  4. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    It's more a compound effect than an additive effect, because the bandwidth formula is:

    BANDWIDTH (GB/s) = DATA RATE (Gbps) x BUS WIDTH (bits) / 8

    In the case of the desktop RTX 3080:

    BW = 19 Gbps x 320 bit / 8 = 760 GB/s

    For the mobile RTX 3080:

    BW = 16 Gbps x 256 bit / 8 = 512 GB/s

    So the desktop RTX 3080 has nearly 50% more bandwidth than the mobile RTX 3080. At the end of the day the memory bandwidth is what really matters, you could have a GDDR5X at 10 Gbps but a bus width of 768 bit and that would have more bandwidth than GDDR6X at 19Gbps with a bus width of 320 bit and hence perform better.
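    The same arithmetic as a small Python sketch; the 14 Gbps / 256-bit row is an assumed GDDR6 configuration added for comparison, the other figures are the ones above:

    ```python
    def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        """Peak bandwidth in GB/s: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte."""
        return data_rate_gbps * bus_width_bits / 8

    configs = {
        "desktop RTX 3080 (19 Gbps GDDR6X, 320-bit)": (19, 320),  # -> 760 GB/s
        "mobile RTX 3080 (16 Gbps GDDR6, 256-bit)":   (16, 256),  # -> 512 GB/s
        "assumed 14 Gbps GDDR6, 256-bit":             (14, 256),  # -> 448 GB/s
    }

    for name, (rate, width) in configs.items():
        print(f"{name}: {memory_bandwidth_gbs(rate, width):.0f} GB/s")
    ```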
     
    seanwee likes this.
  5. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    If the mobile 3080 is based on the desktop 3070, it’ll be using 14Gbps G6 memory, in which case the BW is 448GB/s.
     
  6. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    Slightly off topic here, but just out of curiosity:
    1) What do "faster tensor cores" amount to? Suppose a GPU with tensor cores upscales using DLSS; what exactly do we expect from 'better' tensor cores? More FPS? More accurate upscaling from lower resolutions?
    I get that DLSS reconstructs a low-res image rather than just scaling it, but isn't the result similar to, say, resolution scaling with TAA plus smart image sharpening (AMD RIS, Nvidia's Freestyle sharpen filter, etc.)? Technically, pixels are being approximated and the missing information is 'guessed' by both! Please correct me if I'm wrong here.
    2) The DLSS 1.0 implementation in early games wasn't great, but Nvidia said they weren't using tensor cores for that: everything was done on regular shader cores! They promised DLSS 2.0 and beyond would run on tensor cores so it would be superior, and it is, but technically it goes to show that DLSS doesn't need specialized hardware, or am I missing something here?
    3) AMD has an upcoming open-source DirectML-based feature that allegedly doesn't need specialized hardware? So either they worked out a way for machine-learning upscaling to run without tensor instructions, or they just proved that regular FP32/INT32 cores can handle the tensor workload?
    This is becoming a pattern with Nvidia: first they release proprietary hardware for variable refresh (G-Sync) while AMD FreeSync simply used the existing display controller, then they go on to adopt it.
    Then they were all about RTX until CryEngine could do it without needing RTX cards.
    Now are we seeing the same thing with DLSS?


     
    Last edited: Dec 23, 2020
    Vasudev likes this.
  7. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    Seems that things are a bit more complicated. These frequencies you gave are max theoretical data rates given by Nvidia/Micron. These seem to match clock frequencies - which means that this optimum data rate is only achieved in burst mode. Moreover, Micron already admitted that burst mode performance is identical between GDDR6 and GDDR6X. That's a bit of a contradiction.

    Additionally, it's not clear that real performance scales linearly with data bus width. That depends on read request / data granularity, i.e. you could waste some of the data bandwidth by reading completely unused data, although this effect would most likely be an issue for HBM, with its 1024-bit data bus.

    My point is: it would be ideal to look at real-life performance comparisons, rather than manufacturer specs, given the limited (in my case) understanding of the underlying hardware complexity.

    BTW apparently GDDR6X runs crazy hot, at over 100C, which would explain why they refrained from using it in laptops.
     
  8. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    AMD was able to get the full-blown performance of the RX 5600 with HBM2 in mobile form for Apple at what... a 60W TDP which is not even reached, let alone exceeded (even when on battery; that version is exceptionally efficient).

    I know this still illustrates a discrete GPU, but we're looking at signs that iGPUs are likely moving in a direction where HBM2 (or 3) is fully part of the CPU's iGPU, with about 4 to 8GB to use... but the catch is that the whole chip (CPU and GPU alike) will be able to make use of that bandwidth, which will change the performance metrics overall.

    If you take the existing Zen 2 iGPU (which has an enhanced Vega inside that's basically on par with Navi in performance and efficiency), and we know that RDNA 2 brings at least a 50% performance-per-watt improvement, add in a few more cores perhaps, on-die HBM (if AMD decides to use it in that iteration), on a 5nm node, and you're looking at about RX 5500 level of performance in a 45W TDP chip (maybe close to the RX 5600).
     
    Lakshya and Vasudev like this.
  9. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    Well, what we know for sure is:

    About the RTX 3090:
    *It has a compute power of 10496 CUDA cores x 1695 MHz = 17,790,720 core-MHz (about 20% more compute power than the RTX 3080)
    *It has a bandwidth of 936 GB/s (about 23% more bandwidth than the RTX 3080)
    *It performs between 5-10% better than the RTX 3080 in games

    About the RTX 3070:
    *It has a compute power of 5888 CUDA cores x 1725 MHz = 10,156,800 core-MHz (about 32% less compute power than the RTX 3080, i.e. the 3080 has ~47% more)
    *It has a bandwidth of 448 GB/s (about 41% less bandwidth than the RTX 3080, i.e. the 3080 has ~70% more)
    *It performs between 30-50% worse than the RTX 3080 in games

    The thing is that the RTX 3080's performance gain over the RTX 3070 roughly tracks its increase in compute power, but the RTX 3090's gain over the RTX 3080 does not. Where is the bottleneck? I have no idea
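    The same comparison as a rough Python sketch. "Compute" here is just CUDA cores x boost clock, which ignores architecture and memory effects, and the RTX 3080's own figures (8704 cores at 1710 MHz, 760 GB/s) are filled in from its public spec sheet, since the post only gives the other cards relative to it:

    ```python
    # Spec ratios vs. the RTX 3080; "compute" is simply CUDA cores x boost clock.
    cards = {
        "RTX 3070": {"cores": 5888,  "clock_mhz": 1725, "bw_gbs": 448},  # ~30-50% slower in games
        "RTX 3080": {"cores": 8704,  "clock_mhz": 1710, "bw_gbs": 760},  # reference card
        "RTX 3090": {"cores": 10496, "clock_mhz": 1695, "bw_gbs": 936},  # ~5-10% faster in games
    }

    ref = cards["RTX 3080"]
    ref_compute = ref["cores"] * ref["clock_mhz"]

    for name, c in cards.items():
        compute = c["cores"] * c["clock_mhz"]
        print(f"{name}: compute {compute / ref_compute:.2f}x, "
              f"bandwidth {c['bw_gbs'] / ref['bw_gbs']:.2f}x of the 3080")
    ```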
     
    etern4l likes this.
  10. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The RTX 3090 is actually pulling away from the 3080 at 4K these days. Pulling a 20% lead easily in newer benchmarks I was shown the other day. I'll try to find them.
     
  11. hertzian56

    hertzian56 Notebook Deity

    Reputations:
    438
    Messages:
    1,003
    Likes Received:
    788
    Trophy Points:
    131
    I take notebookcheck as accurate with their extensive testing, and they say it's about a 9% overall difference effectively. The 1660 Ti mobile has the same CUDA cores and slightly lower clock speeds; it's 80W, the desktop is 120W, and if Nvidia and laptop makers wanted, they could put the 120W GPU in a laptop. It's just a choice to be dishonest with branding; again, if they put an "m" at the end, no problem, but they don't do that with the latest mobile cards. Sure, the 1660 Ti mobile is about the same as a 1660 desktop, but it's much closer to a 1660 Ti desktop than the RTX mobile cards are to their desktop equivalents. They need to bring back thicker, heavier workstations with full-fat GPUs and CPUs, but there is more profit in thin and light, etc.
     
    Last edited: Dec 24, 2020
  12. Raidriar

    Raidriar ლ(ಠ益ಠლ)

    Reputations:
    1,708
    Messages:
    5,820
    Likes Received:
    4,312
    Trophy Points:
    431
    Or nvidia could just truncate 3080 Max-Q to 3080M and stop confusing/misleading people. But we all know they won’t be doing the right thing. Pascal was as close as we got for desktop-mobile parity and it’s been going back the other direction with Turing, and for sure will take another two steps back with Ampere.
     
  13. hertzian56

    hertzian56 Notebook Deity

    Reputations:
    438
    Messages:
    1,003
    Likes Received:
    788
    Trophy Points:
    131
    On top of Max-Q they did the "refresh" versions of the RTX 2060/2070 mobile cards, a whole new level of confusion. It sucks because, as I said, they could easily put a 120W card into a laptop; I think they were able to put 200W cards in larger, heavier laptops (I think I read that here). So it follows that with the efficiency gains they could get mobile Ampere pretty close to the desktop parts, at least the lower-level ones like the 60/70, is my guess. Those things seem like power hogs.
     
  14. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    Good example of a gap between specs and real-world performance. The reasons could be many, and it's hard to talk about specifics in the aggregate, but I would suspect a few:
    1. CPU bottlenecking
    2. Games unable to effectively utilise additional cores possibly hitting Amdahl's law limits, so the only improvement could come from higher clock speeds
    3. Least likely, but perhaps some throttling occurs preventing the 3090 from realising full performance

    You see a similar thing with the 2080 Ti vs. the RTX Titan. In general, both the 3090 and the Titan are geared more towards applications requiring larger amounts of VRAM, where the models with less memory cannot be used effectively or at all.
     
    Last edited: Dec 25, 2020
  15. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,747
    Messages:
    29,856
    Likes Received:
    59,723
    Trophy Points:
    931
    Why should they? Higher-performing cards will only delay the move over to the next model. People often skip a generation. By offering lesser performance they can keep trying to lure you onto the next models ahead of time. This sick mentality will only grow in the coming years. From worse to even worse. Yeah, I'm not sure we have seen the worst yet.

    Nvidia was smart to provide the 3080 desktop cards with only 10GB of vRAM. This way they know many gamers will jump on next-gen cards a lot sooner than they otherwise would have to. Plus they could cut down on costs. It's not only the laptop gamers being screwed by Nvidia.

    ASUS Confirms GeForce RTX 3080 Ti 20 GB & GeForce RTX 3060 12 GB ROG STRIX Custom Graphics Cards

    Isn't it amusing? GeForce RTX 3060 with 12GB memory:)
     
    Last edited: Dec 25, 2020
    etern4l and Spartan@HIDevolution like this.
  16. hertzian56

    hertzian56 Notebook Deity

    Reputations:
    438
    Messages:
    1,003
    Likes Received:
    788
    Trophy Points:
    131
    Well I'm not arguing about the greedy shenanigans of what's pretty close to a monopoly. Ideally if the corporate "person" was looking to offer the highest performing products that last the longest they would do that.
     
    Papusan and JRE84 like this.
  17. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    What I find also interesting is the supply shortage which seems to be a problem even in 2021:

    NVIDIA could have delayed the Hopper architecture
     
    etern4l and Papusan like this.
  18. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,747
    Messages:
    29,856
    Likes Received:
    59,723
    Trophy Points:
    931
    Nvidia mobile RTX 3080 Max-P versus Max-Q: specs and estimated performance notebookcheck.net | today

    The RTX 3080 Max-P would only be a few percent faster than the RTX 3060 Ti desktop GPU, since it appears to be limited to 115 W TGP by default. On the other hand, the 80 W RTX 3080 Max-Q would be as fast as the GTX 1080 Ti desktop model, and 74% slower than the RTX 3080 desktop GPU.

    TechPowerUP also provides expected performance charts for each version, and it looks like the RTX 3080 Max-P would be 3% faster than the RTX 3060 Ti desktop model, but 40% slower than the RTX 3080 desktop GPU. On the other hand, the Max-Q is even slower, as it nearly matches the GTX 1080 Ti desktop, yet it is 74% slower than the RTX 3080 desktop GPU.

     
    Tyranus07 likes this.
  19. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    At 74% slower it shouldn't be legal to call the max-q variant a 3080 video card, lol
     
  20. seanwee

    seanwee Father of laptop shunt modding

    Reputations:
    671
    Messages:
    1,920
    Likes Received:
    1,111
    Trophy Points:
    181
    I've posted about this in several places and you know what reaction I got? People defending nvidia.

    That's right, people are saying that oh, Nvidia is just making sure laptops don't melt, or that Nvidia had no choice since the 3080 is so power hungry. That, and some people are even outraged at me for saying that the performance offered by the 3080 mobile is not enough, citing that it's more than enough for laptop gaming.

    This is why nvidia gets away with **** like this and I'm absolutely disgusted to say the least.
     
  21. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,747
    Messages:
    29,856
    Likes Received:
    59,723
    Trophy Points:
    931
    Nvidia needs to follow what the notebook manufacturers ask for. Apple-inspired gamingbook designs. People want thin and slimy. People are their own enemy. Blinded by thin, lightweight, aluminum and flashy light shows. How it should perform will always come after design. We still have not seen how bad it will become!
     
  22. Clamibot

    Clamibot Notebook Deity

    Reputations:
    645
    Messages:
    1,132
    Likes Received:
    1,567
    Trophy Points:
    181
    If people want thin and crappy laptops so badly, they can have them. What pisses me off is that the stupid trends driving the so-called "progress" of thin and light craptops are negatively affecting the products I want to buy. They're taking my choices away.

    Vendors should at least throw enthusiasts a bone. The whole point of different types of computers existing in various form factors (tablets, phones, laptops, desktops, etc.) is to cater to different types of people. That's why there are markets for different types of consumers. Sure we enthusiasts are in the minority, but at least give us a few options. We may not be as profitable as other groups, but a market for products geared towards us still exists.

    No matter how many crappy laptops are out there, if there is at least one decent option out there for us, we'll be satisfied.
     
    Papusan likes this.
  23. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    !!!
    Why is the rtx 3080 only limited to 115w? What about 150/180/200?
    That's bad, it's basically on par with a desktop 2080 super at that wattage.
    I was expecting it to edge out the 2080ti at the very least.
     
  24. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    It's a fully unlocked GA104 die, so that's rtx 3070 desktop silicon rebranded as 3080 mobile. Why can't they do 200w on that? It'll be on par with a 2080ti.
     
  25. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The article says higher wattage versions will exist if manufacturers so choose.

    Alienware, MSI, Clevo, and the like are going to put the 200W versions in the DTRs as always.
     
  26. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    I wonder how this works. Is Nvidia the one that decides and just says, "Hey OEMs, I'm going to create GPU chips for laptops that have X TDP, X memory type, etc."? Or do they talk to the big laptop manufacturers and come to an agreement about what to produce? I wonder this because I remember at some point Nvidia said "Hey OEMs, I'm going to produce 200W GPUs and you have to deal with it," so laptop manufacturers had to design a proper cooling solution for that kind of TDP. But that didn't work as expected, because laptop manufacturers pushed for GPUs that consume less power, so Nvidia came up with the Max-Q crap. Maybe laptop manufacturers weren't buying as many 200W GPUs as Nvidia expected... after all, this is business.
     
  27. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    I couldn't find that part, but it really makes no sense to go with only 115W in DTR notebooks. They have to do at least 180. An undervolted desktop RTX 3070 does 180.
    Another thing entirely is whether the card will actually use the full power, because according to this video the 150W Clevo edges out the 200W MSI (fully powered) and Alienware (thermally throttled) in games, despite having a lower-wattage GPU.
     
    Last edited: Jan 7, 2021
    seanwee likes this.
  28. seanwee

    seanwee Father of laptop shunt modding

    Reputations:
    671
    Messages:
    1,920
    Likes Received:
    1,111
    Trophy Points:
    181
    Most likely the manufacturers give a TDP budget for a laptop and Nvidia decides how to neuter the GPU to fit it.

    The RTX 3080 can very well fit into a thin and light. There are RTX 3080 undervolts to 0.7V that consume only 144W, and that's including all the fans, RGB etc. on the card, and before any laptop power optimisations are done. A 120W RTX 3080 Max-Q with RTX 3060 Ti/3070 performance is very possible; Nvidia just cheaped out.
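    For a rough sense of why a 0.7 V undervolt can land near 144 W: to first order, dynamic power scales with voltage squared times frequency. The stock voltage, clocks and board power in this sketch are illustrative assumptions, not measured values:

    ```python
    # First-order dynamic power model: P ~ C * V^2 * f.
    # Stock voltage, clocks and the 320 W board power below are assumptions for illustration.
    def scaled_power(p_stock_w: float, v_stock: float, v_new: float,
                     f_stock_mhz: float, f_new_mhz: float) -> float:
        """Estimate GPU power after an undervolt/underclock, assuming P scales with V^2 * f."""
        return p_stock_w * (v_new / v_stock) ** 2 * (f_new_mhz / f_stock_mhz)

    # Assumed desktop RTX 3080-like starting point: ~320 W at ~1.0 V and ~1900 MHz.
    print(f"{scaled_power(320, 1.0, 0.7, 1900, 1700):.0f} W")  # ~140 W, in line with the ~144 W figure above
    ```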
     
  29. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    We shouldn't compare apples to oranges. The important question for a light-laptop user will be: how much faster is it than the 2080S MQ?
    Is it though? Has Nvidia released the mobile 3080 specs yet?
     
  30. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,747
    Messages:
    29,856
    Likes Received:
    59,723
    Trophy Points:
    931
    The question will be whether we will see more than 10 different 3080 Mobile SKUs. Yeah, this disgusting mess started with Max-Q and thin and slimy.

    It seems that RTX 3080 Mobile will be offered with either 16GB or 8GB memory capacity, which will segment this model even further. In reality, we might see as many as 10 models called RTX 3080 Mobile/Laptop, but each carrying a different clock speed, power limit, or memory capacity.

    https://videocardz.com/newz/nvidia-...ns-emerge-6144-cuda-cores-clocked-at-1245-mhz
     
    seanwee likes this.
  31. seanwee

    seanwee Father of laptop shunt modding

    Reputations:
    671
    Messages:
    1,920
    Likes Received:
    1,111
    Trophy Points:
    181
    Well here's the thing. It shouldn't have been apples to oranges in the first place.
     
    Kunal Shrivastava likes this.
  32. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    That's not realistic. The market requires smaller/portable laptops and you can't stick desktop-grade hardware into that. Desktops will always have some performance edge just due to the sheer size and power advantage.
     
  33. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    Well, that's sort of the point, isn't it? Technically the Pascal notebook and desktop comparison was "apples to apples", so it made sense to call a GTX 1080 that even in a notebook.
    While RTX 2000 notebooks were clocked lower, they were still the same die, as in the RTX 2080 notebook and desktop were both TU104 with the same specs. In that sense Nvidia can be given a pass for still calling it "apples to apples".
    Looking at Ampere though, there's no way it's apples to apples.
    A GA104 GPU with gimped voltage should be called either RTX 3070 or 3080M. Calling it RTX 3080 this time around is just false marketing when in fact they seem to be going back to the Fermi/Kepler days. Technically the 3080 'mobile' should cost around as much as a desktop 3070, ~$499-549.
    But yeah, they want us to believe it's still apples to apples, don't they? That way they can charge a premium for it.
     
    Papusan and seanwee like this.
  34. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    Good. Having much more powerful desktop hardware makes sense. It was never truly apples to apples anyway. We have never had Titan or Tesla-level HW for laptops.
     
  35. Kunal Shrivastava

    Kunal Shrivastava Notebook Consultant

    Reputations:
    82
    Messages:
    110
    Likes Received:
    91
    Trophy Points:
    41
    Exactly, which is why Nvidia should stop misleading people. If anyone is to blame for this it's AMD! Lisa Su has done this & I don't care what Huang pulls out of his oven next.
    Nvidia is getting desperate :vbbiggrin:
     
    Vasudev and seanwee like this.
  36. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    How is it "false marketing", when they are clearly laying the specs out for all to see?

    Nvidia isn't leading you to believe anything, they are literally telling the truth.
     
    etern4l and JRE84 like this.
  37. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Because the detailed GPU info is not listed in laptop specs (not that the average Joe would even understand it), all they see is the 3080 and they get misled. Having 3080 in any part of the name, in a product that performs less than half as fast as a real 3080, is a goddamn disgrace.
     
    Vasudev, seanwee and Papusan like this.
  38. JRE84

    JRE84 Notebook Virtuoso

    Reputations:
    856
    Messages:
    2,505
    Likes Received:
    1,513
    Trophy Points:
    181
    We are not the elite, it's simple. Nvidia tells us the specs and we accept them. If you don't like it, buy AMD. We have seen Titan-level performance, at least with Pascal, in the form of the 2080 Super mobile, and it's not stagnant. We will see 3080 performance eventually; laptop sizes dictate this. We are talking a fraction of the size, and it's amazing we can fit that level of performance in such a small package. Desktops have an advantage and so do laptops: a screen, CPU, RAM etc. in something less than an inch thick is a true feat. But complaining about that is pointless. (Weird, NBR is forcing the incorrect spelling of performance in my text.)
     
  39. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    I am not feeling misled by Nvidia. In fact, I am not feeling much, because all we have so far is click-bait stories and guesstimates, rather than hard facts.
    I would be feeling misled by Nvidia though if I had spent money on an "upgrade" from, say, a 2080 MQ to a 2080S MQ :D. Hopefully the 3080 MQ(P) will perform more than a few % better than the 2080S MQ(P) :)
     
  40. Clamibot

    Clamibot Notebook Deity

    Reputations:
    645
    Messages:
    1,132
    Likes Received:
    1,567
    Trophy Points:
    181
    Pascal notebooks would like to have a word with you :D.

    It's perfectly possible to have laptops and desktops perform the same. Part of the problem with Ampere is the power requirements due to a less-than-ideal process node, and the other part is the crappy cooling that now goes into mainstream gaming laptops. I think the lack of innovation is the problem here, not physics. Otherwise, how would it be possible to equip the Asus ProArt Studiobook One W590G6T with a Quadro RTX 6000? The Quadro RTX 6000 mobile in that laptop was given 250 watts to play with! And this laptop is a little less than an inch thick, by the way.

    If Asus can make a monster laptop equipped with a 250 watt GPU less than an inch thick, so can other manufacturers. This tells me other manufacturers are just cheaping out.
     
    seanwee and Kunal Shrivastava like this.
  41. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Clamibot and etern4l like this.
  42. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    Of course it's possible, but only if the desktop card has low enough power and cooling requirements, i.e. it's slower than it could be if it fully utilised the cooling headroom offered by the desktop platform. Essentially, your argument is that Ampere should have been gimped to avoid aggravating passionate laptop users :D

    BTW Apparently the Asus Pro Art RTX6000 runs at 200W and is slower than the desktop card:

    https://pokde.net/review/asus-proart-studiobook-one-review

    200W is just about in the region of what can be achieved in a reasonable laptop; the A51M gets there too, but neither laptop's form factor would suit me (ridiculous pricing aside), hence the market for lower-powered variants.
     
    Clamibot likes this.
  43. Clamibot

    Clamibot Notebook Deity

    Reputations:
    645
    Messages:
    1,132
    Likes Received:
    1,567
    Trophy Points:
    181
    Gotcha, I thought the TDP of the card in that laptop was 250 watts.

    I wasn't really advocating for gimping desktop cards, just for more innovation in laptops so that they could perform as well or at least almost as well as their desktop counterparts while still being thin. That way, the thin and light crowd could be appeased while the trends that drive developments in those products would not negatively impact enthusiast oriented products.

    Performance laptops are very gimped nowadays (with a few exceptions), but that shouldn't be the case since they're supposed to be engineered for performance.
     
    seanwee and etern4l like this.
  44. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,535
    Likes Received:
    3,507
    Trophy Points:
    331
    The problem is that they can't really shrink the laptop and provide more cooling at the same time, and they claim the market wants slimmer more than it wants more powerful.
    Looking at the 7 lbs AW 15 R2 and the 2x faster 4 lbs AW m15, I can see where the market is coming from :) The thing is, it took me weeks of tweaking, repasting and modding before I got the latter machine to the point of flawless performance...

    IMHO the Alienware of yore had a good solution in simply providing a 450W eGPU with dedicated PCIe lanes. The execution was meh though, with the chassis size and cooling lacking, and now the dumbed-down version of the brand we have today has finally EOLed the solution...
     
    Last edited: Jan 7, 2021
    Clamibot likes this.
  45. hertzian56

    hertzian56 Notebook Deity

    Reputations:
    438
    Messages:
    1,003
    Likes Received:
    788
    Trophy Points:
    131
    When all is said and done, you have seen performance increase steadily with laptop GPUs, but it's a subterfuge as to what does what and how that compares to previous generations of GPUs. Average Joes just don't do the diligence we hobbyists do, so we all get put in that category by manufacturers.

    For high-level gamers it would be nice if a white-label chassis maker would come out with a mobile platform that supports up to a 250W GPU and a thermally efficient CPU. Think if you could get a 250W card as an upgrade and the latest low-wattage CPU, if needed/desired, to avoid terrible bottlenecking and thermally driven throttling. That would take old-school interchangeable CPUs/GPUs, not soldered parts. The longer-term profits are not enough for them, or for Wall Street, is my guess. You'd have to have the board makers' cooperation, like bringing back MXM or a new similar standard. They don't listen to what people want, they just give us what meets their business goals. Just thinking out loud here.

    It's to the point where you should not upgrade for 2-3 generations, or just get a used system that's a nice step up within the same generation, like 1060 to 1080 laptop (which the Dell DGFF Alienware systems allow with the RTX cards), and leave it at that for the near future from what we are seeing. I guess the problem with that is that a used 1080 laptop costs about the same as a new RTX 2060 mobile, but that's the lowest RTX card, and if it's the full-fat version I'm not sure the lower performance of a full-fat 2060 (roughly 15% per 3DMark) is enough of a gap to justify skipping a new system with a warranty and new/updated components, etc. The name game is annoying, since many assume any 20-series card is above any 10-series card. And every two generations or so the 60 trumps the 70 and the 70 trumps the 80 by a meaningful amount. Per UL, a mobile 2070 full-fat barely beats a 1080 full-fat, ray tracing/DLSS aside. Unless prices go dramatically up for new gaming-class laptops, I can't see it from a holistic approach. And the wattage needed for the same or more performance keeps coming down. Then new tech like ray tracing and DLSS comes along and makes it even more complicated. A lot of factors to consider.
     
  46. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Have you ever seen a "why is my GTX 980M slower than the GTX 980?" post? No? That's probably because the ignorant type of "Joe" who buys a product sight unseen with zero research also doesn't spend $2,500 on a gaming laptop. Average Joes aren't in enthusiast gaming circles, so let's please drop the "Nvidia is misleading dumb people" angle from the discourse on freaking Notebook Review. It's absurd to the point of concern trolling. There are better ways to express your disdain for Nvidia's decisions.

    People don't drop thousands on luxury gaming laptops without doing a modicum of research. Every manufacturer sub-forum on this site, Reddit, and elsewhere has proven that tens of thousands of times over.
     
    Kunal Shrivastava, hfm and etern4l like this.
  47. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    1. They tacked -M onto mobile GPUs back then
    2. There was no precedent for mobile GPUs performing the same as desktop GPUs with the same number back then
     
  48. Clamibot

    Clamibot Notebook Deity

    Reputations:
    645
    Messages:
    1,132
    Likes Received:
    1,567
    Trophy Points:
    181
    I suppose at this point then that it would be better for gaming laptops as a whole to all become desktop replacements without a dedicated GPU. I find the idea of an eGPU interesting. Come to think of it, it would be better to game on integrated graphics while on battery power anyway. If you're using the dedicated card, you're definitely plugged into a wall outlet and probably sitting at a desk or table, so an eGPU shouldn't be a problem.

    Since MXM cards cost a lot more than their desktop counterparts, going the eGPU-only route would mean spending significantly less on upgrades. Seeing a gaming laptop with a desktop CPU and no dGPU would definitely be an interesting offering. That would help satisfy the battery life and thickness requirements for general consumers. You could even upgrade the GPU with off-the-shelf parts!
     
    etern4l likes this.
  49. Tyranus07

    Tyranus07 Notebook Evangelist

    Reputations:
    218
    Messages:
    570
    Likes Received:
    331
    Trophy Points:
    76
    Well, I hope in the near future we get Thunderbolt 5 using four PCIe 4.0 lanes from the PCH; that way we'd have 8 GB/s for eGPUs, which is more than enough (the same as x8 PCIe 3.0). TB4, we already know, still has a 40 Gbps limit, which is the same as TB3. That way we could even use the laptop's internal monitor and not take a performance hit.
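    A quick sanity check of the link-bandwidth arithmetic (PCIe encoding overhead included, other protocol overhead ignored, so real-world throughput is lower):

    ```python
    # Rough link-bandwidth comparison. PCIe 3.0/4.0 use 128b/130b encoding.
    def pcie_gbs(transfer_rate_gt_s: float, lanes: int) -> float:
        """Approximate PCIe bandwidth in GB/s (128b/130b encoding, 8 bits per byte)."""
        return transfer_rate_gt_s * lanes * (128 / 130) / 8

    print(f"PCIe 3.0 x8 : {pcie_gbs(8, 8):.2f} GB/s")    # ~7.88 GB/s
    print(f"PCIe 4.0 x4 : {pcie_gbs(16, 4):.2f} GB/s")   # ~7.88 GB/s, same as PCIe 3.0 x8
    print(f"TB3/TB4 link: {40 / 8:.2f} GB/s raw (40 Gbps), less after overhead")
    ```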
     
    Papusan, etern4l and joluke like this.
  50. hfm

    hfm Notebook Prophet

    Reputations:
    2,264
    Messages:
    5,299
    Likes Received:
    3,050
    Trophy Points:
    431
    Unless you care about fan noise or weight, eGPUs are still inferior to dGPUs. Even 8 PCIe lanes would introduce latency and a bottleneck, though not quite as much. TB5 is going to be years away (I would also hope something like that isn't going to be Intel proprietary...). At least TB4 removes the 22Gb/s data cap, where 10Gb was set aside for video, and allows data to use the entire 32Gb/s. That might help vs. TB3. But I *THINK* we need TB4 client controllers integrated into eGPUs first, and I have not heard even a rumor of one existing yet. I could be wrong about that, but I don't think I am; there might be some folks on egpu.io who know.

    I still love my Gram 17 + eGPU... but it's definitely not going to please anyone who only cares about performance above all else. There are drawbacks, unless you are like me and see the benefits far outweighing that stuff.
     
    seanwee likes this.