All I want is a 3080 that I can purchase on eBay or somewhere else and shove into my P775DM3-G, and yes, the upgrade will be worth it.
I doubt I'll see "upgrade kits" available any time soon like there are for the RTX 2080 (NOT the Super).
-
Perhaps the wider bus is needed to fully utilize the faster VRAM, in which case the net performance would be closer to the minimum of the two numbers, in this case 19%, right? -
BANDWIDTH = DATA RATE x BUS WIDTH / 8 (dividing by 8 converts bits to bytes)
In the case of the desktop RTX 3080:
BW = 19 Gbps x 320 bit / 8 = 760 GB/s
For the mobile RTX 3080:
BW = 16 Gbps x 256 bit / 8 = 512 GB/s
So the desktop RTX 3080 has nearly 50% more bandwidth than the mobile RTX 3080. At the end of the day, memory bandwidth is what really matters: GDDR5X at 10 Gbps on a 768-bit bus would have more bandwidth (960 GB/s) than GDDR6X at 19 Gbps on a 320-bit bus (760 GB/s), and would therefore perform better.
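A quick sketch of that arithmetic in plain Python (the helper name is just for illustration; the spec numbers are the ones quoted above):

```python
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    # Per-pin data rate (Gbps) times bus width (bits), divided by 8 to convert bits to bytes.
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(19, 320))  # desktop RTX 3080 (GDDR6X): 760.0 GB/s
print(bandwidth_gb_s(16, 256))  # mobile RTX 3080 (GDDR6):   512.0 GB/s
print(bandwidth_gb_s(10, 768))  # hypothetical 768-bit GDDR5X: 960.0 GB/s
```
-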
yrekabakery Notebook Virtuoso
-
Kunal Shrivastava Notebook Consultant
Slightly off topic here, but just out of curiosity:
1) What do "faster tensor cores" amount to? Suppose a GPU with tensor cores upscales using DLSS: what exactly do we expect from 'better' tensor cores? More FPS? More accurate upscaling from lower resolutions?
I get that DLSS reconstructs a low-res image rather than just scaling it, but isn't the result similar to, say, resolution scaling with TAA plus smart image sharpening (AMD RIS, Nvidia's Freestyle sharpen filter, etc.)? Technically, pixels are being approximated and the missing information is 'guessed' by both! Please correct me if I'm wrong here.
2) The DLSS 1.0 implementation in early games wasn't great, but Nvidia said they weren't using tensor cores for that; everything was done on regular shader cores! They promised DLSS 2.0 and beyond would run on tensor cores and so be superior, and it is, but technically doesn't that go to show that DLSS doesn't need specialized hardware? Or am I missing something here?
3) AMD has an upcoming open-source DirectML-based feature that allegedly doesn't need specialized hardware. So either they worked out a way for machine learning algorithms to run without tensor instructions, meaning no special hardware requirement, or they just proved that regular FP32/INT32 cores can handle tensor workloads?
This is becoming a pattern with Nvidia: first they released proprietary hardware for variable refresh (G-Sync) while AMD's FreeSync simply used the existing display controller, and then Nvidia went on to adopt it.
Then they were all about RTX until CryEngine could do ray tracing without needing RTX cards.
Now, are we seeing the same thing with DLSS?
-
Additionally, it's not clear that real performance scales linearly with data bus width. That depends on read request / data granularity, i.e. you could waste some of the data bandwidth by reading completely unused data, although this effect would most likely be an issue for HBM, with its 1024-bit data bus.
My point is: it would be ideal to look at real-life performance comparisons rather than manufacturer specs, given the limited (in my case) understanding of the underlying hardware complexity.
BTW, apparently GDDR6X runs crazy hot, at over 100°C, which would explain why they refrained from using it in laptops. -
I know this still illustrates a discrete GPU, but we're seeing signs that iGPUs are likely moving in a direction where HBM2 (or HBM3) becomes a full part of the CPU/iGPU package, with about 4 to 8GB to use... but the catch is that the whole chip (CPU and GPU alike) will be able to make use of that bandwidth, which will change the performance metrics overall.
If you take the existing Zen 2 iGPU (which has an enhanced Vega inside that's basically on par with Navi in performance and efficiency), factor in RDNA 2's at least 50% performance-per-watt improvement, add in a few more cores perhaps, on-die HBM (if AMD decides to use it in that iteration), and a 5nm node, and you're looking at about RX 5500 level performance in a 45W TDP chip (maybe close to the RX 5600). -
About the RTX 3090:
*It has a compute power of 10496 CUDA cores x 1695 MHz = 17,790,720 core-MHz (about 20% more compute power than the RTX 3080)
*It has a bandwidth of 936 GB/s (about 23% more bandwidth than the RTX 3080)
*It performs in games between 5-10% better than the RTX 3080
About the RTX 3070:
*It has a compute power of 5888 CUDA cores x 1725 MHz = 10,156,800 core-MHz (the RTX 3080 has about 47% more compute power)
*It has a bandwidth of 448 GB/s (the RTX 3080 has about 70% more bandwidth)
*It performs in games between 30-50% worse than the RTX 3080
The thing is that the RTX 3080's performance gain over the RTX 3070 roughly tracks its extra compute power, but the RTX 3090's gain over the RTX 3080 does not. Where is the bottleneck? I have no idea.
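For anyone who wants to check the arithmetic, here is a rough sketch in Python. The RTX 3080 baseline figures (8704 CUDA cores, 1710 MHz boost, 760 GB/s) are filled in from the public desktop specs rather than from the post above; a ratio of 1.20x means 20% more, and the 3070's 0.68x compute is the flip side of the 3080 having roughly 47% more:

```python
# Crude scaling comparison using the core counts and boost clocks quoted above,
# with the desktop RTX 3080 as the baseline.
specs = {
    "RTX 3070": {"cores": 5888,  "boost_mhz": 1725, "bw_gb_s": 448},
    "RTX 3080": {"cores": 8704,  "boost_mhz": 1710, "bw_gb_s": 760},
    "RTX 3090": {"cores": 10496, "boost_mhz": 1695, "bw_gb_s": 936},
}

base = specs["RTX 3080"]
base_compute = base["cores"] * base["boost_mhz"]  # "core-MHz", the same crude metric as above

for name, s in specs.items():
    compute_ratio = (s["cores"] * s["boost_mhz"]) / base_compute
    bandwidth_ratio = s["bw_gb_s"] / base["bw_gb_s"]
    print(f"{name}: compute {compute_ratio:.2f}x, bandwidth {bandwidth_ratio:.2f}x of the RTX 3080")
```
-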
-
On top of Max-Q they did the "refresh" versions of the RTX 2060/2070 mobile cards, a whole new level of confusion. It sucks because, as I said, they could easily put a 120W card into a laptop; they were even able to put 200W cards in larger, heavier laptops (I think I read that here). So it follows that with the efficiency gains they could get mobile Ampere pretty close to the desktop cards, at least the lower-level ones like the 60/70, is my guess. Those things seem like power hogs.
-
1. CPU bottlenecking
2. Games unable to effectively utilise additional cores possibly hitting Amdahl's law limits, so the only improvement could come from higher clock speeds
3. Least likely, but perhaps some throttling occurs preventing the 3090 from realising full performance
You see a similar thing with the 2080 Ti vs. the RTX Titan. In general, both the 3090 and the Titan are more geared towards applications requiring larger amounts of VRAM, where the models with less memory cannot be used effectively or at all. -
Nvidia was smart to provide the 3080 desktop cards with only 10GB of VRAM. This way they know many gamers will jump on next-gen cards a lot sooner than they have to, plus they could cut down on costs. It's not only laptop gamers being screwed by Nvidia.
ASUS Confirms GeForce RTX 3080 Ti 20 GB & GeForce RTX 3060 12 GB ROG STRIX Custom Graphics Cards
Isn't it amusing? A GeForce RTX 3060 with 12GB of memory. -
-
NVIDIA could have delayed the Hopper architecture -
Nvidia mobile RTX 3080 Max-P versus Max-Q: specs and estimated performance (notebookcheck.net)
The RTX 3080 Max-P would only be a few percent faster than the RTX 3060 Ti desktop GPU, since it appears to be limited to 115 W TGP by default. On the other hand, the 80 W RTX 3080 Max-Q would be as fast as the GTX 1080 Ti desktop model, and 74% slower than the RTX 3080 desktop GPU.
TechPowerUP also provides expected performance charts for each version, and it looks like the RTX 3080 Max-P would be 3% faster than the RTX 3060 Ti desktop model, but 40% slower than the RTX 3080 desktop GPU. On the other hand, the Max-Q is even slower, as it nearly matches the GTX 1080 Ti desktop, yet it is 74% slower than the RTX 3080 desktop GPU.
-
That's right, people are saying that Nvidia is just making sure laptops don't melt, or that Nvidia had no choice since the 3080 is so power hungry. Some people are even outraged at me for saying that the performance offered by the 3080 mobile is not enough, insisting that it's more than enough for laptop gaming.
This is why Nvidia gets away with **** like this, and I'm absolutely disgusted, to say the least. -
-
If people want thin and crappy laptops so badly, they can have them. What pisses me off is that the stupid trends driving the so-called "progress" of thin and light craptops are negatively affecting the products I want to buy. They're taking my choices away.
Vendors should at least throw enthusiasts a bone. The whole point of different types of computers existing in various form factors (tablets, phones, laptops, desktops, etc.) is to cater to different types of people. That's why there are markets for different types of consumers. Sure, we enthusiasts are in the minority, but at least give us a few options. We may not be as profitable as other groups, but a market for products geared towards us still exists.
No matter how many crappy laptops are out there, if there is at least one decent option for us, we'll be satisfied. -
Kunal Shrivastava Notebook Consultant
Why is the RTX 3080 limited to only 115W? What about 150/180/200W?
That's bad; it's basically on par with a desktop 2080 Super at that wattage.
I was expecting it to edge out the 2080 Ti at the very least. -
Kunal Shrivastava Notebook Consultant
It's a fully unlocked GA104 die, so that's RTX 3070 desktop silicon rebranded as the 3080 mobile. Why can't they do 200W on that? It would be on par with a 2080 Ti.
-
Alienware, MSI, Clevo, and the like are going to put the 200W versions in the DTRs as always. -
I wonder how this works. Is Nvidia the one that decides and just says, "Hey OEMs, I'm going to create GPU chips for laptops that have X TDP, X memory type, etc."? Or do they talk to the big laptop manufacturers and come to an agreement about what to produce? I wonder this because I remember at some point Nvidia said "Hey OEMs, I'm going to produce 200W GPUs and you have to deal with it", so laptop manufacturers had to design a proper cooling solution for that kind of TDP. But that didn't work as expected because laptop manufacturers pushed for GPUs that consume less power, so Nvidia came up with the Max-Q crap. Maybe laptop manufacturers weren't buying as many 200W GPUs as Nvidia expected... after all, this is business.
-
Kunal Shrivastava Notebook Consultant
I couldn't find that part, but it really makes no sense to go with only 115W in DTR notebooks. They have to do at least 180W; an undervolted desktop RTX 3070 does 180W.
Another thing entirely is whether the card will use its full power or not, because according to this video the 150W Clevo edges out the 200W MSI (fully powered) and Alienware (thermally throttled) in games, despite the lower wattage on the GPU.
-
The RTX 3080 can very well fit into a thin and light. There are RTX 3080 undervolts to 0.7V that consume only 144W, and that's including all the fans, RGB, etc. on the card, and before laptop power optimisations are done. A 120W RTX 3080 Max-Q with RTX 3060 Ti/3070 performance is very possible; Nvidia just cheaped out. -
-
It seems that RTX 3080 Mobile will be offered with either 16GB or 8GB memory capacity, which will segment this model even further. In reality, we might see as many as 10 models called RTX 3080 Mobile/Laptop, but each carrying a different clock speed, power limit, or memory capacity.
https://videocardz.com/newz/nvidia-...ns-emerge-6144-cuda-cores-clocked-at-1245-mhz -
-
Kunal Shrivastava Notebook Consultant
While RTX 2000-series notebook GPUs were clocked lower, they were still the same die; the RTX 2080 notebook and desktop were both TU104 with the same specs. In that sense Nvidia can be given a pass for still calling it "apples to apples".
Looking at Ampere though, there's no way it's apples to apples.
A GA104 GPU with gimped voltage should be called either RTX 3070 or 3080M. Calling it RTX 3080 this time around is just false marketing, when in fact they seem to be going back to the Fermi/Kepler days. Technically the 3080 'mobile' should cost around as much as a desktop 3070, ~$499-549.
But yeah, they want us to believe it's still apples to apples, don't they? That way they can charge a premium for it. -
-
Kunal Shrivastava Notebook Consultant
Nvidia is getting desperate -
Nvidia isn't leading you to believe anything; they are literally telling the truth. -
yrekabakery Notebook Virtuoso
Because the detailed GPU info is not listed in laptop specs (not that the average Joe would even understand it), all they see is the 3080 and get misled. Having 3080 in any part of the name, on a product that performs less than half as fast as a real 3080, is a goddamn disgrace.
-
We are not the elite, it's simple. Nvidia tells us the specs and we accept them; if you don't like it, buy AMD. We have seen Titan performance, at least with Pascal, in the form of the 2080 Super mobile, and it's not stagnating. We will see 3080 performance eventually; laptop sizes dictate this. We are talking about a fraction of the size, and it's amazing we can fit that level of performance in such a small package. Desktops have an advantage and so do laptops: a screen, CPU, RAM, etc. in something less than an inch thick is a true feat, but the complaining about it is endless. (Weird, NBR is forcing the incorrect spelling of "performance" in my text.)
-
I would feel misled by Nvidia, though, if I had spent money on an "upgrade" from, say, a 2080 MQ to a 2080S MQ. Hopefully the 3080 MQ(P) will perform more than a few percent better than the 2080S MQ(P).
-
It's perfectly possible to have laptops and desktops perform the same. Part of the problem with Ampere is the power requirements due to a less-than-ideal process node, and the other part is the crappy cooling that now goes into mainstream gaming laptops. I think the lack of innovation is the problem here, not physics. Otherwise, how would it be possible to equip the Asus ProArt Studiobook One W590G6T with a Quadro RTX 6000? The Quadro RTX 6000 mobile in that laptop was given 250 watts to play with! This laptop is a little less than an inch thick, by the way.
If Asus can make a monster laptop equipped with a 250 watt GPU less than an inch thick, so can other manufacturers. This tells me other manufacturers are just cheaping out. -
yrekabakery Notebook Virtuoso
http://forum.notebookreview.com/thr...-whopping-margin.833406/page-28#post-11067104 -
BTW, apparently the Asus ProArt RTX 6000 runs at 200W and is slower than the desktop card:
https://pokde.net/review/asus-proart-studiobook-one-review
200W is just about in the region of what can be achieved by a reasonable laptop; the A51M gets there too, but neither laptop's form factor would suit me (ridiculous pricing aside), hence the market for lower-powered variants. -
Gotcha, I thought the TDP of the card in that laptop was 250 watts.
I wasn't really advocating for gimping desktop cards, just for more innovation in laptops so that they could perform as well or at least almost as well as their desktop counterparts while still being thin. That way, the thin and light crowd could be appeased while the trends that drive developments in those products would not negatively impact enthusiast oriented products.
Performance laptops are very gimped nowadays (with a few exceptions), but that shouldn't be the case since they're supposed to be engineered for performance. -
Looking at the 7 lbs AW 15 R2 and the 2x faster 4 lbs AW m15, I can see where the market is coming from. The thing is, it took me weeks of tweaking, repasting and modding before I got the latter machine to the point of flawless performance...
IMHO the Alienware of yore had a good solution in simply providing a 450W eGPU with dedicated PCIe lanes. The execution was meh though, with the chassis size and cooling lacking, and now the dumbed-down version of the brand we have today has EOLed the solution... -
When all is said and done, laptop GPU performance has increased steadily, but it's a subterfuge as to what does what and how it compares to previous generations of GPUs. Average Joes just don't do the diligence we hobbyists do, so we all get put in that category by manufacturers.
For high-level gamers it would be nice if a white-label chassis maker would come out with a mobile platform that supports up to a 250W GPU and a thermally efficient CPU. Imagine if you could get a 250W card as an upgrade and the latest low-wattage CPU, if needed/desired, so you don't get terrible bottlenecking and thermally driven throttling. That would take old-school interchangeable CPUs/GPUs, not soldered parts. The longer-term profits are not enough for them, or Wall Street, is my guess. You'd also need board makers' cooperation, like bringing back MXM or a similar new standard. They don't listen to what people want; they just give us what meets their business goals. Just thinking out loud here.
It's to the point where you should not upgrade for 2-3 generations, or just get a used system that's a nice step up within the same generation, like going from a 1060 to a 1080 laptop (which the Dell DGFF Alienware systems allow with the RTX cards), and leave it at that for the near future, from what we are seeing. I guess the problem with that is that a used 1080 laptop costs the same as a new RTX 2060 mobile, but that's the lowest RTX card, and if the 1080 is the full-fat version I'm not sure the lower performance of a full-fat 2060 (approx 15% per 3DMark) is enough to justify skipping a new system with a warranty and new/updated components, etc. The name game is annoying, since many assume any 20-series card is above any 10-series card. And every two generations or so the 60 trumps the 70 and the 70 trumps the 80 by a meaningful amount. Per UL, a full-fat mobile 2070 barely beats a full-fat 1080, ray tracing/DLSS aside. Unless prices go up dramatically for new gaming-class laptops, I can't see it from a holistic approach. And the wattage needed for the same or more performance keeps coming down. Then new tech like ray tracing and DLSS comes along and makes it even more complicated. A lot of factors to consider. -
People don't drop thousands on luxury gaming laptops without doing a modicum of research. Every manufacturer sub-forum on this site, Reddit, and elsewhere has proven that tens of thousands of times over.
yrekabakery Notebook Virtuoso
- They tacked -M onto mobile GPUs back then
- There was no precedent for mobile GPUs performing the same as desktop GPUs with the same number back then
-
Since MXM cards cost a lot more than their desktop counterparts, going the eGPU-only route would mean spending significantly less on upgrades. A gaming laptop with a desktop CPU and no dGPU would definitely be an interesting offering. That would help satisfy the battery life and thickness requirements of general consumers. You could even upgrade the GPU with off-the-shelf parts! -
-
I still love my Gram 17 + eGPU... but it's definitely not going to please anyone who cares about performance above all else. There are drawbacks, unless you are like me and see the benefits far outweighing that stuff.