Sorry to nitpick. 460 is not full GF104, only 485M is. But your point is taken.
AMD rules the top (bottom) of that graph. Funny.
-
ah well, all discussions will be fruitless anyways once that gpu is out and about
we'll see what's up then....
Sent from my Nexus 5 using Tapatalk -
Why is n=1 using Fermi? Because it was the last time Nvidia was stupid enough to cut down something past the knee of the perf/W curve. Nvidia learned its lesson. You're so caught up on context, not on logic or critical thinking, that you can only belittle and cry "flawed example" but can't explain why it's a bad example.
It would perform worse than a fully enabled GM204, which plays once again into what n=1 said. Nvidia would be stupid to offer such a crippled GM200 chip as a mobile GPU; it would be a waste of big, expensive GM200 dies.
-
As for the GTX 680 vs 880M argument, you keep forgetting that the 880M is already heavily downclocked on the core and memory in order to bring TDP down from 195W to 120/125W, even using what are presumably the best-binned GK104 dies. In other words, most of the TDP-saving tricks used on the 880M have already been accounted for in your cut-down GM200 to bring it down to 190W. From 190W to 120W you have nothing to go by except "special binning". Sure, I'm not discounting that lower voltages will reduce power consumption, but you're talking about a 70W difference here, or in other words a 37% TDP reduction from binning alone.
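To put a rough number on what undervolting alone can plausibly buy, here's a quick back-of-the-envelope sketch: the two voltages are purely illustrative assumptions, and core dynamic power is approximated with the usual f·V² rule (leakage, memory and board losses ignored).

```python
# Rough sanity check: can binning/undervolting alone account for a ~37% TDP cut?
# Classic CMOS approximation: dynamic power ~ f * V^2 (ignores leakage, memory, VRM losses).
# The two voltage figures below are illustrative assumptions, not measured values.

tdp_desktop = 190.0   # W, cut-down GM200 after core/clock reductions (figure from this post)
tdp_target  = 120.0   # W, mobile power budget being discussed

needed_cut = 1 - tdp_target / tdp_desktop
print(f"Required reduction: {needed_cut:.1%}")                # ~36.8%

v_stock, v_binned = 1.05, 0.95                                # hypothetical voltages (assumption)
undervolt_cut = 1 - (v_binned / v_stock) ** 2
print(f"Undervolt alone (V^2 scaling): {undervolt_cut:.1%}")  # ~18%
```

Even under that fairly generous undervolt assumption, voltage alone covers only about half the required gap.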
The graph above is for illustrative purposes only, but it drives home a key point: if you're already in the optimal zone on the efficiency curve, voltage has very little effect on power consumption. I mean, going from +50mV to about +140mV gave a whopping 20-30W increase in power consumption. That's 20-30W for 90mV of added voltage, on a 290X no less! -
Robbo99999 Notebook Prophet
-
GTX 780 with 2304 cores runs HIGHER clocks than 780 Ti with 2880 cores. I literally just showed you that the GTX 780 runs 50W lower than GTX 780 Ti.
The power efficiency graph was a huge clue toward that as well: it keeps the same power efficiency as the GTX 780 Ti despite having a lower core count. That's because power has gone down by a lot.
A power drop will also be seen from a cut-down GM200. We are talking about a GM200 chip that is clocked lower than the GTX 980 Ti and has 640 fewer cores. Which would mean a bigger drop than 50W.
Which will put it below GTX 980 aka GM204.
Getting it down from 180W to 125W or something like that will be a piece of cake. It's done through lower voltage and binning. You already see the GTX 680 in the chart above with similar power to what the cut-down GM200 will have. And that's the chip we got our GTX 880M from.
I'm done wasting time discussing this. If you still don't believe it based on everything I've shown so far, then you just don't understand it.
Power requirements are no problem for a mobile GPU. -
Where did you get that the 780 runs higher clocks than the 780 Ti? According to AnandTech (780, 780 Ti), of the two games they share in common, in BF3 the 780 ran 12MHz faster, while in Hitman the 780 actually ran 1MHz slower. Also, the memory is clocked at 7GHz on the 780 Ti vs 6GHz on the 780, a pretty significant difference.
-
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/15
http://www.anandtech.com/show/6973/nvidia-geforce-gtx-780-review/19
Frequency drop (core and memory) and voltage drop (core and memory), that's where 780's power savings come from, not just the disabled units.
-
GTX 780: 992MHz
780 Ti: 980MHz
Same story in Crysis.
That resulted in a 50W drop because it has around 500 fewer cores.
Now imagine the same core count reduction as those two again, just with GM200, but with 100MHz lower clock instead of 12MHz higher clock.
The power will go down by a lot.
Going from 180W to 125W is even better. Or, if they manage the same power reduction again, 180W to 115W.
Read what I'm writing, for god's sake. The whole concept of getting power and TDP down on mobile cards is using lower voltage. You should know that by now. It's not magic.
It's logic. -
Robbo99999 Notebook Prophet
Yep, to me GM200 is possible in mobile from the discussions we've had, it's just gonna come down to what makes the most commercial sense for NVidia as to whether it's gonna be GM200 or GM204. I'm still 50:50.
-
GM204 is smaller than GM200, so per die it might be cheaper for Nvidia to produce.
However, they only offer 3072- and 2816-core GM200 chips. Between a full GM204 (2048 cores) and a full GM200 (3072 cores) there is a lot of room. What happens to the GM200 chips originally manufactured for the GTX 980 Ti and GTX Titan X if they have more damaged cores? Throw them away as a loss? Or find a product for them, like a GTX 990M?
Might be financially smarter for them.
We already covered that, cooling-wise and overclocking-wise, a GM200 would be the best for us gamers.
Let's hope GM200 happens. -
Looks like the new Quadro mobile M5000M is going to be based on the GTX 980M and not the 990M
http://cdn.wccftech.com/wp-content/uploads/2015/08/NVIDIA-Quadro-M5000M.jpg
And I had such high hopes for more power. -
I don't care if it's 204 or 200. Either way it's a substantial upgrade. I'm much more concerned about pricing :/
-
As far as I know the GTX 980M doesn't have any FP64 cores disabled, so the fact that drivers can cause such a huge increase in performance is a bit shocking to me. Unless there are FP64 cores disabled in the GTX 980M but not the M5000M. -
Cloudfire is telling everyone to stop responding and asks us to speak logically...
Well, here's some logic:
Cloudfire is living in the clouds, ladies and gentlemen. He has surely lost his head in the GM200 world of irrelevance. Pascal is coming; you'd think he'd be more excited about that. Or is it out of pure stubbornness that he wastes time arguing over what will soon be ancient technology?
Why would anyone even buy a 990M? We know LESS about the 990M than we do about Pascal. We expect it to run HOTTER than Pascal. We also expect it to run SLOWER than Pascal. And odds are, it will be MORE EXPENSIVE than Pascal. Logic dictates you wait for Pascal.
Stop wasting your energy fighting over this mysterious 990M. -
Robbo99999 Notebook Prophet
-
-
That Pascal thread was made waaay too early
-
-
there's no written law that states all mobile gpus have to have a desktop counterpart with an identical model number
(or vice versa)
Sent from my Nexus 5 using Tapatalk -
Never said there was. That's just how it has been for years, so it's safe to assume the trend will continue.*
*To you sticklers out there: Yes, there are a few exceptions. I am aware. -
moviemarketing Milk Drinker
-
Imagine if it is soldered. The implications for the future of mobile GPUs will be huge.
-
Titan X to 980 Ti saved like what, 10W?
Your cut-down GM200 would already need downclocking on the core and memory just to get to 180W, yet you seem to keep missing this fact. Unless you propose to go from 180W to 120W solely through undervolting, you'll have to downclock EVEN MORE to get to 120W. And that was my point: if you have to cut core speed in half just to get to 120W, your cut-down GM200 will perform worse than a full GM204. -
-
Why are we even speculating GM200 again? Why would we even want that instead of a GM204 with more SMMs unlocked? That's like going into the past, and still choosing the GTX 480M instead of the GTX 485M, with everything we know now.
And you guys are close to being too 'all in' with your passionate predictions. Always save yourself some leeway to be wrong, lest you lose future credibility. -
Because the argument is you can create a GM200 with more SMMs than a full GM204 and still keep it within 120W, thus bringing better performance.
-
GTX 980 Ti runs on average 60MHz higher clocks than the GTX Titan X. The clocks made up for the core reduction. Please memorize that now. I'm getting tired of repeating myself.
GTX 880M goes all the way up to 993MHz.
GTX 680 to 1058MHz.
That's not "heavily downclocked" lol
Cut-down GM200: 2432 cores at 1050MHz. Easily down 70W, since the similarly clocked GTX 780 with a similar core drop (~500 fewer cores) saw a 50W reduction compared to the GTX 780 Ti. That means around 170-175W in peak power consumption.
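Spelled out, that extrapolation is just the following sketch; the ~245W peak figure for the full 980 Ti is an assumption for illustration, not a number from this thread's charts.

```python
# Reproducing the extrapolation above (a sketch, not a measurement).
# Assumption: the full GTX 980 Ti peaks around ~245 W in games; treat the result as ballpark only.

gtx_980ti_peak = 245.0   # W, assumed peak gaming power for the full GM200 part
claimed_drop   = 70.0    # W, this post's figure, extrapolated from the 50 W GTX 780 vs 780 Ti gap
mobile_budget  = 125.0   # W, the GTX 880M-class budget discussed in the thread

estimate = gtx_980ti_peak - claimed_drop
print(f"Estimated cut-down GM200 peak: ~{estimate:.0f} W")                      # ~175 W
print(f"Still to find via binning/undervolting: ~{estimate - mobile_budget:.0f} W")  # ~50 W
```

Whether that last stretch down to a mobile power budget really follows from binning alone is exactly what the next few posts dispute.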
That just happens to be near identical to the GTX 680, which the GTX 880M was based on.
GTX 990M: 2432 cores at 1000MHz. The same clock reduction compared to the identical desktop chip it's based on, just like GTX 880M vs GTX 680.
Sorry buddy, perfectly plausible with GM200 on mobile.
Nobody is saying GM200 is 100% happening either. But the possibility is certainly there. -
680 max boost 1111 MHz
880M memory 1250 MHz
680 memory 1500 MHz -
Possible: "Able to be done; within the power or capacity of someone or something." You can use possible to talk about anything that might happen.
Plausible: "Seeming reasonable or probable."
Let's not confuse the two. Pretty much anything is possible. Not everything is plausible.
This is for the record, so that any articles referring to this thread as a source may have the correct meaning. -
OK, so taking Crysis 3's example of a 64MHz speed advantage for the 980 Ti, look at the power consumption:
A measly 4W savings, big whoop. So what does this mean? A 64MHz bump in clockspeed is nearly enough to make up for the loss of 256 cores. So if we extrapolate, this means your 2432-core GM200 is equivalent to downclocking a 2816-core GM200 by 100MHz. Do you honestly believe a 980 Ti that's downclocked by 100MHz will suddenly become a 180W card?
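Written out as a quick sketch, using only the numbers quoted above and assuming linear scaling (the same simplification the argument itself relies on):

```python
# The extrapolation above, written out. Linear scaling of power with cores/clock is a
# simplification, but it's the same assumption the argument itself makes.

cores_titan_x, cores_980ti, cores_hypo = 3072, 2816, 2432
clock_offset_980ti = 64   # MHz higher boost than Titan X in the cited Crysis 3 numbers

# 256 fewer cores plus 64 MHz more clock ended up nearly power-neutral (the "measly 4W"),
# so trade cores for clock at roughly 64 MHz per 256 cores:
mhz_per_core = clock_offset_980ti / (cores_titan_x - cores_980ti)
equivalent_downclock = (cores_980ti - cores_hypo) * mhz_per_core
print(f"2432-core GM200 ~= 2816-core GM200 downclocked by ~{equivalent_downclock:.0f} MHz")  # ~96 MHz
```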
Yes, this is all before binning and voltage scaling, before you point out the obvious. But the point is: when looking at GM200, whatever TDP you lose from the loss of cores is easily gained back by a minor bump in clockspeed. Or in other words, losing cores doesn't seem to have as great an impact on TDP as it did with Kepler.
Also, your 780 vs 780 Ti comparison is flawed because the 780 Ti uses faster clocked memory. Comparing the 780 vs the original Titan is much better, as the nominal clocks are much closer. In fact, AnandTech found that in BF3 they run at the exact same core clock! Now let's look at the power consumption.
So the loss of 2 SMXes (384 cores) in the 780 led to a 24W reduction in TDP. Coincidentally, your hypothetical 2432-core GM200 would also be 384 cores less than the 980 Ti. Obviously you can't directly compare the numbers, but I think this sufficiently shows that cutting out cores doesn't lead to as dramatic a TDP decrease as you'd think.
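The same arithmetic as a sketch, with the naive cross-architecture scaling included purely to show how little it buys (Kepler and Maxwell cores are not directly comparable, as noted above):

```python
# The 780 vs original Titan data point above, plus the naive per-core scaling that,
# as the post itself says, can't be taken literally across architectures.

titan_cores, gtx_780_cores = 2688, 2304   # 384 Kepler cores disabled on the 780
power_delta = 24.0                        # W difference at the same BF3 core clock (figure above)

per_core = power_delta / (titan_cores - gtx_780_cores)
print(f"~{per_core:.3f} W per Kepler core")                     # ~0.063 W

# If (big if) GM200 cores behaved similarly, dropping 2816 -> 2432 cores would only save:
print(f"~{(2816 - 2432) * per_core:.0f} W off a ~250 W-class card")   # ~24 W
```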
So that's about a 200MHz downclock on the core, plus another 1GHz (effective) downclock on the memory. Pretty significant, I'd argue.
As for the rest of your post, I'm just going to be repeating myself again, so I won't bother. -
Also keep in mind 880M uses 1.018V @ 993 MHz while 680 uses 1.175V @ 1111 MHz, also a significant cutback. At its base clock of 1006 MHz, 680 uses 1.062V which is still ~4 voltage bins higher than 880M at 993 MHz even though it is only one frequency bin (13 MHz) higher.
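Plugging those figures into the usual dynamic-power approximation (P ∝ f·V², core only, ignoring leakage and the memory subsystem) gives a rough idea of how much of the 880M's saving comes from clocks and voltage together:

```python
# Ballpark only: dynamic power ~ f * V^2 for the GPU core, ignoring leakage,
# memory, and board-level losses.

f_680,  v_680  = 1111, 1.175   # MHz, V (GTX 680 max boost figures quoted above)
f_880m, v_880m = 993,  1.018   # MHz, V (GTX 880M figures quoted above)

ratio = (f_880m / f_680) * (v_880m / v_680) ** 2
print(f"880M core dynamic power ~ {ratio:.0%} of the 680's")   # ~67%
```

So the 680-to-880M cut leaned heavily on frequency and voltage together, which is the point being made against a binning-only 70W saving.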
-
Again, it's the worst-case scenario that is the basis for peak power consumption.
1110MHz down to 993MHz is still a 117MHz drop, which is not the massive downclock you claimed in your previous post.
Even 500 fewer cores and a 115MHz drop will be significant with GM200. It's a massive die with many cores. -
It makes no sense to jump from a 1536-core GM204 card to a GM200 when you have so many GM204 cores left to exploit. Nvidia has literally never done this.
I love tech discussion, but it's implausible enough to be a worthless discussion. -
Can you please stop using the 780 vs 780 Ti comparison? You have to account for power draw from the faster GDDR5 chips so it's not an apples to apples comparison. I just showed you right above in my post that 780 vs Titan at the exact same clockspeed differs by about 24W TDP, so that's how much you save by cutting out 384 Kepler cores.
As for your 980 vs Titan X argument, again please calculate the average boost clock, it's 1119 MHz for Titan X vs 1194 MHz for 980, not 1215 vs 1050 as you claim. And again let's look at the graph here:
Titan X already runs 64 MHz slower than 980 in Crysis 3 according to the link you posted, yet simply due to having 1024 extra cores it uses 91W more power. Assuming linear scaling, this means each core contributes 0.0888W. Cutting out 640 cores to make a 2432 core GM200 that's already downclocked by 64 MHz relative to 980 will still use 34W more, or basically a 200W card give or take.
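For the record, here is that arithmetic written out; the 980's 165W TDP is the only figure not quoted above, and mixing a measured power delta with a rated TDP keeps this strictly give-or-take.

```python
# The per-core arithmetic from the post above, written out. It mixes a measured power
# delta with the GTX 980's rated 165 W TDP, so it's a give-or-take estimate, as stated.

titan_x_cores, gtx_980_cores, hypo_cores = 3072, 2048, 2432
measured_delta = 91.0                    # W, Titan X vs 980 in the cited Crysis 3 numbers

per_core = measured_delta / (titan_x_cores - gtx_980_cores)   # ~0.089 W per core
saved = (titan_x_cores - hypo_cores) * per_core               # ~57 W from cutting 640 cores

gtx_980_tdp = 165.0
print(f"Hypothetical 2432-core GM200: ~{gtx_980_tdp + measured_delta - saved:.0f} W")  # ~199 W
```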
Does that make it clear now? Your cut-down GM200 that's clocked 64 MHz slower than a 980 will still be a 200W card. How much downclocking and voltage binning do you think you'll need to get to 120W? -
I only hope that prices on the rest of the stuff like the 970 + 980s go down when a 990M releases.
-
After all this, they release a statement saying they've decided to skip the 990 and 990M because Pascal is ahead of schedule.
lol -
Or they say Pascal is delayed, so in the meantime we'll get Titan XYZ on desktop, while mobile will get 985M with 1792 cores.
-
Sent from my Nexus 5 using Tapatalk -
God I swear this forum is slowly but surely going down the same rabbit hole of stupidity as most desktop forums... -
I mean technically all mobile GPUs are variable TDP when they throttle, trollololol -
It makes absolutely no sense to name a product that consumes 100W the exact same as a product that draws almost twice as much. None whatsoever.
I'm calling it. That part of the rumour is false. It's either several different products with distinct names to differentiate their performance class or it's entirely false altogether and there will be only one GPU with a significantly smaller TDP range. -
Robbo99999 Notebook Prophet
-
-
Nvidia invests in die space for its GPUs, as does Intel for its GPUs, but Intel does not for its CPUs. Intel dies get smaller with each shrink, and furthermore the relative die space allocated to the iGPU gets bigger each generation. It's a double whammy for the CPU portion of the chip.
In contrast, Nvidia die sizes don't get smaller when the process shrinks, if anything they get bigger. That's a metric butt ton more transistors and functional units. As a result GPUs grow ever more powerful and complex.
And because graphics workloads are infinitely parallelizable (we can always increase resolution/AA, do stereo rendering for 3D and VR, add fancier shaders, etc.), dropping a node to increase transistor budget on the same or bigger die size always translates into hefty performance increases.
CPUs aren't so lucky. Because many workloads are still serialized (like gaming for example, thanks to DirectX), and because single-threaded performance increase has stalled due to hitting the frequency wall, just building bigger CPUs with more cores and cache doesn't necessarily translate to significant real-world benefits to justify the cost. -
That's pretty much "the big deal." Hope this helps you understand or see why. -
-
besides, i for one am not hell-bent on jumping down other members' throats over defending an opinion on unreleased/unspecced hardware
Sent from my Nexus 5 using TapatalkMr Najsman and Robbo99999 like this. -
-
in the end, we shouldn't forget that it's all just fun and games to speculate
we're all on the same (mobile high-performance) side here, after all!
Sent from my Nexus 5 using Tapatalk