I am curious... where are you getting your data from?
Notebookcheck has 3DMark 2013 results comparing the 780M with the 880M and 980M... the overall difference between the 780M (which is based on the 680M, but has more CUDA cores) and the 980M, based on 3DM13, is 67% (780M = 5744 and 980M = 9598), perhaps just over 70% to account for some potential error, but not over 130%.
http://www.notebookcheck.net/NVIDIA-GeForce-GTX-980M.126692.0.html
-
Yeah, did some quick math and it looks like 580M to 680M was about a 60 percent performance increase, and 880M to 980M about 68 percent. I'm guessing Nvidia, being who they are, will most likely not blow their load all in one release.
-
3DMark Vantage is HIGHLY CPU-bound, so likely a new CPU alongside those GPU upgrades made up for the rest of the difference in Vantage.
-
Robbo99999 Notebook Prophet
The 980M is 2.36 times as fast as the 680M (not the 780M), which is the same as saying 136% faster. It also made sense that HTWingNut compared the first flagship card of each generation, so it's again valid to compare the 680M to the 980M. Historically, as he showed, changes in base architecture bring about massive changes in performance - in my mind it's always best to be buying just as that new wave comes in, to enjoy that increased performance for a longer overall time before it becomes obsolete.
(Haha, apologies for the bolding & large fonts in both your own & my post, I'm drawing attention to the fact that you're not comparing the same thing.)
Last edited: Mar 6, 2016 -
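Since the "67% vs 136%" mix-up keeps resurfacing in this thread, here's a minimal sketch in plain Python showing how a score ratio converts into a "% faster" figure. The 5744/9598 scores are the Notebookcheck 3DMark numbers quoted above, and 2.36x is the 980M-vs-680M ratio claimed in this post; treat them as approximate.

```python
# Convert benchmark score ratios into "% faster" figures.
# Scores taken from the Notebookcheck 3DMark numbers quoted in this thread.

def percent_faster(new_score, old_score):
    """A card scoring 2.36x the old one is '136% faster', not 236%."""
    return (new_score / old_score - 1) * 100

print(round(percent_faster(9598, 5744)))  # 980M vs 780M: about 67
print(round(percent_faster(2.36, 1.0)))   # a 2.36x ratio: 136
```

This is exactly why comparing against the 780M gives ~67% while comparing against the 680M gives ~136%: different baselines, same 980M.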
A lot of guessing in this thread, so let's get this straight once and for all:
GTX 680M over GTX 675M (580M) from Notebookcheck's review.
Only comparing games (minus Fifa 12 since that one was too close)
51%+57%+32%+70%+40%+69%+46%+63%+35%+38%+39%
GTX 680M was on average 49% faster than GTX 675M
GTX 680M over GTX 580M from AnandTech's review.
41%+41%+32%+68%+54%
GTX 680M was on average 47% faster than GTX 580M
Conclusion:
It's safe to say that GTX 680M, which, just like GTX 1080M will be, was a new architecture on a new node, was about 50% faster than GTX 580M.
Question:
Why should GTX 1080M be 100% faster than GTX 980M? Synthetically perhaps, but in gaming? -
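For anyone who wants to verify the averages in the post above, a quick sketch; the per-game percentage gains are copied verbatim from the Notebookcheck and AnandTech lists as given.

```python
# Average per-game gains of the GTX 680M, as listed in the post above.
notebookcheck = [51, 57, 32, 70, 40, 69, 46, 63, 35, 38, 39]  # vs GTX 675M
anandtech = [41, 41, 32, 68, 54]                              # vs GTX 580M

avg_nbc = sum(notebookcheck) / len(notebookcheck)
avg_at = sum(anandtech) / len(anandtech)
print(round(avg_nbc), round(avg_at))  # about 49 and 47
```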
That's a bad comparison to begin with. Each new generation brings other new and more efficient technologies.
You shouldn't use one source and claim 100% accuracy. Compare the 580M to the 680M with data from multiple sources. There are newer technologies involved now, too - for example, DX12, which will improve efficiency and overall performance. The 1080M should gain more because of all of this, collectively.
Performance will vary depending on the game and many other things. However, there are instances where the 680M is 100%+ over the 580M.
Last edited: Mar 6, 2016 -
Tesla is coming... Everything else is on the back burner... Goals have shifted and there is now a possibility of a GP104 before the end of the year, but only for thick high-performance notebooks that can barely cool it... GP106 seems likely for Q3... Again, 10% or so faster than 980M.
GP104 will be a sizable gain... But if it does get released this year it's going to come with hefty cooling and a thermal throttle to match... Mod the vBIOS at your own risk... TSMC made a mess again and that's all I'm going to say about the matter... -
Robbo99999 is correct. He is comparing the first flagship of each architecture. The 980M is the first mobile flagship of the Maxwell architecture.
For the most part 3DMark is fairly indicative of performance in gaming; of course every game varies, along with drivers, etc. But unless you personally run the benchmarks, it's the best we have. I could even go back to my own benchmarks if I wanted, but I just don't have the energy at the moment.
Last edited: Mar 6, 2016 -
Not sure what you have heard about GP104 though, but if it's a thermal mess due to TSMC incompetence, aren't Clevo and MSI at least used to 165W cards by now anyway? Can it get any worse than that?
DX12 will be nice, but I'm not expecting it to be much better than DX11. It will be like Mantle: better CPU utilization, which makes systems with a crappy CPU perform better. Which excludes most gaming notebooks.
SLI builds will get better performance than under DX11 because it offers better scaling. -
-
The 680M, 780M and 880M were all Kepler-based cards, so I find it a bit selective/dubious to pick the 680M out of the bunch, compare it to the current top-end Maxwell (980M), and claim an architectural jump will account for a doubling in performance between immediate releases, when this hasn't happened historically - Nvidia decided to release revisions of existing architectures while tweaking clock speeds, etc.
There's roughly a 56% performance difference between the 880M (Kepler) and 980M (Maxwell).
Nvidia said they expect another 2x performance per watt increase vs Maxwell.
Now, there's also the node shrink to take into account, so from one node to another we might perhaps see an actual doubling of performance at the same power consumption - though, will this actually be the case with top-end mobile GPUs? Nvidia might end up milking Pascal, so they might not release the most powerful versions immediately.
However, there's something else... what's the TDP Nvidia expects to set Pascal at?
880M was at 103W, and Maxwell was at 120W.
I wonder if they will keep existing TDP's or try to go back down to 100W... or might they even try to increase it further? -
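One rough way to frame the TDP question above: if you take Nvidia's "2x performance per watt" claim at face value, the realized speedup also depends on where the TDP lands. A hypothetical back-of-envelope sketch follows; the wattages are illustrative only, not confirmed figures, and it assumes performance scales roughly linearly with power, which is optimistic.

```python
# Projected speedup = (perf-per-watt gain) x (power ratio).
# Wattages are illustrative; linear perf-vs-power scaling is an assumption.

def projected_speedup(perf_per_watt_gain, new_tdp_w, old_tdp_w):
    return perf_per_watt_gain * (new_tdp_w / old_tdp_w)

print(projected_speedup(2.0, 100, 100))  # same 100W TDP: 2.0x
print(projected_speedup(2.0, 80, 100))   # dropped to 80W: 1.6x
```

In other words, even a genuine 2x perf/W gain only yields 2x performance if the power budget stays put; dial the TDP down to 80% and the projection drops to 1.6x.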
Last edited: Mar 6, 2016 -
-
That's stock. The actual draw is about equivalent to a desktop 780.
-
Unlike BGA GPUs, MXM cards are not rated in TDP but rather TGP, which covers not just the core chip but total board draw:
- The MXM GTX980M 8GB had a TGP of 103W, the BGA version a TDP of 85W
- The MXM GTX970M 6GB had a TGP of 80W, the BGA version a TDP of 62W
- The MXM GTX965M 4GB (old GM204 version) a TGP of 70W, the BGA version a TDP of 56W
- The new BGA GTX965M (GM206) has a TDP of only 50W, same as the old GTX960M
That is all the stock rated design power. Here are the older cards in comparison:
Stock TGP: GTX780M 100W
Stock TGP: GTX880M 105W
The only 125W (TGP) MXM card is the GTX980 SLI variant used in the MSI GT80S... which is also considered the "unofficial" MXM design limit.
(Not counting the mobile 980s fueled by additional external power phases on motherboards).
That being said, people have run heavily OCed GTX980Ms in SLI that (alongside OCed CPUs) have even tripped dual 330W AC adapters...
Last edited: Mar 6, 2016 -
Between 880M and 980M, the difference is around 57%.
Even if a revised Maxwell comes out, people do not seem to be expecting more than a 10% increase - and there's a question of how much more Nvidia can push out of Maxwell. I suspect not a lot before they start seeing diminishing returns, since higher clocks put a dent in the 'efficiency' that pretty much goes out the window with such tampering - unless they decide to release Maxwell on 16nm, though I doubt that will happen, as I think they will be saving the process for Pascal altogether (although I guess there's a chance they could do it).
Plus, I took the liberty of checking 580M (Fermi) Futuremark results and comparing them to 680M Futuremark results... there's a 45% performance difference between the cards (which accounts for both the architecture change and the manuf. process shrink).
http://www.futuremark.com/hardware/gpu/NVIDIA+GeForce+GTX+580M/review
http://www.futuremark.com/hardware/gpu/NVIDIA+GeForce+GTX+680M/review
http://www.futuremark.com/hardware/gpu/NVIDIA+GeForce+GTX+880M/review
Of course, this increases to 135% if you compare 580M and 880M, but my point was to illustrate how past progression from one GPU release to the other did NOT yield double performance with new architectural releases immediately - even with changes to architecture and manufacturing process.
Whether Nvidia changes this strategy with release of Pascal remains to be seen - I'll admit there's a possibility of this happening, but how probable is it actually?
It might be that different sites may report different numbers, depending on usage, testing parameters, etc. We'll see.
At any rate, the performance difference between 880M and 980M is not double at same power levels (even though Nvidia said they expected double performance per watt).
I'll admit that we cannot say anything with any certainty when a new GPU release is 'imminent' while also featuring changes in architecture and manuf. process, but if you look at the progression in historical examples... I wouldn't hold my breath for a double perf. increase at top-end Pascal.
It would be nice if it happened, yes... but how likely is Nvidia to deliver just that? -
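The historical per-generation numbers in the post above can also be checked by compounding them: the ~45% step (580M to 680M) and the ~62% step implied for 680M to 880M multiply out to the ~135% cumulative figure quoted. A quick sketch; note the 62% is inferred from the other two numbers, not measured directly.

```python
# Compound successive per-generation gains into a cumulative "% faster".
# 45% is the cited 580M->680M gain; 62% is the 680M->880M step implied
# by the 135% cumulative figure in the post above.

def compound(gains_percent):
    total = 1.0
    for g in gains_percent:
        total *= 1 + g / 100
    return (total - 1) * 100

print(round(compound([45, 62])))  # about 135, matching 580M -> 880M
```

This is the poster's point in miniature: two generations of ~50-60% gains compound to a doubling and more, but no single generation delivered 2x on its own.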
What if Nvidia never offered rebrands? It's the same silicon, just marketing trickery. I'm not interested in Nvidia's evil plan to stretch out an architecture, only in the actual performance numbers of the technology.
Since the 980M does not and likely will not have a rebrand, Pascal will likely be at least twice the performance of the 980M at the same TDP. So what if there were a rebrand of the 980M with slightly faster clocks and maybe some extra tomfoolery to increase its performance by 30%? Then would I immediately be wrong that Pascal would only be a 70% improvement over top-end Maxwell, and not 100%?
Robbo99999 Notebook Prophet
-
GTC can't come soon enough! -
My point centres on successive GPU releases, which we saw resulted in a roughly 50% performance increase at the same/similar TDP, with a new architecture and a new manuf. process between flagships.
Comparing architectural differences and manuf. process shrinkage of the latest flagship is certainly valid from a certain point of view.
But when you take into account how Nvidia has released its GPU flagships, and the amount of performance usually gained over the immediate predecessor (such as 680M vs 580M), it seems to me that in this particular situation you cannot compare the 680M (made on the then-new 28nm process) to a modern 980M that has a more advanced architecture on a mature 28nm process - unless Nvidia changes its release strategy and ends up giving people top-end Pascal with double the performance at the same power consumption, in other words 'going all out'.
Pascal is blisteringly fast... It's cooling it that is the issue. The way I see it now, there's only one machine on the market that could cool it without a liquid cooling marketing gimmick, and it would require a new heatsink. That's GP104. And before anyone starts with the "he said it would work and it's his fault I spent money on a card that doesn't work" nonsense, I'm telling you right now there are no guarantees on prototypes and there are no guarantees on anything at this point. I can tell you TSMC made a blunder that may hurt in the short term but, in the long run, may actually have propelled FinFET tech massively.
moviemarketing likes this. -
Either way, things don't look good for owners of Clevos prior to the 870DM, unless Clevo releases a heatsink redesign for the 750/770.
-
-
-
Haha, I think that's a case where pure text can't properly convey pronunciation and stress on a specific word to the receiving end/audience.
I also think it's clear he's talking about the Phoenix, no secret there. Furthermore, I'm guessing he wanted to stress that GP104 is ALREADY so blisteringly hot that it's gonna be insane to even THINK about anything above that in terms of cooling...
Last edited: Mar 8, 2016 -
Spartan@HIDevolution Company Representative
-
PrimeTimeAction Notebook Evangelist
-
LPP is a different beast and I don't have the answer to your question there. -
This might have more to do with gnusmas' design than the process. Just saying.
-
-
You hide your information behind claims that you're not allowed to speak of it, yet you continue to "hint" at some disaster taking place at TSMC.
FYI: Not suggesting you're wrong but it's becoming an annoyance. If you're not "allowed" then why?! Don't tease us.
"TSMC will enter into volume production in the time-range of Q3 2015. TSMC and Nvidia have also confirmed on more than one occasion that the next generation (Pascal) GPUs will be produced on the 16nm FinFET+ node... With the initial production being more or less fully utilized by Apple, we should see the Pascal GP100 GPU in Q1 2016 by the earliest, with Q2 being the more conservative estimate."
"The upcoming ramp-up of 16nm production capacity will buoy TSMC's sales performance starting March, the report quoted market watchers as indicating. The foundry's 16nm FinFET processes consisting of 16FF (16nm FinFET), 16FF+ (16nm FinFET Plus) and 16FFC (16nm FinFET Compact) will generate more than 20% of its total wafer revenues in 2016."
I can't find a single article suggesting that NVIDIA Pascal is in danger because of TSMC. There are reports of huge delays from more than a year ago from TSMC due to the high demand of the next generation chips, but that has been resolved since. And nothing suggests that Pascal is doomed and massive delays on the horizon are inevitable. If you have the inside scoop on some of these companies, you should be hitting the stock market, not gossiping with people online.
But hey, if you must spread gossip and rumors, go for it. This thread was never meant to be a rumor thread, hence the title, "What do we know?" Again, this is not meant to be offensive or an "attack." And I don't really have the time nor the willingness to debate or argue, so please don't take this out of context and start something... But I do look forward to reading your reply, hopefully with an explanation. I'm very confused about your claims of impending doom.
Last edited: Mar 10, 2016 -
Hopefully he isn't right...
Do you guys think that Clevo and other notebook manufacturers will bring out new models with Pascal (different chassis etc.), or just go with the same chassis and smash Pascal GPUs into them?
And here's what I've read a lot and still don't understand: why should Pascal run hotter than Maxwell? I'm a total noob, but from 5xxM to 6xxM (40nm to 28nm), the GPUs ran like 10°C cooler with the same cooling system, so why would it be different with Pascal? In my opinion: more performance per watt = less heat?
And do you think that Nvidia will ban overclocking entirely with Pascal? I'm not up to date, but back around 2008 (when notebooks were big and throttling was no issue), overclocking your GPU would easily result in a 10% performance boost.
Hopefully 2016 will be the year where AMD comes back into the notebook market, so there will be competition again, just like in the good old days.
Last edited: Mar 10, 2016 -
PrimeTimeAction Notebook Evangelist
I don't think anything Ethrem has said is based on rumors or speculation. He is just sharing his knowledge based on facts he is aware of. And just because he cannot share everything doesn't mean he shouldn't share anything. He has been pretty clear in his statements and I don't see any ambiguity or tease.
In fact, this thread has more information and insight on Pascal than any other forum on the internet.
But again, that's just my opinion. You are more than welcome to disagree. -
-
Since when did it become okay to crucify someone for sharing information in a rumor thread? I know more than I can share... But the Chinese leaks should be coming soon. I'm out of this thread except in a moderator capacity
-
Even if @Ethrem is dead wrong, those info tidbits are very entertaining, and that's a worst-case scenario I can live with.
-
Whoa, again you are all saying I'm attacking him or crucifying him. Ouch, that's not it at all.
See the smiley faces and stuff? You're interpreting it wrongly. I don't think I could be more clear.
-
cj_miranda23 Notebook Evangelist
Will we see newly designed Clevo laptops to accommodate the Pascal and Polaris GPUs? Are there any rumors about it?
-
moviemarketing Milk Drinker
-
So far here I thought that Polaris was in the lead...?
Ah sorry, misunderstood, I thought this was about release dates, not performance.
Last edited: Mar 11, 2016 -
cj_miranda23 Notebook Evangelist
-
-
-
moviemarketing Milk Drinker
-
Some time ago there was a thing called choice; I hope it's not a thing of the past. As much as I love AMD, I would still like to see people have the choice. Even more so if they pick AMD instead of nGreedia.
-
moviemarketing Milk Drinker
-
@J.Dre musta been a couple hours past his meds or something; I don't think he wanted to chase you away.
Your post got me reading about the TSMC delays; what I found is mostly related to Apple A10 production, but it made me realize there are a lot of high-demand releases coming on 16nm, as well as resources dedicated to pushing 10nm.
I'm loving my 980m SLI for now, all games are doing well, so if Pascal needs to cook a little longer, that's ok with me.
So I read that TSMC says the earthquake erased 1% of production, then read that they aren't coming clean about the facility damage actually done during the earthquake, so what are we talking about??
Also read that TSMC wants to double production for A10 for Apple, as they are the sole supplier...
That makes me think everyone else's production will suffer for it, to meet Apple's need for increased supply, which may rise with demand from consumers.
What do you think?Last edited: Mar 10, 2016
Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's
Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.