Bruh. Mobile Barts/GF114 to Pitcairn/GK104 was +80%. GF110 to GK104 was +50%. GF110 to GK110 was over +100%.
-
King of Interns Simply a laptop enthusiast
What would be nice is a card that performs between desktop 980 and 980 Ti levels and sits in the same position as the 970M on MXM...
....with sub-100W power consumption and a sub-$500 price
A nice final upgrade for my old but trusty Alienware!
Considering the 980 Ti performs roughly 60% faster than the 980M, not an impossible feat.
Last edited: Nov 20, 2015 -
Sent from my SPH-L720 using Tapatalk -
GTX 580M GF114:
330mm2 100W
>>>>>>
GTX 680M GK104
290mm2 100W
GTX 880M GK104
290mm2 105W
>>>>>>
GTX 980M GM204
398mm2 100W
Now however:
GTX 980 (mobile) GM204
398mm2 165W
>>>>>>
Pascal
???mm2 ???W
As you can see:
Previous flagships had around 300mm² of die area. If Nvidia goes back down to ~300mm² for the upcoming Pascal chip, they give up a lot of performance potential compared to a 398mm² Pascal facing the 398mm² Maxwell: that's 70-100mm² of die area where they could have fitted a lot of transistors.
Then there's the power problem. Previously the cards always had around a 100W TGP/TDP limit. The Maxwell chip today is 165W. If Nvidia goes down to 100W for the mobile Pascal chip, which is probably desirable since more OEMs could build notebooks around it, it will lose performance potential again. That extra 65W for the Maxwell card eats into the Pascal advantage.
It all boils down to what Nvidia will do for the mobile Pascal GPU. A 30-50% performance increase over the GTX 980? Certainly. But if they cut down on both power and die size, it will be hard to reach much higher performance. -
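The die-size part of that argument can be put in rough numbers. A minimal back-of-envelope sketch, assuming (my assumption, not the poster's) roughly 2x effective transistor density going from 28nm planar to 16nm FinFET:

```python
# Relative transistor budgets, assuming ~2x density gain from the node shrink.
DENSITY_GAIN = 2.0  # assumed; real 28nm -> 16nm FinFET scaling varies

maxwell_398 = 398 * 1.0           # GM204-class die, relative units
pascal_300 = 300 * DENSITY_GAIN   # hypothetical 300mm² Pascal
pascal_398 = 398 * DENSITY_GAIN   # Pascal kept at Maxwell's die size

print(round(pascal_300 / maxwell_398, 2))  # ~1.51x GM204's budget
print(round(pascal_398 / pascal_300, 2))   # ~1.33x more if the die stays big
```

Under that assumption a 300mm² Pascal gets roughly +50% transistor budget over GM204, while shrinking the die forfeits another ~33% of potential on top.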
Robbo99999 Notebook Prophet
-
But we have regular-sized MXM GTX 980s for laptops with lower TDP too, so if you consider the 165W oversized GTX 980 an experiment and unfair to compare against, it would be natural to compare with those instead. -
Robbo99999 Notebook Prophet
-
I doubt they'd cut down the size. It's unnecessary.
-
400mm2: ~56 chips per wafer
That's how they make money. -
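For reference, that chips-per-wafer figure can be approximated with the standard die-per-wafer formula; the wafer diameter and yield below are my assumptions, not from the post:

```python
import math

# Gross die per wafer: usable wafer area over die area, minus an edge-loss term.
def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gross = gross_dies_per_wafer(300, 400)  # 143 gross candidates on a 300mm wafer
good = int(gross * 0.40)                # 57 with a hypothetical ~40% defect yield
```

With a hypothetical ~40% yield on a new process, that lands near the ~56 good chips quoted above; the exact number depends on scribe lines, edge exclusion, and actual defect density.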
It's not necessary, especially with GDDR5X. Might happen when HBM comes to mobile, to extend the lifespan of HBM in the market. -
-
Robbo99999 Notebook Prophet
Last edited: Nov 22, 2015 -
King of Interns Simply a laptop enthusiast
Sounds interesting to me. The question isn't whether AMD has a new architecture or not; they surely do. It's more about whether they ever return to the mobile GPU scene....
Up until 2012, when they actively competed in the space, Nvidia was never far ahead. Since then it's pointless to compare, because AMD has been absent in mobile -
AMD hedged their bets on the consoles, and while those have largely been a disaster overall in terms of performance... consoles are purchased every day... based on GCN... If Sony goes through with the PS5 next year, which I highly doubt, they would probably partner with AMD even if just for the sake of compatibility. The consoles are AMD's revenue stream right now... and with Halo 5, Xbox sales went through the roof = more money for AMD.
I don't really think that Greenland will be a new pasture for AMD. Why? Because they keep naming GCN after either volcanic or Icelandic areas... Sounds like a GCN rehash to me. I guess we will have to see! -
The simple fact is they are obligated to offer a product that is better than Maxwell, and to keep their word about 10x the performance, etc., in areas regarding memory. If they decrease the size, they'd be undercutting that promise. Not to mention, it's completely unnecessary with GDDR5X. And yes, it would hurt them financially. It doesn't matter that AMD doesn't have something "on par" with the 980M now, because AMD products are always cheaper, offering better price-to-performance. People purchase NVIDIA products because they perform better. If they stop producing products that perform better (as well as they claim), they're lying to consumers and shareholders.
That's how they'll lose market share: misleading consumers and charging top dollar for mediocre products. Hell, I wouldn't be surprised if more lawsuits arose with Pascal. Don't even get me started on driver support; it just adds to the pile of crap.
Last edited: Nov 22, 2015 -
Unless AMD is in the business of making 5W GPUs, you can probably forget about 14nm GPUs.
The process is currently for low power only. It has a target voltage of 0.8V and typical TDPs under 5W, and GlobalFoundries and Samsung have said 14nm LPP and LPE are aimed at connectivity (modems, broadband, 3G/4G), DRAM, and low-power SoCs (Apple, Altera, Qualcomm, etc. make these).
It's one reason why GloFo only uses 2 fabs out of 8 (I think they have 8) for 14nm. The rest manufacture chips on other processes, which they also need to make money from.
Greenland and the upcoming GPUs are 16nm from TSMC for sure.
AMD has ARM chips that will be launched, even a project called Skybridge which combines an x86 processor with ARM; these will probably be 14nm.
Point being, they will never go overkill and make a 2x faster GPU when a +50% 300mm² Pascal is more than enough to get a ton of sales while still being much cheaper to manufacture.
Don't forget that 16nm FinFET also costs more than 28nm to manufacture (silicon cost) -
But decreasing the size and only seeing a 50% improvement would make what they claimed here impossible.
Seems completely unnecessary. Guess only time will tell. -
That claim is about "deep learning applications".
Does that translate 100% into a general performance increase? Or is it perhaps just structured in a way to favor those "deep learning applications"? -
*facepalm*
-
Maxwell does poorly in GPGPU applications that need double precision.
GM200, Maxwell's biggest chip, has FP64 units at only 1/32 the rate of its FP32 cores.
GK110, Kepler's biggest chip, however, had a 1/3 FP64 rate.
They butchered compute performance this time.
Sooo, GP100, a Pascal chip with a higher FP64 ratio than GM200, mixed with HBM2's greater bandwidth, should have no issue fulfilling the 5x performance gain over Maxwell in "deep learning applications". You can see from the link above that even Kepler was 2.5x faster lol
We, the peasants, will unfortunately most likely have to settle for GP104 and GDDR5X anyway. But who cares, it should be a really good upgrade over the GTX 980 (GM204) anyway -
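The FP64 gap described above is easy to see in theoretical throughput. A quick sketch; the core counts and clocks are my assumptions taken from published spec sheets for the two flagships, not from this thread:

```python
# Theoretical FP64 throughput in TFLOPS: 2 FLOPs per FMA, per unit, per clock.
def fp64_tflops(fp32_cores, fp64_ratio, clock_ghz):
    return 2 * fp32_cores * fp64_ratio * clock_ghz / 1000

kepler_gk110 = fp64_tflops(2880, 1 / 3, 0.889)   # GTX Titan Black: ~1.7 TFLOPS
maxwell_gm200 = fp64_tflops(3072, 1 / 32, 1.0)   # GTX Titan X: ~0.19 TFLOPS
```

Roughly a 9x regression in double precision from GK110 to GM200, which is why even a modest FP64 ratio on GP100, plus HBM2 bandwidth, makes the claimed compute gains plausible.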
I still can't believe they'd go with GDDR5X over HBM on flagship gaming GPUs.
-
I'm fairly sure Nvidia said they'd be running HBM2 across all lines. I'll check when I'm home.
Sent from my SGH-M919V using Tapatalk -
GP100: HBM2
Everything else, desktop and mobile, GDDR5X
Hynix began production of HBM1 in Q2 2014, and we didn't see a product with it until AMD launched the Fury cards in July 2015. Production of HBM2 begins Q1 2016 according to Hynix's own roadmap. Source: http://www.kitguru.net/wp-content/uploads/2015/03/sk_hynix_tsv_roadmap_hbm.png
Samsung will also begin HBM2 production sometime during 2016.
It will take a good while for HBM2 to become available in products. Maybe it's not available until summer, then manufacturing of the GPUs takes place; GPUs with HBM2 won't be available until after summer.
So I think Nvidia will perhaps launch GP107, GP106 and GP104 first with GDDR5X, since that will be available to them in the timeframe they plan to launch the GPUs (Q1 2016 hopefully).
Then follow later with GP100 in autumn 2016 with HBM2.
AMD, who knows. They already have HBM1; perhaps they'll use that for all upcoming GPUs?
I think, though, that the price premium on HBM1, the ease of just swapping GDDR5 for GDDR5X on mobile GPUs without redesigning MXM (which may not support HBM in its current revision), the option of 8GB of GDDR5X versus being stuck at 4GB on HBM1, and the similar bandwidth all make GDDR5X appealing for both Nvidia and AMD until HBM2 is ready and yields are acceptable. -
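The "similar bandwidth" point can be sanity-checked with the usual peak-bandwidth formula; the bus widths and data rates below are representative figures I'm assuming, not specs from the thread:

```python
# Peak memory bandwidth in GB/s: bus width in bits / 8 * effective Gbps per pin.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

gddr5 = bandwidth_gbs(256, 7)     # 224 GB/s (GTX 980-class card)
gddr5x = bandwidth_gbs(256, 10)   # 320 GB/s at an assumed 10 Gbps launch speed
hbm1 = bandwidth_gbs(4096, 1)     # 512 GB/s (four 1024-bit stacks, Fury-style)
```

So GDDR5X on the same 256-bit bus closes much of the gap to HBM1, and a wider bus or faster GDDR5X bins would close it further, while sidestepping the interposer cost and the 4GB capacity limit.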
Found this ages ago, haven't seen it here.
-
It was stated in various articles that Arctic Islands from AMD will be using HBM2 on a 16nm process.
So as far as new GPUs from AMD's end are concerned, most of them should be HBM2 (though some have speculated that only the high-end cards will get HBM2, while low and mid end might be confined to HBM1 or possibly GDDR5X; I dunno about GDDR5X, since AMD was relatively clear about HBM being the future).
If Nvidia is considering GDDR5X, it is possible they are doing so because they couldn't secure enough HBM2 production capacity (seeing how AMD was stated to have secured priority access from Hynix). -
Maybe AMD will surprise us. NVIDIA needs competition.
-
AMD needs a new architecture, not a tweak, and the 20nm snafu gave Nvidia plenty of time to work on Pascal. Hopefully AMD was smart enough to do the same for AI (Arctic Islands) -
PrimeTimeAction Notebook Evangelist
I think Nvidia realizes that they cannot tie their policy directly to AMD's market share. They could have milked Maxwell for one more generation, but they must think they have enough cards in hand for the future. They want people to upgrade their GPUs, and that is only possible if they offer worthwhile performance improvements regardless of what AMD is doing. I don't think they will milk Pascal any more than Maxwell; if they do, it will only hurt them, as people will lose interest.
The Nvidia/AMD competition is not the same as Intel/AMD competition. -
I'll bet that Nvidia releases one mega-flagship with HBM2 as a refresh after the 1st wave.
-
Probably, lol.
-
I can't believe Pascal is so close now. The gap between Maxwell and Pascal has felt so much shorter than the gap between Kepler and Maxwell. Time, where art thou going? Allow me to catch my breath and save up monies!
-
we're getting old buddy, tempus fugit
Sent from my Nexus 5 using Tapatalk -
-
King of Interns Simply a laptop enthusiast
Nah Kepler was here too long!
-
-
I will be 51 one month from today. w00t!
Sent from my SM-G928V using Tapatalk -
golden midway then at 32
Sent from my Nexus 5 using Tapatalk -
Just turned 43. I'm an old fart.
-
Youngin's, all of ya... now git off my lawn!
Sent from my SM-G928V using Tapatalk -
i_pk_pjers_i Even the ppl who never frown eventually break down
-
-
PrimeTimeAction Notebook Evangelist
35 here but already too old for anything
It hits you hardest when you try an online FPS and a kid on the other side starts blabbering about his sexual encounters with your mother. -
Sent from my SM-G928V using Tapatalk -
I thought this was the Pascal thread. You're all trying to figure out how old you'll be when Pascal actually launches, lol.
Since everyone's giving away their age: I'm 41 years old and enjoying gaming on my laptop more than ever. -
i_pk_pjers_i Even the ppl who never frown eventually break down
I used to think that there weren't any older gamers out there and that all gamers had to be young, but after seeing my 53-year-old teacher play games, I feel a lot less worried about not being able to play games when I am older. I respect older gamers a lot.
I mean hell, there are literally even 70+ year olds who play games these days, which is just AWESOME imo. -
It is natural, it helps to learn new things, it is extremely fun!
Of course, there are many other games besides computer games, but computer games are great
I am going to play games for a loooong-long time, don't give a damn about how old I am (25 now btw)
Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's
Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.