What are your thoughts on AMD's upcoming mobile graphics cards? I was a bit surprised to learn from their recent presentation that they haven't given up completely on mobile gaming hardware.
Apparently 'Polaris' is not quite comparable to the traditional family of GPUs we have seen in the past - instead it is an umbrella term that covers disparate hardware, including both GDDR5 and HBM parts:
As for the mobile gaming cards, it seems their goal is 'console caliber performance in a thin and light notebook':
-
moviemarketing Milk Drinker
-
"Console-caliber performance in a thin and light notebook"
OK, I already dislike the implication that "console-caliber" is acceptable. Not that it isn't already achievable with 960M cards.
That being said, none of this means anything if the specs themselves suck, or if it runs hot like all the other GCN cards do.
I sincerely hope great things come from this. -
-
-
moviemarketing Milk Drinker
There have been some rumors, however, that they will update the Xbox with a Polaris GPU. -
The GPU they tested against the GTX 950 is definitely perfect for mobile.
Load voltage is 0.83 V, which is even lower than the idle voltage of today's GPUs. The system consumed about 85 W, so I'm guessing that's a ~30 W GPU performing like a 90 W Maxwell, which is very impressive.
It also seems that low/midrange Polaris will be 14 nm, since that process has a target voltage of 0.8 V. Beefier GPUs need more voltage, so as AnandTech says, AMD will use both 16 nm and 14 nm this time.
Laptops will perhaps only get the 14 nm parts. It's also confirmed that AMD will use both GDDR5 and HBM for their Polaris GPUs. So as I suspected earlier, no HBM for mobile (it's tricky with the current MXM specification).
AMD is ready for notebooks once again, so yay. The one-way Nvidia street of the last several years has been boring. -
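The ~30 W guess above can be reproduced with a quick back-of-envelope calculation. A minimal sketch, assuming the 85 W figure is total system draw and that the CPU plus the rest of the platform account for roughly 55 W (my assumption, not a published number):

```python
# Back-of-envelope GPU power estimate from the Polaris demo figures.
# The 55 W platform baseline is an assumption, not an official number.
system_power_w = 85                 # reported total system draw
platform_power_w = 55               # assumed CPU + rest-of-system draw
gpu_power_w = system_power_w - platform_power_w

maxwell_equiv_w = 90                # rough TDP of the Maxwell-class card it matched
perf_per_watt_gain = maxwell_equiv_w / gpu_power_w

print(f"Estimated GPU draw: {gpu_power_w} W")                        # 30 W
print(f"Implied perf/W gain vs Maxwell: {perf_per_watt_gain:.1f}x")  # 3.0x
```

Under those assumptions, the demo GPU would be drawing roughly a third of the power of the Maxwell part it was matched against, which is where the "very impressive" reading comes from.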
That does nothing; the performance will still equate to that of its current piss-poor cards. -
Will it be MXM-B? That's all I care about.
-
For a mid-range mobile GPU, that looks to be pretty good. Better than their R9 M385X, that's for sure, but I hope AMD doesn't call this test GPU the R9 M480X. -
moviemarketing Milk Drinker
Seems we will see the first Polaris laptops before the end of Q3:
-
GDDR5 might be reserved for lower/mid-range GPUs, whereas HBM2 could be reserved for the higher end.
I don't see how MXM would be incompatible with HBM2 specifically. The main thing likely affected is the cooling design, which would have to accommodate HBM2 GPUs.
And besides, it's possible the MXM interface could undergo changes as well.
Also, AMD was the first to introduce HBM cards in the desktop space... it's possible they might do the same again with HBM2 in mobile (alas, nothing has been stated on this front, and you could well be right that mobile parts might not use HBM at all).
Performance-wise, laptops wouldn't necessarily need HBM, but given the size constraints in laptops, HBM would reduce space requirements a lot, and that could be an incentive for OEMs to modify their cooling. -
-
We may see near-Fury-level performance with mobile chips this year (if not more, considering the architectural improvements in Polaris meant to remove the bottlenecks in Fiji), at which point bandwidth will become an issue. If the rumor that AMD has three different GPUs this year is true, the middle one will most likely be the top mobile offering (exactly what Pitcairn was when GCN launched), which means they don't have to do anything crazy: just put two 4-Hi HBM2 stacks on it, for 512 GB/s of bandwidth and 8 GB of total memory.
Say a ~250 mm^2 GPU plus two HBM2 stacks (each should have surface area <= HBM1) on an interposer. That would be much smaller than Fiji, due to the GPU size and having HBM modules on only one side, which also means a less complex interposer (fewer stacks) and fewer pins. So maybe this would actually work on an MXM module, and the module itself could be smaller, since the only other things needed are voltage regulation and other power-delivery components.
Not sure what will happen with mobile Pascal, considering that Nvidia is only supposed to use HBM on its highest-end model (GP100? or does that arrive next year?), plus the issues with the Drive PX2 module and its mere 8 TFLOPS of SP compute at a 250 W TDP (that's just R9 Nano level), so 4 TFLOPS per GPU in the best case (if they didn't count the Tegras). None of this speaks confidence; it looks like the efficiency hit from putting back all the compute hardware they stripped out of Maxwell is hurting them.
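The two-stack HBM2 figures above can be sanity-checked against the publicly stated HBM2 maximums (1024-bit interface per stack at up to 2 Gb/s per pin, 4-Hi stacks of 8 Gb dies). A quick check:

```python
# Sanity check on the proposed 2-stack HBM2 configuration.
# Per-pin rate and die density are the JEDEC HBM2 maximums.
stacks = 2
pins_per_stack = 1024        # interface width per stack, in bits
gbps_per_pin = 2.0           # peak per-pin data rate, Gb/s

# 2 stacks * 1024 bits * 2 Gb/s / 8 bits-per-byte = 512 GB/s
bandwidth_gb_s = stacks * pins_per_stack * gbps_per_pin / 8

dies_per_stack = 4           # 4-Hi stack
gb_per_die = 1               # one 8 Gb die = 1 GB
capacity_gb = stacks * dies_per_stack * gb_per_die   # 8 GB total

print(bandwidth_gb_s, capacity_gb)
```

So the 512 GB/s / 8 GB combination quoted above is exactly what two maxed-out 4-Hi stacks would provide.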
-
I just hate all this speculation at this point. I just want to see PRODUCT from either Nvidia or AMD.
-
killkenny1 Too weird to live, too rare to die.
-
-
saturnotaku Notebook Nobel Laureate
If Apple doesn't completely abandon discrete GPUs, I'd be willing to bet we will see some version of these in subsequent generations of the MacBook Pro, and probably the iMac as well.
-
moviemarketing Milk Drinker
Last edited: Jan 13, 2016 -
saturnotaku Notebook Nobel Laureate
-
-
And speaking of HBM2, interesting stuff just popped up on Videocardz.
http://videocardz.com/58127/jedec-updates-hbm2-specifications
Sadly, no indication of die size.
Edit: Straight from the SK Hynix website:
So apparently HBM is coming to consoles and maybe even notebooks (I know it only says PC, but there is an image of a laptop at least!). -
If we do get a top-end MXM card from AMD, I will be the FIRST to test it! Can't wait! Fingers crossed for a top-end mobile MXM-B card with HBM.
-
It seems a huge step forward; it reminds me of 3dfx's Rampage chip back in 2000. I hope AMD manages to bring it to market and stays in the business.
Besides, which mobile GPU would be the equivalent of a GeForce GTX 950? -
Hence why I'm not interested in such a low-end card. It might be great for ultrabook-type machines, but compared to the rest of Pascal and Arctic Islands it isn't even gaming-class. That's like saying a GTX 550 equivalent from Kepler, drawing minimal power in a notebook, was great. No, it's pointless. Nobody cares about a GTX 550's performance at a lower TDP and cooler temperatures when there's a 780 Ti on the market... that's the stuff that goes into cards like a GT 1040M or something. Not even a gaming card.
No, AMD wants to impress me? Then let them shove a 980 Ti into a laptop at a 100 W TDP, have it be overclockable, and run cool like a 980M. Then we'll talk. -
980 Ti performance inside of 100 W might be possible at 14 nm. AMD just needs to show some interest for once. I get the feeling they simply couldn't fund it, or were too understaffed to properly commit to mobile.
-
The desktop GPU (not yet fully optimized) consuming only ~30 W at load, even if it's only an entry-level GTX 950 equivalent, is an incredible improvement for the red team. Moreover, this colossal perf-per-watt improvement comes from the node shrink alone and does not include HBM memory, so I agree with sniffin that making a 100 W card with at least desktop 980 performance (ideally 980 Ti) shouldn't be too difficult. It would be very, VERY disappointing if AMD doesn't make use of this efficiency in the mobile segment.
Meanwhile, notebookcheck has also posted an article about it:
http://www.notebookcheck.net/Energy...s-set-for-mid-2016-launch-Video.158182.0.html -
Well, looking at these figures comparing the AMD R9 Nano and GTX 980, I think AMD could accomplish said task and deliver 980 Ti performance in 100 W:
http://www.legitreviews.com/amd-radeon-r9-nano-versus-nvidia-geforce-gtx-980_177681
The Nano is rated at 175 W.
Polaris is claimed to have 2.5 times the performance per watt of Fiji.
So at 175 W, a Polaris Nano would easily surpass a 980 Ti (even an overclocked one)... plus we don't know what additional benefits HBM2 would bring in performance and power savings compared to HBM1.
However, if you drop the TDP from 175 W to 100 W, what kind of performance difference would there be?
I mean, there wasn't much of a performance drop for the Nano compared to the Fury X (about a 10% drop for a 100 W lower TDP).
So would a 75 W Nano part be 20% slower than the Fury X, or would the difference be greater?
OK... even if we theorize the drop would equate to a 50% performance reduction for a 75 W part... that's about 15-20% below a 980 Ti (if my math is accurate), but a 100 W part could be as fast as, if not faster than, a 980 Ti.
Of course, this is extrapolating from the R9 Nano (Fiji), which is more or less a special case and uses HBM1.
We have no clue how this will translate to mobile parts, or what AMD might do with Polaris to improve efficiency and performance before it's actually released. -
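The scaling argument above can be written out explicitly. A minimal sketch assuming performance scales linearly with power (optimistic; real power/performance curves are sublinear) and taking AMD's 2.5x perf/W claim at face value:

```python
# Speculative extrapolation from the R9 Nano (Fiji) to a hypothetical
# Polaris part. Linear perf-vs-power scaling is an assumption here.
NANO_TDP_W = 175
PERF_PER_WATT_GAIN = 2.5     # AMD's claimed Polaris-over-Fiji figure

def relative_perf(tdp_w: float) -> float:
    """Performance relative to an R9 Nano, at the given TDP."""
    return PERF_PER_WATT_GAIN * tdp_w / NANO_TDP_W

print(round(relative_perf(175), 2))  # 2.5  (full-TDP Polaris "Nano")
print(round(relative_perf(100), 2))  # 1.43 (the 100 W part discussed)
print(round(relative_perf(75), 2))   # 1.07 (even a 75 W part ~ a Nano)
```

Even with heavy derating of the linear assumption, these numbers are why a 100 W part in the desktop 980/980 Ti range doesn't look crazy on paper.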
Wait 3 months after AMD releases, then buy Nvidia. -
-
then Nvidia comes in and announces their new releases...
Nvidia releases just enough product at a low price to put the kibosh on AMD sales...
then Nvidia waits a little while longer before releasing enough product to fulfill demand, so prices rise to dizzying heights...
AMD desperately drops prices... but no one buys, because they are caught up in the Nvidia hype...
AMD bundles a bunch of games with their cards, people start buying...
Nvidia then introduces better bundles, higher prices, and then the vendors start spinning out custom boards.
AMD sits down and wonders how it all happened the same way, all over again. -
In fact, I'd love the opportunity to help Huang wipe the hubris off his face. -
-
The Best NVIDIA Corporation Headlines in 2015
"The graphics chip designer had an exceptional year, winning market share and announcing plenty of new products."
"NVIDIA managed to grow its unit market share to a record 81.9% at the end of the second calendar quarter."
http://www.fool.com/investing/general/2015/12/16/the-best-nvidia-corporation-headlines-in-2015.aspx
http://www.fool.com/investing/general/2016/01/06/why-shares-of-nvidia-corp-surged-64-in-2015.aspx -
If this becomes a trend, I won't buy NV ever again. Maxwell was just a lot better than any of AMD's mobile counterparts, so I felt like I had no choice. But if, 12 months down the line, the M295X ends up better than my 970M because Nvidia completely forgot Maxwell exists, that will be the nail in the coffin and I won't ever consider them again. I'll take slightly less performance at release if it means my GPU will be supported properly in the long term.
On the desktop the choice is easy, as there is a proper alternative to NV's lineup. The 290X ended up having incredible legs; a great card, imo. -
-
These performance gains on AMD's end (2.5 times the performance per watt of Fiji) would be broadly consistent with what Pascal is supposed to bring over Maxwell (about twice the perf. per watt)... and that's just a baseline extrapolation from the entry-level Polaris desktop card, which was said to be open to further optimization.
So both upcoming AMD and Nvidia GPUs should be more or less comparable (whoever gains the upper hand in DX12 is an open game... however, current indications are that AMD may be ahead of Nvidia in this area).
Laptops with high-end MXM AMD GPUs were usually cheaper than their Nvidia counterparts while offering similar performance.
At any rate, we won't know anything conclusive until either company releases more information or ships its upcoming GPUs.
But I wouldn't discount AMD. They may have made grandiose claims in the past, but they haven't done that for years now, and their predictions on power savings and performance increases have been fairly consistent. -
-
-
I think AMD coming out with Polaris before Nvidia does with Pascal would be good, since it would help them win back some market share and possibly go unopposed for a while.
This is a major shift for both companies, since there's a manufacturing-process differential (AMD is using 14 nm, and Nvidia will apparently use 16 nm), along with a claimed 2.5 times performance per watt for AMD and 2 times for Nvidia.
Oh and here's more news about Polaris using both GDDR5 and HBM 2 :
http://wccftech.com/amd-confirms-polaris-will-feature-hbm2-and-gddr5/
It would seem that GDDR5 will be reserved for mid-range and enthusiast products, while the top end will get HBM2.
Still no word on what the mobile parts will use, but I'm hoping top-end mobile Polaris gets HBM2.
The only confirmation is that laptop Polaris parts will be available before the back-to-school shopping season (or whatever the heck they call it) - which means this summer, or roughly 5 to 6 months from now.
Oh, and the article does confirm that the power figure on the demoed Polaris desktop part was measured with GDDR5, not HBM2.
So top-end solutions might end up with more performance than estimated (and maybe a Polaris Nano is even in the works?).
Tidbits of info, but I'm liking what I'm reading... it is encouraging. -
As for Fury: not allowing AIB designs for the Fury X, and shipping a weird Fury Nano that I still have no idea who it targets. Nvidia can afford to do that with the Titans because they have the performance and the market share to be arrogant, and they also release an equivalent card with better non-reference designs soon after. AMD doing it just hurts them. There is no point in buying a water-cooled card without a proper block on a reference PCB; in my eyes, the Fury X was a total failure. The Fury Nano is weird because 2-slot cards work perfectly fine with most mATX/mITX builds, so what's the point of a Fury Nano? I have no idea. It would have been nice to push it into the mobile market, but they didn't. The non-X Fury is actually a decent card, with the ability to unlock shaders on certain units, except it's overshadowed by Nvidia's offerings and shows no significant improvement over its Nvidia counterparts outside of 4K/DX12. The 390X/390 aren't exactly bad rebrands, but they are still rebrands.
The Fury series wasn't exactly a bad generation; it's just that Nvidia had far more impressive offerings. You can complain all you want about Nvidia's tricks with the 970 and Maxwell clock switching, but the fact remains that Nvidia absolutely dominates the mobile market and more or less holds the crown on desktop with cards like the 980 Ti KINGPIN.
/rant -
Mobile will probably get GP104, which I doubt will use HBM2. Probably the new GDDR5X standard.
Mantle set in motion the events that led to DX12. It was about removing the pointless overhead that existed in PC APIs. What we got was a better DirectX, and Mantle itself is the basis of Vulkan. It was one of the best things to happen in a long time. FreeSync was not knee-jerk; it is in many ways a better technology than G-Sync from a cost perspective. And there is nothing wrong or bad about providing a free, open alternative to yet another of Nvidia's proprietary technologies. My notebook has a G-Sync-compatible panel and GPU, but I can't use it because my model didn't come with a G-Sync "license" - basically, I bought it a few months too early. Nvidia's ******** is tiresome.
AMD's desktop lineup has far better performance/$ in basically every segment. The fact that the 390X is a rebrand doesn't change the fact that it often BEATS the GTX 980 yet is available for $100 less. Seriously, explain to me how the GTX 980 is a better buy than the 390X. What does being a rebrand have to do with anything if it performs the same, if not better, and costs less? It's weird logic like this that leads to Nvidia owning 80% of the market. It makes no sense at all, but there it is. Hawaii > GM204, but you wouldn't know it by looking at the sales figures.
The mobile market, though, I agree is a mess for AMD. They need to follow Nvidia's lead and throw TDP out the window. Release a Fury Nano in MXM form and piss all over NV's parade. -
I am looking at it in terms of business decisions. As long as FreeSync seemed like a knee-jerk move, most people will see it as that, and I think there is still a stigma against FreeSync. Same thing with Mantle: great product, bad marketing. No one uses the name Mantle now, and that's the thing.
The GTX 980 has technically lower power consumption and a lot more OC headroom. From what I saw, the 390X is around the same as the 980.
Being a rebrand means your used cards heavily cannibalize your own sales. If someone can walk out and buy a 290X for way cheaper, why would he bother with a 390X? You have to give your consumers an incentive to buy new products.
Again, purely business decisions. -
Meaker@Sager Company Representative
Mantle did what it needed to do, and with a strong high-end option the price of FreeSync will be very attractive.
-
It'd just be all green by now wouldn't it?
Sent from my E5823 using Tapatalk -
Seems like the news on Polaris just keeps coming!
http://wccftech.com/amd-unveils-polaris-11-10-gpu/
The article mentions two GPUs: one enthusiast-class part for desktops to replace the Fury X, and another for thin-and-light gaming laptops, pitched as "console-class gaming". Well, I am really concerned about this wording, since console-class gaming on laptops has been available since 2010! My 7970M is already overkill by that comparison. I hope we're not just getting a 970M equivalent in a smaller form factor... -
Sent from my E5823 using Tapatalk -
moviemarketing Milk Drinker
-
Console-class gaming is exactly what the majority of people would like to hear. I don't get these five pages of concern over the term. Or does anybody really believe that AMD is going to bring out a 965M-class mobile GPU? That would be too AMD-ish... I mean the old AMD, not the current one.
P.S. I bet all it means is a minimum of 60 fps in any 2016 game at medium settings, nothing else (and nothing less). -
moviemarketing Milk Drinker
The GTX 950 is only slightly faster than the 965M (70 W), and handily beaten by the 970M (100 W) from 2014.
Mobile Polaris Discussion
Discussion in 'Gaming (Software and Graphics Cards)' started by moviemarketing, Jan 4, 2016.