Sorry if this was brought up previously, but I'm interested to see what comes of this!
Hopefully it isn't the same as the Iris Pro attempts, which made it into relatively few chips and even fewer machines on the market.
https://liliputing.com/2017/11/intel-launch-laptop-chip-amd-radeon-graphics.html
Well this was quick, have some benchies!
https://liliputing.com/2017/11/benchmarks-leak-intels-new-cpus-amd-graphics.html
yrekabakery Notebook Virtuoso
"AMD Radeon graphics": https://www.anandtech.com/show/1200...with-amd-radeon-graphics-with-hbm2-using-emib
Intel statement: https://newsroom.intel.com/editoria...nce-cpu-discrete-graphics-sleek-thin-devices/
AMD statement: http://www.nasdaq.com/press-release...s-chip-for-new-intel-processor-20171106-00859
Hopefully this means we can get a Vega variant in laptops soon, too.
I believe this is the first use of HBM2 in a laptop we've seen.
yrekabakery Notebook Virtuoso
If not already out, these should be in shipping laptops soon.
I'll wait and see. Once iGPUs start nipping at the heels of a 1050 (and similar generational equivalents) is when the game will really change, I think.
Well, that depends on when the manufacturing process allows GPUs to reach that performance level.
Right now, the Vega iGPU in Raven Ridge is touted to be at GT 1030-level performance, which is pretty good for light gaming. 7nm (and Navi) might be when we finally see much higher performance from AMD in the iGPU and desktop space, as it will likely connect multiple GPUs via Infinity Fabric.
Funny, I noticed that too. This feels like a very "Apple" based play.
The AnandTech article posted above mentioned Apple specifically too.
I wonder if we'll see these chips only in NUCs, Surface devices, and Apple products, along with some premium Asus/Dell thinbooks... just like Iris Pro. Go figure.
Don't get me wrong, strides are clearly being made. But I don't think Raven Ridge is going to make much difference to the dGPU landscape. As I said, iGPUs need to catch up to the point where they cut into 1050 notebook sales. I would guess that the number of people who will now go with an iGPU laptop over a dGPU laptop due to Raven Ridge is small.
Raven Ridge will likely make a sizeable dent because the Vega iGPU is equivalent to the GT 1030 (if not better, depending on the system RAM speed used). That's basically the point of an iGPU: to cut into the dedicated entry-level GPU performance space, or at least provide that level of performance for those who don't need more.
Also bear in mind that the AM4 platform will allow upgrades from Raven Ridge to Ryzen 2 or 3 APUs that will likely have a Navi iGPU... unless of course OEMs decide to solder the APU and prevent people from upgrading.
I'm not entirely certain of OEM history with socketed versus soldered APUs. If I'm not mistaken, a lot of OEMs made the APUs they used replaceable.
Does the GT 1030 have any significant market share to be taken away, though? I confess that's not a market segment I watch much, but I rarely see a mention of dGPU laptops packing less than a 1050.
yrekabakery Notebook Virtuoso
I fail to see how that is important. Radeon (an AMD brand) is stated explicitly in the video, and AMD is mentioned 5 times in Intel's press release, including a quote from an RTG exec.
I doubt it. Last 3 mobile APU gens have been BGA. AMD mobile went BGA around the same time as Intel did. Considering these APUs are intended for thin-and-light devices, zero chance of socketed versions I'd imagine. You have to remember that the last socketed mobile APUs, Trinity and Richland, were actually in thick bulky gaming notebooks like the ill-fated MSI GX60/70.
The GT 1030 goes by MX150 as a mobile GPU, and yeah, I see a decent amount of MX150. The problem with it seems to be that it's a little too expensive for its own good: for around the same price you can usually get a notebook with a regular non-Ti 1050.
Once the MX150 drops to around 940MX price levels you'll see it a bit more, I think.
I'm actually hoping it's faster because, AFAIK, I haven't seen any of the APUs mention HBM2 yet.
Same here.
Besides, we don't know how AMD arrived at the said iGPU performance. Was it via single-channel or dual-channel memory? Was it tested in an AMD reference laptop (which AMD really needs to start making to optimize everything) or an OEM laptop?
Also, which RAM speeds were used for testing?
We also don't know if the Vega iGPU comes with HBM2 or not. It might not in this iteration, because you'd think HBM2 would be one of the features touted as part of this APU's design; plus, if HBM2 manufacturers have yield issues, it might simply not be available. If it does come with HBM2, then the iGPU likely won't need to rely on system RAM. If it doesn't, then system RAM becomes a decisive factor for GPU and possibly CPU performance, given the APU connects everything via Infinity Fabric, so higher system RAM speeds would (at least theoretically) lift all aspects of the APU, HBM or no HBM (unless of course AMD was able to lower the latencies). Though this APU has only one CCX, so cross-CCX latencies wouldn't really exist, or would be so low as to be negligible.
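The single- vs. dual-channel question matters because the bandwidth gap between system RAM and HBM2 is enormous. A back-of-envelope sketch (the DDR4 speed and HBM2 pin rate below are assumptions for illustration, not anything AMD or Intel have confirmed for these parts):

```python
# Back-of-envelope memory bandwidth comparison (assumed speeds).
def ddr4_bandwidth_gbs(mt_per_s, channels):
    # Each DDR4 channel is 64 bits (8 bytes) wide.
    return mt_per_s * 8 * channels / 1000

def hbm2_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    # One HBM2 stack has a 1024-bit interface.
    return gbps_per_pin * bus_width_bits / 8

single = ddr4_bandwidth_gbs(2400, 1)    # 19.2 GB/s
dual   = ddr4_bandwidth_gbs(2400, 2)    # 38.4 GB/s
hbm2   = hbm2_bandwidth_gbs(2.0, 1024)  # 256 GB/s for one full-speed stack

print(f"DDR4-2400 single channel: {single:.1f} GB/s")
print(f"DDR4-2400 dual channel:   {dual:.1f} GB/s")
print(f"One HBM2 stack @ 2 Gbps:  {hbm2:.0f} GB/s")
```

Even a slower HBM2 stack would dwarf dual-channel DDR4, which is why the HBM2 question is the big one for iGPU performance.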
Based on the fact that each of the announced Raven Ridge laptops only come with 8GB of RAM, I would hope there's VRAM of some sort packed in.
So at what time in this video is the Radeon logo stated explicitly, Sir?
Tell me, please.
The OP's youtube video posting has links to articles that highlight AMD in their title and text:
New Intel Core Processor Combines High-Performance CPU with Custom Discrete Graphics from AMD to Enable Sleeker, Thinner Devices
https://newsroom.intel.com/editoria...nce-cpu-discrete-graphics-sleek-thin-devices/
"...The new product, which will be part of our 8th Gen Intel Core family, brings together our high-performing Intel Core H-series processor, second generation High Bandwidth Memory (HBM2) and a custom-to-Intel third-party discrete graphics chip from AMD’s Radeon Technologies Group* – all in a single processor package.
It’s a prime example of hardware and software innovations intersecting to create something amazing that fills a unique market gap. Helping to deliver on our vision for this new class of product, we worked with the team at AMD’s Radeon Technologies Group. In close collaboration, we designed a new semi-custom graphics chip, which means this is also a great example of how we can compete and work together, ultimately delivering innovation that is good for consumers."
...
"“Our collaboration with Intel expands the installed base for AMD Radeon GPUs and brings to market a differentiated solution for high-performance graphics,” said Scott Herkelman, vice president and general manager, AMD Radeon Technologies Group. “Together we are offering gamers and content creators the opportunity to have a thinner-and-lighter PC capable of delivering discrete performance-tier graphics experiences in AAA games and content creation applications. This new semi-custom GPU puts the performance and capabilities of Radeon graphics into the hands of an expanded set of enthusiasts who want the best visual experience possible.”"
8th Gen Intel Core
https://newsroom.intel.com/press-kits/8th-gen-intel-core/
yrekabakery Notebook Virtuoso
Ya gotta love Intel; here's what they say about the AMD and Radeon trademarks used in that video:
"Other names and brands may be claimed as the property of others."
AMD's performance per watt is pretty poor. This seems... questionable.
On what is this based?
AMD's GPUs are clocked at lower frequencies than Nvidia's and are built on a manufacturing process not suited for high clocks (in addition to being overvolted from the factory), while still delivering the same or better performance.
Power draw, as I said, comes down to the GPUs being overvolted and clocked too high for the 14nm GloFo process. There's a chance the 12nm LP process they will use next is better suited to higher clocks, at which point AMD might be able to clock as high as Nvidia while consuming the same amount of power as Nvidia's counterparts, or less.
If the mobile components are appropriately clocked, there's no reason to think the mobile GPU will have poor performance per watt. Plus, this seems to be based on the Vega iGPU, possibly the version with Infinity Fabric optimized for gaming.
I was just remembering power draw of Vega 64 vs a 1080 where Vega used much more power for practically the same performance. I don't see why that wouldn't translate to mid-tier level.
If accurate this chip is way more powerful than I expected:
"Both are expected to feature quad-core Intel processors based on the company’s Kaby Lake architecture paired with a custom AMD Radeon graphics processor with 24 compute units and 1536 stream processors. The result is two processors that should be able to outperform any Intel mobile chips with integrated Intel HD graphics… but which should (at least theoretically) consume less power than a typical laptop with an Intel processor and discrete graphics."
It appears this iGPU will land somewhere between the GT 1030 and GTX 1050. I'm still surprised that they managed to pack in so many SPs.
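For what it's worth, the 24 CU / 1536 SP figures in the quote are internally consistent: GCN-based Radeon parts pack 64 stream processors per compute unit. A quick sanity check (the clock speed below is a placeholder assumption, since none has leaked):

```python
# Sanity check on the leaked spec: GCN packs 64 stream processors per CU,
# so 24 CUs should give exactly the 1536 SPs quoted in the article.
CUS = 24
SPS_PER_CU = 64  # standard for GCN-based Radeon parts
sps = CUS * SPS_PER_CU
print(sps)  # 1536

# Theoretical FP32 throughput = 2 FLOPs/clock/SP * SPs * clock.
# The clock is an assumption for illustration, not a leaked figure.
clock_ghz = 1.0
tflops = 2 * sps * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS at {clock_ghz} GHz")
```

At any plausible mobile clock that lands comfortably above GT 1030 territory, which fits the leaked benchmarks.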
GT 1030 results, for those interested:
https://gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&cpu-arch=x86&hwtype=dGPU&hwname=NVIDIA GeForce GT 1030&did=49951639&D=NVIDIA GeForce GT 1030
But that's not because of the architecture itself; it's mainly down to the manufacturing process limitations and voltages.
Also bear in mind that AMD has more stream processors on its GPUs than Nvidia has CUDA cores, which only adds to the power draw.
Plus, when Vega is set to voltages similar to Nvidia's counterparts without touching the clocks, its power draw does drop to a similar level. And of course there's the fact that a Vega 56 undervolted and overclocked on the core and HBM ends up performing like a GTX 1080 while drawing less power than a 1080.
So, again, it's not Vega's architecture to blame here; it's the manufacturing process.
I wonder what would have happened if AMD had used the same TSMC 16nm process for Vega, clocked it as high as Nvidia clocked their chips (along with increasing the HBM frequency), and operated at similar voltages.
I would imagine we could see a much better performance-per-watt ratio.
The 12nm LP process for the Vega and Ryzen refresh could bring that about (along with the supposedly gaming-optimized Infinity Fabric on Vega)...
TSMC's 16nm process vs. GloFo's 14nm process is no contest in favor of TSMC, because TSMC's is designed for high-performance parts while GloFo's is designed for low-power parts (plus there's the additional problem of GPU yields on that process, which is why it was mentioned AMD might go to TSMC for 7nm Navi). And of course, AMD would incur a monetary loss by switching fabs.
It can be due to many different aspects of the node and drivers, but the fact was that it was using a sizable amount more power for the same relative performance in real world use cases.
That doesn't sound like a win unless they are bringing process improvements and optimizations to the architecture along with this implementation with Intel to improve their performance per watt.
Ha! I just realized that if this does threaten the 1050, it actually puts some pressure on Nvidia to stop dragging their damned feet and get Volta out the door. Who would have thought, at the start of the year, that Nvidia would be getting pressure from an iGPU?
This is likely to put a hurt on Nvidia's Christmas season, with both AMD and now Intel fielding iGPU killers.
Drivers have little to do with Vega's power draw.
Yes, they can modify power profiles, for example, at the expense of performance; however, the manufacturing process is the main culprit here, because it was designed for low-power mobile parts, not high-performance desktop ones.
The fact that AMD was able to get the performance they did at the clocks they are pushing on Vega is pretty good. If you add undervolting, you get really good performance at the same or better performance per watt as Pascal, on a manufacturing process that's a poor choice for high-performance desktop parts.
Undervolting is crucial here, and it's not 'cheating' in any real sense of the word. Nvidia effectively put a stop to undervolting when they tied clocks and voltages together, so you can't drop the voltage without dropping clocks, which negatively impacts performance.
Look at AMD's touted numbers for Raven Ridge and its Vega iGPU, for example. Those parts are clocked low and still seemingly surpass Intel's mobile solutions at the same TDP in multicore workloads. (Intel maintains a roughly 10% advantage in single core, from about 5% better IPC and a 5% higher single-core boost, though I'm skeptical Intel even has an IPC advantage considering most industry software is optimized for Intel to begin with and Cinebench still uses Intel compilers. Regardless, that 10% in single core is pretty negligible.) And the Vega iGPU seems to be equivalent to the GT 1030 (MX150).
Vega seems to be AMD's basis for a scalable GPU architecture. Raven Ridge is where the 14nm GloFo process actually shines (unless AMD used the 12nm LP process, but I doubt it, as this APU is supposed to be released this year and 12nm is slated for early next year).
Either way, I would suggest waiting to see what the 12nm refresh brings. Considering it bears the same LP designation as the upcoming 7nm process from IBM that GloFo will be using for Ryzen 2 (and specs indicate a base clock of 5 GHz), there's a chance this process is similarly designed for higher-performance parts, in which case AMD could easily bring much higher-performing Ryzen and Vega refreshes to the table with improved efficiency.
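One small nitpick on the single-core arithmetic above: a 5% IPC edge and a 5% boost-clock edge compound multiplicatively rather than adding, so the combined advantage is slightly over 10%:

```python
# 5% IPC advantage and 5% clock advantage compound, not add.
ipc_adv = 1.05
clock_adv = 1.05
combined = ipc_adv * clock_adv
print(f"+{(combined - 1) * 100:.2f}% single-core")  # +10.25% single-core
```

Still negligible in practice, as the post says; it just isn't exactly 10%.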
Vega 64 is 14nm where the 1080 is 16nm, and the 1080 is still wiping the floor with Vega in performance per watt. I don't see how AMD competes here.
Again, you are missing the point. There are clear differences between the types of manufacturing processes used: GloFo's 14nm is primarily made for mobile parts and low frequencies, while TSMC's 16nm is made with high-performance parts in mind (and there's only a 2nm difference in name between the two processes).
That's why performance per watt is better out of the factory for Nvidia... but as I said before, by undervolting AMD GPUs you improve performance per watt to close to or at Pascal levels.
Pascal is essentially an overclocked Maxwell: TSMC's 16nm allows it to be clocked much higher than Maxwell at similar or better power draw, hence the performance advantage.
AMD is forced to use lower clocks because the manufacturing process would otherwise incur even higher power draw. It does manage to deliver similar or better performance at lower clocks, but to achieve parity with Pascal, Vega needs to be undervolted; that's why AMD provided WattMan for Polaris and Vega.
yrekabakery Notebook Virtuoso
What I don't get is, Vega 64 is clocked some 60% higher than Fury X, but somehow it's only 30% faster? How does that make sense? Performance usually scales linearly with clock speed; at least that's what we see with Pascal being a higher-clocked Maxwell, since at the higher clock speed every part of the GPU runs faster. So this makes me wonder if there's some other internal bottleneck going on with Vega. Could it be ROP-limited like Fiji was rumored to be? Vega still has 64 ROPs, no increase from Fiji (or Hawaii, for that matter). Maybe memory bandwidth? It's actually lower on Vega than on Fiji.
As a result, Vega's perf/FLOP is actually lower than Fiji's. If Vega 64's performance actually scaled linearly with its clock/FLOPS increase, it would be nipping at the heels of the 1080 Ti, not be a 1080 competitor.
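The perf/FLOP point can be put in numbers. A rough sketch: both chips have 4096 SPs, so FLOPS scale directly with clock. The 1677 MHz figure below is the liquid-cooled Vega 64 boost clock, taken as an assumption to match the "~60% higher" claim; the 30% gaming uplift is the figure from the post:

```python
# Rough perf-per-FLOP comparison, Fury X vs. Vega 64 (assumed clocks).
def tflops(sps, clock_mhz):
    # FP32 throughput = 2 FLOPs per clock per stream processor.
    return 2 * sps * clock_mhz / 1e6

fury_x  = tflops(4096, 1050)  # ~8.6 TFLOPS
vega_64 = tflops(4096, 1677)  # ~13.7 TFLOPS (liquid-cooled boost, assumed)

flops_gain = vega_64 / fury_x - 1
perf_gain = 0.30  # the ~30% gaming uplift cited in the post

print(f"FLOPS gain: {flops_gain:.0%}")  # FLOPS gain: 60%
print(f"Gaming perf per FLOP vs. Fury X: {(1 + perf_gain) / (1 + flops_gain):.2f}x")
```

A ratio well under 1.0x is exactly the "perf/FLOP went down" observation: the extra clocks aren't translating into frames at the same rate Fiji's did.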
Vega 64 IS a 1080 Ti and Titan X competitor... in professional software, mostly.
You have to bear in mind that most games were not written with AMD hardware in mind. Vega also has various features which games simply don't use (apart from Wolfenstein II, it would seem, where Nvidia's advantage is much smaller, and even there more optimization is needed by the devs, as the port doesn't appear to be a good one).
It could also be the issue Koduri mentioned (namely that Infinity Fabric is not optimized for games).
As for performance scaling... I've never seen a GPU (AMD or Nvidia) where clock increases result in a linear increase in performance. With Vega, independent reviews confirm that HBM2 overclocking produces a larger performance increase for a very small increase in power consumption (about 5W to 10W), versus core clock increases (where a minor bump increases power draw by a much larger amount).
Vega works better when undervolted on both core and HBM and then overclocked on the HBM, to ~940 MHz (Vega 56) and ~1100 MHz (Vega 64).
yrekabakery Notebook Virtuoso
It's there with Pascal. Look at 1050 vs. 750 Ti, 1050 Ti vs. 950, 1060 vs. 970M, 1070N vs. 980, etc.
I would appreciate some sources, please.
I realize this isn't the kind of source you mean, but I did an experiment a while back on the boards here: I compared my 1070 to my 970 (desktop versions), then lowered the clocks of my 1070 to 970 speeds, and it was almost dead even.
Edit:
http://forum.notebookreview.com/thr...00m-series-gpus.763032/page-397#post-10278070
I wonder if AMD is going to regret this partnership for how much Intel will learn from it.
Oh, and there's mention of discrete Intel graphics in this article.
https://liliputing.com/2017/11/intel-hires-amds-former-gpu-chief-focus-discrete-graphics.html
yrekabakery Notebook Virtuoso
750 Ti vs. 1050 (+38% boost clock, +40% FS, +44% 3DM11)
950 vs. 1050 Ti (+18% boost clock, +24% FS, +16% 3DM11)
970M vs. 1060 (+65% boost clock, +72% FS, +75% 3DM11)
980 vs. 1070N (+34% boost clock, +31% FS, +30% 3DM11)
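Plugging those quoted percentages into a quick script shows how close to linear the scaling is; averaging the two benchmarks, the gain-per-clock ratio stays in a narrow band around 1.0:

```python
# Boost-clock gain vs. benchmark gain, using the percentages quoted above
# (Fire Strike and 3DMark 11 deltas as listed in the post).
pairs = {
    "750 Ti -> 1050":  {"clock": 38, "fs": 40, "3dm11": 44},
    "950 -> 1050 Ti":  {"clock": 18, "fs": 24, "3dm11": 16},
    "970M -> 1060":    {"clock": 65, "fs": 72, "3dm11": 75},
    "980 -> 1070N":    {"clock": 34, "fs": 31, "3dm11": 30},
}
for name, d in pairs.items():
    avg_bench = (d["fs"] + d["3dm11"]) / 2
    print(f"{name}: clock +{d['clock']}%, avg bench +{avg_bench:.0f}% "
          f"(ratio {avg_bench / d['clock']:.2f})")
```

All four ratios land between roughly 0.9 and 1.13, which supports the near-linear-scaling argument for Pascal and makes Vega's 60%-clock-for-30%-perf result look like a genuine outlier.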
HBM2 AMD iGPU's in Intel CPU's
Discussion in 'Gaming (Software and Graphics Cards)' started by Templesa, Nov 6, 2017.