(I agree, but I also like that they've explained why the results were wrong, and I like that they've decided to change their test methodology in the future - kudos to them for that part.)
Robbo99999 Notebook Prophet
AMD has updated its policy on the use of third-party heatsinks (i.e., heatsinks not bundled with the CPU):
https://www.gamersnexus.net/news-pc/3291-amd-updates-warranty-to-allow-aftermarket-heatsinks
My interpretation is that third-party heatsinks are no longer disallowed; the heatsink used just needs to conform to AMD's specifications. I'm guessing this covers AMD's mounting specifications as well as the cooler's TDP rating, so as long as a heatsink is advertised as compatible with the AMD socket, I guess we're good to go.
Here's the new CPU Warranty HSF FAQ section:
https://support.amd.com/en-us/search/faq/147
" Is the warranty for my AMD Processor-in-a-Box (PIB) still valid if I use a different heatsink/fan (HSF) other than the one provided in the PIB?
Yes, provided that the selected HSF, when properly installed and used, supports operation of the AMD processor in conformance with AMD’s publicly available specifications. Use of HSF solutions determined by AMD as incapable of such performance or which are determined to have contributed to the failure of the processor shall invalidate the warranty."
Here's the top level of the CPU Warranty FAQ - maybe if we find something else we don't like, AMD will consider changing it?
AMD Processor Warranty Coverage and Eligibility - Frequently Asked Questions (FAQs)
https://support.amd.com/en-us/kb-articles/Pages/AMDCPUWarrantyFAQs.aspx
Last edited: Apr 26, 2018
A CPU with 35% more performance does not equate to 35% less space. As for cost, it is part of doing business. If you need a 10m server rack today, pre-Ryzen it would have cost 20-25m; now, with the mitigations, it costs, let's say, 15m. Prices change - this goes on every day. Does it make Epyc look more attractive? Sure. I hope everyone buys AMD, but I am a realist; it is not going to change overnight.
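To put some illustrative numbers on the point that performance gains don't translate one-for-one into space or cost savings (these figures are made up for the example, not taken from any real deployment):

```python
# Sanity check: 35% more per-CPU performance shrinks the server count
# for a fixed workload to 1/1.35 ~= 74%, i.e. roughly a 26% reduction
# in rack space - not 35%. Illustrative numbers only.
def servers_needed(workload_units: float, perf_per_server: float) -> float:
    """Servers required to cover a fixed workload at a given per-server performance."""
    return workload_units / perf_per_server

baseline = servers_needed(1000, 1.00)   # 1000 servers at baseline performance
faster = servers_needed(1000, 1.35)     # ~741 servers with 35% faster CPUs
savings = 1 - faster / baseline         # ~0.26 - the reduction is sub-linear
```

The reduction is sub-linear because the workload divides by the speedup rather than shrinking by the speedup percentage.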
You make the Intel issues sound insurmountable; trust me, they are not. Those determined to stick with Intel will - that is part of life. I do believe AMD could have given us a bit more from 12nm, though. That would have made the nail in the Intel coffin a bit heavier and more steadfast. Hopefully they will accomplish this with 7nm, but 12nm has made me a bit leery there.
Edit, and slightly off topic: volume on AMD today was over 140 million shares - can anyone see the short sellers running and hiding?
Last edited: Apr 26, 2018
On 12nm, I do not see why people are making a huge deal of it. Going from 14nm to 12nm is a half node at best. It is a minor shrink, which likely aided some refinements, but is still likely based on Samsung's design.

I noticed someone else earlier mentioned Samsung when discussing GloFo's 7nm design, comparing it to 10nm. I did not address that then, but will now. GloFo seems to be licensing IBM's 7nm design, not Samsung's. Further, we would need to ask whether it is what Samsung calls its 8nm or 6nm design (likely 8nm, as timelines suggest the 6nm reference is their 5nm design). But that, too, is missing the point. What has been shown with GloFo's and TSMC's sample process designs is that, regardless of whether they license the design or use an internal process, they are achieving par-to-higher densities than Intel, being closest to matched with the Ice Lake 10nm+ design.

Further, there is variance between energy-efficient designs, like we saw with the 14nm and 12nm Samsung processes, and the power/performance variants. At the transistor conference referenced when discussing Intel's 10nm transistors, GloFo presented, last minute, their 7nm energy-efficient design, while giving just some specs on the performance variant. @tgipier and I discussed the implications of the GloFo design months ago, especially the potential 2.4x density increase in the shrink, IIRC.

So, the questions come in with the rumor of Epyc being done at TSMC, making those numbers useless as a reference. But if Vega 7nm on GloFo goes well, Ryzen may get the IBM 7nm performance transistors at GloFo. GloFo did not publish specifics on 12nm FinFET, unfortunately, only publishing their FD-SOI 12nm transistor targets. We know they are targeting 5GHz at 7nm, but we do not know IPC on that yet, although the Ryzen 1 team has been working on the shrink the entire time. Then there is the rumor of the centralized control chip for Epyc as well.
Anyways, this means we don't know targets on TSMC speeds or IPC either. All we know is that they do not seem to be using the Samsung process for the next die shrink, which is a full node. I actually need to go through and do a write-up of my analysis in this regard and what it means for my predictions. But with all of these factors, we cannot use the 14nm-to-12nm shrink to indicate anything about the 12nm-to-7nm shrink.
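The half-node vs. full-node distinction above can be made concrete with ideal geometric scaling. Keep in mind that node names are largely marketing labels, so these are ballpark figures only, and real processes (like GloFo's reported ~2.4x at 7nm) fall short of the ideal:

```python
# Why 14nm -> 12nm reads as a half node while 12nm -> 7nm is a full
# shrink, under idealized area scaling (feature pitch tracking the
# node name exactly - real processes deviate from this).
def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    """Ideal relative die area if linear dimensions shrank with the node name."""
    return (new_nm / old_nm) ** 2

half_node = ideal_area_scale(14, 12)   # ~0.73x area: a modest shrink
full_node = ideal_area_scale(12, 7)    # ~0.34x area
density_gain = 1 / full_node           # ~2.9x ideal density increase
```

A full node has historically meant roughly a halving of area; 14nm to 12nm only buys ~27% at best, which is why the refresh gains are modest.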
With that said, I do agree that if more had been given with this release, it would have made that nail stick much better. But I hope my explanation gives you a bit more hope for 7nm.
And you gotta love the volatility.
Wrote it as fiction, but it's still very true.
Edit: By the way, that's the second one in a short time to escape from the (sinking?) ship. I wonder why they prefer the competitor. Money, yes. But maybe more than that?
Ryzen Architect Jim Keller Joins Intel
by btarunr, Thursday, April 26th 2018 11:06
Jim Keller, the VLSI guru who led the team behind AMD's spectacular comeback in the x86 processor market with "Zen," has reportedly quit his job at Tesla to join AMD's bête noire, Intel. Following his work on "Zen," Keller had joined Tesla to work on self-driving car hardware. Keller joins Raja Koduri at Intel, the other big former-AMD name, who led Radeon Technologies Group (RTG).
Intel Reports First-Quarter 2018 Financial Results
Intel reported Q1 2018 revenue of US$16.1 billion, up from US$14.8 billion in Q1 2017. High demand from data centers for chips from the California-based firm helped boost figures above expectations.
Last edited: Apr 26, 2018
Bear in mind this process is not based on IBM's high-performance process as we were led to think (hence the frequency limitation on standard cooling and the need for high voltages for relatively small overclocks)... the high-performance process might be reserved for 7nm (but we won't know until Ryzen 3 is released).
How are Ryzen 2000 series desktop CPUs in Linux?
Level1Linux
Published on Apr 26, 2018
We test the ASUS Crosshair VII Hero, the MSI M7 ACK and ECC memory, among other things.
No unhappy surprises with Ryzen 2000!
One more way for them to undermine the competition: by hiring their skilled staff.
yrekabakery Notebook Virtuoso
Well, I wasn't just talking about the high-end tiers, but the overall lineup. Putting aside GPU pricing due to mining, AMD is able to compete with Nvidia in all performance classes except the 1080 Ti/Titan Xp.
OverTallman Notebook Evangelist
A 4-core/8-thread CPU plus a Vega 10 integrated GPU, all within a 12-25W TDP - that doesn't say inefficient to me at all.
yrekabakery Notebook Virtuoso
Last edited: Apr 27, 2018
News Corner | AMD Readies 7nm Vega, Nvidia GPP Blocking Kaby Lake-G?
Hardware Unboxed
Published on Apr 27, 2018
News Topics:
00:35 - AMD Has Working 7nm Vega GPUs
01:57 - Ryzen is Making AMD Money
02:54 - AMD Updates Ryzen Warranty to Include 3rd-Party Coolers
03:24 - AMD Bundles Ryzen, Motherboard and GPU into Combat Crates
04:20 - AMD Launches Ryzen 3 2200GE and Ryzen 5 2400GE
04:57 - Intel Confirms Z390 and X399 Chipsets
06:09 - Corsair Developing a Gaming Monitor?
06:38 - Philips Momentum 43 is DisplayHDR 1000 Certified
07:33 - Samsung Launches 970 PRO and 970 EVO SSDs
08:22 - Nvidia GPP Blocking Kaby Lake-G Adoption?
From what is understood, Vega was not Raja's design, since it was already well underway before he took charge.
If he had worked on that project from the start, then yeah, maybe I'd agree with you, but it's also possible that Raja decided to leave of his own accord.
I don't think you could easily respin a product when the manuf. process it's made on was designed for lower clocks and mobile parts, plus they experienced yield issues (as usual).
Vega wasn't really a failure to me.
It worked on a limited manuf. process... matched comparable Nvidia GPUs (at lower clock rates, no less) and was only about 10% less powerful than the 1080 Ti in liquid-cooled form.
Undervolting Vega worked quite well and managed to bring power consumption down to Nvidia's levels, or even surpass Nvidia's efficiency.
Furthermore, we know that Vega was more or less bandwidth starved, as it saw much larger performance increases when people overclocked the HBM.
If AMD had made Vega on TSMC's high-performance manuf. process like Nvidia did with Pascal, it is quite likely that Vega would match or surpass Pascal at the same efficiency levels, since AMD would simply be able to clock the core and HBM much higher (vs what we got) without huge power consumption (power consumption would likely be at Nvidia's levels by default).
Since Nvidia has auto voltage regulation from factory and higher yields, they might retain some advantage in that area, but, consumers would still be able to independently undervolt Vega and clock it much higher with TSMC's high performance process.
The problem with Vega (and Ryzen) was the limiting manuf. process.
If you ask me, AMD did good considering what they had.
Don't think we can blame Raja for that.
AMD signed a contract with Glofo, locking themselves into a deal and had to use their manuf. process. Switching over to a new process would have been cost prohibitive at the time, but AMD did say they will split Ryzen 3 and Vega/Navi on 7nm between Glofo and TSMC.
TSMC's 7nm process, as we know, will be a high-performing one, much like 16nm was... but as for GloFo's 7nm, from what we know right now, it's supposed to be based on IBM's manuf. process, designed for high-performing parts as well, allowing a 5GHz baseline and lower power consumption vs current offerings.
Mind you, 12nmLP was also supposed to be based on the high performing process from IBM, but it seems AMD opted for a different/cheaper 12nm which is based on lower clocks and mobile parts (like 14nmLPP).
So effectively, AMD was able to drop voltages a bit on the Ryzen refresh and bump up the clocks a bit, but the process still limits them from going beyond that point - and AMD never changed the layout of the Ryzen refresh to take advantage of the 15% higher chip density the new process offered.
Last edited: Apr 27, 2018
1. Vega is not inefficient.
2. The 14nmLPP process on which Vega is made was designed for low-power and mobile parts (hence the lower yields, which resulted in higher voltages, limited clocks and higher power consumption past a certain frequency threshold - if AMD had access to TSMC's manuf. process, things would be radically different).
3. When undervolted, Vega easily reaches Nvidia's level of efficiency or goes beyond it and can also further be overclocked to surpass Pascal in performance (while consuming less power at load).
4. Putting Vega into a laptop is possible with existing manuf. process if AMD and OEM's did a few things:
a) Drop the clocks a bit but keep HBM frequencies where they are.
b) Drop the voltages - though this would require independent stability testing of each unit (which is what OEM's really should do anyway).
When Asus put the RX 580 into GL702ZC (my laptop), they limited the GPU to 68W, cut the VRAM in half, and dropped its clocks by about 15% compared to the desktop version.
Nvidia's top mobile GTX 1060 with 6GB VRAM is limited to 80W.
My RX 580 matches the GTX 1060 in performance or surpasses it in some DX12 titles, and I was able to further undervolt the GPU to consume less power and produce lower heat emission (which you can't do on Pascal, since Nvidia tied the clocks and voltages together).
By all estimates, Vega 56 can easily be put into a laptop and modified into 2 versions:
1. Undervolted with stock clocks (easily matching GTX 1070 in efficiency and performance).
2. Undervolted with overclock (matching/surpassing GTX 1080 in efficiency and performance).
Another alternative is to cut/disable some of Vega's compute units to match what Nvidia has in CUDA cores. That alone would probably drop power consumption by quite a bit without even touching the clocks on the mobile versions, as compute hardware is power hungry.
All Vega GPUs can undervolt easily enough, but OEMs are more likely to just drop the clocks a bit (which should take care of the power consumption without sacrificing a great amount of performance - and, as I said, they can probably disable 500 compute units in the BIOS to drop power consumption without touching the clocks).
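The undervolting argument above leans on how dynamic power scales. A rough rule of thumb is that dynamic power goes as C·V²·f, so modest voltage and clock cuts compound quickly. The specific percentages below are illustrative, not measured values for any particular card:

```python
# Back-of-envelope: dynamic power scales roughly with C * V^2 * f.
# A small undervolt buys a quadratic power saving; a clock cut buys
# a linear one. Numbers are illustrative only.
def relative_dynamic_power(v_scale: float, f_scale: float) -> float:
    """Dynamic power relative to stock for scaled voltage and frequency."""
    return (v_scale ** 2) * f_scale

# e.g. a 10% undervolt combined with a 15% clock reduction
# (in the ballpark of the GL702ZC's RX 580 tuning described above):
p = relative_dynamic_power(0.90, 0.85)   # ~0.69, i.e. ~31% less dynamic power
```

This is why a ~15% clock drop plus an undervolt can pull a desktop-class chip under a mobile power budget while giving up far less than 31% of its performance.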
Besides, the mobile GTX 1070 and 1080 are also underclocked versions of the desktop GPUs...
Last edited: Apr 27, 2018
But in all seriousness (and yes, Intel is struggling), his hiring is likely less about money and more about the 2021 architecture Intel was hiring for in 2017 (I think they even had the offer on LinkedIn for an architecture to replace iCore in 2016, but I may be remembering that incorrectly). Then you ignore Intel's master architects jumping ship like rats last year - you ignore that entirely! Funny how that happens, yet you read so much into his joining Intel. More than likely, they hired him to figure out how to do smaller chips with interconnects, without which Intel will fall further behind in their ability to stay competitive on yields. I'll address the Raja point below with other comments. But, more than likely, it is because of that and not just cash.
Also, source on that rumor? Because they are correct above that Vega was not his design; Navi was.

In fact, I think the 1080 Ti was the only card to beat Vega at mining; it could do so at lower energy cost, but cost more, so the ROI was still moving tons of Vega cards. Also, you do NOT seem to understand the GPU market and sound ignorant. First, they make the commercial cards. Then they cut those down and release the low- and mid-tier cards, maybe also bringing the halo product, the Titan, to consumers. Then they release the consumer-optimized Ti cards. Pascal has been in consumers' hands for years; it is NOT just now making it to consumers. That is just false.

In fact, miners preferred AMD. Only because AMD could not keep up with demand did they go to Nvidia's cards, starting with 1060s and 1070s. Not only that, due to shortages of Pascal for commercial use, along with pricing, the Titan and Ti cards were being bought up primarily to work in the server market instead of buyers taking the Quadros and similar commercial cards. They did not enter mining service, because of cost, until much later, when the mid-tier cards' prices were inflated so high it made sense to purchase them as well.

Then Nvidia held back on producing more to meet demand. Why? Because they remember what happened to AMD with Bitcoin half a decade ago. When the mining craze ended, the used cards flooded the market and cratered pricing. They fear this will hit them this time, which it will. So keeping supply minimal was done strategically. But let's ignore facts, shall we?
What you should be wondering is why they are bleeding the market and still don't have a consumer Volta card on the market by now. This is multi-faceted. One reason is competition. A second is that they fear it will cannibalize their high-end Volta offerings the way the 1080 Ti did in servers (which is also why they changed the GeForce agreement with the release of the Titan V), although this is mitigated after the Titan V was found to be rife with rounding errors - reminiscent of the Pentium floating-point fiasco Intel had in the 90s, for those here old enough to remember it. I have not seen whether those rounding errors were found in the Volta cards like the V100 or GV100, etc. I hope someone checked to verify whether those, too, were kicking out rounding errors.

It also makes one question whether Nvidia built that into the Titan V on purpose to prevent people from buying the $3,000 card over the $6,000-7,000 Quadro variant or the $10,000 original (which can be bought at a discount for $6,000-7,000 now as well, but had such limited supply that the market had to look for alternatives to meet its needs, which also contributed to 1080 Tis being bought for data centers). Either way, your comment, in light of all of this, does not quite grasp the market forces at play.
Now I can talk about Vega 7nm at last and the differences in the processes used. But let me get some coffee first. I'm sure I missed a couple of points above, but I'm just waking up. @Deks has some really good points. I also need to talk about the effect of RAM shortages and HBM2's delays to market, AMD's memory-controller design being blamed for the time to market, and how some GDDR players were hoping to kill HBM with the delay, even though HBM3 is when it really gets good, etc. There is so much that needs to be discussed and folded in here, it is a bit much without coffee in me. I will continue shortly.
Last edited: Apr 27, 2018
So, continuing on, it is time to look at HBM availability. The yields on these chips were low and volume production was delayed. Because AMD did not have an alternative memory controller in the wings to use with Vega, and because GDDR5 and GDDR5X were being bought up like crazy and do not give the same results as HBM on voltage and bandwidth without adding chips, there was reason to wait for its availability. This, on its own, greatly delayed the release. Now, there are questions about why alternatives were not designed, but that is questioning a business decision. Either way, AMD also switched memory integrator after the release of Vega, as the package is impacted by that.
Now, from here, we can talk about processes. Nvidia has a close relationship with TSMC, including being their partner on 7nm and generating their own tweaked variant of the process for their upcoming cards. AMD has done similarly with GloFo. A lot of what I want to say here is speculation, so I will try to limit my comments. What I can say is that there is likely a good reason Zen used the leading-performance variant (which does NOT mean low power; many here are confusing this with the LP label previously applied to low-power variants). But for Polaris, the upcoming Vega, etc., the low-power variant of the process is being used.

The reason low-power variants are likely important - and the reason energy efficiency was heralded as a benefit of Samsung's process for this architecture, even with the performance variant being employed with Zen - is simple if you think about it: they are headed toward multi-die processors with Zen and Navi. When you use multiple dies, low-power and energy-efficient processes allow heat to be controlled. Even though heat is lower with these processes, when you start putting many dies on the same chip or device, especially with relatively close placement, you have to make sure dissipation of the heat is possible with current cooling tech. That means you need to develop with an eye toward the fact that once you stack multiple dies, they will draw far more power as a collective package and put off significant heat once on the same package. If you used the most powerful process to create the transistors, you could shoot past the collective package limits without even trying. Yes, you could clock higher and get a bit more performance, but not before exceeding what you get with the lower-power variants stacked. It also needs to be mentioned that Zen was designed specifically to use four chips together on a single package, and the chips were then scaled down in die count to create the entire feature stack.
As such, it is easier to see why the Samsung process was used, aside from looking at the joint venture of GF, Samsung, and IBM: Samsung had a working process that GF's fabs could employ with limited retooling, while tossing aside their own process. There was a will-they-won't-they on the 10nm process for a while, over whether they would renew the license from Samsung. Instead, with the jumps from the IBM fab being handed to them, the 7nm engineers having their contracts fully paid for by IBM to keep them on, and the 10nm headaches that EVERYONE has, you saw them move toward the IBM process, which GF will most likely be using with the upcoming Power series. Samsung pushed off the move to 7nm slightly, instead choosing to use their own 10nm process this year, then moving to 8nm and a 6nm variant later (which may have to do with SDRAM, etc.). Either way, this got GF back in the game, as Samsung's process was still better than their 14nm.

We have to remember that AMD is one of GF's major contracts and they have a long history (if you are not aware of this, you are under a rock). AMD traditionally gives the CPU work to GF, while TSMC or GF do graphics chips, primarily TSMC. Now, to be clear, Zen used the performance variant of the transistors, not the low-power variant, but on a very energy-efficient 14nm process, which was Samsung's. With 7nm, see the attached document on the process and power used with IBM's process. IBM is known for getting the power out and reaching high frequencies with good power efficiency, plus their designers now working for GloFo were at the cutting edge of the 7nm process (although TSMC really kicked out their process and transition, though this was also paid for in part by Nvidia's push, potentially). But to get Zen to market (a year late, I might add, similar to Vega), it needed to be pushed out quickly.
Now, refining the architecture on another process should greatly increase what AMD will be able to achieve while meeting time-to-market requirements.
But there are rumors Navi will only target the low-power segment that Polaris currently holds, while Vega 7nm is targeted at high performance in AI and machine learning. This leaves an obvious gap in their lineup. You also have 7nm Vega reportedly being split between TSMC and GF; the 7nm Epyc replacement reported as potentially being at TSMC, with Zen 7nm still reported as being with GF; the Zen design team working with the Navi team; and rumors that engineers are already preparing a Zen 5nm design (with the embedded question of whether IBM's 5nm gate-all-around process will be implemented, and how far along that process is, which can combat the changes Intel is making to its transistor design with 10nm/+).
So, here is my best guess. Navi may be a low-power variant, but the reason they do not have a high-power variant is that this is a repeat of what they did to Intel. With Zen, they signaled to the market and Intel that they were ceding the high-performance market to their competitor. Intel sat on its laurels. Zen dropped, and Intel **** its pants! It is the same thing for the graphics card market.

The Zen engineers are helping with integration of HBM interposers and the interconnects of multi-die chips, based on scaling Navi dies to create a full stack similar to how they did with Zen - which makes even more sense if the rumors are true of Zen 2 7nm using a control chip and keeping only the cores, L1, and L2 on the core chips. That controller chip, when applied to a GPU design, can work with HBCC to help control the movement between GPU dies, allowing for excellent scaling.

To get to a 4-die GPU, you need each die to be a 70W-or-less chip. For 4 dies, that is 280W, plus the power needed for memory: GDDR5X or GDDR6 may take 30-40W stock and more overclocked, while HBM2 takes about 20W and has good energy control when OCed. This means you could have about the power of 3x GTX 1080 in a single package. (It is actually 4x 1080, multiplied by a scale factor for the inefficiencies of switching between dies, etc. Graphics card scaling usually runs 70%-90% unless otherwise optimized (meaning SLI and CrossFire), while Zen had 85-96% scaling with additional dies. That is why I assume the lowest level of scaling, which comes out to a 3x multiple, although they could easily achieve 3.5x if they hit 90%+ scaling - an incredible generational gain.) For comparison, the new Volta-based GPUs for consumers show a 1.5x improvement over the Pascal-based 1080. So this is another reason for the Zen engineers to be there: to get the multi-die scaling up.
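The die-count arithmetic in the paragraph above can be laid out explicitly. These are the post's own speculative figures (70W dies, 70-90% scaling), not official specs:

```python
# Multi-die GPU back-of-envelope using the speculative numbers above:
# 4 dies at 70W each plus HBM2, with per-die performance discounted
# by an inter-die scaling efficiency.
def package_power(dies: int, watts_per_die: float, mem_watts: float) -> float:
    """Total package power: compute dies plus memory."""
    return dies * watts_per_die + mem_watts

def effective_perf(dies: int, scaling: float) -> float:
    """Performance in units of one die, discounted by scaling efficiency."""
    return dies * scaling

power = package_power(4, 70, 20)   # 300W total with ~20W HBM2
low = effective_perf(4, 0.70)      # 2.8x: the conservative "about 3x" case
high = effective_perf(4, 0.90)     # 3.6x: with Zen-like scaling
```

The spread between the conservative and optimistic cases (2.8x vs. 3.6x of a single die) is exactly why the scaling efficiency of the interconnect is the make-or-break number for a multi-die GPU.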
Even with the higher performance jump with the IBM process, it is the balance of energy consumption and heat dissipation that can be why Navi is going with the low power variant, unlike Zen. But, due to availability of HBM2 holding up release, there is a chance a GDDR6 controller has also been developed for Navi. This is an important backup in the event that something needs to change last minute to keep timelines.
Now, this is not crazy to think of, as Nvidia is also looking at a multi-die GPU beyond the Volta chips coming later this year (referring to consumer variants). The difference is that AMD actually has a multi-die product on the market and is designing multiple successor products. Nvidia has had to increase the 12nm transistor count by 40% and give the product more memory bandwidth (and potentially memory capacity) to achieve the 50% increase between generations (and we must remember Pascal is a very old architecture at this point, having been out for years). I am actually very unimpressed with the 50% number: a 40% larger die on the 16nm-to-12nm shrink, with the extra 10% coming from memory changes, including 16-18Gbps RAM speeds, a large jump from the 10-12Gbps of GDDR5X. That's not innovation; that is throwing more at it and hoping people don't recognize that very little has changed since Maxwell (unless tensor cores were incorporated, which is their biggest innovation of the past couple of years). It is almost like re-releasing Skylake over and over again with process refinements. It is a joke!
So, on the graphics card side, they are going after Volta's market on the server side with Vega 7nm, whose limited availability will be comparable to the limited availability of Nvidia's 12nm Turing card. Also, Vega 7nm at GF is there to clear the chute, prove the process is ready, and alleviate concerns about the volume that can saturate the market. You have low-cost Navi coming, which will help combat VRAM costs and can scale to the entire stack. So I see nothing but good things in the future. On Zen, I find the rumors of the 5nm design most interesting. Zen 2 and 3 take them to 7nm and 7nm+. This means they are working on the changes from Zen and how to get to 5nm easily, which will most likely go up against the new Intel iCore replacement currently being worked on by Keller. So: refined Keller design versus new Keller design in 2021, battling at 5nm vs 7nm, respectively. Talk about putting your stamp on the industry!
Lots of information here, TONS of speculation and rumors, but this is getting you closer to seeing my refined analysis for the industry moving forward. I know I might have misstated a couple things and welcome corrections and debate (so long as not playing fanboy). Hope this helps.
https://segmentnext.com/2018/04/13/amd-navi-high-end-gpu/
https://fuse.wikichip.org/news/641/iedm-2017-globalfoundries-7nm-process-cobalt-euv/
https://digiworthy.com/2018/01/25/amd-7nm-vega-zen-2-tsmc/
yrekabakery Notebook Virtuoso
Kyle Bennett at HardOCP called it back in May 2016 (Raja left in November 2017).
https://m.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility
And those are some massive walls of text (looks like jimmies were rustled) that still don't answer my question about the perf/W data of those Ryzen APUs and whether that perf/W scales up.
As to the internal political power struggles, that is just par for the course. After Raja did not deliver with Vega, regardless of whether it was his baby or not, he was on the outs. Su is savvy with business and likely agreed with HardOCP's analysis, even if she didn't like the airing of dirty laundry. As such, she is trying to get the Radeon spin-off under control. Raja, meanwhile, is ambitious: he sold Intel on incorporating an iGPU from Radeon (meaning AMD has practically all embedded solutions at the moment), and then sold himself to Intel. This means that after Intel shed thousands of designers on the graphics side, they are now reversing direction, which means it will take a while before they are up and running on this front.
Meanwhile, you did not ONLY ask about perf/W, although I do admit that in explaining the forest to you, I lost track of spelling out one of the trees. You got me there. The perf/W, because the GPU is less aggressively volted once integrated alongside the CPU, is great. It is a cut-down version of its standalone brother, with efficiency-minded tweaks. This has been documented numerous times since the announcement. And considering it is based on interconnect knowledge from Zen, or the EMIB approach at Intel, it was made to be efficient. Your real question is whether it can be scaled higher within the embedded package limits. To that, the only thing I can point you to is the multiple Vega variants being used for embedded solutions, comparing Intel's included variant to the more powerful one AMD is using in its APUs. So it has range. But if you mean comparing embedded to the mid- and high-end discrete graphics cards, you are just trying to push an agenda, not get to the root of what is being done and how the industry is changing.
This is before I even address how Intel is using AMD to shed licensing from Nvidia and to attack them, as Nvidia's commercial GPUs are cutting into some of Intel's revenue, and Intel sees this as enough of a threat that it needs partners to take on the bull. But yes, focus on the one thing left unanswered. Good job. Had you not put it the way you did in this comment and instead just asked me to address that point because I missed it, I'd have a better opinion of you. Instead....
yrekabakery Notebook Virtuoso
Sheesh, so much hostility.
Don't mess with ajc, he'll drop a knowledge bomb on you.
2nd Gen AMD Ryzen™ Processors: XFR 2 and Precision Boost 2
AMD
Published on Apr 24, 2018
Join Robert at the whiteboard as he discusses, and illustrates, updates to AMD SenseMI Technology: Precision Boost 2 and Extended Frequency Range 2.
AMD sponsors the Scuderia Ferrari Formula 1 Team
AMD
Published on Apr 27, 2018
AMD is a proud sponsor of the Scuderia Ferrari Formula 1 Team. AMD’s passion for high-performance computing products perfectly matches with Scuderia Ferrari’s dedication to high-performance racing leadership.
2nd Gen AMD Ryzen™ Desktop Processors – Bring Your Imagination to Life
AMD
Published on Apr 26, 2018
The 2nd Gen AMD Ryzen™ Desktop Processors are here! With incredible performance and cutting-edge technologies, the 2nd Gen AMD Ryzen™ processors enable real people from all over the world to follow their passions. From extreme gaming experiences to high-end design and engineering, the 2nd Gen AMD Ryzen processors Bring Your Imagination to Life.
yrekabakery Notebook Virtuoso
Damn, Lisa. She might be more butch than Jen-Hsun Huang.
AMD CEO Lisa Su: Our Long-Term Strategy Is Paying Off | CNBC
CNBC
Published on Apr 26, 2018
Lisa Su, Advanced Micro Devices CEO, discusses the company's big quarterly beat; growth in gaming; her outlook on blockchain; and her strategy to beat the competition and retain talent. In tech, it's a talent war, says Su.
-
-
FAILING At 10nm - Intel's Time At The Top Is OVER?
UFD Tech
Published on Apr 27, 2018
Intel Core i7-9700K 8c/16t Coffee Lake Z390
http://forum.notebookreview.com/thr...-coffee-lake-z390.811225/page-5#post-10718975 -
You've seen both sides of my fury, with agreeing and disagreeing. If you need to be berated for not saying enough, I suppose I could rip on that (like clarifying things in January and February)... LOL.
Some Math
We wanted to see where GF stands when compared to other leading edge foundries in terms of density.
Leading Edge
Fab      | GF (7nm)           | Intel (10nm)   | TSMC (7nm)
HP       | 0.0353 µm²         | 0.0441 µm²     | n/a
HD       | 0.0269 µm²         | 0.0312 µm²     | 0.027 µm²
MTr/mm²  | ~86 MTr/mm² (2F6T) | ~102.9 MTr/mm² | n/a
GlobalFoundries reported very dense 6T SRAM bit cells. For the SRAM bit cells, GlobalFoundries is actually over 15% denser than Intel and roughly on par with TSMC. We believe this large gap in density came from some performance-related attributes that Intel simply could not sacrifice for their own products. In an ideal world, to really compare the two nodes we would take an open synthesizable core and fabricate it at each of the foundries. Unfortunately, since that's not really possible, we have to resort to less-than-ideal ways of comparing nodes. We have tried to apply Intel's MTr/mm² equation to both GF's and Intel's processes to get some sort of a comparison. As a sanity check, Intel reports 100.8 MTr/mm² for their own process, and GF reported 0.36x compaction vs. their 14nm, which works out to roughly our numbers. GF is around 86 MTr/mm², or roughly 15% lower density than Intel, despite having a shorter cell. Much of this is due to Intel's innovative "hyperscaling" techniques, which include the elimination of dummy gates at the cell boundaries, resulting in tighter packing of cells.
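The "MTr/mm² equation" referenced above is Intel's (Mark Bohr's) weighted transistor-density metric: 60% weight on a 2-input NAND cell, 40% on a scan flip-flop cell. A minimal sketch of that calculation; the cell geometries in the example call are placeholder assumptions, not published GF or Intel figures:

```python
# Intel's proposed transistor-density metric:
#   density = 0.6 * (NAND2 transistor count / NAND2 cell area)
#           + 0.4 * (scan flip-flop transistor count / flip-flop cell area)
def transistor_density(nand2_tr, nand2_area_um2, ff_tr, ff_area_um2):
    """Weighted density in MTr/mm^2.

    Transistors per um^2 is numerically equal to MTr/mm^2
    (the two factors of 1e6 cancel out).
    """
    return 0.6 * nand2_tr / nand2_area_um2 + 0.4 * ff_tr / ff_area_um2

# Placeholder geometries just to show the shape of the calculation:
# a 4-transistor NAND2 and a ~20-transistor scan flip-flop.
print(f"{transistor_density(4, 0.04, 20, 0.3):.1f} MTr/mm^2")  # 86.7 MTr/mm^2
```

The 60/40 weighting is meant to reflect a typical mix of simple logic and sequential cells in a real design, which is why different cell-area trade-offs can move the final number even between processes with similar pitches.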
With all of this in mind, there is no clear winner here. Both technologies can certainly exchange punches. For chips that make use of large caches, GF can have a significant lead over Intel. Back in October, Canard PC Hardware made the bold claim that AMD's 7nm-based Zen 2 will feature 64 cores and a whopping 256 MiB of L3 cache (or 16 cores and 64 MiB of L3 per die if they still use quad-chiplets). For this kind of application AMD will have significantly denser chips. However, Intel's higher mixed-logic density, superior local interconnects, and higher-performance cells over GF's 7nm 6-track cells do have their own distinctive advantages. GF did not detail anything about their own high-performance cells, but we expect them to do very well, able to push IBM's next-generation z15 to at least 5 GHz so as not to regress in single-core performance.
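Using the HD bit-cell sizes from the table above, the raw array area of that rumored 256 MiB L3 can be roughed out. This counts bit cells only; real arrays add sense amps, decoders, redundancy, and ECC, so treat it as a lower bound:

```python
# Lower-bound die area for an L3 cache built from the reported
# high-density 6T SRAM bit cells (bit-cell area only).
BITCELL_UM2 = {"GF 7nm": 0.0269, "Intel 10nm": 0.0312, "TSMC 7nm": 0.027}

def cache_area_mm2(capacity_mib, bitcell_um2):
    bits = capacity_mib * 1024 * 1024 * 8
    return bits * bitcell_um2 / 1e6   # um^2 -> mm^2

for fab, cell in BITCELL_UM2.items():
    print(f"{fab}: 256 MiB -> {cache_area_mm2(256, cell):.1f} mm^2 of bit cells")
```

This works out to roughly 58 mm² of bit cells on GF 7nm vs. about 67 mm² on Intel 10nm, which is why the article calls out large-cache chips as the case where GF's denser SRAM pays off.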
Final Thoughts
A while back, when GlobalFoundries initially announced they had developed a 14nm process, they ended up really struggling with their first-generation FinFET. In fact, things went so badly that they eventually gave up on their own process, ending up licensing Samsung's 14nm process. What we are seeing today is a very different company with a very impressive leading-edge process they can offer to customers. It's worth noting that a good chunk of the credit should be attributed to IBM, whose patent portfolio and expertise, acquired by GF back in 2015, helped make this process a reality.
As impressive as the process is on paper, the real test is when they ramp-up production and start shipping 7nm chips in volume.
Update: We originally incorrectly reported that CPC Hardware stated Zen 2 will feature 128 MiB of L3 cache instead of 256 MiB. This has since been corrected.
https://fuse.wikichip.org/news/641/iedm-2017-globalfoundries-7nm-process-cobalt-euv/5/
Also, this:
"GlobalFoundries presented a very impressive 7nm process aimed at mobile and SoC as well as high performance through two sets of standard cells and two metallization stacks. The process features a 2.8x routed logic density over their 14nm with 40% higher performance (or equivalently, 50% lower power). Similarly they reported over 2x in density increase for their SRAM with complementary 2x increase in performance.
This process features a very aggressive fin pitch of 30nm which uses quad-patterning with a gate pitch of 56 nanometers which uses SADP. However, in order to maintain higher flexibility for their customers, GlobalFoundries restricted their BEOL to double patterning and a more relaxed metal pitch of 40nm. GlobalFoundries introduced for the first time cobalt into their process, but only for the liner and caps. Finally, GF introduced their (IBM’s) 2nd generation Multi-WF process which has been extended to cover the entire Vt range. This was done through the use of eight work-function materials covering four different threshold voltages." -
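A first-order way to compare logic density across processes is the product of contacted gate pitch (CPP) and minimum metal pitch (MMP), since standard-cell area scales roughly with CPP × MMP × track height. Plugging in GF's pitches from the quote (56 nm gate, 40 nm metal) against Intel 10nm's published 54 nm / 36 nm, this rough model lands close to the ~15% density gap discussed earlier:

```python
# First-order density proxy: cell area ~ CPP * MMP (same track count assumed).
# Pitches in nm. GF 7nm pitches are from the quoted article; Intel 10nm's
# 54 nm gate pitch and 36 nm minimum metal pitch are Intel's published figures.
def relative_density(cpp_a, mmp_a, cpp_b, mmp_b):
    """Density of process A relative to process B (>1 means A is denser)."""
    return (cpp_b * mmp_b) / (cpp_a * mmp_a)

gf_vs_intel = relative_density(56, 40, 54, 36)
print(f"GF 7nm vs Intel 10nm: {gf_vs_intel:.3f}")  # ~0.868 -> GF ~13% less dense
```

The ~13% gap from pitches alone sits near the ~15% the article derived from the MTr/mm² metric; the remainder comes from factors this toy model ignores, such as track height and Intel's dummy-gate elimination.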
Intel is now 2 years late with 10nm, 3 by the time production happens if it does at all in 2019.
Maybe Intel will just skip 10nm altogether and ask to borrow a cup of 7nm production from AMD? -
BTW, I have been dumping on Intel, but here is where they innovated, even if they are ****ting the bed on getting 10nm out: https://fuse.wikichip.org/news/525/...els-10nm-switching-to-cobalt-interconnects/4/
This is impressive and should properly be compared to what I just discussed with AMD's process at 7nm.
Edit: So if you look at my last post, you see two bars, one on each side. Those are the dummy gates. Here, it shows Intel was able to remove one of them so that adjacent cells share a gate, which increases density. This is why Intel targeted 2.7x density on this shrink. If you look above, GF's HP density increase is around 2.8x. The standard cell gains less, but it is a huge jump in performance (not expected on Intel's 10nm). This is why, with density approaching the same level between the competitors, the jump in power and energy efficiency means AMD should get a fair uplift in processing power, adding to the claims that 7nm Zen will further close the gap between Intel and AMD until there is little light between them. And that is before discussing uarch changes that will take further low-hanging fruit which didn't seem taken in the Ryzen 2000 series.
Edit 2: If anyone cares about the 5nm node and the GAA being made by IBM, here are 2 patents from this month that are directly applicable to that process and new potential replacement to finFET:
http://www.freshpatents.com/-dt20180405ptan20180097118.php
http://www.freshpatents.com/-dt20180419ptan20180108787.php
AMD Ryzen 5 2600 vs. 2600X - Is the X worth it?
Hardware Unboxed
Published on Apr 28, 2018
-
AMD’s Upcoming Ryzen 2000 and Ryzen Threadripper 2000 Series CPUs Spotted – Threadripper 2950X Confirmed For TR4 Socket
AMD Ryzen 2000 Upcoming Desktop Processors:
It looks like the AMD Ryzen 5 2500X will be an entry-level quad-core chip in the Ryzen 5 family while the Ryzen 3 2300X is expected to remain a quad-core with four threads. The Ryzen 3 2100 naming scheme reveals that this could be a dual-core chip with four threads although we can’t say for sure. Based on the codenames, the X series chips will feature a 65W TDP while the Ryzen 3 2100 can feature a lower TDP than that.
AMD Ryzen Threadripper 2000 Upcoming High-End Desktop Processors:
AMD is also going to launch a new generation of Ryzen Threadripper 2000 series processors based on their 12nm Zen+ core design. -
Ryzen 2700x 4.4Ghz EXTREME overclocking! (2000 pts in Cinebench?)
Published on Apr 28, 2018
Pushing the AMD Ryzen 7 2700X to the limits using a custom loop, an ice bucket and a whole lot of patience. Is 4.4 GHz possible without LN2? Can I break the 2000 point mark in Cinebench R15? Watch the video already and find out.
Exponential Ryzen Voltage-Frequency Curve (Overclocking)
Gamers Nexus
Published on Apr 24, 2018
We demonstrate AMD Ryzen 2's exponential volt-frequency curve (normal for Intel, too) & overclocking vs. safe voltages on an R7 2700X vs. 1700.
Article: https://www.gamersnexus.net/guides/32...
Der8auer's video: https://www.youtube.com/watch?v=ogYes...
This testing benchmarks the AMD Ryzen 7 2700X vs. Ryzen 7 1700 for volt-frequency performance and overclocking, mostly plotting a V-F curve to help illustrate the exponential trend of voltage required to maintain a given clock. As expected, the R7 2700X is a better overclocker than the R7 1700 -- not news -- and also maintains its higher clocks at a lower relative voltage than the R7 1700. That said, it also hits a wall at some point, and that's the point at which sub-ambient cooling solutions would be required to push further in our AMD Ryzen CPU benchmarks. This partly looks at Ryzen 2 thermal performance and power consumption of the 2700X, but only in the context of our V-F testing. The article explains the test methodology used for our R7 2700X vs. R7 1700 frequency testing, and also talks a bit about how "safe" voltages are less important than just maintaining safe temperatures for Ryzen 2. Besides, if you can't keep temperature low, it'll be hard to hit an unsafe temperature. -
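The exponential V-F trend GN describes can be sketched with a log-linear least-squares fit, modeling V ≈ a·exp(b·f). The sample points below are made-up illustrative numbers with that shape, not GN's measured data:

```python
import math

# Hypothetical (frequency GHz, voltage V) points with an exponential trend;
# NOT Gamers Nexus' measured data.
points = [(3.8, 1.10), (4.0, 1.16), (4.2, 1.26), (4.3, 1.33), (4.4, 1.42)]

def fit_exponential(pts):
    """Fit V = a * exp(b * f) by least squares on ln(V) = ln(a) + b*f."""
    n = len(pts)
    xs = [f for f, _ in pts]
    ys = [math.log(v) for _, v in pts]
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

a, b = fit_exponential(points)
# With b > 0, each extra 100 MHz multiplies the required voltage by exp(0.1*b),
# which is why the last few hundred MHz cost disproportionately more voltage.
print(f"V(f) ~ {a:.4f} * exp({b:.3f} * f)")
```

This is the same reason the 2700X "hits a wall": past the knee of the curve, the voltage needed for another bin grows faster than ambient cooling can absorb the resulting heat.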
that "extreme oc" attempt is kinda hilarious when u consider that jayztwocents succeeded with breaking 2000 pts in cb15 "just like that" with an AIO by also overclocking the RAM. that dude somehow just focuses on cpu clocks alone
It really is that easy once you've got it figured out: small differences, big differences - details no one else takes the time to notice.
It really is just the disciplined application of organized time, and knowing what matters by seeing if it matters - twiddle this, tune that, put'em together and see what you get.
There seems to be enough potential to push 2700X CPUs on a large AIO or closed loop to 4.4GHz without too much "danger" voltage.
As long as the 2700X CPU is stable, and it survives long enough at "danger voltage" to be replaced by the next generation AM4 release, let'er rip!
What an improvement vs. its predecessor.
https://hwbot.org/submission/3829867_remarc_cinebench___r15_ryzen_7_1700x_2019_cb
-
That Ryzen 1.0 1700x is the *only* air/AIO/h2o 4.3ghz 1700x since the release over a year ago with a CB15 2k score:
https://hwbot.org/benchmark/cineben...Id=processor_5394&cores=8#start=0#interval=20
Already in a week there are way more 2700x's scoring that high on air/aio/h2o:
https://hwbot.org/benchmark/cineben...Id=processor_5695&cores=8#start=0#interval=20
There are also far more higher clocking entries too. It's obvious what the difference is for Ryzen 2.0, they are clocking higher.
There are already more people able to break 2000 with the Ryzen 2.0 CPU's than Ryzen 1.0.
That's the difference.
-
-
Better binning will help, yes, but seeing how 12nm LP is still designed for low power and low frequencies, we can probably expect a minor increase.
If the 2700X has a base frequency of 3.7GHz... the 2800X might have a 3.9GHz base frequency (possibly 4?) and a similar boost to the 2700X.
The 2950X, on the other hand, might have a base frequency of 3.7GHz (consistent with a 300MHz bump). -
Correct, I do not expect much as far as a desired upgrade. I no longer expect much from 7nm either, beyond a worthwhile upgrade. TBH, even if they can just get all cores to 4.4 on 12nm, down the line I am hoping for all cores at 4.8 on 7nm. At 4.8 it would be 20% on the clocks, not including other enhancements to the original Zen. At that point it is a worthwhile upgrade to 7nm, but I think we were all hoping to see 5.2 GHz all-core or greater to finally put it to Intel.
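The percentages here are easy to check; note the 20% figure only works against an original-Zen all-core baseline around 4.0 GHz (an assumption in this sketch), not against the hoped-for 4.4 GHz on 12nm:

```python
# Clock uplift arithmetic for the all-core frequency targets in the post.
def uplift_pct(new_ghz, base_ghz):
    """Percentage clock increase of new_ghz over base_ghz."""
    return (new_ghz / base_ghz - 1) * 100

base = 4.0  # ASSUMED original Zen (14nm) all-core OC baseline
print(f"4.4 GHz (12nm) vs {base} GHz: {uplift_pct(4.4, base):.0f}%")  # 10%
print(f"4.8 GHz (7nm)  vs {base} GHz: {uplift_pct(4.8, base):.0f}%")  # 20%
print(f"5.2 GHz (7nm)  vs {base} GHz: {uplift_pct(5.2, base):.0f}%")  # 30%
```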
Edit: I am thinking a 4.8 GHz 16-core TR, but would not be too upset if it were instead a 4.8 GHz 24-core TR. -
-
That is, of course, if 7nm is indeed made for high-performing parts, in which case high frequencies at the same or slightly lower power consumption won't be a problem.
I don't see an issue with TR hitting 4.8GHz on 7nm (it would be consistent with a 40% increase against the 1950X)... that's of course for 16 cores and 32 threads.
As for higher core counts, well, if AMD releases the desktop and mobile parts first, followed by TR with better binning, then we 'might' see a 24-core TR at 4.8GHz (though I don't know).
We also don't know if the use of 12nm gave AMD other useful pointers.
AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.