The official news from AMD seems to indicate this.
5nm doesn't seem in the pipeline until 2021 and Zen 4.
Plus, I don't think AMD would be able to just switch Zen 3 to 5nm so quickly (and probably not without taking full advantage of it by, say, increasing the number of cores).
-
Apple went for 5nm, which is the reason AMD got enough 7nm chips to expand its portfolio. And 5nm needs to mature on mobile first. AMD won't get a foot in the door before Apple has got what it needs. AMD on 5nm won't happen this side of 2021. Maybe summer 2021 at the earliest, but most likely fall 2021.
-
Hate to be a bubble popper, but we will not see any high-end gaming machines. While AMD has come a long way, they still can't beat Intel on high-end gaming CPUs. Anyone looking to spend on a mobile RTX 2080 is not going to be satisfied with just a 4900H driving it. So we can all wish, but the writing is on the wall.
-
https://hothardware.com/reviews/amd-radeon-vega-7-performance-in-dell-g5-15-se
Hmmm... it'll be interesting to see how the RX 5600M stands up to, ehh... the RTX 2070? -
So I will address all of these together. Two items came up from Digitimes, only one of the two had to be translated.
https://pbs.twimg.com/media/EZGUpGvUwAAlyqA?format=png&name=4096x4096
This is the translated digitimes report that referenced AMD using 5nm+.
In other words, "DigiTimes heard from its industry sources that TSMC's enhanced 5nm process (N5P) will begin mass production in Q4 this year, ahead of schedule." https://hexus.net/tech/news/cpu/143050-amd-ryzen-4000-desktop-cpus-fabbed-tsmc-n5p/
"Of course, it's also been reported that TSMC dropped the orders from Huawei due to the US-China tensions. Therefore it's possible that TSMC may be looking for another customer to fill up the now-remaining 5nm production slots." https://www.tomshardware.com/uk/news/amd-ryzen-4000-cpus-tsmc-5nm-questionable
So, there are two parts to the rumor, and you all seem to attack the first part, so let's start there. You focus on older timelines for when N5P was set to go into volume production, which was Q2 of 2021. The rumor says that N5P was pushed up two quarters to Q4 2020. If true, that is jumping production ahead by a good amount, and yes, it comes after Apple's initial 5nm run.
As Papusan brought up, Apple went 5nm for this fall, leaving space open for AMD on 7nm. But AMD must produce 7nm for the Xbox Series X, PS5, RDNA2 and RDNA refresh. You also have Nvidia coming back begging for fab time for the 3000 series. And you have CDNA2 for servers. All of that comes out roughly from September to December according to rumors.
"Huawei also had TSMC’s 5nm capacity booked, but the renewed tensions between the US and China have reportedly brought an end to that deal. The manufacturer could be looking elsewhere to fill Huawei’s slots, though the chances of AMD securing the process for Zen 3 appear slim. It's worth noting that another rumor claims Nvidia is preparing a mystery 5nm product." https://www.techspot.com/news/85425-zen-3-based-ryzen-4000-desktop-chips-reportedly.html
So, after Q4, it's been rumored TSMC will be dropping Huawei, its SECOND LARGEST CUSTOMER AFTER APPLE. That leaves a vacuum. And they are the ones who had the 5nm fab time after Apple. So it would make sense to offer 5nm fab time at a discount, or for the same price as some 7nm already booked.
Now this would require a slight redesign, even though TSMC has tried to keep the generational nodes fairly compatible to help speed up the node shrink timelines. So, if AMD accepted, it would push Zen 3 into Q1 or Q2 of 2021 rather than fall of 2020.
I doubt customers would mind, as 5nm+ would have about 22% more performance than 7nm. Combine that with a 15% IPC gain and you would have a beast of a chip on your hands.
Wishful thinking, I agree, is not useful. But there are other elements of what is happening in the industry that could explain what is going on here. It also puts launching a Zen 2 refresh in July of 2020 into perspective: why would you refresh a line when the new chips will be out in 3 to 5 months (with most rumors saying October)? Do you really think it is just to try to eke out a technical win against Intel in certain tasks?
Edit:
https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020
Yields have been over 80% since December of 2019. It is not unreasonable that TSMC is pulling the 5nm process ahead by a quarter or two, as N5P was set for volume production around Q2 (probably late Q2) of 2021. If you count each letter in the year as a quarter, then that could clarify the timeline.
Edit 2: Also, Intel's 7nm is supposed to come out in late 2021, meaning chips on it in early 2022. This push puts AMD where it could use 5nm+ for Zen 3 throughout 2021, then push production to the mid or latter part of 2022 for Zen 4, potentially opening up access to 3nm instead of 5nm, which would put Intel in the hot seat. Even if Intel's 7nm works out, unlike 10nm, TSMC's 3nm is roughly equivalent to Intel's 5nm, which will not be ready for years, giving AMD the process node advantage again and again.
And AMD helped create a 5nm advanced node with TSMC. https://www.pcgamer.com/amd-zen-4-specific-5nm-enhanced-node/ . That was reported a month and a half ago. So AMD would be using the enhanced node they helped make for 5nm production in Q4, which would mean a release pushed to Q1 or Q2 of 2021. Remember: a 15-month cadence, give or take a month, until now. This blows up that cadence, but only to jump ahead on node and be better positioned against Intel's 7nm. -
What they are talking about is size dependence and logic versus SRAM yields. Even in the AnandTech article you mention, they put a Zen-type die at an estimated 41% yield due to defect density. But that was before the node reached high-volume manufacturing in April of 2020. Apple will be the first large customer, with a July/August manufacturing time frame running through the third quarter, for the most part.
Instead, look at the 5nm enhanced node AMD developed with TSMC, announced in April of 2020 (see the second edit above). It is specific to them, and could be the node being referenced: a customized variant of N5P. "TSMC is said to have developed a 5nm enhanced version of its process specifically for AMD, which has a capacity requirement of no less than 20,000 12-inch wafers per month." "One of those translation sources didn't seem to suggest that it was necessarily an AMD-specific node, so there does need to be a certain amount of sodium ingested at this point."
So, the news of that says AMD has been working with the N5P process for a while now and is very familiar with it. And even though PCGamer suggests Apple is taking up the extra fab time from Huawei, Apple still only needs so much, and the article instead focused on Apple and AMD fighting over wafer supply, which suggests the same 300mm wafers are used for both 7nm and 5nm. So AMD could switch which node the wafers are used on.
And yes, with roughly 1.87x density, there is about a 45% reduction in die size, which results in almost double the dies per wafer. So with a high enough yield rate a year after the reports of over 80% for SRAM and 41% for a Zen-style chiplet, you could easily be looking at higher effective yields after switching to 5nm.
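To put rough numbers on that, here is a quick Python sketch. All the inputs (the 1.87x density, the ~74 mm^2 chiplet size, and both defect densities) are assumptions pulled from the reports discussed above, and the yield model is the generic Poisson approximation, not TSMC data:

```python
import math

# Back-of-envelope numbers only. The 1.87x density figure, the ~74 mm^2
# Zen 2 chiplet size, and both defect densities below are assumptions
# taken from the reports discussed in this thread, not confirmed data.

WAFER_DIAMETER_MM = 300

def dies_per_wafer(die_area_mm2):
    """Standard dies-per-wafer approximation with an edge-loss correction."""
    radius = WAFER_DIAMETER_MM / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: Y = exp(-die_area * defect_density)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

n7_area = 74.0            # rough Zen 2 chiplet size on N7, mm^2
n5_area = n7_area / 1.87  # same design shrunk by the claimed density gain

for label, area, d0 in (("N7", n7_area, 0.09), ("N5", n5_area, 1.2)):
    good = dies_per_wafer(area) * poisson_yield(area, d0)
    print(f"{label}: {area:5.1f} mm^2 -> {dies_per_wafer(area)} dies/wafer, "
          f"{poisson_yield(area, d0):.0%} yield, ~{good:.0f} good dies")
```

Even with the much higher early defect density assumed for N5, the smaller die roughly cancels it out, which is the point about effective yields.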
Further, HiSilicon (Huawei) produced in Q4 into Q1, after Apple typically fabs its chips for production runs (not to say they do not do later runs if necessary). So this gets into the timeline of when other ARM manufacturers would start trying to get fab time.
And considering 7nm to 5nm is 15% performance, and they estimate 5nm+ will add 7% like 7nm to 7nm+, you could get about 22% more performance from the node shrink, separate from the 15% IPC increase. That is a potential 31%+ performance increase for Zen 3 over Zen 2, blowing out any projections for Willow Cove backported to 14nm.
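To be clear on the math (rumored figures only, nothing confirmed): percentage gains compose multiplicatively, not additively, so the stacked number actually comes out a bit higher than adding 22% and 15% would suggest:

```python
# All three figures are the rumored numbers from this thread, not confirmed
# specs: 15% for N7 -> N5, 7% more for the "+" variant, 15% IPC for Zen 3.
# The point: multiplicative gains compose by multiplying, not adding.

node_base = 1.15   # rumored 7nm -> 5nm performance gain
node_plus = 1.07   # rumored additional gain for 5nm+ (like N7 -> N7+)
ipc       = 1.15   # rumored Zen 3 IPC uplift

node_total = node_base * node_plus   # ~1.23, i.e. the ~22% quoted above
combined   = node_total * ipc        # ~1.42 if the gains fully stack

print(f"node shrink: +{node_total - 1:.0%}")
print(f"node + IPC:  +{combined - 1:.0%}")
```

Whether clock and IPC gains fully stack like this in real workloads is its own assumption, of course.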
We have to remember, Rocket Lake is looking at a potential 20%+ IPC. If Intel is able to keep its high clock speeds, not even Zen 3 with 15% IPC and latency gains would likely look as good. If you could push a product back 1-2 quarters but increase performance to where it would be competitive with or beat Intel's upcoming offerings, would you? Especially when that means pushing back your Zen 4 offerings, which could then use a newer node (3nm) just as Intel starts 7nm production... -
Even if 5nm starts on day 1 of Q4 2020, don't expect to see it in market until at least Q1 2021. Probably much, much later than that, as well.
-
So 5nm started volume production last month. Apple will use it starting around July or August to build inventory for a Sept. or Oct. release. They usually take up production on a node through October, IIRC.
The 5nm enhanced, which may or may not be 5nm+, was originally scheduled for next spring. TSMC also pulled in 7nm+ or an enhanced 7nm, so this is not without precedent.
So the question is only whether AMD could use the enhanced 5nm, which was also announced last month as a joint effort between AMD and TSMC.
I wholly agree if they start production in Q4, the absolute earliest the chips would come out is Q1 2021, possibly Q2. So think CES announcement and March or April release.
Now, tape-out to production takes around 9 months. Let's say that because it is using the enhanced 5nm AMD co-developed with TSMC, they have already worked out a couple of bugs. If the designs are similar enough between nodes and AMD started last month or early this month, then shaving off 1 month would put that at December.
That would mean they need all of December to March to finalize and build inventory. So if true, March or April at the earliest.
Sent from my SM-G975U1 using Tapatalk -
I think we're looking at a minimum of 15% IPC increase for Zen 3... though it was stated to be BETWEEN 15-17%.
And now it looks as if it may even be faster than that (20%) according to this:
https://hothardware.com/news/amd-ryzen-4000-zen-3-desktop-cpus-20-ipc-lift
Bearing in mind that's just another rumour, but AMD has a decent track record of understating the performance of their CPUs. They seem to have decided to officially provide conservative values (which leave them in better standing in case the final product doesn't deliver any additional performance gains).
As for Intel Rocket Lake achieving 20%... yeah, it would be interesting to see if that pans out, but regardless, AMD ALREADY has an IPC advantage over Intel with Zen 2.
Even if Rocket Lake manages 20%, it would at best achieve IPC parity with Zen 3 on the enhanced 7nm node (assuming Zen 3 doesn't get a 20% IPC uplift but stays at 15%)... and keep in mind that Rocket Lake will still be on 14nm, with only the iGP on 10nm (which means that even if Intel pushes clocks high like they do right now, it will end up sucking down a LOT of power for a minimal advantage, or none at all).
The main things giving Intel an advantage right now in games (however minor) are higher clocks and lower latency - and the higher clocks are hammering Intel because they drastically increase power consumption.
If AMD removes the latency factor from the equation, the higher clocks on Rocket Lake won't necessarily save it... it will just lead to more power draw (though it also depends on how Rocket Lake handles power in general and whether it runs away from its TDP like current incarnations do - AMD does it too, but to a much lesser extent).
Intel already priced its 10th gen excessively high, and still gets its rear end whooped in productivity by Zen 2 at much lower power draw.
Do we have any estimates on Rocket Lake prices?
It's at least possible they will be priced similarly to 10th gen. -
It seems that the 5600M (at least according to current reviews) is happily sitting between the RTX 2060 and RTX 2070; however, in 'some' games it's only up to RTX 2060 performance (above the 1660 Ti).
It will be interesting to see how driver optimizations may improve this, or if repasting the GPU and CPU may improve performance of both the CPU and GPU. -
There are just too many scenarios that will affect CPU or system performance due to different usage and usage environments.
Hence, if I were a maker, I would push out conservatively average performance results as well... leaving surprises for buyers, owners and users to discover what they can do in their own conditions. "Surprise!" -
One other thing that can affect CPU (and GPU) performance is developers using Intel/NV as a platform of choice (due to their dominance in the market).
It took a while for devs to come around and start releasing Zen optimizations for games alone (and not everyone did)... but even Adobe has now started to incorporate greater use of GPU acceleration for Radeon (on the CPU side - mainly multithreading - they still seem to be lacking somewhat).
It's not a level playing field yet, though. -
I see you post rumors about Ryzen vs. Intel.
Here are a few more... the flip side of the coin.
Rumor: Intel’s Next-Gen 10nm Willow Cove Cores Bring 25% & Golden Cove Cores Bring 50% IPC Increase Over Skylake CPUs – 7nm Ocean Cove With 80% IPC Uplift Over Skylake
We've seen the last of Intel's 14nm Skylake architecture with the release of Comet Lake CPUs with the chip giant now focusing on next-gen cores to be utilized in its upcoming desktop and mobility families. Skylake lasted several generations and got various iterations on refined 14nm processes but the next-generation 10nm and 7nm lineups are going to bring incremental IPC gains. Rumors on what to expect from Intel's next-generation CPU cores have been posted by MebiuW who just a few days ago gave us a first look at AMD's next-generation Warhol & Raphael desktop processors.
-
Now, I can believe Sunny Cove is 18% (known), and Intel regularly does 6-7% IPC when not focusing on single thread, hence 25% for Willow Cove, which I reduced to 20% for Rocket Lake versus Tiger Lake due to inefficiencies and features cut while backporting the design. You then estimate around 20% for each generation after and get the figures in that rumor. It may be true, but it looks more like educated guesswork.
I also do not believe the Warhol rumor. It's based on the assumption of AMD milking a lead that doesn't exist. It makes no sense.
I believe AMD will do one of two things: 1) release Zen 3 around October or November and release Zen 4 in Q1-Q2 of 2022, or 2) push Zen 3 to the enhanced 5nm node, releasing in March to May of 2021, and use that to push the Zen 4 release to summer of 2022, either beating Apple to be first on a volume node or doing another 5nm chip, depending on whether Intel reaches 7nm in the first or second quarter. But that depends on their birdies telling them where Intel is in that process node development and whether it has slipped from the second half of 2021.
But that is my take.
-
Isn't Zen 3 around Oct/Nov a bit early? We are now in June and the refresh is still not out.
AMD Ryzen 9 3900XT, Ryzen 7 3800XT, Ryzen 5 3600XT Matisse Refresh Zen 2 CPUs Listed Online
https://wccftech.com/amd-ryzen-9-3900xt-ryzen-7-3800xt-ryzen-5-3600xt-cpu-online-listing/
"It will also be interesting to see how well the new Matisse Refresh CPUs handle their boost clocks considering AMD had to release several fixes for the original lineup for them to touch advertised clock speeds"
I wonder if we will see AMD finally eat up the last piece of the available overclocking headroom, or if it will remain on top of the raised clocks. -
July is the refresh. And, if on 7nm, Zen 3 is in the October time frame. I think the refresh means they pushed Zen 3 back, but this is why the rumors spark so much argument. Zen 3 has already seen an A0 stepping, meaning it has been taped out and refined a bit. This supports the October release theory. But that means the refresh, which is not even a full line refresh, just 3 CPUs, is really close in time.
-
I still think it's unlikely that Zen 3 will be on 5nm.
It's too soon, and switching fabs at the drop of a hat, so close to the original Zen 3 release schedule (which was announced for later this year, not next year), is highly unlikely.
AMD mentioned nothing about changing Zen 3 release schedule as of yet...
Unless the Zen 2 refresh is meant to be used as a 'placeholder' until Zen 3 is released on 5nm in Q1 of 2021 (in which case waiting a year for Zen 4 makes sense). But as it is, I think we are more likely to see Zen 3 on enhanced 7nm late this year, with Zen 4 and 5nm coming in late 2022.
https://www.extremetech.com/computi...-5nm-for-zen-3-despite-rumors-to-the-contrary -
I call bull.
First, AMD never gave an official release date and cannot be held to delays for having said 'Zen 3 by end of the year' at CES.
Second, if changing process node, you are not delayed, so you would say nothing!
Third, even if you want to call it a delay, they never publicly gave a time for release, so there is nothing public to call a delay.
Fourth, there are rumors of 7nm EUV not being ready by fourth quarter anyways, which would mean a delay as is. That makes switching to 5nm enhanced more likely.
Fifth, two architectures listed as Zen 3 on a 7nm process, if both are 7nm EUV, makes even less sense two years in a row. This in light of Intel having Willow Cove in Rocket Lake at 20%+ IPC, then the 7nm design for early 2022 having nearly 50% IPC over Skylake. And you are saying AMD is going to do a whole-year Zen 3 refresh? Give me a break.
-
You seem to be making some wild claims without necessarily backing them up.
https://www.pcgamer.com/uk/amd-ryzen-4000-release-date-specs-performance/
"AMD has confirmed that both the new Zen 3 CPUs and RDNA 2 graphics cards are on track for release later this year. When exactly? Our best lead so far is the rescheduled Computex tech show in Taipei, which has now been set for September 28-30 this year."
Yes, you are accurate that AMD never specified a release date, but they DID mention they are on track with releasing both RDNA 2 and Zen 3 later THIS YEAR (which makes it that much more unlikely they will be using 5nm this soon).
By 'release schedule' I was referring to 'later this year' in general not a specific date (which was never stated).
Furthermore, 5nm is a flat-out slap-in-the-face rumor that came out of nowhere and contradicts virtually every previous statement from AMD about 7nm being used for Zen 3 (even AMD's own roadmaps still show 7nm for Zen 3).
Are the roadmaps subject to change? Of course, but as it was stated in the extremetech article I posted above, it seems unlikely 5nm would be used so early on and that it takes quite a bit of time to 'transfer' CPU designs from one node to another.
The EUV extension was later dropped by AMD from being used in 7nm and subsequently Zen 3. AMD confirmed they would be using 'enhanced 7nm' instead (but not EUV).
https://www.anandtech.com/show/1558...7nm-7nm-for-future-products-euv-not-specified
I never said AMD will be doing a Zen 3 refresh for a whole year (why would they?).
If the existing projections hold and AMD continues to release products year after year, they will release Zen 3 later this year, with follow-up CPUs (such as mobile versions and desktop APUs) coming shortly after (unless they manage to greatly accelerate those releases compared to Zen 2).
Zen 4 would then by those estimates be released in late 2021 (a year after Zen 3 first release). -
Bull. Why? Because under your version, AMD is doing Warhol, which makes no sense. And with their current 15-month cadence, a release in Q4 this year means a Zen 4 release in Q1 2022 at minimum. And that is subject to Apple and other 5nm orders, likely on the plus variant next year.
Now, that is if Warhol is involved: AMD does two 7nm Zen 3 designs year on year, right as Intel comes hitting back. And with a 14-16 month timeline, welcome to 2023.
Further, when AMD said that, and made those roadmaps, it was before the TSMC/Huawei issues, possibly before fully working out the enhanced variant, before knowing how serious the EUV issues are, etc. Hence even the need for the refresh arose afterward, and all the rumors came after your sources, under conditions that would prompt changes.
You do realize companies can change timelines whenever, right? You say it, but you may not have internalized it.
Either way, we'll see. But DigiTimes is pretty accurate with its TSMC rumors, and only a fool doesn't consider them.
-
This is not 'my crap'.
You are dismissing information released by AMD to date based on a leaked RUMOR (which means it shouldn't be taken seriously until we have more concrete data to back it up, which we don't). It's fun to speculate, yes, but we cannot verify this (also, dismissing my claims even when I backed them up with previous data released by AMD themselves seems rather presumptuous).
Yes, roadmaps can change... but STILL, AMD remains quiet on the 'Zen 3 on 5nm' rumor (wouldn't we have heard something about this sooner than now?).
Also, who said anything about having two 7nm Zen 3 designs? I never mentioned that. You did.
If AMD sticks to releasing Zen 3 on enhanced 7nm later this year (as all to-date released data indicates), then AMD will only have those products based on Zen 3 and enhanced 7nm to keep them going for a year (first they will release stuff for data centers, then consumer stuff, followed by mobile - pretty much same as Zen 2).
No 'refreshes' of Zen 3 again on 7nm (unless they decide to make 'XT' versions like they are now doing with Zen 2, although I fail to see the point of XTs this close to the Zen 3 release; like I said before, this refresh 'might' be used as a placeholder to move Zen 3 to 5nm and release that in Q1 next year, with Zen 4 pushed a year later). But AMD still hasn't said anything about changing the release of Zen 3 on enhanced 7nm later this year - yet.
Speaking of leaks... here's something that may better explain 5nm:
AMD’s upcoming Ryzen C7 mobile chipset gets leaked
https://www.gizmochina.com/2020/06/02/amd-ryzen-c7-soc-leak/
Again, this is a leak and a rumor (and the article explains the misspells indicate fakery). -
AMD should consider redesigning/reworking the CPU it used in tablets some years ago to boost its capabilities to Ryzen status.
After looking at this list...
https://www.notebookcheck.net/Mobil...e=1&tdp=1&mhz=1&turbo_mhz=1&cores=1&threads=1
...it's kind of pathetic to find that AMD made only 2 products in the compact mobile computing world, while Intel churns out a big number of those CPUs, which managed to capture hearts and occupy a solid position among small mobile computing users.
From there, AMD will perhaps have much better chance to perfect their smaller than 7nm design? -
Intel has Atom processors, but they are x86. AMD instead would likely go after ARM designs. ARM would not help there, but it would be a good addition to the portfolio.
The C7 rumor hints at that. It would be an 8-core ARM design with three types of cores in it. Getting familiar with big.LITTLE would also help with the Alder Lake competition.
I do question reports trying to say Alder Lake can beat a 3950X, as that is ludicrous. 8 Atom and 8 mainstream cores are NOT as powerful as 16 full-power Zen 2 cores.
-
The leak I posted about C7 may be addressing this (also as @ajc9988 noted).
You have to bear in mind that AMD remains a much smaller company than Intel, and that just before Zen 2 they were heavily tied to GlobalFoundries. AMD's primary focus lies in data centers (where the big money is), desktops, laptops and consoles.
They may not have had the resources to downscale their designs for tablets and smartphones... or just didn't have time to 'expand' into this particular mobile segment.
An all-AMD chip for tablets and smartphones would be sweet, though, but I guess they had to wait until gaining back market share and re-establishing themselves in the field.
Also, it wasn't until 2016 that AMD was in talks with Samsung about using its GPUs in smartphones.
The most current leak suggests expansion to include CPU's as well.
Will be fun to see whether these leaks turn out to be accurate at all. -
That was my point. If you ignore all other rumors except C7, and add in that AMD at one point wanted to create a socket that would support both ARM and x86 (meaning they did prior ARM development, so doing so now, along with the Samsung JV, makes sense), then Zen 3 in October, an ARM chip in Q4 to Q1 next year, and Zen 4 in Q4 2021 to Q1 2022 makes sense.
Too bad it is likely fake.
-
A little bit of bad news:
AMD confirms SmartShift tech only shipping in one laptop for 2020
https://www.anandtech.com/show/1583...ift-tech-only-shipping-in-one-laptop-for-2020
This implies that we will NOT be seeing any more all-AMD laptops for the rest of 2020... at least none that use both Renoir and Navi.
This is ridiculous.
100 new laptop designs and 99 of them have seemingly decided not to use AMD dGPU's?
Idiots (if it turns out accurate). -
So either MSI has no interest in employing such power management in their Bravos, or SmartShift is not compatible with the 5500M + 4000-series CPUs?
-
AMD's SmartShift tech is a no-go for 99 of 100 laptops. And no high-end Nvidia graphics card in the company of AMD chips... This is confirmed by Frank Azor. Of course Dell added in the new tech; Azor jumped ship from Dell last year. Payback time, or better said, the middleman helping out his old company.

https://twitter.com/AzorFrank/status/1268657743482757122?s=20 -
It's for the best in my opinion. AMD hopefully will be doing better with big Navi and Navi incorporated into APUs. Until then...
Now I do find it interesting they are not pairing the highest end Nvidia cards with the AMD CPUs considering benchmarks showing how well the CPUs perform.
As a side note, did you see Intel's CEO talk about ignoring benchmarks?
-
Why is it for the best?
But this could mean we won't see any more all-AMD laptops this year (which means NO ONE will be using the 5600M or 5700M).
NV and Intel get to push all their experimental features out the door the moment they announce them... and yet with AMD it's always 'we need to wait and test and gimp it to see if it pans out' (well, of course, if you gimp it, very few will buy it). -
Which GPU is better, the 2070 or the 5700m? Which would be better if they made it, a 2080 or a 5700m?
Although AMD is making headway, they are not there yet in the GPU space, in my opinion. They have the CPU side. And with Intel not dropping Rocket Lake this year, AMD will have Zen 3 unanswered for 3+ months. And their new APUs will be nice (although rumors are saying Vega AGAIN, undercutting their value). But that is CPU. On the GPU side, coming out with cards from the 5700 XT level up through an estimated 10% below the 3080 Ti, AMD will be back and competitive, assuming the right price.
It seems Nvidia will be releasing around August and AMD in September (trying to get those pre-orders, but the lineup will likely take 2 months to roll out, like the 20xx series). So we'll find out in months.
But until AMD has better drivers and better performance...
-
By that analogy, the desktop Navi GPUs would never have sold and would be utterly irrelevant (which is not the case).
The mobile variants of the 5600 and 5700 have identical specs to their desktop counterparts except for reduced core and VRAM frequencies (not by a huge amount, though), which should technically allow the 5600M and 5700M to perform about 10% behind their desktop counterparts.
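A quick sanity check on that ~10% figure (the clocks below are my own illustrative assumptions, not confirmed mobile specs): if the silicon is identical, performance scales at most linearly with clock speed:

```python
# Back-of-envelope check of the "~10% behind desktop" estimate.
# Clock figures are assumptions for illustration, not confirmed specs:
# identical silicon means performance scales at most linearly with clock.

desktop_game_clock = 1.75   # GHz, roughly a desktop RX 5700 game clock
mobile_game_clock  = 1.60   # GHz, assumed mobile clock for illustration

deficit = 1 - mobile_game_clock / desktop_game_clock
print(f"expected performance deficit: about {deficit:.0%}")
```

With those assumed clocks the deficit lands in the high single digits, consistent with "about 10% behind" if cooling holds up.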
Also, plenty of people want an all AMD laptop (especially because they can get equivalent performance to nv GPUs like 2060 and 2070 for a lower price).
If we see no other laptops with AMD dgpus, it's an indication the market could be manipulated not to include them (and even if you prefer NV GPUs, wouldn't you want competitive options on the market?).
Also, why are people still moaning about the AMD drivers?
They're not a problem anymore (you'd also need to include NV driver problems, but yeah, let's rather not talk about that under-the-radar issue). -
win32asmguy Moderator
Here in a month we should have numbers from the 5700M in the Alienware Area-51m R2. I would not be surprised if it were on par with the 2070 Refresh while being priced at the 2060's level. -
Actually, according to early announcements of the MSI Bravo 17, it seems that SmartShift IS enabled in that unit.
It seems to be a feature baked into Renoir at a hardware level, which activates when an AMD dGPU is recognized.
Well, specs-wise, the 5700M is identical to its desktop counterpart except for the core/VRAM frequencies (which are somewhat lower). Given that the drop in frequency is not too high, the mobile chip should perform about 10% below the desktop chip (if it's given adequate cooling).
Although it's interesting DELL decided to put the 5700M into the Alienware laptop.
Strange they didn't opt to use the 4800H/4900H as well, or at least the far more efficient AMD desktop CPUs such as the 3700X, 3900X and 3950X (I mean, they DID decide to go all the way up to the 10900K, which is ridiculously inefficient and draws twice as much power... whereas the 3900X and 3950X don't draw over 145W at max).
Just saying that if DELL expects to manage the thermals somewhat for the 10900K, they would have a FAR easier time doing that with AMD's 105W TDP CPUs, which stay a lot closer to their official TDP. -
Alienware is a gaming notebook. AMD Ryzen desktop chips are still behind Intel in gaming.
-
Yes, but you have to be looking to spend north, and sometimes well north, of $6,000 USD on building a dedicated gaming system for just a slight overall advantage.
-
'Still behind Intel in gaming'.
Forgive me, but I find that argument questionable.
Intel doesn't have a massive edge over AMD in gaming. Also, with the 10900K's power requirements, you can rest assured that ANY potential in-game advantage Intel may have had won't be noticed on the Alienware: the unit probably won't be able to supply the CPU with enough power to reproduce its desktop performance, nor dissipate nearly enough heat. In fact, the CPU will probably be forced to run at its BASE clocks to meet the TDP requirements, mainly because of the chassis size and the thermal constraints that come with it; any noticeable boost in frequency would blow past that 125W TDP as if it were nothing (unless DELL uses composite carbon metamaterials in the cooling assembly that can dissipate at least 125W, which I'm pretty sure they don't).
Don't get me wrong, it's not as if OEMs cannot design viable cooling assemblies for 17" units that dissipate a LOT of heat while keeping the system cool and quiet (Acer demonstrated that with the Helios 500 2700/V56, though even they limited it to a 65W TDP desktop CPU, and a 105W AMD CPU might be a bit much for the cooling assembly to handle; it's uncertain, because the max temperatures on the 2700 don't go above 73 degrees Celsius under maximum load, and the GPU sits at barely 65 degrees C when maxed out).
AMD's 105W TDP CPUs (like the 3900X and 3950X) would be far easier to cool in an Alienware because they don't exceed 145W under full load (which is 40W higher than their TDP). Intel's higher-end CPUs require much more than that (and are WORSE in productivity).
Furthermore, AMD by default still kicks Intel in the rear end in productivity (and I think we can say with relative certainty that people willing to splurge THAT much cash on a laptop probably won't be using it ONLY for games).
The 10900K needs to be clocked to the EXTREME and pull AT LEAST twice the power just to match the 3900X in multicore tasks, which is ridiculous... and the 10900K costs more.
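To illustrate why boosting past base clocks wrecks a power budget (every number here is made up for illustration; this is not measured 10900K data): dynamic power scales roughly with f * V^2, and voltage has to climb with frequency:

```python
# Illustration of why pushing clocks hammers power draw: dynamic power
# scales roughly with f * V^2, and voltage must rise with frequency.
# Every number below is made up purely for illustration; this is not
# measured 10900K data.

def dynamic_power(base_power_w, base_f_ghz, f_ghz,
                  base_v=1.0, volts_per_ghz=0.15):
    """Scale power by (f/f0) * (V/V0)^2 with a crude linear V(f) curve."""
    v = base_v + volts_per_ghz * (f_ghz - base_f_ghz)
    return base_power_w * (f_ghz / base_f_ghz) * (v / base_v) ** 2

# Suppose a chip sits exactly at a 125 W budget at 3.7 GHz / 1.0 V:
for f in (3.7, 4.3, 4.9):
    print(f"{f:.1f} GHz -> about {dynamic_power(125, 3.7, f):.0f} W")
```

Even with these gentle assumptions, a ~1 GHz boost lands far beyond the 125 W budget, which is the point about laptop chassis not keeping up.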
My Acer Helios 500 with 2700/V56 was also advertised as a gaming machine, and yet, I use it more for productivity than for gaming (which I manage to 'somehow' max out on my V56 - and I'm not even using high frequency RAM).
Also, the overall cost of the system would go DOWN if they were to use AMD hw instead.
I'm sorry, but, to me, the 10900K simply doesn't work in a laptop unless it's severely castrated (which kind of defeats the purpose of having it in such a unit to begin with, because it won't perform anywhere near its advertised potential, yet costs the same as the desktop version). -
tilleroftheearth Wisdom listens quietly...
-
No, but I have now, and that same article also says:
"The two CPU-based scores (physics) are nearly identical, which is very impressive considering the AMD chip has twice the number of cores. This suggests Intel has achieved a massive increase in IPC (instructions per clock) on the Tiger Lake-U generation of processors thanks to the new microarchitecture. Still, there is one thing that's important to note: 3DMark 11 is, at this point, is nearly ten years old, and was never coded to handle the high core counts of today's processors properly -- so keep in mind that CPU performance based on today's applications will have a larger performance delta."
Now let's put this into proper context:
Ignoring the fact that 3DMark 11 is outdated and doesn't necessarily take advantage of a high number of cores (a recurring theme), what's interesting to note is that Tiger Lake's CPU seems to have a much higher base clock than the 4800U (about 1 GHz, or 55%, higher).
Intel has also historically demonstrated a higher all-core boost than AMD (which tends to go maybe 200 MHz above baseline on all cores), and another 'leak' from WCCFTECH indicates the Intel part boosts to 4.7 GHz on a single core (whereas the 4800U boosts to 4.4 GHz).
https://wccftech.com/intel-tiger-la...ity-cpus-xe-gpu-shows-incredible-performance/
When put into proper context, and coupled with the purported IPC increases, it's no wonder a much higher-clocked 4c/8t Tiger Lake matches AMD's 8c/16t part in the U category.
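A quick back-of-envelope check of those clock figures (single-thread throughput modeled as clock times IPC; the clocks are the ones cited above, while the IPC uplift is a hypothetical placeholder since Intel hasn't published a number):

```python
# Back-of-envelope: relative single-thread throughput ~ clock * IPC.
# Clocks are the figures cited in the thread: 4800U 1.8 GHz base / 4.4 GHz
# boost; leaked Tiger Lake ~2.8 GHz base / 4.7 GHz boost.
# The +10% IPC uplift is a hypothetical placeholder, NOT a confirmed figure.

def relative_perf(clock_ghz, ipc_rel=1.0):
    return clock_ghz * ipc_rel

base_gap = relative_perf(2.8) / relative_perf(1.8)         # base-clock ratio
boost_gap = relative_perf(4.7, 1.10) / relative_perf(4.4)  # notional +10% IPC

print(f"base-clock advantage: {base_gap:.0%}")   # ~156%, i.e. ~56% higher
print(f"boosted ST advantage: {boost_gap:.0%}")
```

Even before any IPC gains, a ~56% base-clock gap goes a long way toward closing a 4-core-vs-8-core gap in a benchmark that doesn't scale past a few threads.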
The GPU, though, barely surpasses AMD's enhanced Vega iGPU in that same test (still, it's early days and these are 'leaks').
Also, we weren't talking about Tiger Lake... we were talking about the 10900K that's supposed to be used in the Alienware. -
One other thing we don't know is which TDP setting the Tiger Lake CPU was using in that leak (it's configurable from 12-28W), and whether the part repeats the pattern of running away from its TDP like current Intel CPUs do. All of that affects the final performance numbers in a given chassis - though we already know OEMs don't shy away from giving Intel priority when it comes to cooling and hardware quality.
-
win32asmguy Moderator Moderator
Dell will not use AM4. It costs more to engineer than simply moving from Z390 -> Z490. Also, with the Clevo NH58AF1 you have to essentially "castrate" the Zen 2 processors as well if you care about noise at idle. PBO is essentially unusable. The nice thing is Clevo does not usually pull shenanigans like releasing BIOS and software updates that reduce performance. If we are wishing for things that do not exist, it would be better to want a Clevo X570 AM4 system with an MXM 5700M. -
Clevo tried to squeeze a 105W TDP chip into a 15" chassis and forced it to run in ECO mode (65W - though it actually consumed about 85-90W when fully stressed).
There is a difference here, because it was clear that both the chassis size and the cooling system were inadequate (even Acer didn't try using a 15" chassis for 65W TDP CPUs - I mean, SERIOUSLY, what was Clevo thinking?).
The 10900K doesn't have an official 'ECO' mode... and the performance loss for the 3900X in ECO mode was less pronounced than for the 3950X (both would still perform faster 'castrated' than a 10900K would under the same conditions, while consuming 35W less power to boot).
If Clevo had any sense, they would have used a 17" chassis with cooling similar to what Acer put in the Predator Helios 500 with the 2700/Vega 56 (but obviously more powerful, to accommodate/dissipate the 145W the CPU draws when fully stressed).
I know that Alienware won't use AM4... I'm just saying its a mistake on their part.
As for it costing more to engineer... please.
Z490 and Z390 are not the same mobo with a BIOS upgrade (even though they could be). They'd have to go through the same procedure to adapt Z490 to the Alienware and make a viable accommodation for the GPU too.
The procedure would be same with B450... and in both instances, Dell and Intel would need to cooperate on the project (just as much as DELL had to cooperate with AMD to construct the G5 15 SE, or Acer when they designed the PH517-61).
And still, AMD's hw is cheaper. -
Btw, the WCCFTECH article confirmed Tiger Lake was configured to 28W, as can be read here:
https://wccftech.com/intel-tiger-la...ity-cpus-xe-gpu-shows-incredible-performance/
"Coming to the TDP figures which are important too, the said Core i7-1165G7 was tested at 28W (cTDP up). It's not reported whether the Ryzen 7 4800U was sticking to their default 15W TDP or also configurable TDP up to 25W."
It's more than probable the 4800U was configured to 15W, not 25W.
It was mentioned that only the Lenovo Yoga Slim 7 would run the 4800U with cTDP up to 27W, and this is where things get interesting, because according to the following article, the 27W 4800U is about 34% faster than the 15W version:
https://optocrypto.com/amd-ryzen-7-4800u-at-27-w-delivers-a-34-increase-in-performance/
That puts things into perspective, doesn't it?
28W Tiger Lake vs 27W 4800U = roughly a 30-34% difference in favor of Zen 2.
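To make the arithmetic behind that figure explicit, here's the comparison using only the two claims cited above - the leaked "nearly identical" 3DMark 11 physics scores at 28W vs 15W, and the ~34% cTDP-up gain. The scores are normalized, not real benchmark numbers:

```python
# The arithmetic behind the comparison, using only the two claims above:
#  (1) leaked 28 W Tiger Lake score ~= 15 W 4800U score (3DMark 11 physics)
#  (2) a 27 W (cTDP-up) 4800U scores ~34% higher than the 15 W 4800U
# Scores are normalized to the 15 W 4800U = 1.0, not real benchmark numbers.

score_4800u_15w = 1.00
score_tgl_28w   = 1.00 * score_4800u_15w   # claim (1): near-identical
score_4800u_27w = 1.34 * score_4800u_15w   # claim (2): +34% from cTDP-up

advantage = score_4800u_27w / score_tgl_28w - 1
print(f"27 W 4800U vs 28 W Tiger Lake: +{advantage:.0%}")  # ~+34%
```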
Admittedly this is with twice the number of (much lower clocked) cores for AMD - but that's not all. Zen 3 is due to be released soon, with a purported IPC increase of 17-20% (a more unified design that eliminates latencies between cores, plus a bunch of other improvements) and an unknown frequency increase (by how much, we don't know - maybe a generous 200 MHz if it's on the 'enhanced 7nm' as originally stated, but I don't know if AMD has had enough time to optimize the process that far).
Oh, and it's still rather problematic that Intel seems incapable of putting more cores onto 10nm+ (though they may be prioritizing frequencies instead).
P.S. I should mention (again) that 3DMark is OUTDATED and not necessarily indicative of CPU performance - hence the need for proper testing of Tiger Lake to ascertain its 'real' performance on both the CPU and GPU side. -
tilleroftheearth Wisdom listens quietly...
The perspective that I pick up is that more cores don't mean more performance. Same as it's always been for most workloads.
I'm also unsure of your math (and assumptions) there too.
-
More cores don't mean more performance?
Depends on what you use the cores for.
Also, we no longer live in an era where single-core performance is much of a priority. We've seen a much bigger shift in this area ever since AMD brought Zen to the scene.
The Tiger Lake part has a 55% higher base frequency and a 200-500 MHz higher single-core boost clock (compared to the 4800U at 15W).
If a piece of software is meant to measure CPU performance but doesn't scale well beyond the first 4 cores, then it stands to reason the 'leaked' numbers would be skewed toward higher frequencies (and possibly higher IPC) rather than more cores.
This was evident in games (and other software) too, when Zen 1 and Zen+ showed up on the scene: before optimizations were released, performance did not scale well on CPUs with higher core counts.
Various gaming and benchmark tests showed that lower-core parts (4 and 6 cores) with higher frequencies were favored over higher-core CPUs with reduced frequencies (for both AMD and Intel).
Considering that 3DMark 11 was released 10 years ago, I don't think it's a useful benchmark for ascertaining Tiger Lake's multi-core performance (which is why we should wait for official tests using a comprehensive suite of software that can give us more useful numbers).
Furthermore, I DID post information on the 4800U with cTDP set to 27W (which the Lenovo Yoga Slim 7 will use) and how going from the 15W version to 27W increases performance by 34% (a pretty massive change), and it's already well established that all other 4800Us currently on the market were set to 15W and tested at that value (not 25W, and most definitely not 27W).
I don't really have many 'assumptions' here. I based my responses on well-known data about outdated software, frequencies, etc. (and on how that compares to the currently released data we have on Tiger Lake). -
tilleroftheearth Wisdom listens quietly...
Yeah, not for most workloads. Again, your assumptions (and mine) are just that right now; educated guesses (especially on the TDP values).
The underlying methods of how the performance is delivered doesn't matter. The performance delivered does.
As long as the benchmark is the same, it is a valid comparison.
By that logic, everything most people run must be 'outdated software'. There are very few cases where more than a handful of cores beats a faster-clocked CPU.
In December of this year, it will be 4 years since AMD effectively stood up to Intel. In the mobile space, the only thing that has changed is still only a hint of what AMD may be able to do. In the desktop space, the only thing that has changed is who can charge more for what (just as I predicted years ago - AMD has shown just how greedy it can be too in the last few years), but for the majority of workloads the performance available has barely moved, when compared between the two vendors' offerings at any given time.
I don't care about benchmark 'scores', just real improvements to my daily workloads. AMD may be offering something worth considering, depending on the workload. But Intel hasn't stopped doing that in the last 3.5 years either. -
You need to be careful with that. 3DMark 11 is showing its age and is not a leading predictor of CPU performance.
Now, for your workloads, AMD cannot do what Intel can, because your workload benefits not just from IPC and high core clocks but is also still latency-sensitive.
In many other workloads, AMD really does beat Intel at the price point, if not per core. This is why, generally, everyone knows AMD is competitive throughout the stack.
It's like comparing the 3300X and the 7700k when one was the flagship and the other is entry level years later.
Now, Tiger Lake is impressive, but how late is Intel in releasing it?
Also, what did Intel recently say about benchmarks?
Sent from my SM-G975U1 using Tapatalk -
tilleroftheearth Wisdom listens quietly...
I am careful with that (I simply don't use it). Spot on for the latency issue on AMD too in my workflows.
For people that don't transcode/render video or game/stream simultaneously all day long, AMD doesn't offer much in the way of a performance edge. 'Most users' for me is mostly not anyone likely to be on these forums. Here, I hope we know what we want and know which platform offers a direct benefit for our workloads/workflows.
'Most users' though, whether they buy AMD or Intel, are getting more than they need. I thank AMD for that push they gave Intel (and yeah, they needed it). The issue is that there is no real 'value' in buying AMD anymore (again, except for highly specialized workloads. There, and for certain workloads, they are leading, no doubt). It took just a few years for AMD to prove what corporations do best; get greedy. And effectively forget about their 'loyal' customers and supporters.
And nothing is 'late' being released. That is just online media twisting perceptions. Things get released when they're ready. Just ask AMD about 'late'.
Intel is on the right track with benchmarks. They're meaningless when showing just one aspect of overall performance. -
I agree with all but two points.
Last one first: Intel is late. How? 10nm. There is zero arguing about that. Sure, going by revisions it is right on time, but Intel did not originally intend to sit on Skylake for 4 years.
Second, AMD does have plenty to offer. In fact, the 3600 is enough for the majority of users, outperforms Intel's 10400, and beats the $100-more-expensive 10600K at many tasks outside of gaming. In gaming, it even does enough. So let's not pretend here.
Now, for a pure gamer, the 10600K is compelling over the 3700X at that price point. But if you do anything aside from gaming, except in limited circumstances, the 3700X rofflestomps the 10600K in performance.
With the 10900K, we are now pushing memory bandwidth and graphics card limits, with the latter solved in August and September. But for the price, other than benchmarking, you can pick up the 3900X for less and outperform it in everything but single-threaded workloads and games.
But there are still programs out there that are latency sensitive aside from games. If you use those, then Intel's offerings may outperform the higher core count.
This is why it is important to know your workload. But trying to say AMD offers nothing worth considering at this point makes you look absurd. And since AMD has said Zen 3 will drop this year, and Intel has nothing else on the roadmap this year (meaning Rocket Lake), AMD will strike a hard blow at Intel.
This comes in two parts. One is reduced latency from ditching the CCX divide. Now you'll have an undivided 8-core chiplet with 32MB of L3, plus the ability to select two cores further apart for boost, which increases frequency by spreading heat better - more than making up for the larger cache latency that comes with the bigger lookup table. And that's just some of the benefits of dropping the CCX divide.
If you look at the 3100 vs the 3300X, you see a 15-35% increase in performance without the CCX split and the hopping needed to reach other cores. That's from the latency improvement alone.
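A toy model of why the undivided layout helps - the 3100 splits its 4 cores across two CCXs (2+2) while the 3300X keeps them in one (4+0). The latency numbers below are rough Zen 2 ballparks, placeholders rather than measurements:

```python
# Toy model: average core-to-core latency for the 3100's 2+2 CCX layout vs
# the 3300X's undivided 4+0 layout. Latencies are rough Zen 2 ballparks
# (placeholders, not measurements): ~30 ns within a CCX, ~70 ns across CCXs.
from itertools import combinations

INTRA_NS, INTER_NS = 30.0, 70.0

def avg_pair_latency(ccx_layout):
    """Mean latency over all core pairs, given core counts per CCX."""
    cores = [ccx for ccx, size in enumerate(ccx_layout) for _ in range(size)]
    pairs = list(combinations(range(len(cores)), 2))
    total = sum(INTRA_NS if cores[a] == cores[b] else INTER_NS
                for a, b in pairs)
    return total / len(pairs)

print(f"3100  (2+2): {avg_pair_latency([2, 2]):.1f} ns avg")  # ~56.7 ns
print(f"3300X (4+0): {avg_pair_latency([4]):.1f} ns avg")     # 30.0 ns
```

In this sketch, 4 of the 6 core pairs on the 2+2 layout pay the cross-CCX penalty, so the average nearly doubles - which is the mechanism behind the 3300X's lead in latency-sensitive workloads, and behind the expected Zen 3 gain from a unified 8-core CCX.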
Combine that with the other changes to the architecture and you will have a massive jump in performance. After all, AMD hinted this will be a large jump, while hinting Zen 4 won't be as large a jump (but gets DDR5, so memory bandwidth improves drastically).
That means Rocket Lake, now expected in Q1 2021, will possibly need the backport to really deliver on both IPC and sustained frequency. If so, it may be just like 10th gen vs Zen 2.
But just wanted to mention those two points.
Sent from my SM-G975U1 using Tapatalk
AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.