Look, read my statement. I said DIY market. Not overall market. When you strip that qualifier out of the statement, you turn it into a false statement.
-
-
-
But, generally, the DIY market is smaller than the consumer market as a whole, which is dominated by OEM builds. Now, AMD has increased OEM offerings this year, in part due to Intel's shortage.
I just did not want to be misunderstood as saying AMD has 2/3rds of the entire consumer market, which they don't, or of the market overall, which is what Tiller was trying to make it sound like I said.
But, you are correct. We will see it in the Q3 earnings report and the 10-K for this year, published in late January to February. -
tilleroftheearth Wisdom listens quietly...
You are clearly not in a state to discuss this topic. Except, it seems, when everyone agrees with you.
Nobody told you what to buy.
Nobody cares about IPC like you seem to (in a vacuum).
Choosing ARM over Intel for performance is more absurd than anything I've stated so far.
Don't be so quick to judge what I know or don't know. It makes your assumptions look kinda dumb.
I talk about productivity because this is the only term I know that encompasses all aspects of a manufacturer, their platform and the ecosystem they all support.
You dismiss productivity as not being 'regular person speak', but you talk like someone who doesn't realize that the sum is much more than the individual parts.
Please don't re-write what I have already stated many times on this forum. I did not recommend what you wrote. Re-read it. It is the soundest advice I have ever come across and ever given.
As to the sales, I have already quoted your original post in my initial post. Nowhere do you state the source of your "That is why in the DIY market, AMD has flipped the script in desktop and according to those that publish their figures, AMD is nearly 2/3rds of all new sales." statement. Now that you have stated the source, it undermines that statement even more.
When you have AMD's and Intel's numbers for the last decade, we'll talk.
Here's my original statement (yeah; copy and paste) of what I recommend:
Quoting myself from the post above:
Please, go on and show your passion for what AMD might become one day and how you're so much smarter than me too.
In the meantime, I'll try to read up on the big words you used like IPC, Frequency, PR speak and the really tough math like IPCxFreq that estimates performance for you.
But know that estimates are not what I deal with or tolerate in my tech tools. I only pay for actual real-world improvements in my own testing vs. what I have now, and not based on what the kiddies bought the most of in the EU.
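(For anyone skimming, that IPC x Freq estimate really is just multiplication; a minimal Python sketch with purely illustrative numbers, not figures from either keynote:)

# Rough sketch of the "IPC x frequency" estimate being talked about above.
# The IPC and clock values are illustrative placeholders, not measured data.
def estimated_perf(ipc, freq_ghz):
    """Relative single-thread performance estimate: IPC times clock."""
    return ipc * freq_ghz

baseline = estimated_perf(ipc=1.00, freq_ghz=5.0)   # hypothetical chip A
contender = estimated_perf(ipc=1.18, freq_ghz=4.4)  # hypothetical chip B with +18% IPC
print(f"relative estimate: {contender / baseline:.2f}x")  # ~1.04x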
As to the shilling comment you make about me? Proven wrong. But as I show above, it does seem more applicable to you now, doesn't it?
-
Who complained about not being able to watch Su's keynote because of her voice, versus who went through and critiqued the benches used in the AMD keynote? Who, a couple of pages back, showed that security-mitigation patches cut performance on Intel chips by 6% on one of the benches used to claim 18%, versus who gave nothing?
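(For the record, the arithmetic there is simple to sketch; the numbers below are illustrative placeholders, not the actual keynote scores:)

# Illustrative only: how a mitigation-induced handicap inflates an apparent lead.
unmitigated_intel = 100.0                            # hypothetical pre-mitigation score
mitigated_intel = unmitigated_intel * (1 - 0.06)     # ~6% lost to mitigations
amd = mitigated_intel * 1.18                         # an "18% lead" measured against the mitigated score
lead_vs_unmitigated = amd / unmitigated_intel - 1
print(f"lead vs. unmitigated baseline: {lead_vs_unmitigated:.1%}")  # ~10.9%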
I regularly cite my sources. You, not so much.
Really, have you examined the performance of the ARM chips this gen versus Intel's performance? And when it comes to tablets, phones, etc., which are the devices I said I would buy, yes, I want those running Android, and Intel just isn't hitting the mark in that segment. Sorry, but those are cheap throwaways compared to a laptop. For a laptop, sorry, BGA just DOES NOT DO IT FOR ME (or most people on this forum).
As to the source, it is common knowledge. But here, if you want.
https://www.overclock3d.net/news/cpu_mainboard/amd_is_outselling_intel_2_to_1_at_mindfactory_de/1
Edit: here is to March 2019 https://www.techpowerup.com/254503/amd-outsells-intel-2-1-on-european-retailer-mindfactory-de
and GN
As to your advice: BS! When you tell someone to buy a 9900K for gaming when they could save money with the 9700K or the 8086K/8700K and pour the savings into going an extra step on the GPU for gaming, then it is BAD ADVICE!
You have single-threaded workloads which are not heavy compute. Yet you make recommendations to people on the assumption that they need what you need, which is ABSURD!
As to shilling: not proven wrong. You making an assertion IS NOT proof; it is a statement without support. -
tilleroftheearth Wisdom listens quietly...
Yeah, nice! Giving the links to your sources... finally.
I really am confused about how you can write so many words when you can't seem to comprehend the ideas behind the few that I write.
I commented on both AMD's and Intel's presentations because I took the time to watch both. Those are my sources. See? Nobody stated I didn't watch the presentations because of anyone's voice either.
Your attempts to analyze future platforms are neither consistent nor impartial between the two main ones, so I ignore them, and I suggest most should too if they're looking to buy something today.
And please. Stop putting words into my mouth. When have I ever recommended what you state below? Never is the correct answer here.
The reason people have asked me in the past, and continue to ask me today, for a system recommendation is that my recommendations have been proven correct, not just once or twice, but over decades. Let me try to re-word that for you so it might finally sink in.
For a given $$$$ amount, buy the most current platform you can, while also getting the highest performing CPU and then putting the most RAM in it possible.
The above is the start of a solid platform when maximum productivity is required.
Please! Don't veer off into long-winded explanations of cases, power supplies, cables, HDDs and SSDs and, gulp, GPUs, and try to confuse me with those things I have no knowledge about and make me look bad in the process.
And btw, the latest ARM platform, which will ship sometime in 2020 or later, is only roughly equivalent in performance to a two-year-old Intel i5-8250U.
Duck! Ftwww! There goes that theory of yours too.
See:
https://www.windowscentral.com/snapdragon-8cx-benchmarks
Granted, it has the battery-life advantage, and it is at a disadvantage vs. the i5 because it is emulating Windows programs. But this is the real world, and it still hasn't shipped.
I'll take the Intel 10nm any day over that junk. And it's shipping now, too.
See:
https://www.windowscentral.com/xps-13-7390-computex
Yeah, even a Dell is up for consideration by me, because I don't blindly exclude possibilities just because of the manufacturer's badge on it.
-
They didn't overclock the CPU, as 25W is one of the cTDP values that Intel listed right on their spec sheet during the announcement. -
You literally misquote me in many cases, twisting my words, so why should I not analyze yours the same way? Under your "general guidance," you would make that recommendation, which is absurd. Applied strictly, your own statement of what you recommend would apply to my example. That is why I gave that example.
Once again, you are wrong. It may be correct in certain professional settings, but it is absurd for consumer gaming. You cannot over-generalize, not when choosing hardware. For an AI programmer, at the moment, depending on compute density in a given area, graphics cards are recommended, and which one is used is based on different criteria, like power consumption or node density. If density and power consumption matter less than getting the most bang for the buck, I'd recommend 1080 Ti/Titan Xps. If they need more power-conscious or denser nodes, then Volta or RTX is my recommendation. But RAM: for a gamer, there is little benefit over 16GB. There are times, depending on what else they do, that I'd recommend 32GB. For certain workloads, professional workloads, 32GB is insufficient. Same with content creators.
So under your general advice, you would tell a consumer gamer to buy four DIMMs of 16GB or 32GB each if they could afford it. That is just applying your own statement. Do you see the absurdity of that on a mainstream platform for a gamer?
If you don't understand any other components ON A PLATFORM, how can you make recommendations ON A PLATFORM? I mean, to be honest, regarding the GPUs for small-scale AI I mentioned above, where I chose Nvidia, that is a recommendation I'm actually watching closely, because Intel's solution of chiplet integration with dedicated AI hardware may well change the equation. At that point, I'd have to say pick up the Intel GPU for that purpose, if Intel delivers a competent product for the use case (which they very well may).
If you don't understand choices on SSDs and HDDs, or even GPUs, then you don't understand decisions regarding platform features for motherboards. That means all you are doing is pushing CPUs and RAM.
Also, considering it will be in tablets and chromebooks (which I'd never buy), and considering the cost differential of products with that 8cx versus one with an Intel U series chip, yes, I'd go with the ARM chip. As I said, it depends on the device being purchased. I even made it clear I'd grab an Intel socketed laptop to replace my P770ZM most likely, or a market alternative if they do the same with the Zen 2 chips.
I personally don't buy thin and light notebooks. Personal choice there. But I also am not saying Intel's chips are bad for that purpose. I am saying they are feeling pressure from AMD and ARM in that segment due to their chip shortage, and, if rumors are true, OEMs won't really have enough volume to launch a product line on the Ice Lake-U series for something like five months. I question their marketing material, including the tests used and the lack of enough information to verify the validity of their claims, just like I did with AMD's presentation, in a post that even you liked (curiously, you only like my comments critiquing AMD, never my comments critiquing Intel!).
Also, Intel's chips are shipping to OEM partners now. Considering Dell said holidays for a product, show me a device I can buy today with Ice Lake-U. Oh, wait, you can't. See how you just fell victim to your OWN analysis.
Also, I'm not ragging on Dell. I'm saying that they have said the Ice Lake-U variants of their line won't ship for MONTHS. That isn't saying don't get them. I'd have to analyze the final performance and feature set to make a buy recommendation. So you clearly, here, on this point, MISSED THE POINT I WAS MAKING.
But, the CPU they used was specifically the 15W variant, which does make it an overclock, even if it is allowed under spec and common. But, since it is within spec for allowed function, it is somewhat fair.
I argue, though, that cTDP being allowed does not stop it from being an overclock. I can set the cTDP on my 1950X from 180W to 250W. Would you agree that 250W on my 1950X is an overclock? That change alone would allow my system to boost higher for longer due to a higher ceiling.
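(To illustrate the "boost higher for longer" point, here is a toy power-budget sketch in Python; the wattages and the fixed 60-second window are made-up assumptions, not a model of the 1950X or any real boost algorithm:)

# Toy model: a chip can stay at boost power only while its average draw stays under the power limit.
def boost_seconds(power_limit_w, boost_power_w, base_power_w, window_s=60):
    """Return how many seconds of a window the chip can spend at boost power."""
    # average = (t*boost + (window - t)*base) / window <= limit  ->  solve for t
    t = window_s * (power_limit_w - base_power_w) / (boost_power_w - base_power_w)
    return max(0.0, min(window_s, t))

for limit in (180, 250):  # e.g. stock cTDP vs. a raised cTDP
    print(limit, "W limit ->", round(boost_seconds(limit, boost_power_w=300, base_power_w=120), 1), "s of boost per minute")
# 180W allows ~20s of boost per minute in this toy model; 250W allows ~43s.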
Now, I don't mean this combatively: what is your conceptualization of overclocking? I ask because, with the features on both AMD and Intel CPUs, the concept of what overclocking is has become murky. For example, do you consider multi-core enhancement an overclock? Or do you consider AMD's Precision Boost Overdrive or XFR overclocking? Do you consider exceeding the TDP and power ratings of a chip overclocking? Is it overclocking if you have a cTDP allowing for it to be changed?
It's an interesting question with the changes in chip features these days, and I'd like to get your opinion on it. Some have drawn the line at the chip's own behavior versus settings in the BIOS that manipulate the basic functions of the CPU (in other words, PBO and XFR are a function of the chip, but manually manipulating the boost on Intel's chips, like setting Boost 3.0, is overclocking). It's kind of a meta question here. -
custom90gt Doc Mod Super Moderator
@ajc9988 and @tilleroftheearth, let's keep it civil and not resort to sarcastic comments and personal attacks. This is the second or third time I've had to say something. You both are capable of making points without trying to drag each other through the mud. This really is the last warning though.
-
@Talon The new Intel 10nm so far looks pretty bad. They did try to make it sound nice and all, but there is still no new product to show. It's all benches and measurements from Intel themselves on a slide, as opposed to AMD's demonstration of an actual product vs. Intel in real time. We already know not to give Intel too much credit when they do things like this.
Also, Ice Lake desktop or beyond will default to DDR5, and I think Intel is using that as part of their slide, knowing the vast majority of cheap DDR4 will be in the 2100-2400 range with high CL profiles. What are those numbers going to do for us if we already have people running 4400MHz RAM on desktops with chips originally spec'd for 2400-2666? This isn't a guarantee of a superior IMC; if anything it may be worse due to first-gen 10nm.
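(Side note on the high-CL point: first-word latency depends on both data rate and CAS latency. A quick sketch, with CL values picked as plausible examples rather than any specific kit:)

# First-word latency in nanoseconds: CAS cycles divided by the memory clock (data rate / 2).
def cas_latency_ns(data_rate_mts, cl):
    return cl / (data_rate_mts / 2) * 1000  # ns

for rate, cl in [(2400, 17), (2666, 19), (4400, 19)]:  # illustrative CL figures, not specific kits
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.1f} ns")
# Cheap DDR4-2400 CL17 lands around 14 ns; a 4400 CL19 kit is closer to 8.6 ns.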
@ajc9988 We can only hope to see that 16-core at 4.6-4.7GHz all-core come to reality. -
Additionally, many claim that companies will opt for the 25W cTDP, but Dell's specs, I believe, use the 15W TDP, which would cut against the claim.
As to Ice Lake desktop, NO roadmap has shown an Ice Lake desktop. After Comet Lake is Rocket Lake from all roadmaps I've seen.
Now, next year, Ice Lake server chips will have DDR5. But so will Zen 3. Nothing I've seen or heard has DDR5 for consumers until 2021, if all goes well.
But, a note on DDR5: unlike prior revisions, at the same clock speed, DDR5 is said to give 30% extra performance. Granted, when looking at total system performance, except for Zen-based CPUs where the memory speed affects the speed of the Infinity Fabric, the gains are in single-digit percentages overall, but that fact is still worth noting.
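(A rough sketch of why a 30% memory uplift becomes a single-digit system gain: only the memory-bound slice of the work speeds up. The 20% memory-bound figure below is an assumption for illustration, not a measurement:)

# Amdahl-style estimate: overall speedup when only the memory-bound slice gets faster.
def overall_speedup(mem_bound_fraction, mem_speedup):
    return 1 / ((1 - mem_bound_fraction) + mem_bound_fraction / mem_speedup)

print(f"{overall_speedup(0.20, 1.30):.3f}x")  # ~1.048x, i.e. under 5% overall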
Another thing I noticed, which isn't really being discussed, is that Intel has brought back the FIVR on Ice Lake-U! This was rumored years ago, or at least that it would be on Tiger Lake; since Ice Lake is the uArch change before it, that suggested Ice Lake might have it too, and it turns out that leak was correct.
Now, everyone remembers Haswell had WAY lower voltages. But they remember Haswell better as Hot-well, because it ran HOT. That will be another thing to watch for in these Ice Lake-U chips, and it may shed light on the average boost speed seen in the Geekbench leak, as some all-core throttling may have occurred.
In arguendo, the lower frequency could have just been that Geekbench was misreading the speed of the chip.
But, if true, and if it is running hot, then it would make sense why Dell would list the 15W instead of 25W. But, once again, in arguendo, they may have put 15W from the spec and entered it as a place holder, while possibly revising it before release.
Time will tell.
As to the 16-core at 4.6-4.7, have you seen the Bits-and-Chips tweet? To discuss that, it is better to move it over to the AMD thread, though. -
Also, I haven't seen a description of the 28W BGA details (ball count, die size, etc.), so I don't think it's part of the laptop realm; it's likely the mini-desktop part, or a motherboard with the part soldered on as a DIY option.
Perhaps there will be a 28W 10nm Ice Lake CPU in a laptop, but as was commented in the Dell XPS 13 video I posted, the previous generation's cooling for that size of laptop wasn't designed for the added heat of the 15W part, so Dell pumped up the cooling system. I'm not sure they will do that for the 28W part in that form factor, but I think it's unlikely:
First look: Dell XPS 13 2-in-1 with Intel Ice Lake
http://forum.notebookreview.com/threads/intels-upcoming-10nm-and-beyond.828806/page-14#post-10916277 -
With that said, no one has seen the 28W part. Here is Ian Cutress at Anandtech.
https://www.anandtech.com/show/14436/intel-10th-gen-10nm-ice-lake-cpus
-
Leak: Dell releases specifications of the first Ice Lake U chips tomshardware.com | May 28, 2019
-
Have we seen many implementations of the full cTDP in the past? As far as I can recall, I can only think of one, and that was used in a server application.
https://www.spec.org/cpu2017/flags/ASUSTekPlatform-Settings-AMD-z11-V2.0-revA.html
Given the thin 2-in-1 form factor, it's unlikely there will be room for the 25W cTDP, let alone the 28W part. I wonder if the 28W part has a higher cTDP potential?
If the 15W part does 3.9GHz, then maybe the 28W part does 4.1GHz?
I wonder what laptop ships with the 28W part, and what part number it sports? Is it an i7, or an i9?
Is there a matching spec sheet for the 11 10nm Ice Lake CPUs that includes the TDP / cTDP ratings + family name + part name?
"Intel is officially launching 11 different CPUs in the 10th Gen Core lineup, ranging from Core i3 to Core i7. Details on the specifications of those CPUs has not actually been released, which raises a number of questions of how much of a launch this actually is, however we do know that the best CPUs will have a turbo frequency up to 4.1 GHz and a top GPU frequency of 1.1 GHz. Users might consider this lower than 9th Gen mobile parts, which again raises questions. CPUs will be coming to market with 9W, 15W, and 28W variants."
https://www.anandtech.com/show/14436/intel-10th-gen-10nm-ice-lake-cpus
That article has "?"s in the tables instead of the data... but it does say the top part is the i7... Maybe the 28W part is meant for another form factor? Like last year's 10nm 28W NUC? -
Started with the unlocked $600 Mobile Scam last year.
-
tilleroftheearth Wisdom listens quietly...
Just want to set the record straight here; I most definitely do understand the components that make up a platform. I was being sarcastic about myself.
If you really believed that, you don't know anything about me at all; basically, I am not a one-dimensional character on the 'net.
Don't want to belabor the points you seem so keen on winning on.
But nothing I've stated is wrong. You're just refusing to see that there is another side to this conversation. I am not over-generalizing either. My words are honed down after many decades of dealing with these issues. They are merely few and precise.
If anything, you are going too far the other way and being too narrow, specific and wordy. That's fine, your choice, but you really need to give the same assumption of success to both sides you're arguing. Otherwise, your bias will keep showing as you blindly try to convince others, if not me, that my recommendations, and therefore my thinking, are absurd, rather than discussing the actual products that are available today.
And I am not claiming that I am not biased to a platform and company philosophy that has given me and my company the tools to succeed and prosper.
But you really need to stop claiming that you somehow know my workloads and workflows, and that they somehow cloud my judgment on what the current platforms offer, for me or for anyone else who is a more mainstream consumer and not someone with actual workflows that require the most cores possible right now.
Because most computer tasks are still single-core driven. And for the few examples that are not, the available 4C/8T platforms are good enough for the masses today, still.
I asked this months ago and still got no answer, but I'll ask you again. Show me a modern example of any general consumer workflow that needs more than 4C/8T and we can keep talking. Otherwise, the points I've made still stand.
-
Without a doubt this thing looks like a total turd...
https://www.notebookcheck.net/Dell-...i7-1065-G7-spotted-on-Geekbench.422721.0.html
What they are achieving at low clockspeeds is impressive IMO. Wait till they get the yields up and get the clockspeed to follow. Desktop 10nm+ is going to stomp hard when they get that clockspeed up. Exciting times. -
-
-
As to uArch, Ice Lake uses Sunny Cove. I think Tiger Lake uses the one after that (Golden? Sorry, I'm getting my shots today, so I'm at the clinic).
Moreover, there have been NO leaked roadmaps with 10nm/+ on desktop. None.
Now, at this point, we are seeing Intel still without 10nm capacity. They are doing mobile chips on 10nm+ this year, server next year as they transition mobile to 10nm++. Comet Lake and Rocket Lake are still rumored for 14nm++, and both should be desktop S CPUs.
With limited capacity, this suggests they will then move at least mobile to 7nm after the GPU in 2021. That should be followed by 7nm server in 2022. Since desktop seems to come last, we may see 10nm++ for desktop in late 2021 to Q1 2022 if the pattern continues. -
-
But, so far, it seems Comet Lake is going to have some changes, but there is no word on it using Sunny Cove (I saw whispers it would, but with more recent whispers saying just changes, no Sunny Cove, who knows). I'd expect by Rocket Lake we will see a uArch change. In fact, I'd hope they port the uArch from Tiger Lake, if it's the successor to Sunny Cove, to Rocket Lake. Then some of those changes would mitigate my fear of back-porting a 10nm design to 14nm.
So we have until Comet Lake is announced to find out more (thinking 4-6 months). -
-
That won't come to chips larger than Foveros's tiny chips for years. -
It may be too early for PCIE 5.0, but we may hear more about the time frame June 18th:
Join us for the PCI-SIG DevCon 2019 in Santa Clara, June 18-19
http://pcisig.com/join-us-pci-sig-devcon-2019-santa-clara-june-18-19?PageSpeed=noscript
PCI-SIG® Achieves 32GT/s with New PCI Express® 5.0 Specification
The organization doubles PCI Express 4.0 specification bandwidth in less than two years
https://www.businesswire.com/news/home/20190529005766/en/PCI-SIG®-Achieves-32GTs-New-PCI-Express®-5.0
“ AMD congratulates PCI-SIG on the release of the PCI Express 5.0 specification to the industry and the future 2x increase in performance it is expected to deliver. We expect to bring our first PCIe 4.0 specification CPUs to market this year and look forward to meeting the future bandwidth demands of end-users with PCIe 5.0 technology.” ~ Gerry Talbot, AMD Corporate Fellow, Technology & Engineering Group, AMD - Source: PCI-SIG
“ Intel believes that open standards foster platform innovation, create healthy ecosystems, and accelerate market growth. As a founding promoter of PCI Express architecture, we fully support the newly-released PCIe 5.0 specification, and look forward to continuing the PCI Express specification tradition of high-performance, multi-platform, open interconnect.” ~ Dr. Debendra Das Sharma, Intel Fellow and Director of I/O Technology & Standards, Member of PCI-SIG® Board of Directors, Intel Corporation - Source: PCI-SIG
PCI-SIG Finalizes PCIe 5.0 Specification: x16 Slots to Reach 64GB/sec
by Ryan Smith on May 29, 2019 6:30 PM EST
https://www.anandtech.com/show/14447/pcisig-finalizes-pcie-50-specification
"...Meanwhile the big question, of course, is when we can expect to see PCIe 5.0 start showing up in products. The additional complexity of PCIe 5.0’s higher signaling rate aside, even with PCIe 4.0’s protracted development period, we’re only now seeing 4.0 gear start showing up in server products; meanwhile the first consumer gear technically hasn’t started shipping yet.
Even with quick turnaround time for PCIe 5.0 development, I’m not expecting to see 5.0 show up until 2021 at the earliest – and possibly later than that depending on what that complexity means for hardware costs.
Ultimately, the PCI-SIG’s annual developer conference is taking place in just a few weeks, on June 18th, at which point we should get some better insight as to when the SIG members expect to finish developing and start shipping their first PCIe 5.0 products."
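(For reference, the 64GB/s figure falls straight out of the raw numbers; a quick sketch using the 128b/130b encoding PCIe has used since 3.0:)

# PCIe per-direction bandwidth: transfer rate x lanes x encoding efficiency, converted to bytes.
def pcie_bandwidth_gbs(gt_per_s, lanes, encoding=128/130):
    return gt_per_s * lanes * encoding / 8  # GB/s per direction

print(f"PCIe 4.0 x16: {pcie_bandwidth_gbs(16, 16):.1f} GB/s")  # ~31.5
print(f"PCIe 5.0 x16: {pcie_bandwidth_gbs(32, 16):.1f} GB/s")  # ~63.0, i.e. the ~64GB/s headline figure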
I suppose it's possible PCIe 5.0 might come out sooner than 2021, as Intel is promoting an FPGA with PCIe 5.0 (the PCIe 4.0 variant ships Q3 2019)... others could be in mid-implementation as well, and were waiting for the PCIe 5.0 standard to be ratified so they could freeze the features and move to production.
Intel Agilex: 10nm FPGAs with PCIe 5.0, DDR5, and CXL
by Ian Cutress on April 2, 2019 1:00 PM EST
https://www.anandtech.com/show/14149/intel-agilex-10nm-fpgas-with-pcie-50-ddr5-and-cxl
"Agilex will come in three flavors: F, I, and M, with exact support listed below. The Intel Quartus Prime software will support these variants from April 2019, and first device availability of the F-series (PCIE 4.0) will be from Q3 2019."
Oh, yeah, that's right:
Yorgos - Sunday, April 07, 2019
https://www.anandtech.com/comments/14149/intel-agilex-10nm-fpgas-with-pcie-50-ddr5-and-cxl/640075
"That's why Intel has to purchase other companies to advertise their products.
Intel doesn't have a 5g modem, but pays to get press attention for it.
Intel doesn't have a 10nm chip, but pays the press to "review" some samples
Altera has been shrinking since they got acquired by Intel, but they just hit the meme button and put DDR5, 10nm, PCIe-5.0 in one article as bait for the clueless.
I don't regret at all, in fact I am more than happy everyday that I was lucky to work on the Xilinx eco-system for over 6 years.
Back in the day Altera was pretty competitive, they came and showed us the first, unreleased back in 2013, OpenCL SDK with goodies like matlab integration... but after that, nothing.
Altera/Intel PSG or whatever they call it the kids nowadays, isn't just an fpga company anymore. Their numbers are inflated by adding "5g" transceivers, embedded boards that intel will kill off in a few quarters and a ton(SI) of other devices that altera never made or intended to make. So, yeah, those numbers on YoY growth are not only for fpgas, they contain Altera + Intel's embedded division sales.
Also note that Intel killed the support for the "V" line of fpgas due to *Customer demand declined and continued support for these devices is no longer viable* as they put it exactly in their announcement. Guess what! devices with Spartan III are STILL made. head over to Ettus Research and grab those sweet mini USRP's.
Since I work in R&D and have close ties with many uni research groups, everyone is developing on Xilinx. Even the freshly "opensourced" MIPS designs that were sent to research groups were paired with UltraScale [Xilinx] FPGAs... in fact there are several in the next room where 2-3 PhDs are working on them.
[Edited for spelling / typos]
More Intel Custom Foundry, 10nm, and now PCIe 5.0 BS? -
https://browser.geekbench.com/v4/cpu/13476154 -- Intel i7-1065G7
https://browser.geekbench.com/v4/cpu/13516046 -- AMD 3700U
10nm Ice Lake looks impressive AF for thin and lights. Nice gains over the previous spottings. Can't wait to see what the final tweaking offers consumers, as this is likely not even its final form. -
https://www.semiaccurate.com/2019/06/05/a-look-at-intels-ice-lake-and-sunny-cove/
https://wccftech.com/intel-10nm-ice-lake-sunny-cove-14nm-comet-lake-amd-ryzen-3000-cpu-z-benchmark-leak/
Edit: I feel I need to post this here as well. To be clear, Ice Lake at 3.7GHz is batting on single thread against a 3800X at 4.7GHz and an Intel 9700K at 5.3GHz. That is a VERY impressive architecture in Sunny Cove!
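(A rough way to see why that is impressive: if the single-thread scores really do land about equal, the implied IPC advantage is just the clock ratio. A sketch under that equal-score assumption, taken from the claim above rather than re-measured:)

# If single-thread scores come out roughly equal, IPC scales inversely with clock.
clocks_ghz = {"Ice Lake": 3.7, "Ryzen 3800X": 4.7, "i7-9700K": 5.3}
for name, ghz in clocks_ghz.items():
    rel_ipc = ghz / clocks_ghz["Ice Lake"]  # how much higher Ice Lake's IPC would need to be
    print(f"Ice Lake IPC vs {name}: ~{rel_ipc:.2f}x")  # 1.00x, ~1.27x, ~1.43x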
-
-
But, moving to eDRAM and HBM on package WILL happen. So it is more a question of when. What I see Intel doing on that side of things is creating an active interposer with chiplets to sit on top, likely of multiple types, potentially moving I/O stuff to the interposer as well, along with an L3 or L4 cache, considering they are putting a cache on Foveros's base chip already. Then you just incorporate a first layer of HBM as almost an extended level of cache, with the IMC keeping it primed from DDR5. This is likely in the HBM3-to-HBM4 time frame, and due to costs, likely on a server chip project with an AI chiplet and possibly a graphics chiplet on package.
The real question is whether the plan remains to stick HBM directly on the GPU chip/chiplets on supercomputer server packages, then allow it to be shared with the CPU core chiplets. But this is all working theory based on old designs planned years ago, while the newer chips suggest they may go different ways in incorporating the different chips and chiplets on an active interposer; I'm also assuming they would incorporate the I/O elements into the active interposer like on Foveros, rather than using an independent I/O die like AMD has done. So a butt tonne of speculation here! -
-
Eventually it might go this way
-Foveros with base I/O die on the bottom, and HBM + Compute die (like this: https://images.anandtech.com/doci/13699/Foveros (7).jpg)
-Same thing as above, but with EMIB connecting the HBM and Compute, like with Kaby Lake G.
We'll likely see both being used depending on the segments. -
This is the image that changed my mind, and now you are saying this image from Intel isn't going to be implemented by Intel in the first generation? The use of I/O and Cache on the active interposer is the ONLY thing that made Intel's design more advanced. They had novel ways of doing the contact mounts, but AMD evidently has a novel way as well.
Granted, the filler to even do a tiny chip like this for 3D stacking is nice, but we've even seen Apple do similar already, and that was YEARS ago.
EMIB is just an interconnect, and would not be used on this chip. This chip is an active interposer (meaning routing capability through the interposer chip) while connecting to the above DRAM layer through TSV, if I'm not mistaken.
Edit: Also, if doing 2.5D stacking with an active interposer, using EMIB rather than TSVs with the HBM on the active interposer would potentially give less performance than stacking HBM right on top of the active interposer. What you propose would then be closer to AMD putting HBM on package with their MCM organic substrate, but traced through something like IF to the HBM on the substrate. So I'm trying to see why you suggest that. It makes very little sense, especially since Intel has already done HBM on a passive interposer with the Xeon Phi series, if I'm not mistaken.
Edit 2: What you are likely relying on is this from Anandtech:
"Intel has also uses full interposers in its FPGA products, using it as an easier and quicker way to connect its large FPGA dies to high bandwidth memory. Intel has stated that while large interposers are a catch-all situation, the company believes that EMIB designs are a lot cheaper than large interposers, and provide better signal integrity to allow for higher bandwidth. In discussions with Intel, it was stated that large interposers likely work best for powerful chips that could take advantage of active networking, however HBM is overkill on an interposer, and best used via EMIB.
Akin to an interposer-like technology, Foveros is a silicon stacking technique that allows different chips to be connected by TSVs (through silicon vias, a via being a vertical chip-to-chip connection), such that Intel can manufacture the IO, the cores, and the onboard LLC/DRAM as separate dies and connect them together. In this instance, Intel considers the IO die, the die at the bottom of the stack, as a sort of ‘active interposer’, that can deal with routing data between the dies on top. Ultimately the big challenges with a multi-die strategy come with in thermal constraints of the dies used (so far, Intel has demonstrated a 1+4 core solution in a 12x12mm package, called Lakefield), as well as aligning known good die for TSV connections."
https://www.anandtech.com/show/14211/intels-interconnected-future-chipslets-emib-foveros
Let's break this down. First, "the company believes that EMIB designs are a lot cheaper than large interposers." This is the TRUE reason Intel is pushing EMIB instead of mounting on the active interposer, along with the "thermal constraints" and "aligning known good die for TSV connections." What they do NOT discuss in the article or that section is the added issue of height. Even in 2.5D designs, if the chips and chiplets on the active interposer have different heights, you have to find a way to fill the extra space between them and the eventual IHS. Too much filler and you significantly lower the ability to transfer heat away from the chips and chiplets that are shorter than the others, meaning you have to engineer around it. You then have an extra packaging step where, if a TSV goes bad or something else goes wrong, the chip dies and you lose all the worth in it. Already, just as AMD has only two companies in the world that can mount their 7nm chips on package, and had to grow copper contacts with a small lead-free solder to connect them to the package, Intel has its own process for contact mounts with 10nm. Rather than introduce one more element which could complicate or destroy a good chip, they would rather use a data interconnect like EMIB to reduce that possibility, while also working around HBM having extra height relative to normal dies, which would have it sit higher, potentially (this depends on a lot of things, including the silicon block on the die, etc.), increasing difficulty and costs significantly while adding risks for yields after packaging. There is NOTHING wrong with that, and it is a smart move, but it also isn't innovation!
Then, Intel uses two more statements to try to steer people away from asking why they are not putting HBM on the active interposer. The first is that "large interposers likely work best for powerful chips that could take advantage of active networking." An example of that would be like Epyc Rome, with 8-core chiplets, which if on an active interposer would have a very active networking occurring, better utilizing the routing capabilities of an active interposer. Good thing AMD has a white paper on that from around 2014 or 2015, specifically examining disintegration of the chips and which topology would be optimal for an active interposer, along with a cost analysis from late 2017 on implementing active interposers (meaning AMD has plans to use an active interposer and costs to come down are what they are waiting for).
The next is Intel's conflicting statements on the matter: that EMIB designs "provide better signal integrity to allow for higher bandwidth," yet, in the next sentence, that "HBM is overkill on an interposer." Which is it? If HBM is overkill on an interposer, how is it overkill when you just said you allegedly get better performance the other way? The truth is, you should have lower latency with HBM on the interposer, and with HBM3 having TB/s rather than GB/s of potential bandwidth, signal integrity is but one aspect; latency to HBM is reduced by putting it on an interposer, like they do with GPUs and Xeon Phi.
Meanwhile, an active interposer is a chip that doesn't just have traces on it, but active routing logic. So moving some of the lower-power I/O elements to the active interposer is a natural progression. Nothing too special or extreme there; it just makes sense. Intel discussing a cache on the interposer, while only showing L2 and faster caches for the cores on the mounted dies, is why I got excited about Foveros as actual innovation: with their mismatched big.LITTLE-style cores (four Atom-type cores and one mainstream, high-performance core), the L3 on the active interposer could be shared between the core dies. And doing that through TSVs, on top of it being shared, that is true innovation.
Then you are here saying their chip, which they previously said had it, does NOT actually have that. Welp, there goes the innovation status. -
About the cache part: OK, I did notice it said cache. Sorry for being wrong on that. But if you look at the block diagrams for Lakefield, you'll notice the L2, and the L3 LLC (Last Level Cache), are on the compute die. So either they made a mistake, they're hiding it, or it's a possibility for the future.
Compared to current solutions based on HBM, EMIB saves cost. That's why Kaby Lake-G uses it. Also, with EMIB, the EMIB chiplet serves as the connection, instead of IF or UPI.
EMIB is higher power per bit of bandwidth, but it's cheaper than Foveros and easier to integrate. They said EMIB will be combined with Foveros for some products. Foveros isn't needed in Lakefield for the bandwidth, but compared to EMIB it saves space. EMIB also has the advantage that it doesn't need 3D stacks, so it can be used for higher power chips.
For larger chips, EMIB works fine. Higher power per bit isn't a problem for those chips either. Intel said 0.15pJ/bit for Foveros and 0.3pJ/bit for EMIB. That's 300mW for 100GB/s of bandwidth. 1TB/s with video cards can be served with 3W.
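(Sanity-checking those figures, since the 300mW and 3W numbers look like round-number approximations; a quick sketch of the energy-per-bit math:)

# Link power = energy per bit x bits per second.
def link_power_w(pj_per_bit, bandwidth_gb_per_s):
    bits_per_s = bandwidth_gb_per_s * 8e9
    return pj_per_bit * 1e-12 * bits_per_s

print(f"EMIB, 100 GB/s at 0.3 pJ/bit: {link_power_w(0.3, 100):.2f} W")      # ~0.24 W
print(f"EMIB, 1 TB/s at 0.3 pJ/bit: {link_power_w(0.3, 1000):.2f} W")       # ~2.4 W
print(f"Foveros, 1 TB/s at 0.15 pJ/bit: {link_power_w(0.15, 1000):.2f} W")  # ~1.2 W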
And the bolded part above doesn't make sense. EMIB replaces the expensive silicon interposers.
As to HBM on that, why do you think I mentioned IF traced through the substrate? Good job paying attention there. Interposer with data fabric. So you are catching half of what I'm talking about, at least that is good.
Now you go back to repeating Intel's PR line. Bored.
So, double the watts versus actually doing it on an active interposer, plus extra latency. Got ya.
Also, ignore the strikethrough and read it anyway. I accidentally hit that on my second edit and cannot find where this forum lets me turn it off for that part.
Edit: Here are some images to show the state of Intel's claim and where I got L3 on the active interposer, as well as showing HBM mounted on interposers for GPUs, where for Intel's implementation, it would go off the interposer to EMIB to connect to HBM.
Now, they could always be using the MLC as the go-between from the L2 for the small cores, with the big core having its own cache, but with them labeling cache on the interposer, I figured that is where they stuck the L3. Let me know if you know more on that.