Right now the 1950X is going for about $750 USD, so it makes some sense for the 2990X to land in the $1,500 range. Possibly even lower; I can only hope, and the people over at Intel can only cringe.
My other thought: the 7980XE goes for $2,000, and I am not sure AMD wants to price-match, but then again Intel has nothing as powerful yet.
-
I do not know if AMD is ready to take that marketing position, as even with Epyc they are still undercutting Xeon prices significantly. They seem to still be in the phase of establishing market presence and penetration before moving to premium pricing strategies. That is a good thing for now. They may be waiting until 7nm for that to occur.
On the leak it was mentioned that 4.2 GHz had strange temperature issues; I am now wondering if they were VRM temp issues. With the Taichi I can easily see an all-core of 4.0, but with diminishing returns of only an additional ~200 points in CB R15, I wouldn't bother.
-
Granted, with their comeback they COULD technically do it (they are still a corporation that wants to make profits, which are the number-one priority), though I don't know if they would, because they'd only be hurting themselves.
AMD knows that if you overprice a product, it can deter consumers.
Besides, they don't hold the majority of the market yet... and if they continue to undercut Intel on pricing (and even if they eventually win the majority of the market share from Intel, they would be smart to keep offering more affordable hardware), AMD stands not only to sell a large number of CPUs, but also to gain back even more market share.
Besides, the 12nmLP also dropped costs for Ryzen+ in comparison to Ryzen 1.
AMD has high yields and can scale up performance thanks to IF while keeping costs low.
They are still a small company, and need to win back a lot of market share on the GPU front (and server side) too.
$1,500-$2,000 tops seems like the price range for a 32-core TR... but I'm expecting/hoping it will be $1,500, because it's a doubling of cores... and that price would be in line with the difference between the Ryzen 1800X and the 1950X.
-
This could be reminiscent of the original issue with the 7980XE: a lot of boards with only 8 phases and only the main CPU power connection. The Taichi, besides having 11 phases, also has the extra 8-pin CPU power connection on the board. The lower-end boards will probably be OK stock, and overclocked with 400-watt packages and maybe up to 500 watts, but that is about it. 500W is probably iffy too, as we all saw what happened with the early X299 boards.
The good thing here is the TR2 will offer quite a bit more computing power than the 7980XE. And when the 28-core Intel is out, there is no way it will run on an 8-phase board, even at stock. You would probably need 16 phases just to think about running stock at 4 GHz, and even that may be iffy.
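To put rough numbers on why phase count matters at these package powers, here is a back-of-envelope sketch. The 1.2V Vcore and 90% VRM efficiency figures are illustrative assumptions, not measured board specs.

```python
# Back-of-envelope VRM load estimate. Vcore and efficiency are assumed values
# for illustration; real boards vary.

def vrm_load(package_watts, vcore=1.2, phases=8, efficiency=0.90):
    """Returns (amps each phase must carry, watts drawn from the EPS input)."""
    output_amps = package_watts / vcore
    input_watts = package_watts / efficiency
    return output_amps / phases, input_watts

amps8, eps_w = vrm_load(500, phases=8)
amps16, _ = vrm_load(500, phases=16)
print(f"500W package: ~{amps8:.0f} A/phase on 8 phases, "
      f"~{amps16:.0f} A/phase on 16; ~{eps_w:.0f} W from the EPS connector")
```

Roughly 50A per phase on an 8-phase board is why the extra 8-pin connector and higher phase counts matter once you push past stock.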
-
UFD Tech
Published on Jun 25, 2018
What are your thoughts on Intel's "tactics" throughout the years with regards to how they've treated the CPU market? Does it make you view them/their products in a different light?
-
Intel has only tried to protect itself in every way possible. They are not the first, nor the last, to engage in anti-competitive behavior. Why compete when you can just squash? No one goes to jail; they just pay a slap-on-the-wrist fine and it becomes just another day.
-
This relates to the timer issues that some have brought up with Ryzen CPUs on Windows 10. I was wondering if someone with the Kaby Lake, Coffee Lake, and/or Skylake-X platform could replicate this testing and post their results:
So, I did a little testing of my own this evening. I looked at using the ITSC, the HPET, and the RTC timers in Windows 10 Enterprise Build 1803 (the April version). I performed 10 runs with each timer at 100 bclk x 39.5 multiplier and 10 runs at 102 bclk x 38.75, which wound up comparing roughly 3948.96MHz to 3951.55MHz. That guarantees the higher-bclk runs should score slightly better than the 100MHz base (in this case, faster, meaning lower completion time). I am sharing the average score with the highest and lowest runs thrown out, to control, in part, for outliers. Here are the results:
           ITSC       HPET       RTC
SPI 100    10.49975   10.51238   10.51063
SPI 102    10.48513   10.50463   10.53063
GPUPI 100   7.279375   7.313125   7.34025
GPUPI 102   7.281125   7.286875   7.30775
As we can see, the problem persists, but I find the results curious. If you look at the RTC timer, it is off in the wrong direction with SPI but correct on GPUPI (measurements in seconds). With the ITSC timer we see the reverse: SPI is correct, but GPUPI moves the opposite way from what it should. With HPET, we see the correct result in both programs. Granted, this needs further testing, as well as an examination of the latency effects of each timer resolution on each program (and verification that GPUPI was actually using the timer it reported at the time, except for HPET, which is forced when turned on in the BIOS and the OS), but it is confirmation that the problem is there.
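The methodology above can be sketched in a few lines: drop the best and worst of the 10 runs, average the rest, and compare against the time predicted purely from the effective-clock ratio. The run times in the list below are made-up placeholders, not my actual results; only the clock math (100 x 39.5 vs 102 x 38.75) comes from the test setup.

```python
# Sketch of the comparison method described above. Run times are invented
# placeholders for illustration.

def trimmed_mean(times):
    """Average with the single best and worst run thrown out."""
    s = sorted(times)
    return sum(s[1:-1]) / len(s[1:-1])

def predicted_time(baseline_time, base_mhz, test_mhz):
    """With an honest timer, completion time scales inversely with clock."""
    return baseline_time * base_mhz / test_mhz

runs_100 = [10.48, 10.50, 10.51, 10.49, 10.52,
            10.50, 10.51, 10.49, 10.55, 10.47]
avg_100 = trimmed_mean(runs_100)
mhz_100 = 100 * 39.5     # 3950.0 MHz effective
mhz_102 = 102 * 38.75    # 3952.5 MHz effective
expect_102 = predicted_time(avg_100, mhz_100, mhz_102)
print(f"avg @3950MHz: {avg_100:.5f}s -> predicted @3952.5MHz: {expect_102:.5f}s")
```

A measured 102-bclk average that comes out slower than the predicted time (as in the RTC SPI column above) points at timer skew rather than a real performance change.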
ITSC is the default for Win 10 on these chips when the BIOS has HPET turned on but the OS has it turned off. When HPET is off in the BIOS, the RTC timer was used (I cannot remember whether I had HPET on in the OS while off in the BIOS to bring up the RTC timer). The HPET timer was used when turned on in both the BIOS and the OS. This is to give the community information so they can see the bug on this platform, with actual numbers for what is going on. It was a simple project I could do to show the issue with publicly available software. Related to this, there was a discussion of this issue, although approached differently, in this Anandtech article, due to discrepancies in their benchmark results related to gaming on Intel CPUs (more related to the latency of HPET when HPET is turned on on Intel's platform): A Timely Discovery: Examining Our AMD 2nd Gen Ryzen Results. I am sure the timer-agnostic programming of some software vendors plays a role in the issue as well.
I hope that this clarifies what was being discussed, to a degree, with actual examples that can be examined and replicated. I will be sending the images to the creator of CPU-Z as well, to inform them that their program does not correctly display the CPU speed in MHz when the OS is using the ITSC and RTC timers. I think I saw that Ryzen Master may also display the frequency incorrectly when HPET is not being used (something to test another night).
-
@Mr. Fox @D2 Ultima @bloodhawk @tgipier @yrekabakery @Papusan - please see the post above and if you can run similar tests on Intel's platform, it would be greatly appreciated.
This can also be used to identify the best timer resolution for a given bench. I cannot tell you where your MB mfr may hide the HPET bios switch, as it can vary. But I'd like more data points.
@Raiderman - could you do the same with your 2700X?
This is trying to get more data points, generally.
Sent from my SM-G900P using Tapatalk
-
https://www.tomshardware.com/news/amd-threadripper-core-i7-8086k-replacement,37357.html
https://www.pcworld.com/article/328...ntest-a-16-core-threadripper-cpu-instead.html
https://www.extremetech.com/computi...-2-teaser-video-hurls-down-gauntlet-for-intel
A side note: less than a year ago I dropped $1,000 on a 1950X. At $2,500 now, that might just make me wait a bit longer, or even for 7nm. Can't just keep throwing money away like that.
-
2nd Gen AMD Ryzen Threadripper Teaser
AMD
Published on Jun 21, 2018
Behold, The Destroyer of Threads. 32 core 2nd Gen Ryzen Threadripper coming in Q3 2018. Create with Heavy Metal.
TSMC Hits Volume Production Of 7nm Silicon With AMD Zen 2 And Vega 7nm Incoming This Year
by Rob Williams — Saturday, June 23, 2018
https://hothardware.com/news/tsmc-h...ith-amd-zen-2-and-vega-7nm-incoming-this-year
"AMD has had some great momentum since it unleashed its first Zen-based processors to market early last year. A little while later, we saw EPYC hit the enterprise market and Threadripper hit soon thereafter, to cater to high-end enthusiasts and workstation users. Recently, the company followed-up its initial Ryzen release with second-gen parts, and Threadripper 2 is also en route and due in a couple of months (which includes a beastly, 32-core part).
Based on what we're seeing right now, it doesn't appear this momentum is going to slow down anytime soon. We've already known for a little while that AMD planned to launch 7nm product later this year, and based on new information, it looks like the company won't have much problem delivering 7nm GPU and CPU solutions manufactured at TSMC by the end of the year.
One thing to note regarding AMD's initial 7nm moves, though, is that gamers and desktop users are not the initial targets. Enthusiasts are not even on the radar just yet. Instead, second-generation EPYC server processors are set to launch on 7nm, and begin sampling later this year, with full market availability coming in 2019.
On the graphics side, the idea of a 7nm Radeon RX Vega sounds great, but it'll actually be a Radeon Instinct card and, potentially, a new product in the Radeon Pro line-up. The Radeon Pro mention is interesting, because it's as close to a desktop use case as we'll get from 7nm in 2018, but that assumes the products all launch according to plan.
SIGGRAPH, the professional graphics conference, takes place in Vancouver in August, and it's at this show where AMD has generally announced its proviz products, like the Radeon Pro. We could learn a lot more at that event, though since it's still two months off, we may very well hear about something sooner than that. The rumor mill has been running rampant lately.
AMD's Lisa Su Holding A 7nm Radeon With 32GB HBM2
Even when it has bleeding-edge 7nm products in its portfolio, however, AMD will have a steep, uphill climb with its Radeon Instinct cards, as NVIDIA simply dominates the market right now with its Tesla GPUs. Just last week, Oak Ridge National Laboratory dropped over $100,000,000 on NVIDIA Volta-based GPUs used in the world's fastest supercomputer. That kind of success is hard to come by.
On the CPU side, 7nm EPYC processors are a very enticing prospect, especially considering Intel's current 10nm woes. Using the more advanced manufacturing process and incorporating the architectural tweaks expected with Zen 2 could allow AMD to eke additional IPC out of their chips, and increase frequencies, while also keeping power in check. All of those things are ideal for desktop and workstation users, but they mean higher density, more performance, and a lower TCO in the data center as well.
Though, with a 32-core 2nd Gen Threadripper incoming, users will not need to wait for 7nm before getting their hands on a truly high-end CPU option from AMD."
Waiting for the 7nm TR3 ODD rev might mean your current motherboard's power delivery is a better fit for 7nm than for the super-high demands of TR2. TR2 might require a new motherboard with higher power delivery.
-
For now they should concentrate on their core values so far: as good or better performance than Intel, at an undercut price. Since Intel's top consumer offering is $2,000, they should focus on that, forcing Intel to price-reduce not only the 7980XE but also the 22-core coming to the platform. Once the 28-core is out, if ever, a re-evaluation should occur.
Thing is, 7nm may be just about due then as well, at which point the normal Ryzen SKUs may need a re-evaluation of their own. This is why I was pushing for a unified release of all 7nm parts in the consumer space: all SKUs could be re-evaluated and brought into the premium price space at one time. So the time frame I am looking at here is Q3/Q4 2019.
-
-
Hardware Unboxed
Published on Jun 27, 2018
-
As you can see, the RTC clock clearly drifts away from what is seen with the HPET timer and the ACPI timer (I'm pretty sure the ACPI one is the ITSC). This is easier than the multiple runs (which I would still like to see), but it identifies the issue as well.
-
yrekabakery Notebook Virtuoso
-
I see a little more variance than this screenshot with an 8700K with HT on, but the drift really isn't there the way it is with the AMD timers on Win 10.
Now, I also noticed this year that the 2000-series consumer products all use the B2 stepping (revision 2), which in the original release was only seen in Epyc. Granted, a die shrink still occurred, but I wonder how much of the latency and other issues were already dealt with by using that revision, versus actual tweaks beyond the die shrink. If that saved money on R&D, it means AMD last year was pouring massively more into 7nm Zen 2 versus a split effort, which would be nice. If they get another 40+% over the original Zen, similar to the 52% over FX, then AMD would sweep next year. Just an insane amount more power than this year.
-
yrekabakery Notebook Virtuoso
-
well well well, would u look at that...
here is a VERY interesting news article where intel's first commercially available 10nm cpu from that lenovo laptop was analyzed in detail:
https://www.computerbase.de/2018-06/intel-10-nm-cpu-cannon-lake-die-size/
the analysis basically confirms what i previously stated concerning the marketing names for the different nodes by various manufacturers: 7nm is not equal to 7nm, and 10nm is not automatically "worse" than 7nm!
so according to this article, gate pitch, transistor density and interconnect pitch are all far superior not only to other 10nm nodes but even to 7nm nodes from the likes of samsung, tsmc and glofo.
the transistor density alone is 2.7x higher than on intel's latest 14nm++ node and still double that of the 10nm process nodes from other manufacturers. so going by transistor density, intel's 10nm process is more comparable to "5nm" nodes from other companies.
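for reference, here's the arithmetic behind those multipliers. intel's 14nm and 10nm densities are figures intel has published; the tsmc 10nm number is a commonly cited estimate, not an official figure.

```python
# Density arithmetic behind the claims above. Intel's 14nm/10nm figures are
# Intel-published; the TSMC 10nm figure is a commonly cited estimate (assumption).

intel_14nm = 37.5      # MTr/mm^2, Intel-published
intel_10nm = 100.8     # MTr/mm^2, Intel-published
tsmc_10nm_est = 52.5   # MTr/mm^2, estimate only

ratio_vs_14nm = intel_10nm / intel_14nm
ratio_vs_tsmc10 = intel_10nm / tsmc_10nm_est
print(f"{ratio_vs_14nm:.2f}x over intel 14nm, "
      f"{ratio_vs_tsmc10:.2f}x over tsmc 10nm (est.)")
```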
and THATS the reason why it took so long to get sufficient yields on that process: they just bit off more than they could chew, trying to create a 10nm process that is by faaar superior to other "10nm" fabrications.
disclaimer: im NOT an intel fanboy!!! if i were to get a new desktop system right now id opt for a TR2 system, no questions asked. BUT i always thought it fishy that after 20+ years of humongous R&D advantage over everyone else, intel "suddenly" lost their **** on a mere single node advancement.
so yeah, as i said above: marketing names for nodes dont say ****, they should rather state naked pure transistor densities and/or gate/interconnect pitches!
all in all, id expect intel to be able to compete with AMD 7nm cpus with 10nm skus of their own without too much trouble. theyre not just gonna suck just cuz of "marketing naming" differences between 10 and 7nm nodes
Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
-
-
edit: corrected -
1) They designed 10nm for EUV, and EUV doesn't arrive until next year. EUV was supposed to be available for use around 2015; it will not be ready until next year, and they are still working on getting pellicles correct. In light of this, my other comments will make sense.
2) Cobalt is used to help fight electromigration. Many of the others use 40nm cobalt; Intel went with 36nm. This gave an advantage in transistor density, which is why Intel has enjoyed such a lead. The problem is that at production sizes, this density is problematic for die shrinks due to defects (discussed in another point). So, without EUV, what was once an advantage is now a hindrance to getting 10nm working properly.
3) They removed a dummy gate. This would have significantly helped in increasing density. The problem is that if you have high defect density and cannot resolve it without EUV lithography, you wind up with imperfect or outright defective gates on the transistors, with no second dummy gate to guard against a defect in the neighboring transistor. What does that mean? YIELDS SUCK! Not all transistors are usable, and one defect can destroy the efficacy of two transistors. Sounds pretty bad to me.
4) Multi-patterning on current tech. Because EUV is not yet available, you have to use quad patterning or more to cut the channels in. The more passes you have to make over the wafer with the lithography, the greater the likelihood of defects; that is just part of the game. So Intel is really struggling with this at the moment, especially with the choice of size on cobalt, the removal of dummy gates, and their "hyper scaling."
Intel hit the same wall as everyone else. They banked on a tech being ready that was not actually available until 4 years after it was supposed to be. During those four years, everyone else caught up on physical transistor size, and everyone else has dual designs that work well with or without EUV, while Intel has only one process, meant solely for EUV, that they are trying to make work on older lithography. Also, when Intel compares to other 10nm designs, that is not a euphemism; they literally mean the fabs' other 10nm designs that were all but abandoned. Intel is obfuscating reality by not comparing the 7nm nodes to their 10nm, because it would show that the process gap is closed and Intel has to win on uarch. Guess what: we already know SMT is better than HT. We already know AMD is around BW/SKL with current chips, but will actually gain a lot more going to 7nm than Intel going to 10nm, as Intel have already confirmed that they will not have a better chip than 14nm++ (Coffee and Cascade) until 10nm++. That sounds pretty bad to me. So Intel isn't at other companies' 5nm, and if you believe that it is, you should sign over your power of attorney to me.
Edit: Here is Intel's own press slide from last spring. Note where the process nodes are and the transistor efficiency. Notice how the 14nm++ node (Coffee and Cascade) has more performance than their 10nm or 10nm+ process. Also, Intel has said delays in 10nm will delay their 7nm process. That means Intel has major issues.
https://www.custompcreview.com/news...efficiency-2-7x-increased-transistor-density/
Here is a post on reddit collecting links:
https://www.reddit.com/r/Amd/comments/7yhtuc/tsmc_samsung_and_globalfoundries_have_overtaken/
https://www.eetimes.com/document.asp?doc_id=1332965
Here is a good talk on density:
https://www.semiwiki.com/forum/content/7191-iedm-2017-intel-versus-globalfoundries-leading-edge.html
2.4 Density
When comparing process density there are many options in terms of metrics.
The size of a single transistor is the Fin Pitch (FP) multiplied by the Contacted Poly Pitch (CPP). The transistor sizes for the 2 processes are presented in table 3.
Table 3. Transistor size comparison.
By this metric GF's aggressive FP leads to a smaller transistor size. The problem with transistor size as a metric is it doesn't consider routing and isn't reflective of actual design area.
Actual logic design is done using standard cells so metrics describing standard cell size are more useful. Figure 4 illustrates a 7.5 track cell similar to Intel's 7.56 track cell.
Figure 4. 7.5 track standard cell.
In process density comparisons from a few years ago it was common to use CPP x MMP as a cell size metric. Table 4 presents that calculation for the two processes.
Table 4. CPP x MMP comparison.
By this metric Intel would appear to have the smallest cell size. The problem with this metric is that in recent years Design Technology Co-Optimization (DTCO) has become an important practice in technology development, and track height has become another scaling knob. From figure 4 we can see that the actual cell size is Track Height x MMP x CPP. Table 5 presents this data for both processes.
Table 5. Standard cell sizes.
By this metric GF has the smallest cell size. However, in the Intel section we discussed how Intel eliminated dummy gates at the cell edges and this enables tighter cell packaging.
Intel has recently tried to reintroduce a metric to the industry based on the area of a NAND cell weighted at 60% plus the area of a scan flip-flop weighted at 40%. Figure 5 presents the Intel method; this was also shown and discussed in the Intel paper.
Figure 5. Intel density metric.
The claim is that these cells and weightings are typical of logic designs. Intel has disclosed that by this metric their 10nm process achieves 100.8 million transistors per millimeter squared. There are two problems with this metric: the first is that Intel is the only company reporting based on it; the second is that the foundries contend it doesn't capture the subtleties of routing. In spite of these issues I have attempted to make my own estimates based on it. For Intel I get 103 million transistors per millimeter squared versus the 100.8 they report, and for GF I get 90.5 million transistors per millimeter squared. The big difference here is that GF requires dummy gates at the edge of the cell and Intel doesn't, and that gives Intel a big advantage in the scan flip-flop area.
High density SRAM cell size is 0.0269um2 for GF and 0.0312um2 for Intel so SRAM heavy designs will see an advantage with the GF process.
Ideally someone would design an ARM core in both processes and disclose how the sizes compare; barring that, after evaluating all of these metrics it appears these two processes offer similar density, and the size of a design will depend on how the specifics of the design match up with the process characteristics.
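The weighted-cell metric described in that passage reduces to a short formula. The cell areas used below are hypothetical placeholders (Intel discloses the resulting density, not all the cell dimensions), so treat the output as illustrative only.

```python
# Intel's weighted density metric as described above: 60% NAND2 cell density
# plus 40% scan-flip-flop cell density. Cell areas are hypothetical
# placeholders, not disclosed values.

def intel_density_metric(nand2_transistors, nand2_area_um2,
                         sff_transistors, sff_area_um2):
    """Returns MTr/mm^2; transistors per um^2 is numerically MTr/mm^2."""
    nand2_density = nand2_transistors / nand2_area_um2
    sff_density = sff_transistors / sff_area_um2
    return 0.6 * nand2_density + 0.4 * sff_density

# A 2-input NAND is 4 transistors; 32 transistors for the scan FF is a guess.
d = intel_density_metric(4, 0.040, 32, 0.300)
print(f"{d:.1f} MTr/mm^2 under these placeholder cell sizes")
```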
https://www.extremetech.com/computing/254209-details-leak-intels-upcoming-ice-lake-cpu-10nm-schedule
https://fuse.wikichip.org/news/641/iedm-2017-globalfoundries-7nm-process-cobalt-euv/
-
Here is another great discussion from anand forum and the links from the OP post as sources:
https://forums.anandtech.com/threads/in-depth-intels-10nm-was-definitely-not-too-ambitious.2548698/
Further reading
https://newsroom.intel.com/newsroom.../11/2017/03/Ruth-Brain-2017-Manufacturing.pdf
https://newsroom.intel.com/newsroom.../2017/03/Kaizad-Mistry-2017-Manufacturing.pdf
http://fpga.org/wp-content/uploads/2017/03/10nm-Hyper-Scaling.png
https://images.anandtech.com/doci/8367/14nmFeatureSize.png
https://en.wikichip.org/wiki/7_nm_lithography_process
https://en.wikichip.org/wiki/10_nm_lithography_process
https://electroiq.com/chipworks_real_chips_blog/
https://www.semiwiki.com/forum/cont...ersus-globalfoundries-leading-edge-page2.html
https://forums.anandtech.com/thread...nm-at-iedm-2017.2523567/page-10#post-39459384
https://twitter.com/witeken/status/1007220571745210368
https://twitter.com/lasserith/status/1007266404033335296
Now, when reading the forum post, keep in mind what I said about EUV and defects affecting the single dummy gate.
-
Meanwhile, with Intel saying their 10nm and 10nm+ are slower than 14nm++, and with AMD having aimed to compete with 10nm+ while smaller nodes bring more transistors and other benefits, Intel coming late to the party puts them in a bad spot. Intel also announced delays in 7nm here: https://www.pcgamesn.com/intel-7nm-production-delays . "And that makes it a very different beast to the current 10nm design, with far fewer steps involved in production with some estimates putting the process at nine steps versus the current 34 steps that the 10nm design requires right now." So yes, 7nm is pushed back, but not as far as 10nm was. And by cutting the step count to roughly a quarter of what 10nm needs, Intel could jump right back in.
"
But once 10nm does become a genuine thing for Intel don’t expect it to be quickly replaced by the subsequent 7nm lithography even when it does come good. Intel have found a huge amount of intra-node performance within the 14nm design, delivering a 70% speed bump from the first 14nm chips to the current Coffee Lake CPU. And they’re going to continue down that path.
“That isn’t a one off strategy for us,” Renduchintala says. “We’ll continue to see nodes living longer and an overlap of one node as we transition into another node and it will be a case of make before break, mix and match type capability. So, I think you’ll see that being a greater and greater part of our product roadmap going forward.”
The 14nm chips will continue on until the beginning of 2019 with the 8th Gen design, and from there Intel will begin moving on to 10nm and the 9th Gen cores.
“I'm excited about where we are in nine,” says Renduchintala. “Clearly very aware of the competitive environment, and I am sure we’re going to need to deliver our very best in order to make sure we maintain our lead.”
I’m sure they're going to have to be at the top of their game too, especially given that the 7nm AMD Zen 2 processors could well be on the market by then..."
But then comes a hard truth: in 2021, AMD will be using the 3nm process from GF, skipping 5nm because it doesn't offer enough of a performance increase. TSMC is considering jumping to 3nm as well for their primary node, but is moving ahead on the 5nm fab too. This means Intel will be right beside the other fabs on process for the foreseeable future, or possibly behind if they do not get 7nm running soon enough. Otherwise, they will be on their 7nm, which they say offers 2.4-2.7x the density of their 10nm, while that is the same factor estimated for the other fabs going from 7nm to 3nm on their processes. But density does not always track estimated transistor performance relative to existing nodes, which is why I showed the normalized curve for performance gains between nodes at GF, presented at the transistor conference.
So, there is a lot more going on here, and if things go according to expected timelines, Intel will have issues until the new uarch arrives around 2021 or 2022, replacing the iCore uarch. Then it is a question of what AMD can do with Zen 5 (they are skipping 4 because the number 4 is unlucky in some cultures). This is all why I am saying that Intel is in a bad spot, but that competition is back! Now if only AMD can do something to make independent software vendors adopt NUMA for GPUs....
-
-
Intel new CPU release delay may thin global notebook shipments
Cage Chao, Taipei; Willis Ke, DIGITIMES, Wednesday 27 June 2018
https://digitimes.com/news/a20180627PD207.html
"Global notebook vendors including HP, Dell, Lenovo, Acer and Asustek Computer will be unable to launch new models fitted with Intel's new-generation CPUs in the second half of 2018 as scheduled, as the release of Intel's new (10nm) offerings will not come soon enough for this year's high season, according to industry sources.
The delay has prompted the brand vendors to adjust downward their notebook shipment goals for 2018 while also weakening the growth momentum at supply chain players, the sources said.
Without the support of Intel's new-generation CPU, notebook vendors will have little to stimulate replacement demand, the sources said.
All they can do, the sources stressed, is to focus more on promoting gaming and business-use notebooks while continuing to lower the costs for consumer models by suspending the incorporation of innovative applications and functions originally designed to go with Intel's new CPU.
As a result, Taiwan's notebook ODMs said that their internal R&D departments now virtually have come to a standstill.
IC designers have also seen clouds cast over their revenue prospects for high seasons in the second half of the year, as the suspension of value-added functional designs will defer the demand for fingerprint recognition chips, touch control pens, and Type-C interface devices, among others.
The designers continued that notebook vendors are taking a conservative marketing approach, and new notebook models rolled out in the second half of 2018 will not bear high price tags. Accordingly, they opined, both ODMs and OEM contractors must work hard to lower the costs of related parts and components.
Now that Intel's new-generation CPU will not be available to support shipments of new notebook models in the second half of 2018, the global notebook shipments for the year are expected to fall further from 2017, with the declining trend likely to carry into the first half of 2019, industry sources indicated."
It's real, Intel's failing...and they are taking the vendors that relied on Intel with them...we can only hope AMD can deliver enough product to help some of those vendors survive.
Or, I guess enough clueless Intel fanboi's could step in and buy the same turgid marketing branded crap from Intel they always do, and Intel can skate through this with only a little 10nm ****-sandwich instead of the gigantic ****-storm Intel deserves.
Good news is, that 10nm ****-sandwich might only be the appetizer, and the whole 9-course ****-storm is still on the way for Intel.
-
-
Meanwhile, if AMD pushes to do the next mobile chips on 7nm first and delivers them in Q1 of next year, talk about a way to eat into Intel's market. But most likely those mobile chips will be 12nm, so....
-
-
Posting this video both here and in the AMD Ryzen thread, for different reasons. It is based on research papers that included an AMD researcher, and it references mesh-network interposers, which may explain why Intel went with a mesh on their current lineup (although Jim at AdoredTV did misstate when Intel adopted the mesh network: it theoretically existed with project Larrabee and was found on the Xeon Phi chips, at least by gen 2, before being used widely on their server and HEDT lineups). Meanwhile, this has obvious potential uses in AMD graphics cards and CPUs, which may be incorporated into future products, so it properly belongs in both threads (as some monitor one thread and not the other). Here is the video and the papers on which it is based:
http://www.eecg.toronto.edu/~enright/micro14-interposer.pdf
http://www.eecg.toronto.edu/~enright/Kannan_MICRO48.pdf
-
Cannot figure out the bad-news part of the article below:
https://www.pcper.com/news/General-Tech/Threadripper-2990X-rumours-come-good-and-bad-news
-
Always appreciate your in-depth analyses!
-
IDK how one would even set a price on a desktop 32 core CPU... it's just too wild and "out there" to know what it's worth to a potential owner.
2x the original price of the 1950X would be "fair", but it would chase away a lot of potential investment, and I think getting a lot of TR2 32-cores sold and running, for the social payback of making AMD shine in the public's view, would be even more important.
Not saying AMD should take a loss on them; in fact the opposite - AMD should make sure to keep their margins up, put it out there at current yield cost, and then lower the price over time as they did with TR1.
Then again, the TR1 16-core is only $699-$749, so a price of $1,350-$1,500 would make sense in the TR space today.
I don't think AMD *needs* to put it out at $1,000, but if that's what the yield costs support, why not?
-
Thing is, the TR1 had 4 dies, but just two were binned and active. The B1 stepping and the substrate apparently did not support 4 active dies, yet the cost was still fairly high. My bet is that at the $650 mark they would still make money, just not much. $1,300 to $1,500 is very reasonable to me, and they deserve the profit.
-
I was thinking $1500-$2K, personally, which would have been wonderful. Meanwhile, the retailer who put up the price had some information incorrect, so the listing was probably a placeholder, not the final.
And AMD is doing fine even at $1,500. If you take the dimensions and look at approximate yields, the costs of testing and speed-binning dies, package integration, etc., they should be doing just fine even closer to $1,500. Considering the price of the 16-core currently matches the 12-core, though, retailers could just be trying to move inventory for fear that they will now be getting the 24- and 32-core over the 16-core chips, which may well be justified. Either way, HEDT is usually a fairly high-margin product, so I don't think they are shooting themselves in the foot.
If you estimate a defect density of 0.1 per sq. cm on a 300mm wafer, and the cost per wafer is $3,000, you would estimate about 211 good dies per wafer at a cost of $14.22 per die. Even doubling the cost of the wafer, that is $28.44 per die. Multiply that by 4 and you get $56.88 per package on the lower wafer cost and $113.76 on the higher. Then you have integration, packaging, etc. I highly doubt that they are having an issue on margins, given the higher yields from the disaggregated dies.
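Those per-die figures can be roughed out with the standard dies-per-wafer and Poisson yield formulas. A sketch, assuming a Zeppelin-class die of ~213 mm² (the die area is not stated above, so this lands near, not exactly on, the ~211 figure):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Gross die count with a first-order edge-loss correction."""
    r = wafer_diameter_mm / 2
    return math.floor(math.pi * r**2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(defects_per_cm2, die_area_mm2):
    """Fraction of defect-free dies under the Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

die_area = 213.0                                   # mm^2, assumed Zeppelin die size
gross = dies_per_wafer(300, die_area)
good = math.floor(gross * poisson_yield(0.1, die_area))
for wafer_cost in (3000, 6000):
    print(f"${wafer_cost} wafer: {good} good dies, "
          f"${wafer_cost / good:.2f}/die, ${4 * wafer_cost / good:.2f} per 4-die package")
```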
Edit: Let's hope at 7nm they use a partially active interposer, with 1-10% active. That, with the Butterfly Donut topology, looks AMAZING for reducing inter-die and CCX routing latency. I would gladly pay extra over current prices for the interposer and integration onto it. Same if they put HBM2 on the package (4x 8GB stacks for 32GB running at 128GBps per stack, giving 512GBps bandwidth on package, while still supporting DDR4 off package for slower RAM; that would add at least $384+ to the price, but you would not have to buy DDR4, or could go with lower-cost DDR4, all while having incredible bandwidth on package).
I cannot decide if it would be better at 7nm to have 8-core chips and potentially do 64 cores that way, or to do 16-core chips to reach 64 cores but have HBM2 on package (just estimate adding $500 onto the price) for much higher bandwidth memory, which Intel would not be able to address for at least a gen or two. Interesting thoughts.
-
I am not a fan of HBM2 on die for the CPU, as it could then limit expansion. That being said, if there were 4 HBM2 8GB modules on an active interposer, and maybe even an 8-module HBM2 option, well.
-
Read the second article that went into the AdoredTV video. It almost suggested HBM2 as the first tier of memory, with DDR memory for the larger amounts. As such, if you did 32GB on package with each stack on its own channel, that is roughly 512GBps cumulative. You then could spend less and get slower DDR4, like 128GB of 2133, and still have more performance.
Right now, I get 106GBps read. This is more than quadruple that, and I paid $400 for that RAM. Imagine getting over 4x that speed for 32GB of RAM for around $500 more on the CPU. That really is value added, especially since you can still have the off-chip DDR4 (or 5 in the future).
-
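The comparison is straight arithmetic; the 128GBps-per-stack figure assumes HBM2's 1024-bit bus at a 1 Gbps/pin speed bin, and 106GBps is the measured quad-channel DDR4 read figure above:

```python
# Aggregate on-package HBM2 bandwidth vs. the quad-channel DDR4 read figure.
per_stack = 1024 * 1.0 / 8        # 1024-bit bus at 1 Gbps/pin -> 128 GB/s per stack (assumed bin)
hbm_total = 4 * per_stack         # four stacks -> 512 GB/s cumulative
ddr4_read = 106                   # GB/s, measured quad-channel figure quoted above
print(f"{hbm_total:.0f} GB/s on package, {hbm_total / ddr4_read:.2f}x the DDR4 read rate")
```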
-
What are the chances Whiskey Lake uses 14nm+++ instead of 14nm++? I wouldn't mind an extra 2 cores plus an additional 100+ MHz.
-
"The situation with 14nm is analogous to what GlobalFoundries and TSMC have done with their own process nodes — Intel just isn’t calling it an entirely new node. But there’s an inevitable limit to how much fine-tuning Intel can do, and given that it never planned to keep 14nm around as long as it has, I’d wager they’ve depleted most of the improvements they can offer."
https://www.extremetech.com/computi...ew-details-on-10nm-delay-future-14nm-products
Most people confuse the 14nm++ process with 14nm+++, but they are wrong. Here is which chips were built on which process:
14nm = Broadwell, Skylake, BW-E
14nm+ = Kaby, SK-X
14nm++ = Coffee, Cascade-X/SP, possibly whiskey as well unless they keep the Coffee nomenclature
There is an open question of whether the new 20 and 22 core Skylake-X will get the 14nm++ process, but I have my doubts on that. All of the 24-28 core Cascade chips will get it.
Also, there will be two chipsets for Coffee Lake. Z370 is clearly labeled a Coffee Lake chipset; Z390 is labeled a Cannon Lake chipset. Some digging has suggested they are the exact same chipset, but calling them different could be market segmentation to limit forward compatibility with the upcoming 10nm chips, which I fully expect to be the new Broadwell, if I'm being honest.
Edit: Also, by Intel's own words, 14nm++ is the best process until 10nm++; 10nm+ is only better than 14nm+. This could be why Intel is waiting until Q4 to release Cascade-X (MB vendors laughed at that timeline and said it is likely pushed to 2019), which also means potentially waiting until this fall or winter for the mainstream 14nm++ chips. Intel left 10nm in 2019 open ended, meaning likely second half, and considering that will be 2.5-3 years late (10nm was originally due in 2016) or more, they have to make this last until they can get out Cannon or Icelake. Even then, compared to the 14nm++ chips, the 10nm chips may be lackluster to many. This is why I really have my money on AMD potentially taking the crown next year.
-
You have to remember that for over 22 cores, a new socket will most likely be needed, along with beefier power delivery, etc. And since the original 28-core Xeon is 6-channel memory, will that require a different chipset as well?
The other issue is, if they do not go the 6-channel memory route, will that affect the performance they were touting for the 28-core system? There are just so many variables left on the table, because it was not a real system that will ever see market. I mean, even the cost of supporting the CPU: a 16-phase board capable of delivering 1kW to the CPU, 6 channels of memory and the sticks, a 1,500W PSU, etc.!
-
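To put the 16-phase/1kW point in numbers: at a core voltage around 1.2V (my assumption; the demo only gave the power figure), the per-phase current is already at the edge of typical 50-60A power stages:

```python
# Rough per-phase VRM load for the 28-core demo figures quoted above.
cpu_power_w = 1000                     # touted worst-case CPU power (from the post)
vcore = 1.2                            # assumed core voltage
phases = 16
total_amps = cpu_power_w / vcore       # ~833 A into the socket
per_phase_amps = total_amps / phases   # ~52 A per phase
print(f"{total_amps:.0f} A total, {per_phase_amps:.1f} A per phase")
```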
https://wccftech.com/intel-8-core-coffee-lake-s-september-22-core-skylake-x-2018-launch-confirmed/ -
Right, but what I am saying is there are a lot of other considerations with those over-22-core chips, way more than Intel is admitting. Just the almost-$400 case to house it all, compared to my $100 case. The cost overall will be insane, not including the cost to run it. In the end, I am saying Intel's solution is not really a viable one, as all it is then is throwing money and power at a problem to get the needed performance.
-
Honestly speaking, by Intel's own words we would have had 10nm by 2015. Intel can say what they wish, though right now the closest one coming out is Whiskey Lake. I have doubts that 14nm+++ exists; it'd be great if it did, or otherwise it'll just be an extra 2 cores for mainstream with extra heat we'd have to take care of in a laptop.
Intel mentioned Whiskey Lake will be the 3rd refinement/optimization, while calling Coffee Lake the 2nd refinement. If that's the case, there's a small hope. -
To top it off, Zen 2 APU's could feature much stronger IGPs... say, with 2 APU's of current capabilities (let's say 2500U or 2700U - or perhaps 2400G) interconnected with Infinity Fabric, you end up with an 8-core APU with an augmented Vega IGP (there's still the issue of the OS reading it as 2 GPU's as opposed to just one... but if AMD can make GPU MCM invisible to the OS, or Microsoft and Linux devs work with AMD towards this, they might be able to solve the problem before Zen 2 launches).
Or it could come implemented in Zen 2, with software patches for GPU MCM coming shortly after. If not, then 7nm+ seems like a good candidate with stacked GPU's (by which point it will HAVE to be done).
-
Core i5 9600 Spotted - Coffee Lake Desktop processor to be positioned in Core i5 9000 series - Guru3d.com
A microcode guidance file reveals a Coffee Lake S series featuring what's listed as 6+2 and 4+2 configurations.
-
Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc
Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.