All laptop GPUs used to be gimped compared to their desktop versions. The GTX 980 (the full desktop chip in a notebook) and Pascal changed that.
Here is how the 980M was gimped:
                    GTX 980    GTX 980M
Graphics Processor  GM204      GM204
Cores               2048       1536
TMUs                128        96
ROPs                64         64
Memory Size         4 GB       8 GB
Memory Type         GDDR5      GDDR5
Bus Width           256-bit    256-bit
Richard Zheng Notebook Evangelist
Nvidia gimped the 980M, but at least that was obvious given the "M". Yet they sell all the MX 150s as the same part despite huge TDP differences between variants. -
Richard Zheng Notebook Evangelist
-
Considering the price of NVMe, why wouldn't you get it for your browser typewriter? -
Because a SATA SSD is cheaper, I have no use for a machine I can't service, and I won't benefit from NVMe speeds in a consumer workload, let alone typewriting and browsing.
There isn't an explicit or implicit NEED for NVMe on such a machine -
Is SATA actually cheaper, though? NVMe is 1 TB for $110; a SATA SSD is 512 GB for $86. Is it worth not spending the extra few bucks to get NVMe?
Do I need NVMe? Nope. But at these prices I thought it would be dumb not to get it.
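A quick sketch of the per-gigabyte arithmetic behind those two prices (the $110/1 TB NVMe and $86/512 GB SATA figures quoted above; decimal capacities assumed):

```python
# Hypothetical per-gigabyte comparison of the two drives quoted in the post.
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    return price_usd / capacity_gb

nvme = price_per_gb(110, 1000)  # 1 TB NVMe drive
sata = price_per_gb(86, 512)    # 512 GB SATA SSD

# At these (quoted, possibly dated) prices the NVMe drive is actually
# cheaper per gigabyte, which is the poster's point.
print(f"NVMe: ${nvme:.3f}/GB, SATA: ${sata:.3f}/GB")
```

At these prices the "extra few bucks" for NVMe actually buys a lower cost per gigabyte, not a premium.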
Also, you're talking about laptop innovation while arguing that innovations in storage don't apply to you because you use a closed-system Mac from 2013, a machine explicitly designed to prevent upgrades. OK.... -
-
My sig doesn't state a Mac anywhere in it, does it? I was asked why I would consider the 2012 MacBook Pro and I laid out that scenario. Nowhere in that scenario does the system 1) even work with NVMe (or support the standard in general) or 2) warrant NVMe in any possible way for a consumer workload, let alone a workload of that level.
I even said in my statement that I liked the 2012 version because it's not closed. OS X seems to still work well on it, and Linux is quite friendly with it (or at least so I'm told).
So while you're quick to clown on me for owning the 2013 MacBook Pro (which I don't), I would suggest reading more carefully next time. I'm all for constructive criticism and arguments, as I'm happy to learn, but at least criticize MY scenario and not something out in the ether.
It's worth noting I wouldn't bother with anything more than a 240 GB 2.5" SSD to dual-boot Linux and Mojave. $300 is the grand total I would spend on something like this. -
I apologize then. I don't see sigs with my browser settings on mobile. All I saw was the discussion of the 2012 Mac.
Richard Zheng Notebook Evangelist
Damn keyboard takes up more than half the screen -
It does, but I only use it to look at sigs and go back to portrait.
Richard Zheng Notebook Evangelist
But with Max-Q you have no idea what beats what and why. A 1050 Ti Max-Q beats a 1050 Ti, but then a 1080 beats a 1080 Max-Q.
What fresh hell is this? -
custom90gt Doc Mod Super Moderator
*on edit*
Not everyone realized that the laptop GPUs of the past were cut-down variants. There was no such thing as a full desktop equivalent. Now we have desktop equivalents and underclocked/TDP-limited desktop equivalents for thin-and-light solutions.
Last edited: May 11, 2019 -
Richard Zheng Notebook Evangelist
-
Boost clock of 5 GHz on a 12-core Zen 2 CPU. Gotta run Cinebench all day and see how well it performs.
Last edited by a moderator: May 12, 2019 -
custom90gt Doc Mod Super Moderator
Sadly, no one knows what Zen 2's boost clocks are going to be yet; we still have to wait a bit. -
You are correct on Intel. 10nm+ (Ice Lake with Sunny Cove) should have LOWER transistor performance than 14nm++. That is from Intel itself. But 10nm+ will be CLOSE to 14nm++. Then there is the IPC increase from Sunny Cove. That should give 11% IPC, which is the average generational IPC jump for a new Intel core architecture. To put that in perspective, from all the rumors out there, that is an estimated 4-5% IPC over Zen 2. Not that impressive, but still a win. So the question is how much of a frequency regression Ice Lake will have.
Further, Intel's plan for the "S" SKUs is 14nm Comet Lake. Rocket Lake is rumored to be 14nm++ as well, meaning if they don't move that SKU to 10nm, Intel is trying to jump straight to 7nm in 2021-22.
Now, we also have the rumors surrounding AMD, such as the Zen 2 12-core running at 5GHz. We do not know if that is single core boost or all core boost.
If we look at the AMD page ( here), we see they only list the single core boost. Even assuming the same, that means reduce 5GHz by 200MHz and you have a chip estimated to run 4.8GHz with 50% more cores.
Now, we have already seen the 65W 8-core match a stock Intel 9900K. The only question is what frequency was used. If the 13% IPC number is to be believed, then we are talking about around 6% IPC over the current Intel chips. At 4.8GHz all-core, you should see the equivalent performance of an Intel 9900K/F at 5.088GHz, call it 5.1GHz, estimated. If the 12-core CPU does hit 5GHz, and there isn't a regression from having two dies on the chip, that would be equivalent to the 9900K running at 5.3GHz, except with 4 more cores. It would potentially match heavily overclocked Intel CPUs in single thread while also having more multi-thread power.
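The frequency-equivalence arithmetic in the paragraph above can be sketched directly. The 6% IPC advantage is the poster's estimate, not a measured value:

```python
# Effective single-thread performance ~ IPC * clock. If Zen 2 has ~6%
# higher IPC than Coffee Lake, a Zen 2 clock of f GHz matches an Intel
# clock of 1.06 * f.
IPC_ADVANTAGE = 1.06  # poster's estimated Zen 2 IPC lead over the 9900K

def equivalent_intel_clock(zen2_clock_ghz: float) -> float:
    return zen2_clock_ghz * IPC_ADVANTAGE

print(equivalent_intel_clock(4.8))  # ~5.09 GHz, the "5.088GHz" in the text
print(equivalent_intel_clock(5.0))  # ~5.3 GHz, the 12-core boost scenario
```

This is only a linear model; real workloads scale with memory, cache, and scheduler behavior as well as IPC and clock.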
Meanwhile, AdoredTV released a video discussing the upcoming Zen 3 Milan server product. There is no assurance it will come to consumers. Adding threads speeds up workloads, but not linearly. For example, SMT can add up to roughly 33% over a non-SMT CPU (for both AMD and Intel). By adding two additional threads per core, you are adding a more efficient queue system to lower idle time on the cores, thereby speeding up processing. This also needs a more efficient and better scheduler so that it does not cause slowdowns. So you may get a case where 4-way SMT takes that 33% up to 50%, but it will not double performance. It can also increase core heat, which can require lower frequencies (not always the case, but it IS a possibility one needs to be aware of).
As to the stacked memory I/O, that would really be a boon. People have argued with me that HBM latency is too high for this use, but they ignore the latency-optimized HBM2, as well as the higher-speed HBM2 chips that have been shown; faster transfer speed at roughly the same cycle latency means lower real-world latency. Then you just need to keep the HBM fed from DDR. Imagine 16GB-32GB of HBM2 or HBM3 on the package with a latency of 30-60ns (note, AMD's memory call latency is already around 60-80ns), fed by a larger RAM pool off chip. The HBM2 would supply 512GBps to 1TBps of bandwidth, itself fed from an 8-channel DDR4 or DDR5 system at roughly 160-320GBps. Also, those speeds are peak, not sustained. Overall, even with the latency, the bandwidth should make up for it in the context of a datacenter CPU.
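The DDR-side bandwidth figure above is easy to sanity-check. A sketch, assuming DDR4-3200 as an illustrative (not sourced) server speed grade:

```python
# Peak DDR bandwidth = channels * transfer rate (MT/s) * 8 bytes per transfer.
def ddr_bandwidth_gbps(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000  # GB/s

# 8-channel DDR4-3200, an assumed plausible server configuration:
ddr = ddr_bandwidth_gbps(8, 3200)
print(ddr)  # 204.8 GB/s, inside the 160-320 GB/s range quoted in the post

# Versus the 512 GB/s - 1 TB/s quoted for on-package HBM2:
hbm2_low_gbps, hbm2_high_gbps = 512, 1024
print(hbm2_low_gbps / ddr)  # HBM tier is ~2.5x the DDR feed at minimum
```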
So, either way, we have about 2 weeks to Lisa Su's keynote at CES. Then, it will be about 5 weeks from that that the first Zen 2 CPUs drop. That means in about 7 weeks, all the reviews will be out and we can put the silliness on saying AMD isn't competing on the high end to bed.
Also, when I mentioned 10nm, I have to re-emphasize Intel has ONLY said they will have low core count "u" and "y" type variants and entry level Xeons. That is NOT desktop parts. That is not high core count server parts. That is very limited.
Edit: I forgot to mention that the 65W desktop AMD 8-core already showed equivalent performance to a stock 9900K. Intel has a mobile 8-core with a 5GHz single-core boost, but we all know you often have 2+ cores active. That 65W AMD chip will likely outperform the Intel offerings for laptops, excluding Clevo DTRs.
The only question is whether they are partnering with Asus or Acer on a desktop chip in a laptop this round, whether they will require an AMD GPU or allow Nvidia GPUs to be paired, or whether the pairing will be with a Navi GPU (not talking about the APUs, just a desktop CPU paired with a dGPU). There is also the question of whether the Asus or Acer AMD desktop-CPU machines will receive BIOS updates or mods from the community to allow the drop-in of desktop Zen 2 CPUs. If they do, those owners will get a HUGE jump in performance.
Last edited: May 12, 2019 -
@ajc9988 imo giving intel an 11% ipc increase is way too much of an improvement. we no longer see any large gains, and i doubt it'll be more than the usual 3-4% ipc boost, which should be on par with zen 2 assuming zen 2 gets a 10-13% boost over zen+.
i'd be interested in seeing the power efficiency and overclocked throughput once the 10nm chips drop, however we do know desktop chips won't come until 10nm+ because intel knew 10nm is junk with likely crap frequency. we may see something along the lines of the old 5775c with moderate frequency.
also, the 4-way smt is a rumour right now, but i have little doubt it will come true. if AMD wants to win it needs to win now with everything they have; they'll need as much money as possible from the zen archs to fund the R&D to combat intel's future archs and the GPU side of things as well.
and yeah, that 5ghz is single-core boost. people would be crazy to think all 12 cores run at 5ghz; it's hard to imagine AMD shipping something that power-hungry as a sku, this isn't the FX-9590 days anymore.
that's also the reason i'm not going laptop any more. if the 16-core zen 2 isn't put into laptops, then the gap between laptop and desktop is just too much. i'm looking at the next TR for 32 cores, so a 16-core laptop is a must to fill that gap until TR 3 happens.
Last edited: May 12, 2019 -
https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
So, if it winds up being less, that could be a problem for Intel unless they hit a high average, like they got from the architecture change from Ivy Bridge to Haswell. Broadwell-to-Skylake IPC was rather low, as was Haswell-to-Broadwell. So you have a very good point there.
Edit:
I am interested as well, but if Intel is claiming lower performance, they are likely talking frequency.
As to what AMD is doing: given the 15-chiplet rumor, and AdoredTV's speculation that it is GPUs and CPUs on a single SoC, I'm wondering whether AMD is breaking out the active interposer for Cray's supercomputer. It would make sense, because for the performance and markup they will get, paying what it would cost AMD to implement it (roughly the markup over cost of doing a monolithic chip, unless prices have come down a fair amount since Dec. 2017) could actually be worth it.
Either way, AMD really is looking to make all the moves they can right now. Between active interposers for interconnects (they have the white papers and are just waiting for the cost at certain production-node packaging before adopting it; this is similar to Foveros, an active interposer that contains the IMC, I/O, and cache, IIRC), increased threads (which will also force M$ to optimize the scheduler for their chips), and continuing process-node shrinks, I really think they have a good thing going.
Now, if you are right and the IPC on 10nm is closer to 6%, Intel will be on par, but still able to hit higher frequencies. Then the question is price, especially if it is a 4.8GHz vs 5.1GHz same core count, same IPC (around 6% performance difference; that would be Zen 2 chips versus Comet Lake under the lower IPC hypothesis). Obviously, that is something we need the chips in hand, so worth a revisit in about 6 months, give or take a couple months.
But, yeah, an all-core 4.8GHz on a 12-core would be a beast. The same goes for however high the 16-core can clock, though it is competing with the 7960X/9960X, not just the 2950X/1950X. And if the IPC is higher (13% over Zen and about 9-10% over Zen+, which would mean approximately 5-6% over Intel's 9900K), it should be able to match the 9900K at around 4.35GHz, give or take, workload dependent. This all assumes no regression from the dual-die setup.
Last edited: May 12, 2019 -
tilleroftheearth Wisdom listens quietly...
I see it's back to the future (2015, this time), again.
Faulty logic: assuming that Intel is the same company today as it was a year ago, let alone 4 years ago.
Faulty logic: assuming Skylake-era gains will equal Ice Lake/Sunny Cove improvements.
Faulty logic: nested, multiple if/if/if statements.
See:
http://forum.notebookreview.com/threads/intels-upcoming-10nm-and-beyond.828806/page-10#post-10909761
Why is that link above included here? As a reminder that testing old processors on the latest platforms possible, with the latest O/S improvements available, does not show the actual increase in performance/productivity they brought in their time.
Nice write up if there was any content to sink our gray matter into... -
i'd still take 4.7ghz 16 cores any day over intel's 10 cores at 5ghz though; heat is a big problem with 14nm++ at this point, assuming zen 2's ipc really is 10%+ over zen+.
as for the 15-chiplet thing, it's not as relevant for me/you as it is for enterprises. i'd not want a binned chip stuck with a GPU that has a chance to fail, and vice versa. though in this case, if they are all on the same package, the issues are way fewer since there's no vrm etc on it; those should be on the mobo.
i'd still wanna get binned cpu/gpu separately.
we have a valid example of the 7nm improvement: vega vs radeon vii. a boost of 25% clock speed on the same arch, while still having more acceptable temps/power vs its predecessor.
assuming the zen+ 2700x averages 4.1ghz tops, an 8-core zen 2 will likely be able to hit 5ghz while using the same, if not less, power than a 2700x at 4.1ghz, and that is bloody amazing, all while not counting the IPC increase from the new design.
a 12/16c at 4.8ghz however will probably mean 150-200w tdp, but that is still damn fine considering what we have now: an 8-core 9900k at easily 200w. -
tilleroftheearth Wisdom listens quietly...
And the assumptions continue, and flip/flop depending on who we want to post a 'win' for.
The credit they deserve has been paid in full and shows on their bottom line and we've seen those results already. Bravo to what they have accomplished (and I mean that sincerely), but the same can be said for Intel too if a balanced and reasoned approach is taken (as it should be taken). Educated guesses and leaked info are one and the same, not to mention daily clickbait fodder for online rags today.
Unless you can vouch that you are the one who leaked any of it and can show us the proof that it is actually 'info', then an educated guess in tech is worth less than the electrons used to post it. Will some of this be proven to be true in the not so distant future? Possibly, but until then, your guess is just as good at being the wrong one as anyone else's.
So, based on years old tech, Intel's outlook looks pretty dim to you. Cool.
But, AMD will be able to catch up to the already in market i7-9900K and that is a great outcome for AMD? Cool again.
Wait! I forgot that it will have 4C/8T more when 99.99% of consumer workloads can't saturate the 8C/16T current champion today (yeah; Intel is on top).
And another faulty argument is presented: GPU year-to-year performance increases from node jumps, or any other advantage, are not comparable to CPU architecture in any way, shape, or form. They do different things, and those differences matter when you are setting down your expectations for the two. So forget transposing how well GPUs adapt to newer nodes onto CPUs, as you do below.
See:
https://www.tomshardware.com/news/amd-ryzen-3000-everything-we-know,38233.html
With regards to 7nm nodes:
Last edited by a moderator: May 13, 2019 -
most of your factual argument falls on deaf ears, simply because you're talking to consumers who know what they want and what they're getting into, so sadly you'll need to do better than that.
back when intel had the IPC + frequency lead, it was definitely my go-to, while for others at the time it was more about ethics in business practice, value, future-proofing etc.
zen 2, when it comes, will no doubt be neck and neck with intel, if not triumph over it, while having more cores, better value and better efficiency, and that is good enough for me to switch, regardless of how many "years" of market validation intel has had, since AMD has been around just as long.
nice try changing my mind though; i'll change when intel comes up with their newly stacked chiplet arch + 10nm+ minimum.
Last edited by a moderator: May 13, 2019 -
A platform is more than a CPU. Meanwhile, you are trying to justify buying a new board for each CPU generation. Such a joke, as anyone with a modded BIOS running 8000- and 9000-series chips on Z170 to Z270 boards can tell you.
As to them not talking about it: what company gives detailed information on chip performance before the official announcement and release? Take Intel's launch of the 9900K last fall. Do you remember Intel's internal benchmarks that were published during the NDA period before reviewers could discuss actual performance? Do you remember how that company intentionally gimped AMD's performance?
That happened about 2 weeks to a month before the official release, after the official announcement. CES is in about 2 weeks. The launch for purchase is about 5 weeks after that. But what you are asking for is performance numbers almost 7 weeks before the product is available on the market, which not even Intel provides. With Intel, you know core counts; just like with AMD leaks, you can guess at frequency, but that is not made certain until close to launch.
Now, AMD doesn't have as long a history on delivering on their products each generation. So it is proper to be skeptical. But also with Intel's recent history, they have shown they cannot be trusted either. Because of that, it is always best to wait for reviewers and official reveals that SHOW the actual performance, not just PR puff pieces or monster water chillers modified to run at negative degrees Celsius.
As to the 15-chiplet chips, those are what AMD have been pushing for supercomputing for years. Chiplets, HBM on package, CPU and GPU same package, etc. Now, the cool thing about chiplets is that you can bin the chiplets BEFORE being integrated onto the silicon substrate! That means you can find the best performing CPU core dies, the best GPU dies, the best performing I/O dies, then marry them with the package integration. But, by doing so, you can wind up setting clocks near the peak (less fun for overclockers, but consumers don't have to worry about performance left on the table). It is an interesting time to see this transformation in computer tech.
Also, for consumers, aside from me mentioning the binning before they are integrated on package, getting the better binned separate still is better for end consumers for the moment, because even with factory binning, there is still variance.
As to AMD's approximate TDPs, the 16-core is allegedly around 135W TDP. There are reasons for that. If you can deliver the same performance for half the power, which is AMD's claim, then theoretically the 16-core could run at 90W and still match a 1950X. That is impressive! Now, when you go to the half-way point, 135W, you get a good boost in frequency, around 12.5% on clocks (the 25% boost in clocks was at the same power draw). This is then coupled with the IPC improvement. A 13% IPC gain over Zen 1 and a 12.5% boost in clock speed over Zen+ sounds pretty good. But then we have to remember that TSMC's 16/12nm process is NOT the same as GF's 14/12nm; TSMC has a better process, so you also get gains from switching fabs. At 4.2GHz on a 1950X (which is what my machine runs at on a golden chip with 1.375V, so not representative of the ordinary 1950X) or on a 2950X (more common), 12.5% would clock the chip to 4.725GHz. I just wanted to put the isopower/isoperformance chart into perspective here. So for the same price, you will be approaching Intel's single-core performance while having twice as many cores as the 9900K. That sounds like a pretty good bargain.
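The isopower/isoperformance interpolation above can be sketched; the 180W baseline TDP for the 1950X and the linear interpolation between the two endpoints of AMD's claim are assumptions carried over from the post:

```python
# AMD's 7nm claim: same performance at half the power, OR +25% performance
# at the same power. The post interpolates linearly between the endpoints.
BASE_TDP_W = 180       # Threadripper 1950X TDP (assumed baseline)
BASE_CLOCK_GHZ = 4.2   # the poster's golden-sample 1950X all-core clock

half_power_w = BASE_TDP_W / 2                     # iso-performance point
midpoint_tdp_w = (BASE_TDP_W + half_power_w) / 2  # the rumored 16-core TDP
clock_gain = 0.25 / 2                             # half of the +25% iso-power gain
est_clock_ghz = BASE_CLOCK_GHZ * (1 + clock_gain)

print(half_power_w, midpoint_tdp_w)  # 90.0 W and 135.0 W, as in the text
print(est_clock_ghz)                 # ~4.725 GHz
```

Real frequency/power curves are nonlinear, so the linear interpolation is optimistic; it is only meant to reproduce the post's arithmetic.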
See, people say the leaked pricing is too good to be true. But it mirrors what was done to the HEDT market. AMD came in offering a 16-core for $1000, which matched the cost of an Intel 10-core and was $700 cheaper than a 7960X 16-core, or about 41% cheaper. If AMD comes in with their 8-core performance being able to meet the 9900K at stock with a 65W chip priced like half of what the 9900K is priced at, then they sell two to one what Intel sells. That is the approximate amount seen at MindFactory for their HEDT market segmentation, where 2% of all chips sold were AMD Threadripper, while Intel's HEDT made up 1% of their sales. In other words, it is an unfair fight and AMD will kick Intel while they are having manufacturing issues and supply issues to grow market share. The pricing matches what was done with the HEDT segment precisely. Not only that, due to the crypto bust last year, they sold less of their graphics cards. Because of that, the percentage of revenue that came from the CPU side grew, which also grew their margins. That means the CPU side actually has better profit margins than the GPU side (no surprise there, if being honest). So AMD has run the numbers and has a plan.
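The "41% cheaper" HEDT precedent above can be checked directly; the prices are the US launch prices quoted in the post:

```python
# HEDT precedent: AMD's 16-core 1950X at $1,000 vs Intel's 16-core 7960X
# at $1,700 ($700 more).
def discount(amd_price: float, intel_price: float) -> float:
    return (intel_price - amd_price) / intel_price

tr_discount = discount(1000, 1700)
print(f"{tr_discount:.0%}")  # 41%, the figure quoted above
```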
His not believing it, while not doing the above analysis to show whether there is possibility or not, does not impact the reality of the situation. It isn't about belief. It is about whether with the information at hand there exists the probability of the occurrence (how likely it is to occur), and whether the magnitude of the implication has the necessary and preceding conditions to argue for the occurrence.
Between my discussion of TDP and frequencies above, my cost analysis per CPU core die, the knowledge that the dies are binned and that some products in the stack carry higher margins than others, and the likelihood that the 6-core variants are harvested from defective dies that would otherwise have yielded ZERO profit (so their effective cost is below that $22 per die), I'd say the leaked pricing fits well within the 50% margin cited by Su.
So his dismissive non-analysis, which is similar to your dismissive non-analysis, just seems to miss the mark.
As to the quotes on the 7nm EUV process, that is 7nm+, not 7nm. That is Zen 3 compared to Zen 2, not Zen 2 compared to Zen/+. So you are misleading people with that quote. In fact, I've been saying that there is likely a slight frequency regression from Zen 2 to Zen 3 due to going from 7nm DUV to 7nm+ EUV which only does a couple layers in EUV on TSMC 7nm+. TSMC 5nm I believe is the first time EUV will be used for the entire stack, and that a recent 6nm half node was mentioned, which does more layers with EUV than 7nm+, but not full like 5nm.
Then, in regard to the density mentioned: those are SRAM numbers, not fully transferable to what is seen in final silicon, where Intel uses lower density for their CPUs, and each "+" revision is actually less dense than the original, which helps with heat density and allows increased frequency. Even then, what is cited is Intel's theoretical density compared to Apple's ACTUAL chip density, while ignoring HiSilicon's Kirin density, which was at either 93 or 98 MTr/mm2; even the theoretical densities beat Intel's as of 2017 ( https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html).
Meanwhile, the Snapdragon 8CX reached up to 94.6MTr/mm2. https://www.anandtech.com/show/13687/qualcomm-snapdragon-8cx-wafer-on-7nm
So you lecture us on theory, then throw out theory on density when convenient. Kind of funny.
Intel NEVER gives final, actual density, but it often is less than the theoretical limit for SRAM chips. Intel, because they were losing on density calculations, even created their own new way to calculate density. In other words, if you are losing, change the rules of the game so you always win. There has been tons of discussion surrounding Intel's new proposal and its accuracy, but by no means has it been adopted. But you seem to leave all those important notes out of your analysis. That is quite curious.
So I do agree, reality is a PITA, but it is hilarious you do not adequately cite context.
And I am waiting for Intel to deliver, considering their record since 2014 on process issues (yes, I'm including the delay of 14nm along with the failure to get a working 10nm until easily 3-4 years after they were supposed to have it).
Last edited by a moderator: May 13, 2019 -
Here are approximate transistor densities with some context (full source: https://www.techcenturion.com/7nm-10nm-14nm-fabrication ).
As you can see, comparing Intel's SRAM number (the only one Intel provides) to TSMC's HPC process is where the 100.8 vs 66.7 MTr/mm2 comparison comes from. But note that TSMC's low-power process for mobile chips sits at 96.5 MTr/mm2.
Now, let us examine an Anandtech chart looking at actual densities in final silicon:
https://www.anandtech.com/show/13687/qualcomm-snapdragon-8cx-wafer-on-7nm
Notice that for the companies that used the TSMC 7nm FF/FF+ node, Qualcomm reached to up to 94.6MTr/mm2, the HiSilicon Kirin reached 93.1MTr/mm2, and the Apple A12 Bionic reached 82.9MTr/mm2. Those are 98%, 96.5%, and 86%, respectively, of the theorized transistor density. That is pretty good.
But let's examine what happens when we look at Intel's 14nm process, with its theoretical density of 43.5MTr/mm2. Intel, with Skylake 4+2, achieved just 14.3MTr/mm2, or 33% of the theoretical value that Intel published.
Let's look at AMD's results. Using the Samsung/GF 14nm process with a theoretical transistor density of 32.5MTr/mm2, they achieved an actual density of 25MTr/mm2, or 77% of the theorized density. That is pretty good.
So, assuming that the achieved density versus theoretical will be approximately the same, while AMD is using the HPC TSMC process rather than the more dense low power variant, you would take the 66.7MTr/mm2 * 0.77 (77%), which equals 51.3MTr/mm2.
Now it is time for Intel. Taking the theoretical 100.8MTr/mm2 * 0.33 (33%), you get 33.3MTr/mm2, or roughly 18MTr/mm2 less dense than AMD.
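The projection in the last few paragraphs can be reproduced in a few lines, using the achieved and theoretical densities quoted above:

```python
# Achieved vs theoretical transistor densities (MTr/mm^2), figures from
# the Anandtech/TechCenturion numbers quoted in the post.
amd_ratio = 25.0 / 32.5     # ~0.77: Zen on Samsung/GF 14nm
intel_ratio = 14.3 / 43.5   # ~0.33: Skylake 4+2 on Intel 14nm

# Project those achieved/theoretical ratios onto the newer nodes:
amd_7nm_est = 66.7 * amd_ratio        # ~51.3 MTr/mm^2 on TSMC 7nm HPC
intel_10nm_est = 100.8 * intel_ratio  # ~33.1 (the post rounds the ratio
                                      # to 0.33, giving 33.3)
print(round(amd_7nm_est, 1), round(intel_10nm_est, 1))
```

Note this assumes each company hits the same achieved/theoretical ratio on the new node as on the old one, which is the post's working assumption, not a known fact.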
Now, one reason to use lower density is heat. By making the chip less dense, neighboring transistors contribute less heat, so the heat density is lower, which can allow higher frequencies at the same temperature than a denser chip. This is part of where Intel gets its high frequency. But with that, you also wind up with fewer transistors contributing to the work, so there is a theoretical IPC trade-off. This also isn't comparing final transistor count or die area, although those are provided in the source table. Looking at it that way, you can see why I am very impressed with Intel's engineers' ability to design microarchitecture: they get great performance with about 60% of the density of a Ryzen chip while achieving 25% more frequency, with the rest coming from IPC due to architectural design.
One should always show respect for achievements. AMD deserves respect for achieving the density that they have, Intel on microarchitecture. But to look at densities in a vacuum, especially theorized on SRAM instead of actual results achieved, is more than misleading.
So, the question comes whether AMD will loosen density a bit to achieve the higher frequencies or if TSMC's process alone is enough while keeping the higher density. That is an open question (which I am looking forward to seeing the answer).
To quote another in this forum you might recognize: "So, the assumptions you would like everyone to believe have been effectively nullified, for now."
-
tilleroftheearth Wisdom listens quietly...
Yeah, just like I thought. Nothing to see here, folks; let's find some more random articles from nobodies that support our indefensible position, or just keep moving along.
At least @ole!!! is being honest, I'm just talking to deaf ears.
Btw, I'm not trying to convince you or anyone else to change your mind. Just trying to expand it a little.
The points I've made stand and the walls of text from random people on the web do not change anything.
AMD started three years ago by pushing 'more cores'. Intel was forced to show they were trying to keep up to that mantra while battling chaos from internal and external forces. In the end, Intel got its focus back and the products they have made available during this dark period prove it as do the balance sheets.
Today, AMD is continuing to push 'more cores' along with (finally) better efficiency vs. Intel for the mobile sectors. All accounts though (from their own sources, please see my previous posts for links) show that increased performance vs. Intel (which is still holding the performance crown during all the past battles) may not be as much as should be expected. Of course, the AMD faithful gloss over those points I've made.
Intel, on the other hand, has emerged from their battles a little bit scathed, a little bit more humbled, and with a new and deeper understanding of what they need to execute on next. Their latest plans (again, please see my recent posts for links) are solid and show them executing them with newfound confidence (isn't it great when we find our way back to the path again).
From all the known facts from both sides, not merely mangled rumors and wishes, Intel will continue its dominance for the foreseeable future (yeah; and that includes the future that AMD just dropped their new TR from the public view just recently).
As I've said before, even without all the innovation and new projects going exactly as Intel would like, they are still on very stable ground for the time being.
Let's wait for each of them to have their swing for the fence in this crucial year ahead and see if the order of tech will be re-arranged.
The same goes for saying Intel holds the performance crown, when that is not exactly true. They will likely barely hold onto the single-thread performance crown, but get demolished in multi-threaded workloads. You ALWAYS seem to miss that point.
Moreover, what kept Intel's balance sheet looking good was that, due to their process problems, they had to keep too many products on 14nm. So they prioritized high-margin products over low-margin ones, and charged obscene prices for them. That worked for a while because demand was so high: replacing the processing capacity lost to the Spectre and Meltdown fixes, among other vulnerabilities, meant many companies had to scale out their deployments to make up for the lost processing power.
What happens when Intel's new Cascade Lake-SP CPU costs $18,000 (an increase of about $8,000 over their prior $10,000 flagship), while AMD comes in with a CPU likely to cost $6,000-8,000 but with 64 cores rather than their prior 32-core chips, at similar frequencies? I'll tell you: the Frontier contract with Cray, the contract with the organization that runs the LHC, etc. And if AMD can go lower than $6,000 on their flagship (it should be mentioned that Intel's CPU uses nearly double the power and would basically need water cooling in a server), Intel faces the problem of competing on price. Add the low yields on 10nm, the 14nm production-capacity issues that will continue at minimum into or through Q3 2019, and a price war shrinking Intel's margins, and your argument that Intel is still profitable is on shaky ground. They have already revised their revenues down significantly, and that is BEFORE the release of Zen 2. If, as I explained above in the cost-per-die analysis, AMD hits those price targets, or comes anywhere close, Intel gets hit with a new wave of pain.
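A rough per-core comparison of those server prices; the 28-core count for the $18,000 Intel part is an assumption (matching the core count of the prior $10,000 flagship), and the AMD prices are the post's rumored range:

```python
# Hypothetical per-core pricing from the post's figures: an $18,000 Intel
# flagship (28 cores assumed) vs a rumored $6,000-8,000 64-core AMD part.
intel_per_core = 18_000 / 28
amd_per_core_high = 8_000 / 64   # even at the top of the rumored range

print(round(intel_per_core), round(amd_per_core_high))  # 643 125
```

On these assumed numbers the per-core price gap is roughly 5x, which is the competitive pressure the paragraph describes.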
As to the TR rumor: word of it dropping from a Q4 release was quickly followed by the rumor of Zen 3 launching in Q1 2020 on the server side, which raises the question of whether AMD plans to release TR alongside the Zen 3 Epycs. That would make an extra wait of 3 months or less COMPLETELY worth it, as you would go from Zen+ to Zen 3 with 2 years of architectural updates. Or they could retire Ryzen TR and turn the workstation chips into overclockable, speed-optimized (like the 7371) 1P chips with the full feature set of an Epyc for $300-600 more than TR costs (which is currently the approximate premium of Epyc over TR), with 8-channel memory, 128-160 PCIe 4.0 or 5.0 lanes, etc. So do you really want to use that as an example? That part would also be able to compete with the 6-channel 28-core Xeon beast, but with better I/O and more memory bandwidth (at that core count, memory bandwidth is often more important than memory latency for many uses).
But you are correct. Let's let the batters get up and swing. It is a mere matter of a couple of weeks.
Last edited: May 13, 2019 -
I expect @tilleroftheearth is talking about mainstream.
Regarding HEDT: if AMD's 32 cores were to struggle to beat a 28-core Intel, I mean, something would have to be wrong, because it is quite natural that ~14% more cores should perform better.
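For what it's worth, the core-count advantage works out to about 14%, a quick sanity check:

```python
# Core-count advantage of a 32-core part over a 28-core part.
amd_cores, intel_cores = 32, 28
advantage = (amd_cores - intel_cores) / intel_cores
print(f"{advantage:.1%}")  # 14.3%
```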
+ This is a laptop thread. The title says *Is laptops innovation dead at the moment*.
-
Sometimes the fighter that lands the hardest punch or draws first blood catches the attention of the raging crowd, but that pisses off his opponent, and he may ultimately go down in a TKO after his opponent regains his senses. For now I am sticking with Intel and NVIDIA, but I am open to the idea of kicking them to the curb if AMD pulls a rabbit from their magic hat, assuming that Intel and NVIDIA do not respond with a death punch. Maybe we will see some long-overdue evidence that "for every action there is an equal and opposite reaction" begin to play out over and over again on the PC tech stage. And that's the thing... AMD went a very long time with no horse in either race (CPU and GPU), and it's nice to see a real demonstration that they are not asleep at the wheel, regardless of who ultimately emerges victorious. The winner will still be the one that gets my money, but who can hate the competition? The analysis and speculation kind of bore me, though, and I'd rather just wait and see how it turns out. Everyone watching a fight wins, and bitter rivalry is welcome, especially in PC technology.
-
Either way, one thing we can all appreciate is that with the extra revenue FINALLY making AMD profitable (literally, until the past quarter or two they were still losing money), AMD is doing good work investing in further R&D. But they haven't brought in enough, even after nearly quadrupling their server market share, to make a decent graphics card. See this from AdoredTV
Although anything, if priced right, will sell. So expect more pain on that front, and we can hope Intel's new graphics cards will help (but remember, Raja is running the shop, and he did have a hand in Vega and Navi, although there are many reasons Vega sucked it up).
But the next couple years will be interesting. -
AMD Readies Radeon RX 640, an RX 550X Re-brand
-
As to HEDT, that is likely due to two of the four dies not having direct memory channels, while Intel has 6 channels of memory helping with bandwidth per core, along with an IPC deficit of about 3-4%, a frequency deficit of a couple hundred MHz, and AMD's SMT implementation being better, but not by enough to make up for the other deficits.
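As a rough illustration of the bandwidth-per-core point (a sketch only: it assumes DDR4-2666 at ~21.3 GB/s per channel and ignores the extra NUMA hop on the two TR dies without direct channels, which makes AMD's effective figure even worse):

```python
# Peak memory bandwidth per core: 6-channel/28-core Intel vs 4-channel/32-core TR.
# Assumes DDR4-2666 at ~21.3 GB/s per channel; illustrative, not measured, numbers.

def gbs_per_core(channels, cores, gbs_per_channel=21.3):
    return channels * gbs_per_channel / cores

intel = gbs_per_core(6, 28)  # ~4.56 GB/s per core
amd = gbs_per_core(4, 32)    # ~2.66 GB/s per core
print(f"Intel: {intel:.2f} GB/s/core, AMD: {amd:.2f} GB/s/core")
```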
Now, let's discuss how this fits into laptops. Intel is starting to make 14nm chips with up to 8 cores for laptops, which do still hit a single-core frequency of 5GHz, but an all-core frequency that is WAY lower than the desktop parts.
AMD showed with the 65W chip that they can deliver the performance in a power-limited envelope. Once those chips fit within a 45W envelope, Zen 2 will actually force Intel to work harder, whether through IPC or frequency changes, to increase the performance of their mobile chips.
For DTR systems, those use mainstream desktop chips, as no mfr. will try, like they did with the P570WM, to squeeze an HEDT chip into a mobile form factor. But if they can find a way to support the 135W 16-core, I think the few who need it would pay handsomely for it (even if I think that is a pipe dream).
Meanwhile, this means AMD will finally be able to compete with Intel on higher-powered laptop CPUs, starting as soon as they get their APU and mobile lineup out, which is a couple of quarters away. That excludes DTR systems, although with that 65W chip they may be competing on the DTR front as well before too long.
Navi was a product that was developed in conjunction with Sony. Vega was in conjunction with Apple. What comes next? Well you have the next-gen developed with Microsoft with Arcturus being a specific chip in that lineup, potentially. You have the GPU that will be developed from the money received from the Frontier deal with Cray for the 1.5 Exaflop system deliverable in 2021, plus the cash coming from Shasta from Cray, deliverable in 2020, etc.
So, what started development in 2017, when Ryzen dropped and AMD started pouring more into R&D, puts the fruits around 2021-22. That is something to remember.
-
Support.3@XOTIC PC Company Representative
-
Although Intel brands its juicy $600 BGA chips as i9, they are far from being HEDT. Not even close.
for intel's 10nm, they could be denser than TSMC's 7nm, and ultimately it still comes down to real world performance. so far the only thing i have to go by is their failed plan of paying lenovo to ship 10nm cpus with the broken iGPU disabled in their machine. anandtech tested the efficiency and it's hardly any improvement. i know that's from at least a year ago, so maybe the new 10nm has improved, but there's too little data to go by, so i'll stick to assuming zen2 will obliterate intel.
when talking only about value with the 1700x/2700x, to me, someone who wants performance, it's not that enticing. i'd still pay a 60% premium for that extra performance if my budget allows it. the same applies when you talk about intel's historical facts and future facts etc; they've got nothing to show.
with AMD, they currently do have some leaks, and the leaks suggest it is on par with intel. even assume it is only close, within 2-3% on ipc or frequency, or both. ASSUMING it is BOTH and zen2 is STILL behind intel 2-3%, i'd say it is good enough for me to take 3% less frequency AND 3% less IPC while having double the amount of cores and using less power.
-
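That tradeoff checks out on back-of-the-envelope math. Modeling multi-thread throughput as cores x frequency x IPC (a naive model that assumes perfect scaling, which real workloads won't reach), doubling the cores dwarfs a 3% deficit on each axis:

```python
# Naive multi-thread throughput model: cores x frequency scale x IPC scale.
# Assumes perfect scaling across cores, which real workloads will not achieve.

def relative_throughput(cores, freq_scale, ipc_scale):
    return cores * freq_scale * ipc_scale

intel = relative_throughput(8, 1.00, 1.00)  # hypothetical 8-core baseline
amd = relative_throughput(16, 0.97, 0.97)   # double cores, 3% behind on both axes
print(f"{amd / intel:.2f}x")  # 1.88x
```

Even in that worst-case leak scenario, the 16-core part comes out nearly twice as fast in this simple model.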
On the new 10nm Ice Lake, there should be more of an improvement. Cannon Lake was only a die shrink, not an architectural refinement. Ice Lake should have the refinement, plus it should be on the 10nm+ process, meaning there should be less frequency regression compared to what was seen from that token chip.
-
Support.3@XOTIC PC Company Representative
tilleroftheearth and Papusan like this. -
Laptops desperately need 5Gig-E. Gigabit Ethernet isn't good enough, and nobody wants to carry around a dongle.
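To put rough numbers on it, here is what the jump from 1GbE buys on a large file transfer (line-rate only; real transfers lose some of this to protocol overhead):

```python
# Time to move a 50 GB file at common Ethernet line rates, ignoring overhead.

def transfer_seconds(size_gb, link_gbps):
    return size_gb * 8 / link_gbps  # GB -> gigabits, divided by line rate

for gbps in (1, 5, 10):
    print(f"{gbps:>2} GbE: {transfer_seconds(50, gbps):.0f} s")
# 1 GbE: 400 s, 5 GbE: 80 s, 10 GbE: 40 s
```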
-
-
tilleroftheearth Wisdom listens quietly...
-
Yeah, I hope 10GbE becomes the standard soon.
-
I go back and forth internally on whether innovation is dead or not, but I feel like I see promising ideas here and there, mostly small things.
As an example, HP's new Omen gaming laptop will be the first to have liquid metal applied at the factory, which I'm thinking will be a positive trend if it catches on. The 2nd-screen aspect I'm not so sure about yet, but it's definitely an innovation and not something I've seen, at least in that location, before.
-
Support.3@XOTIC PC Company Representative
Thought ASUS was going the LM route too. The second screen I'm not sold on; there were a couple of models that did that in the past (usually replacing or flipping the numpad), and I don't recall that being terribly popular.
Is laptops innovation dead at the moment
Discussion in 'Hardware Components and Aftermarket Upgrades' started by cooldex, Apr 30, 2019.