The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to preserve the valuable technical information that had been posted on the forums. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Is laptop innovation dead at the moment

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by cooldex, Apr 30, 2019.

  1. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,808
    Trophy Points:
    331
    ALL laptop GPUs were gimped compared to their desktop versions. The notebook GTX 980 and Pascal changed that.

    Here is how the 980M was gimped:

                            GTX 980     GTX 980M
    Graphics Processor      GM204       GM204
    Cores                   2048        1536
    TMUs                    128         96
    ROPs                    64          64
    Memory Size             4 GB        8 GB
    Memory Type             GDDR5       GDDR5
    Bus Width               256-bit     256-bit
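    The cut-down is easy to quantify from the spec list above; a quick sketch (numbers are taken straight from the comparison):

```python
# Ratio of the mobile part to the desktop part, per the specs above.
gtx_980  = {"cores": 2048, "tmus": 128, "rops": 64}
gtx_980m = {"cores": 1536, "tmus": 96,  "rops": 64}

for unit in gtx_980:
    pct = 100 * gtx_980m[unit] / gtx_980[unit]
    print(f"{unit}: 980M has {pct:.0f}% of the desktop 980")
```

    Cores and TMUs are cut to 75% while the ROPs and memory bus are left intact, which is why the 980M landed well behind its desktop namesake.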
     
    tilleroftheearth and Porter like this.
  2. Porter

    Porter Notebook Virtuoso

    Reputations:
    786
    Messages:
    2,219
    Likes Received:
    1,044
    Trophy Points:
    181
    I think he means compared to the desktop 980. All mobile "m" variants were 30-50% slower until that point. Nowadays the gap is sometimes single digits, so close that a good overclock can make the mobile version as fast as or faster than the desktop. That is crazy good compared to the old days. No way I will ever go back to a desktop now.
     
    tilleroftheearth and custom90gt like this.
  3. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    I meant how variants of the 980M differ from each other, not how the M cards differ from the desktop versions.

    Nvidia gimped the 980M, but that was obvious given the "m" suffix. But they sell all MX 150s as the same card despite huge TDP differences
     
  4. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    Oh, I meant gimped versions of the same card. Most 980Ms performed about the same as each other IIRC, while newer GPUs vary so much despite having the same names
     
    Papusan likes this.
  5. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    Desktop cards can be gimped like that as well. There are actually two RTX 2080 SKUs, for example, and one doesn't clock as high as the other. All those cheap blower-style RTX 2080 cards have the lower-rated GPU and can't clock as high. Differences like this have been common for quite some time now.
     
  6. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    I know they did it with 1060s... Now on 2080s?
     
  7. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Can't think of a reason not to get NVMe though. I got an ADATA 8200 Pro 512GB (3500 MB/s) for $86, and a 1TB ADATA 6000 Pro (1500 MB/s) for $110.

    Considering the price of NVMe, why wouldn't you get it for your browser typewriter?
     
  8. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,299
    Trophy Points:
    431
    Because a SATA SSD is cheaper and I have no use for a machine I can't service, not to mention I won't benefit from NVMe speeds in a consumer workload, let alone typewriting and browsing.

    There isn't an explicit or implicit NEED for NVMe on such a machine
     
  9. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Is a SATA SSD cheaper? 1TB for $110... how much is a 1TB SATA SSD? 512GB for $86. Is it worth not spending the extra few bucks for NVMe?

    Do I need NVMe? Nope. But at these prices I thought it would be dumb not to get it.

    Also, you're talking about laptop innovation while saying innovations in storage don't apply to you because you use a closed-system Mac from 2013, a machine explicitly designed to prevent upgrades. OK....
     
  10. rinneh

    rinneh Notebook Prophet

    Reputations:
    854
    Messages:
    4,897
    Likes Received:
    2,191
    Trophy Points:
    231
    Yeah, there is an A and a B line, and the lesser line is used in the budget offerings, which don't clock well even if you slap a liquid cooling block on top. Kinda sucks, because you really have to check the component serial numbers on the boxes to see what you get. The base specs are the same, but it won't boost as far and can't clock as high.
     
  11. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,299
    Trophy Points:
    431
    A Mac system from 2013 is not even being considered and was never presented as a potential option in my scenario.

    My sig doesn't state a Mac anywhere in it, does it? I was asked why I would consider the 2012 MacBook Pro and I delivered that scenario. Nowhere in that scenario does the system 1) even work with NVMe (or the standard in general) or 2) warrant NVMe in any possible way for a consumer workload, let alone a workload of that level.

    I mean, I even said in my statement that I liked the 2012 version because it's not closed. OS X seems to still work well and Linux is quite friendly with it (or at least I'm told).

    So while you're quick to clown on me for owning the 2013 MacBook Pro (which I don't), I would suggest reading more carefully next time. I'm all for constructive criticism / arguments, as I'm happy to learn, but at least criticize MY scenario and not something out in the ether.

    :)

    It's worth noting I wouldn't bother with anything more than a 240GB 2.5" SSD to dual boot Linux and Mojave. $300 is the grand total I would spend on something like this.
     
  12. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    I apologize then. I don't see sigs with my browser settings on mobile. All I saw was discussion of the 2012 Mac.
     
    Reciever likes this.
  13. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,299
    Trophy Points:
    431
    Rotate to landscape for that info to appear :)
     
    Porter likes this.
  14. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    Landscape browsing sucks :(

    Damn keyboard takes up more than half the screen
     
  15. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,299
    Trophy Points:
    431
    It does, but I only use it to look at sigs and go back to portrait.
     
  16. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,808
    Trophy Points:
    331
    My point was they gimped the hardware on the mobile cards; now they gimp the firmware on the MQ cards. At least some of the firmware gimps can be circumvented by flashing the BIOS from another vendor; you couldn't add CUDA cores...
     
  17. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    I found hardware gimps to be quite uniform across cards. You knew a 980M was worse than a 980. You knew a 950 was worse than a 950 Ti.

    But with Max-Q you have no idea what beats what and why. A 1050 Ti MQ beats a 1050 Ti. But then a 1080 beats a 1080 MQ.

    What fresh hell is this?
     
  18. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,808
    Trophy Points:
    331
    There is the convolution of the naming (and the cases where they don't label it as MQ), but an XXXX card will beat an XXXX-MQ card. I haven't seen a 1050 Ti MQ beating a 1050 Ti unless it's been overclocked or is in a laptop with much better cooling, which allows for better boost.

    *on edit*

    Not everyone realized that the laptop GPUs of the past were cut-down variants. There was no such thing as a full desktop equivalent. Now we have desktop equivalents and underclocked/TDP-limited desktop equivalents for thin-and-light solutions.
     
    Last edited: May 11, 2019
    katalin_2003 likes this.
  19. Richard Zheng

    Richard Zheng Notebook Evangelist

    Reputations:
    41
    Messages:
    340
    Likes Received:
    158
    Trophy Points:
    56
    I’ve seen a 1050 Ti MQ besting a 1050 Ti if the cooling was good enough. The Max-Q part of the name doesn’t mean squat sometimes, but other times it does in fact mean lower performance.
     
  20. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    30% more performance going from M to non-M, and the cost went up by 150%. $600 for a 680M/780M sold by Dell Alienware; now the 1080 costs $1500, wowzer.

    A 5GHz boost clock on a 12-core Zen 2 CPU. Gotta Cinebench all day and see how well it performs.
     
    Last edited by a moderator: May 12, 2019
    ajc9988 likes this.
  21. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,808
    Trophy Points:
    331
    No, it always means lower potential. If you have a laptop with great GPU cooling that won't limit the card, the non-MQ will win over the MQ. It's not a difficult thing; the TDP-limited part is just that, limited...

    Prices of gaming GPUs didn't just now increase in laptops; the MXM cards were always randomly expensive. $600 sounds cheap for a 780M from what I remember.

    Sadly no one knows what Zen 2's boost clocks are going to be yet; still have to wait a bit.
     
    Aroc and ole!!! like this.
  22. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    OK, so I missed this until last night, but was not in a clear frame of mind to discuss this.

    You are correct on Intel. 10nm+ (Ice Lake with Sunny Cove) should have LOWER transistor performance than 14nm++. That is from Intel itself. But 10nm+ will be CLOSE to 14nm++. Then there is the IPC increase from Sunny Cove, which should give 11% IPC, the average Intel generational jump from a new core arch. To put that in perspective, from all the rumors out there, that is an estimated 4-5% IPC over Zen 2. Not that impressive, but still a win. So the question is how much of a frequency regression Ice Lake will have.

    Further, Intel's plan for "S" SKUs is 14nm Comet Lake. Rocket Lake is rumored to be 14nm++ as well, meaning if they don't move that SKU to 10nm, then Intel is trying to jump straight to 7nm in 2021-22.

    Now, we also have the rumors surrounding AMD, such as the Zen 2 12-core running at 5GHz. We do not know if that is single core boost or all core boost.

    If we look at the AMD page (here), we see they only list the single core boost. Even assuming the same here, reduce 5GHz by 200MHz and you have a chip estimated to run at 4.8GHz all-core with 50% more cores.

    Now, we have already seen the 65W 8-core match a stock Intel 9900K. The only question is what frequency was used. If the 13% IPC number is to be believed, then we are talking about around 6% IPC over the current Intel chips. At 4.8GHz all core, you should see the equivalent performance of an Intel 9900K/F at 5.088GHz, or 5.1GHz, estimated. If the 12-core CPU does hit 5GHz, and there isn't a regression from having two dies on the chip, then that would be equivalent to the 9900K running at 5.3GHz, except that it also has 4 more cores, meaning it would be matching heavily overclocked Intel CPUs in single thread, potentially, while also having more multi-thread power.
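    The clock-for-clock equivalence above is just clock times relative IPC; a quick sketch (the ~6% IPC edge and the clocks are the post's estimates, not measurements):

```python
def equivalent_intel_clock(amd_clock_ghz, amd_ipc_edge=0.06):
    """Intel clock needed to match an AMD chip that has an IPC edge."""
    return amd_clock_ghz * (1 + amd_ipc_edge)

print(equivalent_intel_clock(4.8))  # ~5.09 GHz, the 9900K-equivalent quoted above
print(equivalent_intel_clock(5.0))  # ~5.3 GHz
```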

    Meanwhile, AdoredTV released a video discussing the upcoming Zen 3 Milan server product. There is no assurance it will come to consumers. Threads speed up workloads, but not linearly. For example, SMT can add up to roughly 33% over a non-SMT CPU (talking both AMD and Intel). By adding two additional threads per core, what you are doing is adding a more efficient queue system that lowers core idle time, thereby speeding up processing. That also requires a more efficient, better scheduler so it doesn't cause slowdowns instead. So you may see a case where going from 2-way to 4-way SMT takes the gain from 33% up to 50%, but it will not double performance. It can also increase core heat, which can require lower frequencies (not always the case, but it IS a possibility one would need to be aware of).
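    The SMT scaling argument can be sketched as simple arithmetic (the 33% and 50% figures are the rough numbers used above, not benchmarks):

```python
def with_smt(base_throughput, smt_gain):
    """Throughput after enabling SMT, given a fractional gain."""
    return base_throughput * (1 + smt_gain)

one_thread = 100.0
two_way  = with_smt(one_thread, 0.33)  # ~133: typical 2-way SMT gain
four_way = with_smt(one_thread, 0.50)  # ~150: speculated 4-way ceiling
# Doubling threads again adds ~17 points, not another 33: the extra threads
# only fill idle issue slots, so returns diminish well short of 2x.
```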

    As to the stacked mem I/O, that would really be a boon. People have argued with me that HBM latency is too high for this use, but they ignore the latency-optimized HBM2, as well as the higher-speed HBM2 chips that have been shown; faster transfers at roughly the same cycle latency mean lower real-world latency. Then you just need to keep the HBM fed from DDR. Imagine 16GB-32GB of HBM2 or HBM3 on the package, with a latency between 30ns-60ns (note, AMD's memory call latency is already around 60-80ns), fed by a larger RAM pool off chip. Couple that with the bandwidth of HBM2, which would be 512GBps to 1TBps, fed from an 8-channel DDR4 or DDR5 system at approximately 160-320GBps. Also, those speeds are peak, not sustained. Overall, even with the latency, the bandwidth should make up for it in the context of a datacenter CPU.
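    The peak bandwidth figures above fall out of bus width times transfer rate; a rough sketch (DDR4-3200 and a four-stack HBM2 setup are illustrative assumptions, not anything AMD has announced):

```python
def peak_bw_gb_s(channels, mega_transfers_s, bytes_per_transfer=8):
    """Peak bandwidth in GB/s: channels x MT/s x bytes per transfer."""
    return channels * mega_transfers_s * bytes_per_transfer / 1000

# 8-channel DDR4-3200, 64-bit (8-byte) channels: within the 160-320GBps range above
print(peak_bw_gb_s(8, 3200))
# Four 1024-bit (128-byte) HBM2 stacks at 2 GT/s: the ~1TBps end of the range
print(peak_bw_gb_s(4, 2000, 128))
```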

    So, either way, we have about 2 weeks to Lisa Su's keynote at CES. Then, it will be about 5 weeks from that that the first Zen 2 CPUs drop. That means in about 7 weeks, all the reviews will be out and we can put the silliness on saying AMD isn't competing on the high end to bed.

    Also, when I mentioned 10nm, I have to re-emphasize Intel has ONLY said they will have low core count "u" and "y" type variants and entry level Xeons. That is NOT desktop parts. That is not high core count server parts. That is very limited.

    Edit: I forgot to mention that the 65W desktop AMD 8-core already showed equivalent performance to a stock 9900K. Intel has a mobile 8-core with 5GHz single core boost, but we all know that you often get 2+ cores active. That 65W AMD chip will likely, then, outperform the Intel offerings for laptops, excluding clevo DTR.

    The only question is whether they are partnering with Asus or Acer on a desktop chip in a laptop this round, whether they will require an AMD GPU or allow Nvidia GPUs to be paired, or if the pairing will be like a Navi GPU (not talking the APUs, just desktop CPU paired with a dGPU). There is also the question of if the Asus or Acer AMD desktop CPU machines will receive bios updates or mods from the community to allow the drop in of the desktop Zen 2 CPUs. If they do, then those owners will get a HUGE jump in performance.
     
    Last edited: May 12, 2019
    ole!!! likes this.
  23. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    @ajc9988 IMO giving an 11% IPC increase on Intel's side is way too much of an improvement. We no longer have any large gains, and I doubt it'll be more than the usual 3-4% IPC boost, which should be on par with Zen 2, assuming Zen 2 is getting a 10-13% boost over Zen+.

    I'd be interested in seeing the power efficiency and overclocked throughput once the 10nm chips drop; however, we do know desktop chips won't come until 10nm+ because Intel knew 10nm is junk, with likely crap frequency. We may see something along the lines of the old 5775C with moderate frequency.

    Also, the 4-way SMT is a rumour right now, but I have little doubt it will come true. If AMD wants to win, it needs to win now with everything they have; they will need as much money as possible from the Zen archs to have enough R&D budget to combat Intel's future archs and the GPU side of things as well.

    And yeah, that is a 5GHz single core boost; people would be crazy to think all 12 cores run at 5GHz. It's hard to think AMD would throw power efficiency out the window for a SKU; this isn't the FX-9590 days anymore.

    That's the reason for not going laptop any more: if Zen 2 16 cores aren't put into laptops, then the laptop/desktop difference is just too much. I'm looking at the next TR for 32 cores, so a 16-core laptop is a must to fill that gap until TR3 happens.
     
    Last edited: May 12, 2019
    ajc9988 likes this.
  24. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    You seem to be correct. Now, we have to remember the tick-tock cadence of architecture change then die shrink, shown here:
    https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9

    So if it winds up being less, that could be a problem for Intel, unless they hit a high average like they got from the architecture change from Ivy Bridge to Haswell. Broadwell-to-Skylake IPC was rather low, like Haswell-to-Broadwell. So you have a very good point there.

    Edit:
    I am interested as well, but if Intel is claiming lower performance, they are likely talking frequency.

    As to what AMD is doing, I'm wondering, with the 15-chip rumor and AdoredTV's speculation that it is GPUs and CPUs on a single SoC, whether AMD is breaking out the active interposer for Cray's supercomputer. It would make sense, because for the performance and markup they will get, paying what it would cost AMD to implement it (which was about the markup relative to a monolithic chip, unless prices have come down a fair amount since Dec. 2017) would actually be worth it, potentially.

    Either way, AMD really is looking to make all the moves they can right now. Between active interposers for interconnects (they have the white papers and are just waiting for packaging costs at certain production nodes before adopting; similar to Foveros, which is an active interposer containing the IMC, I/O, and cache, IIRC), increased threads (which will also force M$ to optimize a scheduler for their chips), and continuing process node shrinks, I really think they have a good thing going.

    Now, if you are right and the IPC gain on 10nm is closer to 6%, Intel will be on par but still able to hit higher frequencies. Then the question is price, especially if it is 4.8GHz vs 5.1GHz at the same core count and same IPC (around a 6% performance difference; that would be Zen 2 versus Comet Lake under the lower-IPC hypothesis). Obviously that is something we need the chips in hand to settle, so worth a revisit in about 6 months, give or take a couple months.

    But yeah, an all-core 4.8GHz on a 12-core would be a beast. And however high the 16-core can clock, it is competing with the 7960X/9960X, not just the 2950X/1950X. If the IPC is higher (13% over Zen and about 9-10% over Zen+, which would mean approximately 5-6% over Intel's 9900K), it should be able to compete at that level with around 4.35GHz, give or take, and workload dependent. This is assuming no regression from the dual-die setup.
     
    Last edited: May 12, 2019
    ole!!! likes this.
  25. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I see it's back to the future (2015, this time), again. :)

    Faulty logic: Assuming that Intel is the same company today as it was a year ago, let alone 4 years ago.
    Faulty logic: Assuming Skylake-era gains will equal Ice Lake/Sunny Cove improvements.
    Faulty logic: Nested, multiple if/if/if statements. ;)

    See:
    http://forum.notebookreview.com/threads/intels-upcoming-10nm-and-beyond.828806/page-10#post-10909761

    Why is that link included here? As a reminder that testing old processors on the latest platforms and with the latest O/S improvements available does not show the actual increase in performance/productivity they brought in their time.

    Nice write-up, if there were any content to sink our gray matter into...
     
  26. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    The leaks for AMD so far seem really good. It is strange, however: if it's that good, you'd think AMD would just come out and say it; perhaps they are saving it for Computex. There's also a part of me that thinks it's too good to be true; when a company hides something and doesn't talk about it, that usually means it's got issues. I'm at 50/50 right now.

    I'd still take 4.7GHz 16 cores any day over Intel's 10 cores at 5GHz though; the heat is a big problem with 14nm++ at this point, assuming Zen 2's IPC really is 10%+ over Zen+.

    As for the 15-chiplet thing, for me/you it is not as relevant as it is for enterprises. I'd not want a binned CPU stuck with a GPU that has a chance of failing, and vice versa. Though in this case, if they are all on the same package, the issues are way fewer; no VRMs etc., as those should be on the mobo.

    I'd still want to get binned CPU/GPU separately.

    Yes, it is a lot of assumptions, but you gotta give AMD credit for what they've been able to do in just a 2-year span. Also, the assumptions are educated guesses with lots of leaked info to back them up. Intel's next CPU will most likely still be on the Core arch, and there's nothing left to squeeze in terms of performance, not to mention it's a power hog; and that's assuming 10nm is any good, which right now we know it's not, matching its 14nm counterpart at best.

    We have a valid example of the 7nm improvement: Vega vs Radeon VII. A 25% boost in clock speed on the same arch, while still having more acceptable temps/power than its predecessor.

    Assuming a Zen+ 2700X tops out at an average of 4.1GHz, an 8-core Zen 2 will likely be able to hit 5GHz while using the same, if not less, power than the 2700X at 4.1GHz, and that is bloody amazing, all without even counting the IPC increase from the new design.

    A 12/16-core at 4.8GHz, however, will probably mean a 150-200W TDP, but that is still damn fine considering what we have now: an 8-core 9900K easily at 200W.
     
  27. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    And the assumptions continue, and flip/flop depending on who we want to post a 'win' for. :rolleyes:

    The credit they deserve has been paid in full and shows on their bottom line; we've seen those results already. Bravo to what they have accomplished (and I mean that sincerely), but the same can be said for Intel too, if a balanced and reasoned approach is taken (as it should be). Educated guesses and leaked info are one and the same, not to mention daily clickbait fodder for today's online rags.

    Unless you can vouch that you are the one who leaked any of it and can show us the proof that it is actually 'info', then an educated guess in tech is worth less than the electrons used to post it. Will some of this be proven to be true in the not so distant future? Possibly, but until then, your guess is just as good at being the wrong one as anyone else's. ;)

    So, based on years old tech, Intel's outlook looks pretty dim to you. Cool. :p

    But, AMD will be able to catch up to the already in market i7-9900K and that is a great outcome for AMD? Cool again. :p

    Wait! I forgot that it will have 4C/8T more when 99.99% of consumer workloads can't saturate the 8C/16T current champion today (yeah; Intel is on top). :p

    And another faulty argument is presented: GPU year-to-year performance increases from node jumps or any other advantage are not comparable to CPU architecture in any way, shape or form. They do different things, and those differences matter when setting expectations between the two. So forget what you state below about how well GPUs adapt to newer nodes; it doesn't transpose to CPUs.

    See:
    https://www.tomshardware.com/news/amd-ryzen-3000-everything-we-know,38233.html

    With regards to 7nm nodes:
    So, the assumptions you would like everyone to believe have been effectively nullified, for now. :(
     
    Last edited by a moderator: May 13, 2019
  28. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Don't really need to vouch for anything. It is a fact that Intel is in big trouble, no matter how you sugar-coat it. You can take all the "historical" facts you want and present your view that Intel is doing just fine, but in reality it is not. Spending tons of money acquiring businesses like McAfee and a bunch of other stuff backfired in the end. They should have just done what Nvidia does, pumping out 20-30% more powerful CPUs each year along with increasing core counts; since they didn't, they are in the position they are in.

    Most of your factual arguments fall on deaf ears, simply because you're talking to a consumer who knows what he wants and what he's getting into, so sadly you'll need to do better than that :(

    Back when Intel had the IPC + frequency lead, it was definitely my go-to, while others at the time cared more about business ethics, value, future-proofing, etc.

    Zen 2, when it comes, will no doubt be at Intel's neck, if not triumph over it, while having more cores, better value and efficiency, and that is good enough for me to switch, regardless of how many "years" of validation Intel has had in the market, since AMD has been around just as long.

    Nice try changing my mind though :D. I'll change when Intel comes up with their new stacked chiplet arch + 10nm+ minimum.
     
    Last edited by a moderator: May 13, 2019
  29. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Sorry, that article does NOT stand for the proposition you think! It is a straight-up comparison of CPU performance across generations. It does NOT show things like DDR4 and how it improved performance; it doesn't show PCIe changes and increased I/O connectivity and how those benefit workloads, etc. It just shows CPUs being compared on raw performance.

    Platform is more than a CPU. Meanwhile, you are trying to justify buying a new board for each CPU generation. Such a joke, as anyone running 8000- and 9000-series chips on Z170/Z270 boards with a modded BIOS can tell you.

    If you want to build hype, do you leak the end performance, or do you release tasty morsels every so often to stay in the news cycle?

    As to them not talking about it, what company gives detailed information on the chip and performance before the official announcement and release. Let's take Intel's recent launch of the 9900K last fall. Do you remember Intel's internal benchmarks that were published during the NDA period before reviewers could discuss actual performance? Do you remember how that company intentionally gimped AMD's performance?

    That happened about 2 weeks to a month before the official release after the official announcement. CES is in about 2 weeks. The launch for purchase is about 5 weeks after that. But what you are asking for is to see performance numbers almost 7 weeks before it is available on the market, which not even Intel does. With Intel, you know core counts, just like with AMD leaks, you can guess at frequency, but that is not made certain until we get close to launch, like with AMD, etc.

    Now, AMD doesn't have as long a history on delivering on their products each generation. So it is proper to be skeptical. But also with Intel's recent history, they have shown they cannot be trusted either. Because of that, it is always best to wait for reviewers and official reveals that SHOW the actual performance, not just PR puff pieces or monster water chillers modified to run at negative degrees Celsius.

    As to the 15-chiplet chips, those are what AMD have been pushing for supercomputing for years. Chiplets, HBM on package, CPU and GPU same package, etc. Now, the cool thing about chiplets is that you can bin the chiplets BEFORE being integrated onto the silicon substrate! That means you can find the best performing CPU core dies, the best GPU dies, the best performing I/O dies, then marry them with the package integration. But, by doing so, you can wind up setting clocks near the peak (less fun for overclockers, but consumers don't have to worry about performance left on the table). It is an interesting time to see this transformation in computer tech.

    Also, for consumers, aside from me mentioning the binning before they are integrated on package, getting the better binned separate still is better for end consumers for the moment, because even with factory binning, there is still variance.


    This has multiple parts. First, the successor to Core for Intel is expected around 2021-22. That isn't coming for a while, but will arrive around the time of the new 7nm Intel process.

    Intel may still be able to get some IPC gains. They had planned for Sunny Cove with Ice Lake in 2017. Two years later, it is still a no-show for another two weeks to two months, and then only in up-to-quad-core mobile parts. They are likely going to backport it to 14nm. They have three core uarch tweaks in the wings.

    But because they planned a tick-tock-refinement cadence (node shrink, new uarch, then refinement), and their miniaturization went off the rails starting with the delays of 14nm and the no-show that was Broadwell, we have been on the same node ever since, as every other major fab caught up to and passed Intel on fabrication process. The bad node process, though, should NOT take away from their architectural designs, which are good. Unfortunately, when you are stuck on the same architecture for 4 generations, you cannot squeeze much more IPC out of refinements; the low-hanging fruit is gone. So without a new uarch tweak like Sunny Cove, and without much on the process tweak front, you stagnate HARD.

    As to AMD's approximate TDPs, the 16-core allegedly is around 135W TDP. There are reasons for that. If you can do the same performance for half the power, which is AMD's claim, then theoretically the 16-core could run at 90W and still match a 1950X. That is impressive! Now, when you go to the halfway point, 135W, you can get a good boost in frequency, around a 12.5% improvement on clocks (the 25% boost in clocks was at the same power draw). This is then coupled with the IPC improvement; a 13% IPC gain over Zen 1 plus a 12.5% clock boost over Zen+ sounds pretty good.

    But then we have to remember that TSMC's 16/12nm process is NOT the same as GF's 14/12nm; TSMC has a better process, so you also get gains from switching fabs. At 4.2GHz on a 1950X (which is what my machine runs at, on a golden chip at 1.375V, so not representative of the ordinary 1950X) or on a 2950X (more common), 12.5% would clock the chip to 4.725GHz. Just wanted to put the isopower/isoperformance chart into perspective here. So for the same price, you will be approaching Intel's single-core performance while having twice as many cores as the 9900K. That sounds like a pretty good bargain.
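    The isopower arithmetic above is just a percentage on the base clock; a one-line sketch (the 12.5% figure is the post's reading of AMD's isopower/isoperformance chart, not a spec):

```python
def boosted_clock(base_ghz, clock_gain=0.125):
    """Clock after spending half the claimed power savings on frequency."""
    return base_ghz * (1 + clock_gain)

print(boosted_clock(4.2))  # ~4.73 GHz, the figure quoted above for a 4.2GHz base
```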

    See, people say the leaked pricing is too good to be true. But it mirrors what was done in the HEDT market. AMD came in offering a 16-core for $1000, which matched the cost of an Intel 10-core and was $700 cheaper than the 16-core 7960X, or about 41% cheaper. If AMD's 8-core can meet the 9900K at stock with a 65W chip priced at about half of what the 9900K costs, then they will sell two to one against Intel. That is roughly the ratio seen at MindFactory in the HEDT segment, where AMD Threadripper made up 2% of all chips sold while Intel's HEDT made up 1%. In other words, it is an unfair fight, and AMD will kick Intel while Intel has manufacturing and supply issues, in order to grow market share. The pricing matches what was done with the HEDT segment precisely. Not only that, due to the crypto bust last year, AMD sold fewer graphics cards. Because of that, the percentage of revenue that came from the CPU side grew, which also grew their margins. That means the CPU side actually has better profit margins than the GPU side (no surprise there, honestly). So AMD has run the numbers and has a plan.

    They are NOT nullified, as people did not account for wafer yield. A leak puts the wafer yield at 70%. We do not know if that is ACTUAL yield or EFFECTIVE yield. What's the difference? Actual yield is good dies from the wafer, period. Effective yield is all good dies plus the defective dies without a critical defect that can still be used by fusing off cores, etc. Now, 70% is far above Intel's 10nm yield, which is rumored to be half of their 14nm yields; those are in the mid to high 80s (Intel had a real problem with yields when they started 14nm, but did solve it), meaning Intel isn't even at 50% yield on 10nm, which is a shame. At 70%, AMD is able to harvest enough that, at a cost of $11,000 per wafer, each usable die costs around $22 (this is from memory, but I actually used a wafer-die calculator with that yield and back-worked a defect density of 0.45/mm2 if using ACTUAL yield rather than effective yield, meaning the defect density may be higher if the 70% was EFFECTIVE yield). Then you add in the cost of the I/O die, packaging costs, R&D recoup per chip, etc., and you could easily sell at that price and still receive a healthy margin.
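For anyone who wants to sanity-check the cost-per-die claim, here is a rough sketch of the calculation. The $11,000 wafer cost and 70% yield are the leaked figures discussed above; the ~74mm2 chiplet size and the simple edge-loss formula are my own assumptions for illustration, so the result is a ballpark, not the post's exact $22:

```python
import math

# Rough cost-per-good-die estimate using the numbers from the post.
# Assumptions (mine, for illustration): ~74 mm^2 chiplet, 300 mm wafer,
# a simple gross-die estimate with an edge-loss correction.
wafer_cost = 11_000.0     # leaked 7nm wafer price from the post
wafer_diameter_mm = 300.0
die_area_mm2 = 74.0       # assumed Zen 2 chiplet size
yield_rate = 0.70         # leaked yield from the post

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
# Gross dies: wafer area / die area, minus dies lost at the wafer edge
gross_dies = int(wafer_area / die_area_mm2
                 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
good_dies = int(gross_dies * yield_rate)
cost_per_good_die = wafer_cost / good_dies

print(gross_dies, good_dies, round(cost_per_good_die, 2))
```

With these assumptions you land in the high teens of dollars per good die; different die sizes and edge-loss models move it a few dollars either way, which is consistent with the ~$22 figure above.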

    His not believing it, without doing the above analysis to show whether it is possible or not, does not change the reality of the situation. It isn't about belief. It is about whether, with the information at hand, the occurrence is probable, and whether the necessary preconditions for it actually exist.

    Between my discussion of TDP and frequencies above, plus my cost analysis per CPU die, with the knowledge that you are binning dies and have higher margins on some products in the stack than others, and that the 6-core variants likely come from effective-yield harvesting of defective dies that otherwise would have yielded ZERO profit (so their effective cost is below that $22 per die), I'd say that fits well within the 50% margin cited by Su, even with the lower pricing.

    So his dismissive non-analysis, which is similar to your dismissive non-analysis, just seems to miss the mark.

    As to the quotes on the 7nm EUV process, that is 7nm+, not 7nm. That is Zen 3 compared to Zen 2, not Zen 2 compared to Zen/Zen+. So you are misleading people with that quote. In fact, I've been saying that there is likely a slight frequency regression from Zen 2 to Zen 3 due to going from 7nm DUV to 7nm+ EUV, since TSMC 7nm+ only does a couple of layers with EUV. TSMC 5nm, I believe, is the first node to use EUV for the entire stack, and a 6nm half-node was recently mentioned, which does more layers with EUV than 7nm+ but not the full stack like 5nm.

    Then, in regards to the density mentioned, those are SRAM numbers, not fully transferable to what is seen in final silicon, where Intel uses less dense layouts for their CPUs, and each "+" added is actually less dense than the original, which helps with heat density and allows for increased frequency. Even with that, what is cited is Intel's theoretical density compared to Apple's ACTUAL chip density, while ignoring HiSilicon's Kirin density, which was at either 93 or 98 MTr/mm2; even the theoretical densities beat Intel's as of 2017 ( https://www.semiwiki.com/forum/content/6713-14nm-16nm-10nm-7nm-what-we-know-now.html).

    upload_2019-5-13_6-25-10.png

    Meanwhile, the Snapdragon 8CX reached up to 94.6MTr/mm2. https://www.anandtech.com/show/13687/qualcomm-snapdragon-8cx-wafer-on-7nm

    So you lecture us on theory, then throw out theory on density when convenient. Kind of funny.

    Intel NEVER gives final, actual density, but it is often less than the theoretical limit for SRAM chips. Intel, because they were losing on density calculations, even created their own new way to calculate density. In other words, if you are losing, change the rules of the game so you always win. There has been a ton of discussion surrounding Intel's new proposal and its accuracy, but it has by no means been adopted. Yet you seem to leave all those important notes out of your analysis. That is quite curious.

    So I do agree, reality is a PITA, but it is hilarious you do not adequately cite context.

    Most of his factual arguments are FLAWED. That is often why I argue in the way I do. He is trying to do spin rather than analysis, or so it seems much of the time.

    And we are supposed to keep waiting for Intel to deliver, considering their record since 2014 on process issues (yes, I'm including the delay of 14nm along with not getting a working 10nm until easily 3-4 years after they were supposed to have it)?
     
    Last edited by a moderator: May 13, 2019
    Aroc likes this.
  30. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, I thought at first that the 66MTr/mm2 figure was the estimated density for the Apple A12, which was floated around the time of its release. I was wrong. Here is the information needed to put density comparisons for process nodes into context:

    upload_2019-5-13_9-1-13.png
    This is the list of approximate transistor densities with some context. (full source: https://www.techcenturion.com/7nm-10nm-14nm-fabrication ).

    As you can see, when you compare Intel's SRAM numbers, the only ones provided, to TSMC's HPC process, that is where the 100.8 vs. 66.7MTr/mm2 comparison comes from. But notice that TSMC's low-power process for mobile chips is at 96.5MTr/mm2.

    Now, let us examine an Anandtech chart looking at actual densities in final silicon:

    upload_2019-5-13_9-4-43.png

    https://www.anandtech.com/show/13687/qualcomm-snapdragon-8cx-wafer-on-7nm

    Notice that for the companies using the TSMC 7nm FF/FF+ node, Qualcomm reached up to 94.6MTr/mm2, the HiSilicon Kirin reached 93.1MTr/mm2, and the Apple A12 Bionic reached 82.9MTr/mm2. Those are 98%, 96.5%, and 86%, respectively, of the theorized transistor density. That is pretty good.

    But let's examine what happens when we look at Intel's 14nm process, with its theoretical density of 43.5MTr/mm2. Intel, with Skylake 4+2, achieved just 14.3MTr/mm2, or 33% of the theoretical value that Intel published.

    Let's look at AMD's results. Using the Samsung/GF 14nm process with a theoretical transistor density of 32.5MTr/mm2, they achieved an actual density of 25MTr/mm2, or 77% of the theorized density. That is pretty good.

    So, assuming that the achieved-versus-theoretical ratio stays approximately the same, and noting that AMD is using the TSMC HPC process rather than the denser low-power variant, you would take 66.7MTr/mm2 * 0.77 (77%), which equals 51.3MTr/mm2.

    Now it is time for Intel. Taking the theoretical 100.8MTr/mm2 * 0.33 (33%), you get 33.3MTr/mm2, or roughly 18MTr/mm2 less dense than AMD.
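The extrapolation in the last two paragraphs is just theoretical density times the historically achieved fraction. A minimal sketch follows; remember that the 77% and 33% fractions come from 14nm-era silicon, so carrying them forward to 7nm/10nm is this post's assumption, and rounding may differ by a tenth from the figures quoted above:

```python
# Projecting achieved density from theoretical (SRAM-based) density.
# Theoretical numbers and achieved fractions are the ones cited in the
# tables above; applying 14nm-era fractions to new nodes is an assumption.
theoretical = {"TSMC 7nm HPC": 66.7, "Intel 10nm": 100.8}   # MTr/mm^2
achieved_fraction = {"TSMC 7nm HPC": 0.77, "Intel 10nm": 0.33}

projected = {node: theoretical[node] * achieved_fraction[node]
             for node in theoretical}
for node, density in projected.items():
    print(f"{node}: ~{density:.1f} MTr/mm^2")
```

The point of the sketch is simply that, on these assumptions, the projected real-silicon densities invert the headline SRAM comparison: AMD at ~51 MTr/mm2 versus Intel at ~33 MTr/mm2.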

    Now, one reason to go less dense is heat. With a less dense layout, neighboring transistors contribute less heat to each other, so the heat density is lower, which can allow higher frequencies at the same temperature than a denser chip running lower frequencies. This is part of where Intel gets their high frequency. But with that, you also wind up with fewer transistors to do the work, so there is a theoretical IPC trade-off. This also isn't comparing final transistor count or die area, although those are provided in the table above. When you do, you can see why I am very impressed with Intel's engineers' ability to design a microarchitecture: they get great performance with about 60% of the density of a Ryzen chip, while achieving 25% higher frequency, with the rest coming from IPC due to architectural design.

    One should always show respect for achievements. AMD deserves respect for the density they have achieved, Intel for microarchitecture. But looking at densities in a vacuum, especially theoretical SRAM numbers instead of actual achieved results, is more than misleading.

    So, the question is whether AMD will loosen density a bit to achieve higher frequencies, or whether TSMC's process alone is enough while keeping the higher density. That is an open question (and I am looking forward to seeing the answer).

    To quote another in this forum you might recognize: "So, the assumptions you would like everyone to believe have been effectively nullified, for now. :("
     
  31. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, just like I thought. Nothing to see here, folks; let's find some more random articles from nobodies that support our indefensible position, or just keep moving along.

    At least @ole!!! is being honest, I'm just talking to deaf ears. :)

    Btw, I'm not trying to convince you or anyone else to change your mind. Just trying to expand it a little. :)

    The points I've made stand and the walls of text from random people on the web do not change anything.

    AMD started three years ago by pushing 'more cores'. Intel was forced to show they were trying to keep up with that mantra while battling chaos from internal and external forces. In the end, Intel got its focus back, and the products they have made available during this dark period prove it, as do the balance sheets.

    Today, AMD is continuing to push 'more cores' along with (finally) better efficiency vs. Intel for the mobile sectors. All accounts, though (from their own sources, please see my previous posts for links), show that the performance increase vs. Intel (which has held the performance crown through all the past battles) may not be as much as should be expected. Of course, the AMD faithful gloss over those points I've made.

    Intel, on the other hand, has emerged from their battles a little bit scathed, a little bit more humbled, and with a new and deeper understanding of what they need to execute on next. Their latest plans (again, please see my recent posts for links) are solid and show them executing them with newfound confidence (isn't it great when we find our way back to the path again).

    From all the known facts from both sides, not merely mangled rumors and wishes, Intel will continue its dominance for the foreseeable future (yes, and that includes the fact that AMD just pulled their new TR from public view recently).

    As I've said before, even without all the innovation and new projects going exactly as Intel would like, they are still on very stable ground for the time being.

    Let's wait for each of them to have their swing for the fence in this crucial year ahead and see if the order of tech will be re-arranged. :)
     
    Aroc likes this.
  32. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Once again, instead of trying to address the points I made, you just dismiss and diminish ACTUAL analysis of the situation, including ignoring citations from reputable sources. That speaks to your character.

    The same goes for saying Intel holds the performance crown when that is not exactly true. They will likely barely hold onto the single-thread performance crown, but they get demolished in multi-threaded workloads. You ALWAYS seem to miss that point.

    Moreover, what kept Intel's balance sheet looking good was that, due to their process problems, they had to keep too many products on 14nm. So they prioritized the high-margin products over the low-margin ones, and then charged obscene prices for them. That worked for a while because demand was so high: replacing the processing capacity lost to the Spectre and Meltdown fixes, among other vulnerabilities, meant many companies had to scale out their deployments to make up for the lost processing power.

    What happens when Intel's new Cascade-SP CPU costs $18,000 (an increase of about $8,000 over their prior $10,000 flagship), while AMD comes in with a CPU likely to cost $6000-8000, but with 64 cores rather than their prior 32-core chips at similar frequencies? I'll tell you: the Frontier contract from Cray, the contract with the organization that runs the LHC, etc. Now consider if AMD can go below $6000 on their flagship (and it should be mentioned that Intel's CPU uses nearly double the power and would basically need water cooling in a server). Intel then has to compete on price while dealing with low 10nm yields, 14nm capacity issues that will continue at minimum into or through Q3 2019, and a price war that will shrink their margins. At that point, the argument that Intel is still profitable is on shaky ground. They have already revised their revenue down significantly, and that is BEFORE the release of Zen 2. If, as I explained above in the cost-per-die analysis, AMD hits those price targets, or comes anywhere close, Intel gets hit with a new wave of pain.
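To put the server pricing above in per-core terms (all of these are rumored prices, not official list pricing, and the core counts used here are the ones discussed in this thread, assumed for illustration):

```python
# Per-core price comparison from the rumored server pricing above.
intel_price, intel_cores = 18_000, 28   # rumored Cascade-SP flagship
amd_price, amd_cores = 7_000, 64        # midpoint of the $6000-8000 estimate

intel_per_core = intel_price / intel_cores
amd_per_core = amd_price / amd_cores

print(f"Intel: ${intel_per_core:.0f}/core, AMD: ${amd_per_core:.0f}/core")
print(f"ratio: {intel_per_core / amd_per_core:.1f}x")
```

On those assumptions Intel is asking nearly six times as many dollars per core, which is the arithmetic behind the "competing on price" point.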

    As to the TR rumor, the drop from a Q4 release was quickly followed by the rumor of Zen 3 launching on the server side in Q1 2020, which raises the question of whether AMD plans to release TR alongside the Zen 3 Epycs. That would make an extra three-month wait (or less) COMPLETELY worth it, as you would go from Zen+ to Zen 3 with two years of architectural updates. Or they could retire Ryzen TR and turn the workstation chips into overclockable, speed-optimized (like the 7371) 1P chips with the full Epyc feature set for $300-600 more than TR costs (which is approximately the current price premium of Epyc over TR), with 8-channel memory, 128-160 PCIe 4.0 or 5.0 lanes, etc. So do you really want to use that as an example? Such a part would also compete with the 6-channel 28-core Xeon beast, but with better I/O and more memory bandwidth (at that core count, memory bandwidth is often more important than memory latency for many uses).

    But you are correct. Let's let the batters get up and swing. It is a mere matter of a couple of weeks.
     
    Last edited: May 13, 2019
  33. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    Did you put mainstream into the bag as well? How does the 2700X perform vs. the 9900K? And how much more performance will you get from AMD's 6-core 2600X model vs. Intel's? Since you talk about multi-threaded workloads, we can start there. Thanks

    I expect @tilleroftheearth to talk about mainstream.

    Regarding HEDT: if AMD's 32 cores struggled to beat a 28-core Intel chip, something would have to be wrong. It is only natural that about 14% more cores should perform better.

    + This is a laptop thread :oops: The title says *Is laptops innovation dead at the moment*.
     
    Last edited: May 13, 2019
    tilleroftheearth likes this.
  34. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,218
    Messages:
    39,333
    Likes Received:
    70,631
    Trophy Points:
    931
    Sometimes the fighter who lands the hardest punch or draws first blood catches the attention of the raging crowd, but that pisses off his opponent, and he may ultimately go down in a TKO after his opponent regains his senses. For now I am sticking with Intel and NVIDIA, but I am open to the idea of kicking them to the curb if AMD pulls a rabbit from their magic hat, assuming that Intel and NVIDIA do not respond with a death punch. Maybe we will see some long-overdue evidence of "for every action there is an equal and opposite reaction" playing out over and over again on the PC tech stage. And that's the thing... AMD went a very long time with no horse in either race (CPU and GPU), and it's nice to see a real demonstration that they are not asleep at the wheel, regardless of who ultimately emerges victorious. The winner will still be the one that gets my money, but who can hate the competition? The analysis and speculation kind of bores me though, and I'd rather just wait and see how it turns out. Everyone watching a fight wins, and bitter rivalry is welcome, especially in PC technology.

     
    Last edited: May 13, 2019
    Papusan, ajc9988 and tilleroftheearth like this.
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Very true. But to mount this fight, AMD took the money it was using to barely keep up with Nvidia and poured it into the fight with Intel. Within a short time, that fight heated up, although we have all seen what that cost them against Nvidia (and the resulting price hikes from Nvidia due to lack of competition, which remarkably resemble Intel's recent price hikes despite competition).

    Either way, one thing we can all appreciate is that, with the extra revenue FINALLY making AMD profitable (literally, until the past quarter or two, they were still losing money), AMD is doing good work investing in further R&D. But they haven't gotten enough, even with nearly quadrupling their server market share, to make a decent graphics card. See this from AdoredTV:
    upload_2019-5-13_12-45-17.png


    Although everything, if priced right, will sell. So expect more pain on that front, and we can hope Intel's new graphics cards help (but remember, Raja is running the shop, and he had a hand in Vega and Navi, although there are many reasons Vega sucked).

    But the next couple years will be interesting.
     
  36. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    Talk about pulling a rabbit from their magic hat... :vbbiggrin:
    AMD Readies Radeon RX 640, an RX 550X Re-brand
     
    Mr. Fox and tilleroftheearth like this.
  37. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, for 2700X vs. 9900K, you currently have Intel winning by 22% in productivity and 11% in gaming, while charging approximately a 60% premium for that performance. Moreover, a 65W 8-core, set to be released shortly, was already shown matching it in Cinebench just months ago (I'll return to that in a minute). As to the 8700K and 8086K, the gap was still a fair amount. We shall revisit that in seven weeks.

    As to HEDT, that is likely due to two of the four dies not having direct memory channels, while Intel's 6 channels of memory help with bandwidth per core; add the IPC deficit of about 3-4%, a frequency deficit of a couple hundred MHz, and AMD's SMT implementation being better but not enough to make up for the other deficits.

    Now, let's discuss how this fits into laptops. Intel is starting to make 14nm chips with up to 8 cores for laptops, which still have a single-core frequency of 5GHz, but an all-core frequency that is WAY lower than the desktop parts.

    AMD showed with the 65W chip that they can deliver the performance in a power-limited envelope. Once those chips are within a 45W envelope, Zen 2 will actually force Intel to work harder, whether through IPC or frequency changes, to increase the performance of their mobile chips.

    For DTR systems, those use mainstream desktop chips, as no manufacturer will try, as was done with the P570WM, to squeeze an HEDT chip into a mobile form factor. But if they can find a way to support the 135W 16-core, I think the few who need it would pay handsomely for it (even if I think that is a pipe dream).

    Meanwhile, this means AMD will finally, starting as soon as they get their APU and mobile lineup out (which is a couple of quarters away), be able to compete with Intel on higher-powered laptop CPUs, excluding DTR systems; although with that 65W chip, they may be competing on the DTR front as well before too long.


    As for investment in R&D, you won't see the fruits for 3-5 years, the same as on the CPU side.

    Navi was a product developed in conjunction with Sony, Vega in conjunction with Apple. What comes next? You have the next gen developed with Microsoft, with Arcturus potentially being a specific chip in that lineup. You have the GPU funded by the Frontier deal with Cray for the 1.5-exaflop system deliverable in 2021, plus the cash coming from Cray's Shasta, deliverable in 2020, etc.

    So, what started development in 2017, when Ryzen dropped and AMD started pouring more into R&D, bears fruit around 2021-22. That is something to remember.
     
    ole!!! likes this.
  38. Support.3@XOTIC PC

    Support.3@XOTIC PC Company Representative

    Reputations:
    1,268
    Messages:
    7,186
    Likes Received:
    1,002
    Trophy Points:
    331
    I think all this is because we got onto putting HEDT processors in laptops.
     
    Papusan, ajc9988 and Mr. Fox like this.
  39. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    Still, it has been many years since we had HEDT processors in laptops :D

    Although Intel brands its juicy $600 BGA chips as i9, they are far from being HEDT. Not even close :)
     
    Last edited: May 13, 2019
    Arrrrbol and Mr. Fox like this.
  40. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    im assuming you're talking about me and my dismissive non-analysis. i am only trying to analyze the effect tsmc's 7nm performance gain will have on the zen2 arch. the power efficiency is already shown; we have data from vega to radeon 7, and imo those are decent data points to go by. as for value/margin, i don't care about any of that because we know AMD will bring value, period.

    for intel's 10nm, they could be denser than TSMC's 7nm, but ultimately it still comes down to real-world performance. so far all i have to go by is their failed plan of paying lenovo to ship 10nm cpus with the broken iGPU disabled. anandtech tested the efficiency and it's hardly any improvement. i know that's from at least a year ago, so maybe the new 10nm has improved, but there's too little data to go by, so i'll stick to assuming zen2 will obliterate intel.
     
  41. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    @tilleroftheearth the quote above is basically the thing i care much less about, but the same could be said for intel and your "historical" facts.

    when only talking about value with the 1700x/2700x, to me, someone who wants performance, it's not that enticing. i'd still pay a 60% premium for that extra performance if my budget allows it. the same applies when you talk about intel's historical facts and future facts etc; they've got nothing to show.

    with AMD, there currently are leaks, and the leaks suggest it is on par with intel. even assume it is only close, within 2-3% on ipc or frequency, or both. ASSUMING it is BOTH and zen2 is STILL behind intel by 2-3%, i'd say it is good enough for me to take 3% less frequency AND 3% less IPC while having double the cores and using less power.
     
    ajc9988 likes this.
  42. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    No, I meant the author of the article he cited, not you, in that instance. I do apologize for any confusion; it was not directed at you at all.

    On the new 10nm Ice Lake, there should be more of an improvement. Cannon Lake was only a die shrink, not an architectural refinement. Ice Lake should have the refinement, plus it should be on the 10nm+ process, meaning there should be less frequency regression compared to what was seen from that token chip.
     
    ole!!! likes this.
  43. Support.3@XOTIC PC

    Support.3@XOTIC PC Company Representative

    Reputations:
    1,268
    Messages:
    7,186
    Likes Received:
    1,002
    Trophy Points:
    331
    I'm all for the real thing again.
     
    tilleroftheearth and Papusan like this.
  44. pitz

    pitz Notebook Deity

    Reputations:
    56
    Messages:
    1,034
    Likes Received:
    70
    Trophy Points:
    66
    Laptops desperately need 5Gig-E. Gigabit Ethernet isn't good enough, and nobody wants to carry around a dongle.
     
    tilleroftheearth likes this.
  45. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    why not 10 GE
     
  46. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Still takes too much power, today.

     
    ole!!! likes this.
  47. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Yeah, I hope 10G becomes the standard soon.
     
  48. rickybambi

    rickybambi Notebook Consultant

    Reputations:
    8
    Messages:
    110
    Likes Received:
    79
    Trophy Points:
    41
    I go back and forth internally on whether innovation is dead, but I feel like I see promising ideas here and there, mostly small things.

    As an example, HP's new Omen gaming laptop will be the first to have liquid metal applied at the factory, which I think will be a positive trend if it catches on. The second-screen aspect I'm not so sure about yet, but it's definitely an innovation and not something I've seen, at least in that location, before.

     
    ole!!! likes this.
  49. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    woulda been nice if it was a 20 inch laptop.
     
    Kyle likes this.
  50. Support.3@XOTIC PC

    Support.3@XOTIC PC Company Representative

    Reputations:
    1,268
    Messages:
    7,186
    Likes Received:
    1,002
    Trophy Points:
    331
    I thought ASUS was going the LM route too. The second screen I'm not sold on; there were a couple of models that did that in the past (usually replacing or flipping the numpad) and I don't recall them being terribly popular.
     
← Previous pageNext page →