The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    OK, let's talk some of this out:

    First, there is this video by Wendell of Level1Techs and Ian Cutress. Everyone should take a peep if you like deeper discussion, actual deep discussion, on tech.


    It is a really good discussion, but you should pay attention specifically to the portion discussing the I/O die and chipset. Go to 14:28 for this discussion. Specifically, the chipset is the 14nm version of the I/O die cut down. (*GASPS AT THE REVEAL*).
    Einhorn is Finkel. Finkel is Einhorn.


    But, there is the discussion of server chips in there, like Intel and AMD's back and forth on Rome versus Cascade-AP.
    They do discuss why MB mfrs went all in on X570, as well as Intel's performance regression at 10nm and how TSMC overcame a similar problem.

    That means the I/O die inside the Matisse chips is either the 11W or 15W variant, which means that within the TDP of the mainstream chips, like the 65W parts, the I/O die accounts for up to 15W and the core die is as low as 50W! Also, that means potentially having way more configurable options on the chipset in the future.
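    A quick back-of-the-envelope on that power split (the "up to 15W" I/O figure is from the video discussion above; the exact per-die split inside any given SKU isn't published, so treat this as a sketch only):

    # Rough power-budget sketch for a 65W Matisse part. The I/O-die number is
    # the "up to 15W" figure discussed above; the split itself is an assumption.
    package_tdp_w = 65        # rated TDP of a mainstream Ryzen 3000 part
    io_die_max_w = 15         # upper bound discussed for the I/O die

    core_die_budget_w = package_tdp_w - io_die_max_w
    print(f"Core chiplet budget at base clock: ~{core_die_budget_w}W")  # ~50W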

    Next we can go into the rumors regarding the upcoming Zen 3.

    First, 7nm+/6nm are not full EUV. They use 4 and 5 EUV layers, respectively, while the rest are DUV layers like what Zen 2 uses.

    Second is that a 7nm I/O die is rumored for server chips. Threadripper may get it as well, but there isn't a guarantee, because costs may mean it instead gets the 14nm I/O design that second-gen Epyc gets. There are no guarantees.

    Because of cost, we don't know when mainstream will get a 7nm I/O die, or whether Zen 3 mainstream will get it at all. We know server will, but not mainstream. If they do it, they have to pump out a LOT of I/O chiplets to fill demand for the Xx70 boards, the Xx99 boards, the server boards, and then enough I/O dies for all of their product stacks. Think about how much that is. That is two I/O dies per platform; one in the chip and one for the board! So when that comes will depend on costs and yields at TSMC on 7nm/7nm+/6nm. This is compounded by the fact that only two packaging companies are able to do the microbumps for mounting 7nm right now! You would need that packaging for 7nm chipsets too, meaning the costs would be EVEN HIGHER on X570 boards if they did 7nm.

    1usmus posted an image of an X590 teaser that mentioned ASMedia. We do not know for sure that means a chipset, as ASMedia offers many products, but people do think chipsets when they think ASMedia.

    Moreover, if you are going TR3, I recommend a custom water loop. If you are doing a water loop, plan on getting a chipset water block. Problem solved.

    See, the rumor at WCCFtech was that the 64-core released this fall might be 14nm. That would mean current 64-core Epycs moving to the TR platform, likely to clear inventory. Why? Because it still smashes Intel's Cascade-X offerings, especially after the CCX awareness added to the Windows scheduler. But with that rumor is the claim that they have something special planned for CES 2020, and that the new X599 boards will be released by late January. What does that mean? It means the next gen of TR likely isn't coming in Q4; they are just sliding current Epycs to that platform to clear inventory, then releasing TR at CES, which might even be based on Zen 3. In other words, for TR, you may be getting that at CES. This is all rumor and speculation at this point.

    But, if that is the case, I do NOT foresee the chipsets moving off of 14nm, unless they're down to GF's 12nm process by then. I'm just trying to be realistic and manage expectations. Expect to buy a chipset waterblock and go custom water loop. As Ian Cutress mentioned, of the two places that can do the packaging, it was his understanding AMD had to coach one of them on how to do it! That means the demand to mount 7nm chipsets, in addition to the server, TR, and mainstream chips, may exceed their packaging capacity. Facts of life here. I wonder, since they are adding RAM coolers to CPU AIOs, if they could do a 90mm or 120mm low-profile AIO for chipsets, for those not wanting a chipset fan, mounted on the exhaust area above the rear I/O of the machine? lol.

    But, before crapping on the chipset, you should look at what you are getting and what the challenges are, such as doubling the bus from 256-bit to 512-bit. I posted the Ian Cutress article on the Zen 2 architecture a couple days ago. Go back to that and read the parts about the I/O die, Infinity Fabric, and PCIe 4.0, then apply that with the revelation that the chipset is basically the I/O die cut down and reconfigured. Also think of what that means for future GPUs that can utilize IF to the CPU or through the chipset. Then think about what it means for scalability. Compare that to Intel's new interconnect tech introduced alongside PCIe 5.0. And think of where they will make advances for the introduction of PCIe 5.0 on AMD's I/O dies and chipsets. There are some really incredible possibilities coming.
     
    Last edited: Jun 15, 2019
    Rage Set and hmscott like this.
  2. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Having the 1950x has taught me 16c, 32t is more than ample. So the 3950x eventually may be my sweet spot.
     
    Rage Set, ajc9988 and hmscott like this.
  3. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    See, I'm betting that 16-core will be the new entry level TR. And with more surface area on package and IHS, you will get higher clocks by staying on the hedt platform. But that is my assumption.
     
    hmscott likes this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Linus said about as much, that there isn't a *need* for a 64 core AMD Threadripper, as 64 cores is well beyond what anyone can use, and that of course is about as good of a mistaken thought as:

    " When we set the upper limit of PC-DOS at 640K, we thought nobody would ever need that much memory." — William Gates, chairman of Microsoft - Source

    Of course AMD should make a 64 Core Threadripper, and let users / owners figure out what to do with it. People will buy it.

    Even if it's only bought by owners that would think it "neat" to have a 64 Core CPU at home - or work - for their desktop.

    Then there is the refresh of the 32 core and perhaps even the 24 core and 16 core - providing a step up from the 3950x for memory and PCI expansion.

    It would be interesting to see how the 16 core 3950x CPU performance might benefit from "Threadripperization"

    AMD said the lower core count Threadrippers don't sell, so maybe don't refresh those, but 16 / 24 / 32 / 64 core Threadripper 3's make sense to me.

    IDK, maybe drop the 24 core if ya gotta sacrifice something to the bean counters??

    It's hard to believe that Luke didn't know about Epyc "Rome"!!??

    AMD has GONE MAD... 64 Core Threadripper!
    Linus Tech Tips
    Published on Jun 15, 2019
    32:10 - 64 core Ryzen Threadripper rumor


    There is a big jump in design / requirements between Threadripper and Epyc. You wouldn't want an Epyc "server" build in your office, on your desktop. We need Threadripperized Epyc CPU's to use outside of the Datacenter.
     
    Last edited: Jun 15, 2019
    ajc9988 likes this.
  5. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331

    To top it off, most people who activate RTX don't even notice the visual differences.
    Sure, in some games it might be more noticeable, but the average user won't really know or care one way or the other.

    Plus, AMD doesn't need customized hw for raytracing, given that a software implementation can do the same on general compute, and AMD has already offered Radeon Rays for a LONG time... especially if it's properly optimized (which, to date, very few devs seem to be doing in the first place). In that particular (fully optimized) scenario, I doubt the performance hit would be larger than what NV gets with its dedicated RT hardware.

    Also, if specialized hw for RT is too expensive, as AMD claims... it's probably a combination of factors (larger monetary cost for relatively minimal visual gain while suffering a large performance hit), and they might be right not to implement it currently.

    Plus, with so many upcoming node changes, efficiency and performance enhancements, it might be good to perfect that bit first before integrating hw acceleration for RT (depending on adoption, performance impact, etc.).

    Also, NV is a far bigger company than AMD... so whenever NV throws some new 'gimmick' at people, they will acknowledge it's not worth it, but if AMD doesn't have it (or basically does, but devs just don't bother implementing or optimizing for it via open source) then suddenly AMD is somehow 'inferior' and 'not worth it'.

    Looking at open source vs closed source, there's nothing in NV's proprietary features that open source couldn't do the same or better. In fact, open source usually runs better when it's properly optimized for and stresses the hw less (and it's arguably easier to program for)... but the market seems intent on supporting closed-source features because a company like NV can afford to throw money at them, which in turn harms the overall quality of the features.

    Open source can get numerous revisions, fixes and updates in a much shorter time frame than proprietary stuff.
     
    Last edited: Jun 15, 2019
    hmscott likes this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    See, if they did a 24-core for around $1200 ($100 discount) or so as the entry part, they could use 3x 8-core dies instead of 4x 6-core dies. Unless the 6-core dies can truly reach the same speeds as the binned 8-core dies (the 4-die layout would give those chips extra free L3 cache), I'd stick with jumps of 8 cores.
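    For a rough sense of the L3 trade-off between those two layouts, here's a small sketch. The 32MB of L3 per Zen 2 chiplet is the known figure; the 24-core configurations themselves are pure speculation from the paragraph above:

    # L3 comparison for a hypothetical 24-core TR built from 3x 8-core dies
    # versus 4x 6-core dies. 32MB L3 per Zen 2 chiplet; both SKUs are hypothetical.
    l3_per_chiplet_mb = 32

    for dies, cores_per_die in [(3, 8), (4, 6)]:
        cores = dies * cores_per_die
        l3 = dies * l3_per_chiplet_mb
        print(f"{dies}x {cores_per_die}-core dies: {cores} cores, "
              f"{l3}MB L3 ({l3 / cores:.1f}MB per core)")
    # 3x 8-core dies: 24 cores, 96MB L3 (4.0MB per core)
    # 4x 6-core dies: 24 cores, 128MB L3 (5.3MB per core)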

    Now, to combat 6-channel memory or Intel's upcoming 8-channel memory configurations next year (no idea yet if HEDT will get it for Intel), AMD might give the upcoming TR chips 8 memory channels, but leave the PCIe at 64 lanes. Who knows, I'm just thinking out loud.
     
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, there's a good justification for 24 core too. :)
    That's what I first thought of when the X499 motherboard failed to appear: AMD Ryzen Threadripper: X499 motherboards might launch Q1 2019 ...me thinking then that if AMD wanted to fix some Threadripper limitations compared to Epyc, and perhaps include Ryzen 3 / Zen 2 features, it might not end up shipping X499 at all.

    It's funny that now I am thinking that AMD might want to re-jigger Threadripper 3 / x599 to more closely match Epyc Rome to accommodate needed resource expansions on the desktop.
     
    ajc9988 likes this.
  8. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Do you remember, not too long ago, me hypothesizing about moving TR from Ryzen to Epyc branding, making it an Epyc Threadripper? Use the same Epyc I/O die, or, like you mentioned, Rome's 14nm I/O die, while using a 7nm I/O die for Milan Zen 3 with PCIe 5.0 instead of 4.0; use Zen 3 for both Epyc and TR, same memory channels (edit: server gets DDR5, TR gets DDR4 octo-channel) and number of lanes, just a different PCIe gen. (Theory updated with new information.)

    No new research on an I/O die or cutting it down for TR while crushing Intel's offerings, while server gets the 7nm I/O dies with the price premium associated. Clear stack differentiation by PCIe gen bandwidth. And then compatibility with single socket server boards and high end desktop workstation/gaming boards.
     
    hmscott likes this.
  9. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    I suspect the primary reason for AMD using 14nm I/O atm is because of their leftovers from GLOFO.
    It was part of the stipulations for AMD breaking away from GLOFO, that they continue to use their fabs in a limited capacity until a specified time frame (2020 or 2021).
    Otherwise, I see no real reason why the I/O couldn't be moved over to 7nm.
    We might see the 14nm I/O persisting with Zen 3 (7nm+), but I doubt that it will go beyond that.
     
    ajc9988 and hmscott like this.
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It isn't that. It's cost of production and surface mounting of 7nm dies for packaging. Read my earlier post on only two packaging centers being able to do 7nm attachment to MCM organic substrates. That means packaging assembly is harder and more costly, while 7nm designs cost 3x what 14nm does to create, all while having lower yields than the mature node.

    By saving 7nm for servers for next year, it works out the bugs, but more importantly it subsidizes the research and production costs. DDR5 should come to mainstream in 2021. By then, packaging will be more mature, 7nm will be more mature, possibly a 5nm EUV stack will be used for the core dies, etc.

    So it is a bit more complex on why to do I/O at 14nm, although contracts may be a contributing factor.
     
  11. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    ajc9988 likes this.
  12. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
  13. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Why would you think Rome would get a 14nm IO die? I would assume Rome would get a different die - larger in size due to increased function - not larger in size due to being on a less dense process... do you have a link / text to quote?
     
  14. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It says that in the article you just cited!

    "The IO die for the EPYC Rome processors is built on Global Foundries' 14nm process, however the consumer processor IO dies (which are smaller and contain fewer features) are built on the Global Foundries 12nm process."
     
    hmscott likes this.
  15. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    It doesn't make sense for AMD to use an IO die using the 14nm process in Epyc "Rome" that will need a larger IO die due to increased IO function, and use an IO die on the 12nm process in the consumer Ryzen "Matisse" CPU.

    It does make some sense I suppose for AMD to use 14nm on the motherboard chipset for both, but still I wonder if the TDP / thermals would be less if they also used a 12nm part instead for the motherboard chipset.

    Have you seen the Epyc IO die process detail info anywhere else?
     
    Last edited: Jun 15, 2019
  16. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The 14nm I/O die has been known about since sometime between the Next Horizon event in October or November of last year and around CES.

    The person in the comments section saying Epyc is TSMC doesn't realize that only refers to the core dies, not the I/O die.

    Probably about the time they figured out they had to cut down the I/O die for the chipset to reduce power from 15W to 11W, which was done by reducing the lanes from 8 to 4, they realized that to hit TDP targets on their chips at base clock, they needed to reduce power consumption of the CPU's I/O die as well. They couldn't cut down lanes there like they did on the chipset, so they had it produced on 12nm.

    Or at least that is my theory. I saw that comment right after it was posted (I comment on some of his articles here or there, same handle) and ignored it because of prior coverage and what I just explained.
     
    hmscott likes this.
  17. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Also, I forgot to mention: going from 14nm to 12nm from Zen to Zen+ never reduced the area footprint. It added efficiency and reduced heat, allowing for a larger boost speed. Something to remember.
     
    hmscott likes this.
  18. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Also why it makes no sense to put a 14nm IO die on Epyc and a 12nm IO die on Ryzen.

    I updated my original post when you added the article quote, before you answered, no need to change your response again though.

    I'll wait to see what the AMD sourced Epyc Rome release info says.
     
    ajc9988 likes this.
  19. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    We will get further confirmation at Hot Chips in August! Only a couple months away.

    Now, for a couple watts, like maybe 5W on mainstream, it matters. On server products, it would be a max of 20W extra (probably closer to 16W or so) or less, due to all the extra IF interconnects on the server parts, which need to connect up to 8 core chiplets. That, plus sampling of Rome starting in Q3/Q4 of last year, means I don't think they would really make this change this late in the game before full release (god that sounds like a double entendre).

    So, I think it is just a matter of cost, not switching streams, and the change making much less of an effect for a much higher TDP product.

    Edit: We also have to remember that AMD already has Intel's best 56-core product beat by over 100W on TDP, which means an extra 10-20W isn't going to really affect the value of the chip relative to Intel's power consumption.

    Edit 2: Also, without cutting down lanes to the chipset like on mainstream, there could have been less power savings relative to mainstream, although if those lanes were re-apportioned on the mainstream chip, then that may be moot and not matter. Just trying to speculate.
     
    Last edited: Jun 15, 2019
    hmscott likes this.
  20. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    gotta get that 24 cores minimum.

    just some speculation about frequency here. the 2700x was 4.4ghz boost; lots were able to do a 4.1ghz all-core at around 1.35v to 1.375v, but imho that voltage is cancerous, i wouldn't dare go over 1.3v even if the heat allows it.

    so with a 4ghz all-core on the 2700x, going to 7nm should have given a ~30% frequency boost, which did show up on the GPU side; on the CPU side, however, it isn't happening. the 16-core boosts to 4.7ghz turbo, so at best we can have 4.3ghz across 16 cores at a decent voltage.. until we saw the demo at computex LOL
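    putting rough numbers on that expectation (the ~30% figure is the node-level claim being discussed, not a guaranteed CPU outcome, so this is just a sanity check):

    # Naive expectation if the ~30% 7nm frequency uplift carried straight over
    # to the CPU, versus the announced Zen 2 boost clock.
    zen_plus_all_core_ghz = 4.0   # rough 2700X all-core figure from above
    node_uplift = 0.30            # headline 7nm claim (seen on the GPU side)

    naive = zen_plus_all_core_ghz * (1 + node_uplift)
    print(f"Naive 7nm all-core expectation: ~{naive:.1f}GHz")   # ~5.2GHz
    print("Announced 16-core single-core boost: 4.7GHz, so the full uplift isn't showing up on CPUs.")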
     
    hmscott likes this.
  21. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Contracts are probably a big part of it... but taking your comment into account, it is possible we won't see the I/O die shrinking to 7nm (or less) until at least node yields improve and costs go down... which should happen with 7nm+, but I remember that AMD has to continue using GLOFO for some production until 2021, I think... which means it won't be until Zen 4 (or 5) that we see the I/O die moved to a smaller node.

    https://www.anandtech.com/show/1391...th-globalfoudries-set-to-buy-wafers-till-2021

    "AMD on Tuesday said it had amended its wafer supply agreement with GlobalFoundries. Under the terms of the new deal, the two companies agreed about prices and volumes of wafers that AMD will purchase from the U.S.-based foundry through at least 2021"

    Zen 2 = 2019 - 7nm
    Zen 3 = 2020 - 7nm+ EUV
    Zen 4 = 2021 (I think) - 5nm or 6nm
    Zen 5 = 2022?

    Considering that Zen+ will have run its course by the end of this year (probably due to the APUs), the only thing left for GLOFO to produce that's AMD-related is the I/O die.

    Granted, the yields and costs associated with producing the I/O die will probably be excellent due to node maturity...
    But it may be that AMD and GLOFO consider moving the I/O die to GLOFO's 12nm node by 2020 or 2021... that would help drop the amount of space the I/O die occupies (roughly 20%) along with costs... and who knows, AMD could also do some modifications to the I/O die as well.
     
    ajc9988 and hmscott like this.
  22. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    just saw your signature. is acer gonna release bios update for you to swap in a 3950x? :D
     
    ajc9988 likes this.
  23. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It was a 20-25% boost I thought, but it may have been 30%. But there is a significant difference between CPU and GPU architecture. Because of that, you can get close, but it's not guaranteed. Also, AMD originally planned to do a straight die shrink. Then they said screw it and shot for an architectural refinement WHILE on a node shrink. Why? Because at first TSMC was seeing a frequency regression similar to what is seen on Intel 10nm. Turns out TSMC figured it out and did get the frequency up (potentially part of the reason for a re-tape of Navi).

    So you get both frequency and architectural changes. But those architecture changes mean you may not get all of the frequency increase. That is why I'm waiting to see the max OC frequency. Because that will tell me if it was worth the change. They did get a large IPC increase, larger than expected. But they had to take some lumps on frequency because of it. Compromise.

    We also have conflicting information. Like the 16-core scoring in the 61,000 range on GB4 at 4.3GHz, while the OC to 5-5.35GHz on LN2 only got into the 64,000 range, which is much less scaling than I would expect since the LN2 run had memory in the 4500MHz range. One thing I speculated is that the 4133MHz memory on the 4.3GHz score was run 1:1 with the IF speed. But it also could have been very different revisions, where the LN2 chip was an earlier sample than the later 4.3GHz score. So there is a lot to figure out still.
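    As a quick sanity check of that scaling complaint, using only the rounded leak figures quoted above (real GB4 scaling also depends on memory and uncore speeds, so this is illustrative only):

    # If GB4 multi-core scaled linearly with core clock, the LN2 run should
    # have landed well above the reported ~64,000.
    score_at_43ghz = 61_000          # 16-core leak at 4.3GHz
    for ln2_ghz in (5.0, 5.35):
        expected = score_at_43ghz * ln2_ghz / 4.3
        print(f"Linear-scaling expectation at {ln2_ghz}GHz: {expected:,.0f}")
    # ~70,900 at 5.0GHz and ~75,900 at 5.35GHz, versus ~64,000 reported.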

    Great parts, but from digging in further on the I/O die used for the chipset, a German site (I believe it was) that covered the story mentioned they did 12nm on Matisse because it gave better memory performance than 14nm. Now, why not do that for Epyc Rome? Because with the extra memory channels, speed is already constrained, ECC memory only exists up to 3200MHz, etc., so doing 12nm likely would not have yielded the same benefit, which also addresses your point above (but 12nm and 14nm use the same wafers, if not the same lines).

    But, after seeing the mem controller story, I think you are right for moving to 12nm for 2020, as that would be potentially used for threadripper, and there the memory speed support DOES matter more!

    Meanwhile, AMD, between Zen and Zen+, did not make the die smaller even with the space savings. They left the floor plan the same and used the smaller node, and Ian seemed to hint in that video that they are doing the same thing again. But maybe they did take the area savings this time, rather than keeping things more spaced out, which can help with heat density.

    But AMD has been reducing their obligations to purchase from GF under the WSA significantly in the renegotiations since GF announced they would not do 7nm.

    Now, as they transition to DDR5 and PCIe 5.0, I could see servers getting a 7nm I/O die next year, keeping TR and Matisse on 12nm I/O dies (moving from 14nm to 12nm for the same I/O die Epyc uses) while using 14nm or 12nm I/O dies for chipsets (did you also notice Ian mentioned a 21W configurable TDP for the chipset?). Obviously, they want 7nm for reduced power for PCIe 5.0 on server chips. But the question is whether DDR5 support, which comes to mainstream in 2021, can be done on 12nm GF I/O chips. They don't really plan on mainstream getting PCIe 5.0 anytime soon, but they need DDR5 by then. So the question is DDR5 support on the larger node, and TSMC 7nm and GF 12/14nm are so different that they require completely separate development.

    So lots of open questions and plenty to speculate on. But I do appreciate this post of yours. Great analysis in it! I know I've complained and ranted at you at prior times, but this really is some good analysis on your part.
     
  24. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    The Ryzen 2700 is a 65W chip :) The 2700X is also a no-go.
     
    raz8020 and tilleroftheearth like this.
  25. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    And unfortunately there are no BIOS gurus working to unlock the platform either... :-(
     
    Papusan likes this.
  26. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    I think the same applies to the Asus model which came with a first-gen Ryzen 1700 as well. It couldn't be upgraded to a 1700X. Not sure if Acer and Asus have locked down the firmware similarly to Dell. But we should have seen X-branded processors from last gen already if it were possible.

    Damn greed!
     
    raz8020 and ajc9988 like this.
  27. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    With MSI pushing a 9900K laptop, there is no reason they couldn't do one with a 3800X or possibly a 3900X. But for mobile, I think they are VERY hesitant to try AMD, mainly because AMD often locks the contract to force the use of AMD GPUs, which would limit sales anyway. Plus, AMD hasn't shaken the low-end mindset for laptops AT ALL yet.

    But, all MB mfrs have said forget Intel's platform and focus on making AMD's X570 the highest quality boards, which is a hell of a change. Also, wondering if the I/O die being used as the chipset is part of why we are seeing like 8xUSB 3.2 ports, due to much higher bandwidth possible.

    Next year, though, the APUs will have Zen 2 and Navi, which, although Navi is... that means laptops with the APUs will be similar to the consoles coming out. So who knows how laptop mfrs will treat AMD moving forward. It's just a shame about all the locking down on laptops, plus charging premiums for the same or similar hardware just because it's put in a laptop.

    Greed!

    Edit: here is the article mentioning the X570 chipset is the I/O die
    https://www.heise.de/newsticker/mel...-ab-September-auch-mit-16-Kernen-4443362.html
     
    Last edited: Jun 15, 2019
    Papusan likes this.
  28. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    From what I know... at least the Asus model can't handle 95W chips. Not sure about Acer's model. Isn't the 3900X a 105W chip? Probably too much as well. @Deks can probably tell us more, as he has owned both models. Not interested in the mobile part.
     
    raz8020 and ajc9988 like this.
  29. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Yeah, they didn't design them as well for heat (but look at the trend).

    Now that we are seeing them trying to properly cool Intel desktop parts like the 9900K, they have to be able to do it moving forward for AMD's 105W chips.
     
  30. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    This is the notebook trend nowadays.... http://forum.notebookreview.com/threads/bga-venting-thread.798775/page-225#post-10922543

    Not sure if desktop chips will be a part of it. Sad but true :(
     
    Ashtrix, raz8020, jclausius and 2 others like this.
  31. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    They won't be. AMD's TDP is at base clock. Look at how they are beefing up the VRM on both AMD and Intel boards, all while pushing for higher core counts.

    Intel cannot compete on core counts for a couple of years until they move to chiplets. That's fine. But this WCCFtech article, if the leaked CPU-Z scores are even remotely true, paints an awesome picture for Sunny Cove; it's just too bad it has a frequency limit due to a broken 10nm process.
    https://wccftech.com/intel-10nm-ice...met-lake-amd-ryzen-3000-cpu-z-benchmark-leak/

    Meanwhile, AMD's PBO of 200MHz means that the upcoming 12 and 16 cores for single core boost will DEFINITELY compete with Intel's 9900K, if not the KS. That means Intel will narrowly keep the crown in ST performance. And 10nm isn't coming to desktop for Intel. But they are beefing up the core count to 10 cores, and we still have no idea on AMD's all core speeds.

    Servers are also marching up the power draw on both sides, along with core count. So not really a worry; they won't be dropping power on those two any time soon, I believe. But the efficiency per core IS going up, which also needs to be mentioned here.

    Edit: to be clear, Ice Lake at 3.7GHz is batting, in single thread, against a 3800X at 4.7GHz and an Intel 9700K at 5.3GHz. That is a VERY impressive architecture for Sunny Cove!
     
    Last edited: Jun 15, 2019
    raz8020, hmscott and Papusan like this.
  32. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    My point is not that there should be only 16 cores, but that 16 seems to be my sweet spot. Assuming no lag, a 32-core TR is tempting as well. Unlike with the 1950x, though, I do not see a personal need for 32 cores.
     
    hmscott and ajc9988 like this.
  33. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Same here. Kind of why I'm hoping 16-core is the entry level for the new TR. Otherwise, I'd be buying more than I need (although, could be fun).
     
    hmscott and TANWare like this.
  34. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I am not too sure how memory bandwidth will play out on the 3950x compared to a quad-channel 16-core TR3. Most benchmarks couldn't care less; real world could be a different story. With TR3, though, there does have to be an entry level.
     
    Last edited: Jun 16, 2019
  35. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    So you're saying that Ice Lake will have 27% boost in single thread due to UArch changes?

    I hadn't seen anything saying that.
    I've read that Intel will have an 18% boost in IPC relative to Skylake.
    That would put Intel roughly 8-9% in the lead vs Zen 2 on IPC... And we know that Intel is currently losing performance due to security patches (about 15% in total)… plus the 1903 Windows update gave a nice boost to Zen overall.

    Is intel going to address those vulnerabilities, and if so, are we looking at IPC increase over Skylake before the security patches or after? And how will this IPC be reflected in regards to the 1903 update that boosted Zen performance?
     
    hmscott likes this.
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, let's give Intel the benefit of the doubt on the claims for the moment, meaning that the claimed 18% was over a completely unpatched Skylake, while the new CPUs don't need the software mitigations, and let's ignore ZombieLoad and the others that require turning off HT. Of course, all claims MUST be verified by independent third parties down the road.

    Coffee Lake has about a 7% IPC advantage over Zen and 3-4% over Zen+. Let's go with Ian Cutress's point that AMD's claimed 15% IPC gain is over Zen+. So Coffee would be 11-12% lower IPC than Zen 2. Add 18% to Intel's and you are at 6-7% IPC over AMD's Zen 2 on average. So we are in the ballpark.

    Let's say the way CPU-Z tests IPC is something Intel excels at. The 6-7% figure is an average across multiple types of tasks, so getting 8-9% over Zen 2 in this one test is right there in the ballpark. All of this ignores the security threats dropped in May.
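    Here is that chain of relative-IPC claims worked multiplicatively rather than by adding percentages; all inputs are the claimed or estimated figures from the discussion above, not measurements:

    # Chaining the relative IPC claims quoted in this thread.
    coffee_vs_zen   = 1.07   # Coffee Lake ~7% IPC over Zen (estimate above)
    zen_plus_vs_zen = 1.03   # Zen+ ~3% over Zen (so Coffee is ~3-4% over Zen+)
    zen2_vs_zenplus = 1.15   # AMD's claimed Zen 2 uplift over Zen+
    icl_vs_coffee   = 1.18   # Intel's claimed Sunny Cove uplift over Skylake-class

    zen2_vs_coffee = (zen_plus_vs_zen * zen2_vs_zenplus) / coffee_vs_zen
    icl_vs_zen2 = icl_vs_coffee / zen2_vs_coffee
    print(f"Zen 2 vs Coffee Lake IPC: +{(zen2_vs_coffee - 1) * 100:.0f}%")   # ~11%
    print(f"Sunny Cove vs Zen 2 IPC:  +{(icl_vs_zen2 - 1) * 100:.0f}%")      # ~7%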

    I don't know where you are getting the 27% boost, but Intel, in a tech press event, directly said Ice Lake cores (not sku specific, but generally) can reach up to 4.1GHz. What we have seen in SKUs is 3.6, 3.7, and 3.9GHz. Semi-Accurate discusses that here: https://www.semiaccurate.com/2019/06/05/a-look-at-intels-ice-lake-and-sunny-cove/

    You are now conflating different aspects of my statement. I'm not bringing in security performance, I'm giving proper respect when it is due, which is that Sunny Cove is a great architecture, just like I've sung Zen 2's awesome changes and have been teasing out slowly information on what changes were made to their chips. None of this has anything to do with cheerleading either camp, and you damn well should know that by now. In fact, I posted in the Intel 10nm thread that this performance is awesome BUT FOR the 10nm process and its frequency performance being broken. Yet you get defensive?

    Hell, if anything you should point out the obvious, which is that 10nm would be a side grade, just like Zen 2 8-core would be for anyone already with a 9900K and in some circumstances a 9700K. Intel was this late on 10nm and all they could do is reach parity on performance according to that leak. Doesn't make the Sunny Cove architecture less impressive. They made major changes and you should read the coverage and deep dive of those changes before you cast stones at those talented engineers! You need to give credit when something is done right! Here, Sunny Cove is DEFINITELY right!

    Now, I do have one problem with the leak. Specifically, if you look at the alleged 6-core ES at 3.6GHz versus the 4-core Ice Lake at 3.7GHz, there are only 9 points between them. When you look at the 3 AMD chips, you get 13 points per 100MHz step. If the Ice Lake leak is real, why is there a smaller step per 100MHz than what is seen with AMD's Zen 2, when Ice Lake should have a higher IPC? One possible reason is they found a way to squeeze a little extra performance from the six-core chip, thereby making the step smaller, but I find that dubious. Increasing MHz should increase the number of cycles in a given period roughly linearly, so IPC should then be simple enough to back out. Now, scaling doesn't always work out perfectly like that. And the 9900K is showing roughly 15 points per 100MHz, which would suggest a higher IPC than Zen 2. There are other factors not included, like RAM speed, differences in cache between the chips, etc. So you can only tease out so much information. It is a leak and may not be true at all.
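    Here is that slope argument as a quick calculation, using only the per-100MHz point deltas quoted above; the assumption is that the ST score is roughly proportional to IPC x frequency, so the points gained per extra 100MHz should themselves track IPC:

    # Points gained per 100MHz, per the alleged CPU-Z leak and the 9900K figure.
    pts_per_100mhz = {
        "Ice Lake (leak)": 9,
        "Zen 2 (leak)":   13,
        "9900K":          15,
    }
    for chip, slope in sorted(pts_per_100mhz.items(), key=lambda kv: kv[1]):
        print(f"{chip}: {slope} pts per 100MHz (proxy for relative IPC)")
    # Ordering comes out Ice Lake < Zen 2 < Coffee Lake, the inverse of the
    # claimed IPC ranking -- the red flag suggesting the leak is fake.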

    But Intel did the work, and there IS an extra IPC gain in Sunny Cove. Hell, Charlie at SemiAccurate points out that the 18% claim looks phenomenal, except Intel left Cannon Lake's IPC gains off the chart showing the 18% figure. That means there are two generations of IPC gains baked into it, as they had to rework parts of Cannon Lake to make it work on 10nm rather than just straight-porting Skylake cores to 10nm.

    So you could have gone any of the routes I just laid out to show what you wanted to do, but went with the security patches argument, which considering we know nothing of the test conditions of any of the CPUs, it can be argued to lay that aside to look at the performance.

    Also, if you wanted to show Zen 2 in a good light, you could have pointed out that the 3950X has a boost of 4.7GHz and a PBO ceiling of 4.9GHz. That means you could add another 26 points to the 3800X score, giving 661, matching the 9900K at 5.4GHz (beating it by 1 point, actually).
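    And the extrapolation itself; the 635 and 660 figures are backed out of the sentence above (implied by the stated 661 and the "beating by 1 point" remark), so they are inferred rather than quoted directly:

    # Extrapolating the leaked 3800X score from a 4.7GHz boost to a 4.9GHz PBO
    # ceiling, using the ~13 pts/100MHz slope from the same leak.
    score_3800x_at_47ghz = 635       # inferred: 661 - 26
    slope_pts_per_100mhz = 13
    steps_of_100mhz = 2              # 4.7GHz -> 4.9GHz

    projected = score_3800x_at_47ghz + steps_of_100mhz * slope_pts_per_100mhz
    print(f"Projected score at 4.9GHz PBO: {projected}")   # 661
    print("Leaked 9900K @ 5.4GHz: 660, so parity within a point, if the leak were real.")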

    But this is a leak and may not be real, so all of this could mean nothing.
     
    Talon likes this.
  37. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    No, my point was not to portray Zen 2 in a good light.
    This is what you mentioned:
    I interpreted that as if you meant that 3.7GHz ST would be competing with, and be comparable to, Zen 2 at 4.7GHz (you hadn't mentioned in this reply anything about how high the boost goes, but seemingly compared base Intel to boosted Zen 2).
    That's where I got the 27% uplift in IPC on Intel's end... and I was understandably confused, because official Intel sources stated an 18% uplift in IPC for the upcoming parts...
    I'm not conflating anything by saying we need to understand how this performance increase will be impacted by security vulnerabilities...
    Did Intel specifically say they will bypass those vulnerabilities on a hardware level and that they won't be affected anymore, or is this an assumption?

    According to this article:
    https://www.digitaltrends.com/computing/intel-ice-lake-wont-rid-spectre/

    It seems that Ice Lake won't be completely rid of security issues, so I'm curious how this will impact the chip's overall performance.
    But I think it's not entirely accurate to state an IPC uplift over unpatched Intel and assume that this performance uplift will stick with full patches enabled... at the moment, we simply lack the data to make that claim.

    As for Zen 2 being a side-grade for people who own a 9900K... that depends on the part people choose (if they choose it to begin with). The power draw of Zen 2 parts should be much lower than what the 9900K currently draws... and yes, I'm aware most people don't care too much about power draw (funny how people point out efficiency for AMD but tend to disregard it for Intel and NV; and for the record, I did NOT mean you)... plus, the Zen 2 12c/24t parts would be an upgrade in performance, but mainly if users plan on using that many cores... otherwise, yes, from a performance point of view, a comparable Zen 2 8c/16t would indeed be a side-grade and not much of an investment unless the user really cares about power draw (though to be honest, we don't know the final power draw numbers, although Zen has a history of staying within its TDP limits).
     
    ajc9988 likes this.
  38. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The 4.1GHz max potential on Intel and the 3.7GHz and 3.6GHz figures ARE Intel's boost clocks on Ice Lake!!! I'm comparing boost to boost (that is why I've been saying Intel's 10nm, as a process, is broken and crap)!

    And, for actual performance for the end user, the security mitigations are something to consider. I just left them out because these "leaked" CPU-Z numbers don't tell us anything about test conditions.

    But I want to revisit my point on the 9-point uplift per 100MHz on Ice, 13 points on Zen 2, and 15 points on Coffee. That says this leak is fake! Why? Because the points gained per 100MHz track IPC, regardless of the absolute clock (3.7GHz, 4.7GHz, 5GHz, 5.4GHz). If Sunny Cove in theory has the highest IPC, Zen 2 the second highest, and Coffee Lake the lowest, then the per-100MHz point increase should be ordered the opposite of what is shown: the highest-IPC chip should gain the most from each 100MHz and the lowest-IPC chip the least. That is a big red flag that this specific leak is fake.

    Also, if the IPC claim is over a fully unpatched Intel chip, then the gain over a patched Intel chip would be even higher, as the security mitigations are now in hardware (ignoring the May security vulnerabilities for the moment). That is a claim I will gladly make and stand by. Now, I said to assume that as a best-case scenario for this exercise in analysis, not because it is true. We can assume it only to take it out as a variable in the above calculations. I am NOT saying to assume it about the final product, just like I wouldn't say to assume the 15% IPC figure over Zen+ is final; we still need testing. All of the numbers need real-world testing, and I did say that. So please take the above as a thought experiment, with the assumptions serving only to reduce variables so that information can be gleaned, if any, while also trying to figure out the validity of a leak (which, as I said, that IPC and performance increase per 100MHz says is fake, so not worth taking seriously).

    But I've been singing AMD's praises on Zen 2 for a while now. Intel did a good job on Sunny Cove from what I can see at an architectural level. So I am trying to say what they are doing right (as there is a lot going wrong there also, and I have been quick to point that out too). Even with the potential of misleading customers, I did previously make the point that, due to the footnote, we don't have any clue what mitigations, if any, Intel had installed for the IPC testing, other than it not having the May 11 vulnerability patches or HT disabled. I made that clear, although possibly not in this thread (it is an AMD thread, after all).

    Even with that, looking at the other leaks, like Geekbench, etc., which are more likely to be true, we are roughly just seeing parity of the Ice Lake parts anyways, so I threw that in because it will likely hold water in the end.

    But I did call Zen 2 performance being on par with the Coffee Lake refresh over 6 months ago, nearly 9 months IIRC. So there are those that paid the premium, but they also got USE of the CPU during that time. For anyone that bought after April: I mean, if 2 months is too long to wait, or is worth paying an extra $100-170 to you, that's all you, especially when you could get a 12-core for the same price, even though the rumors had said 16-core for that price. The rumored pricing was, in the end, wrong and needed to shift down by one tier, roughly.

    As to power draw, don't be too quick on the draw. I think the TDP is base clocks on the AMD chips, and boost will be drawing more, and OC will draw a fair amount, with the number of up to 300W on the 16-core (although we don't know if they meant water or LN2 for that figure).
     
  39. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    if true, then it is worth getting if it comes with at least 12 cores, as well as being a minimum of 25% more efficient than 14nm++. that won't happen till 10nm+ or 10nm++ though; 12 cores at 25% more efficient at 5ghz would be equivalent if not more power than a 9900K, which is still extremely power hungry.
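    putting that efficiency argument into rough numbers (the 25% figure and core counts are from the paragraph above; assuming per-core power at the same clocks is otherwise comparable, which is a big simplification):

    # Crude check of "12 cores at 25% better efficiency vs an 8-core 9900K".
    cores_new, cores_9900k = 12, 8
    efficiency_gain = 0.25            # claimed node/arch efficiency improvement

    relative_power = (cores_new / cores_9900k) * (1 - efficiency_gain)
    print(f"Relative package power vs a 9900K at the same clocks: ~{relative_power:.2f}x")
    # ~1.13x, i.e. still slightly more power than a 9900K, as argued above.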

    honestly that picture looks fake, i'd go for the geekbench one instead.
     
    hmscott and ajc9988 like this.
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Although it's fun to see such magical results, it's unlikely they're real, and if they are real then CPU-Z isn't balanced in its testing, or even accurate on pre-release hardware. And given that the crazy results boost Intel right before AMD launches real hardware, it's even more doubtful; it's sucker-bait BS to distract.

    Reading the source, most are doubtful, and the best explanation is that AVX-512 and other special instructions, which boost some scores into the 65%-69% gain range and others into the 30% gain range, are skewing the results. If any of it is remotely real, the memory score, latency, and bandwidth performance are suffering greatly.
    [attached chart: per-test benchmark comparison]
    http://tieba.baidu.com/p/6164223711?pn=2
     
    Last edited: Jun 16, 2019
    ajc9988 and Deks like this.
  41. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    the geekbench test version for icelake did not have an avx-512 implementation, and that was around a 17% IPC increase assuming frequency is reported correctly. so there is some truth to the ipc increase, which i find odd, because the core should have already been so optimized... unless of course they had planned to hold back those increases and drag them out for another 10 yrs had AMD not decided to step in.
     
    hmscott likes this.
  42. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, we've suspected this for quite a while, and Intel are likely still holding back architecture improvements - there are so many options - but even with all of the pressure AMD's market share gains are putting on Intel, it takes a lot of time and effort to move the giant ship to a new course.

    If you look at the wide range of tests broken out in the chart I posted, there are lots of gains smaller than what an 18% IPC gain would indicate; the 15% / 18% figures are means / averages / overall contributions, and their effect on specific results varies by test.

    Same for AMD's individual results; that's why applications and even games can vary widely in results. When close in performance, it's easy to tilt the scale and get better results in a presentation if certain apps / tests / results are chosen over others.

    That's why I suggest given those variances, select the part you want to progress forward, invest in the vendor you admire and want to succeed. It's been a long time since that vendor was Intel or Nvidia, at least for me.

    It will be a fun 5-10 years moving forward - with more than a few surprises along the way. :)
     
  43. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    I'm ok with Intel being on the sidelines for the next few years until AMD recovers enough market share, and to be quite fair, I think they deserve it after manipulating OEMs and fixing prices on the market.
    If they come out with a sufficiently powerful CPU that's actually efficient and don't try to shenanigan the market again, I'll consider them the next time I'm due for an upgrade... but right now I'm perfectly satisfied with my 2700/Vega 56 in the Helios 500 and don't plan to upgrade for the next 4 to 5 years... though to be honest, I wouldn't mind a BIOS update so I can put a Zen 2 8c/16t in there (as of yet, Acer hasn't released any BIOS updates that would allow that, and there doesn't seem to be anything from third parties).

    But yes, in regards to software readings, they do seem all over the place, and right now we don't have the needed context in regards to how these enhancements relate to security vulnerabilities and their patches.

    Also, there's the matter of industry devs still not fully optimizing for AMD as they should, and using Intel compilers to boot... which gives a bit of a skewed image when reviewers are testing the hw; they also seemingly use unsuitable software instead of software that might use the hw to a better degree (for instance, Adobe is still not up to par on multithreading compared to, say, DaVinci, which is free on top of that).
     
    Last edited: Jun 17, 2019
    hmscott likes this.
  44. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    funny how we think they deserve it and whatnot. the truth of the matter is the people running the company and the investors are quite different now than who they were 10-20 yrs ago. whoever pulled it already got away with their fat pay cheques and bonuses, leaving the current employees/investors to suffer. quite sad.

    we consumers aren't exactly doing any justice either with buying the "best" or "brand name", we literally supported intel with our wallet lmao.

    @tilleroftheearth said this already, dont let emotion get into your purchases, and just get what suit your needs best.
     
    Last edited: Jun 16, 2019
    raz8020, hmscott and tilleroftheearth like this.
  45. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    Retailer Leaks Prices of AMD Ryzen 3000 CPUs & X570 Motherboards

    GIGABYTE, MSI, & ASUS X570 Motherboards – Pricing
    Previously, there have been multiple reports of the X570 Chipset being priced considerably higher than its predecessors, and this, unfortunately, seems to be the case. The X570 Chipset brings various new features and improvements, but they certainly don't come cheap.

    Various motherboards from multiple manufacturers have been leaked, along with their potential pricing. To start, here are the leaked prices for GIGABYTE’s upcoming X570 motherboards:

    ------------------------------------

    And worse it will be... If this is true.

    AMD X590 Chipset Reportedly In The Works – Designed For Premium AMD Ryzen 3000 CPUs, Higher PCIe 4.0 Lanes, Much Higher Priced Than X570 Boards
     
    raz8020, tilleroftheearth and hmscott like this.
  46. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    There is always overpriced hardware in Intel / Nvidia product cycles - but there is also more fairly priced AMD hardware with adequate performance and features available. Which do you buy? The overpriced Intel / Nvidia hardware, right?

    I know you fixate on the overpriced highest performing options, but they aren't the best choices for 99% of the people out there. The best choices are the affordable ones.

    The AMD / Navi / Threadripper / Epyc releases will happen and the best choices will work out to be the best performance and value. There will be affordable AMD motherboards for you, and everyone else, nothing to fret about.

    Waiting patiently for the actual releases with the testing and evaluation of the best performing and priced options that follows is best, and don't worry, you'll find a nice combination of AMD CPU / GPU to migrate to soon enough.

    Until the facts are available, try to ignore all the FUD being spread, rumors and hype, it's not worth our attention, and is only a useless distraction.

    You'll find those wastes of time at every release, it's best to identify them and ignore them - not giving them the time they are meant to take.

    If you don't want to buy an overpriced motherboard, CPU, GPU, then stop buying Intel and Nvidia. That's the only overpriced hardware shipping today. Seems to me you and all the benchmarking fans like buying nothing but overpriced hardware.

    Think of the rumored overpriced AMD hardware as being made just for you and the other fans of buying overpriced Intel / Nvidia hardware.

    The rest of us will buy the perfectly functional fairly priced AMD options. :)
     
    Last edited: Jun 17, 2019
    raz8020, Papusan and Deks like this.
  47. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Like I said what, over two weeks or more ago? AMD would be incredibly dumb not to price accordingly. But I was soundly told that AMD just doesn't do that to its customers. :rolleyes:

    Yet, here they are being praised for the very thing Intel is still bashed for.

    So many words written supporting AMD and all their shenanigans. Yet we still have Intel in the undisputed performance lead with exactly 3 weeks to go before 7/7.

    So, how many more words can be written praising AMD for doing what Intel has done (and is doing) but without so much as an actual product available yet that undisputedly offers higher performance than what Intel has been delivering all along?

    AMD, just like Intel, is a mere faceless and soulless corporation. Neither is your friend. The sooner you stop acting like they are, the faster they'll both get back in line. You know, actually competing for our $$$$. :rolleyes:
     
  48. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    The best part of new product launches is that the old products are available cheaper. I've been saying consistently to buy the best of the previous generation at the discounted pricing, and to buy used as well.

    There are going to be adjustments and price changes moving forward - and if we all stop buying the overpriced products and focus back on buying performance for value, we can retrain the companies to do the same.

    We don't need the overpriced products, we need something that does what we need at a reasonable price - and for the most part the top end products are useless for that need.

    Paying ridiculous prices for a few percent more fps (Nvidia) - fps you can live without - overpaying for brand new shiny crap that doesn't perform (Apple / Dell - Intel) - has been training the vendors to look beyond delivering rationally priced value performance products, and instead create ridiculously overpriced products that the cash fat "non-rational" buyers out there have been foolishly gobbling up.

    The companies are only delivering what customers will buy. Stop buying overpriced crap, start acting rationally and buy products that are value for money - not in excess of performance or cost - and the market will turn back toward value for performance products.

    Here's a nice recounting of the situation:

    Value vs Fanaticism: The Real Issue in PC Gaming
    The Good Old Gamer
    Published on Jun 15, 2019
    With AMD's announcement of the Radeon RX 5700 series it seems both AMD and Nvidia are moving the goal post for Mainstream PC gaming.
     
    Last edited: Jun 17, 2019
  49. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    that will depend on how you define performance i guess, highest of ipc with highest possible frequency?

    assuming the icelake IPC gain really is 17-18% in a non-avx512 geekbench, where mitigation performance regression doesn't matter much, it's a pretty damn good improvement. it sucks that intel has been holding back and milking us, but since i'm after high ST performance this is the chip i'll go for. the good news this time is that cometlake will have 10c on 14nm++, so the 10nm++ intel desktop chip will likely have 12c, and this is all thanks to AMD.

    right now, intel's 10nm+ is horrid at 4ghz peak for those mobile skus, with a superb IPC increase (if true). while AMD has excellent value with amazing 7nm from TSMC, and a decent IPC.

    most tech tubers are basically just giving their opinion and repeating what most already know, so they're pretty pointless to watch. if we need opinions there's reddit/forums, and in-depth reviews can come from AnandTech and Tom's Hardware.

    the only one i really watch nowadays is coreteks; imo much higher quality even compared to ones like adoretv/moorelawisdead.
     
    Last edited: Jun 17, 2019
    hmscott likes this.
  50. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    The "Tom" " The Good Old Gamer" was talking about in the commentary I posted is the guy from " Moore's Law Is Dead", they had just wrapped up an hour long discussion about pricing - and I didn't want to post that long video, the short summary I posted was good enough.

    All of the YT people you mentioned talk between themselves, that's one reason they have a consensus around the pricing issue, and that video I posted summarized most of all of their conclusions pretty well.

    I don't agree with all of their conclusions, but the subject in general - that hardware pricing has gotten out of hand - is what I was talking about in my thoughts on the subject. :)
     
    Last edited: Jun 17, 2019
    ole!!! likes this.