The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    I used their toolkit in trial mode and it was good for someone like me who isn't a gifted coder, but I stayed away from it and used gcc and the MSFT compiler instead, since I had a gut feeling it could cause more issues in the future.
     
    ajc9988, hmscott and ole!!! like this.
  2. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Correct me if I'm wrong, but when pitted against each other in Linux, the AMD compiler turned out to be really good.
    At any rate, Intel compilers will ALWAYS give Intel itself preferential treatment (even if it's relatively minor, this can easily skew the accuracy of any tests).
    Until a compiler is made that doesn't 'discriminate' (i.e. one that can take advantage of ALL CPU uArchs properly), I don't think we can get an accurate picture of which is better.
    The Intel compiler in all likelihood won't be using all of an AMD CPU's features, so it can't give us better insight into what Zen is capable of.
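
    To illustrate the difference (a toy sketch, not any real compiler's dispatcher - it assumes a Linux /proc/cpuinfo, and the function names are made up):

        # Toy sketch of why vendor-based dispatch 'discriminates' while
        # feature-based dispatch doesn't. Assumes Linux /proc/cpuinfo;
        # the function names are hypothetical, not any real compiler's API.
        def cpu_info():
            info = {"vendor": "", "flags": set()}
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("vendor_id"):
                        info["vendor"] = line.split(":")[1].strip()
                    elif line.startswith("flags"):
                        info["flags"] = set(line.split(":")[1].split())
            return info

        def pick_code_path(info):
            # Vendor-based dispatch (the antitrust problem): an AMD chip
            # advertising AVX2 still falls through to the slow path.
            if info["vendor"] == "GenuineIntel" and "avx2" in info["flags"]:
                return "avx2_fast_path"
            return "generic_slow_path"

        def pick_code_path_fair(info):
            # Feature-based dispatch: any uArch advertising AVX2 gets it.
            return "avx2_fast_path" if "avx2" in info["flags"] else "generic_slow_path"

        info = cpu_info()
        print(pick_code_path(info), "vs", pick_code_path_fair(info))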

    Furthermore, devs are indeed lazy when it comes to coding for anything but Intel.
    They grew so accustomed to that ecosystem that they saw no need to change.
    Plus, it raises the question of how much of that could be attributed to Intel paying devs to use their compiler (much like they bribed OEMs not to touch AMD hardware, despite it being perfectly good for most people's uses - and we are STILL feeling the effects of that today, with standardized Intel chassis and cooling layouts being reused for AMD laptops).
    The Acer Helios seemingly broke this 'pattern' (at least on the cooling front), and the RX 560X with 2500U laptops also seem to have decent cooling... but without a dGPU, you will be hard pressed to find a laptop with an AMD APU that doesn't throttle, because OEMs don't pay enough attention to cooling and don't let those units stretch their legs. In all honesty, this issue happens on Intel systems as well, due to excessively thin chassis combined with cooling that is inefficient for such a design... thin and light laptops would need carbon composites and an overall different cooling methodology to avoid throttling - it's more than doable, but as you know, OEMs can be quite stingy.
     
    Last edited: May 31, 2019
    ajc9988, Vasudev and hmscott like this.
  3. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    MSI’s Internal Slides For AMD Ryzen 3000 CPUs Show A 13% IPC Increase Instead of AMD’s Claimed 15% IPC Jump
    With all major launches, there is often a bit of confusion over details, sometimes bigger and sometimes smaller. This go-around it's an interesting one regarding the IPC of the new Ryzen 3000 Series CPUs. Launching on July 7th, the Ryzen 3000 Series is promising 15% IPC, which is great, but there seems to be a bit of a disconnect between what AMD showed to the public vs what they showed to MSI and potentially other board partners.

    Robert Hallock of AMD, the man on stage doing the demos and owner of sweet shoes, came out on Twitter to clarify the discrepancy between the 13% and 15% IPC increases. He put it very simply: the Cinebench 1T-derived IPC results were 13%, and the results from SPECint, which are more rigorous, showed the 15% IPC gain that was shown on stage. So both are right, and it goes to show that IPC will vary a bit from application to application.
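
    For what it's worth, the arithmetic behind any such "IPC uplift" figure is simple: lock both chips to the same clock and compare single-threaded scores. A minimal sketch (the scores below are made-up placeholders, not AMD's or MSI's data):

        # IPC gain at iso-frequency: performance per GHz, new vs old.
        def ipc_gain(score_new, score_old, freq_new_ghz, freq_old_ghz):
            return (score_new / freq_new_ghz) / (score_old / freq_old_ghz) - 1

        # e.g. both parts locked to 4.0 GHz; a 13% higher 1T score = 13% 'IPC'
        print(f"{ipc_gain(203.4, 180.0, 4.0, 4.0):.1%}")  # -> 13.0%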
     
  4. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    Intel compilers work on most platforms.
    Let's wait for benchmarks to compare AMD Ryzen 3xxx chip performance.
     
    hmscott likes this.
  5. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Intel Brings Its Own Benchmark to Refute AMD's '2X' EPYC Claim

    See:
    https://www.tomshardware.com/news/intel-amd-computex-benchmark-epyc-xeon,39544.html

    Note: There is more to the story than the select quotes above - please go read the link provided. But they do support the fact that AMD is (shockingly!) not comparing apples to apples when it's trying to do battle with its main rival. :rolleyes:

    More performance for the masses is great, as is the most performance for a budget (though when they will be able to use it in a genuinely useful way in normal consumer workflows is still not visible in the foreseeable future).

    But the talk that more cores = more power keeps being refuted by actual objective results, and all while Intel is on older 'nodes' too. :oops:

    Will we thank AMD in the medium/far future for pushing for more cores? I have no doubt about that at all and I'll be the first to sing their praises if they keep iterating and become more competitive by then.

    Does anyone today need merely more cores vs. an actually more productive platform instead? I would suggest possibly not, from all available information.
     
    Vasudev likes this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    On the chiplets used, AMD left room for quad core chiplets and there are plenty of quad core samples. If nothing else, defective dies with those will be on Athlon CPUs in the future.

    By separating the I/O from the core chips, it has made it easier for core chiplet binning, because you are just looking at cores and caches, primarily, versus the I/O as well.

    Because of this, I think the fully functional dies will be used on the mainstream chips, with only 1 core die on the 6 and 8 core chips (unless yields are so bad they need dual dies to get a higher effective yield and to save on costs, which would increase packaging costs for integrating more dies, but may outweigh the lost cost on the die).

    As to gaming, I think they will be competitive there, which changes the recommendations significantly.

    I don't think that yields are so low they will need to resort to that. Currently, reported yields are around 70%, which works out to a defect density of about 0.45/cm². That is a pretty common defect density for a new node, and is usually the point when a node gets into mass production; it lowers over time.
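
    As a sanity check, the usual simple Poisson yield model reproduces those numbers. The ~74 mm² Zen 2 chiplet area is my assumption here, not something from the yield reports:

        # Poisson yield model: Y = e^(-D*A)
        import math

        defect_density = 0.45      # defects per cm^2 (reported figure)
        die_area_cm2 = 74 / 100.0  # ~74 mm^2 chiplet (assumed) -> cm^2

        print(f"estimated yield: {math.exp(-defect_density * die_area_cm2):.0%}")  # -> ~72%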

    Lower latency can be found with a single chiplet, if for no other reason than the scheduler can't bounce a thread to another chiplet instead of the other CCX on the same die. I'm really looking forward to June 10th on that, and to Hot Chips in August for a deeper dive.

    I disagree for one reason: With Zen, we had the 8-core 1800X, the 8-core 1900X, and the 8-core server parts. Even if AMD ditches the 8 and 12 core TR parts, I think they would leave the 16-core TR part intact because of cost to consumers (outside of workstations, many don't need more than 16 cores, and some buy the platform just for more PCIe for storage or GPUs) and because it picks up at the same core count as the mainstream platform.

    So I think 16 cores will become AMD's new entry level into HEDT, which is fine by me (my workloads don't really call for more than that; I don't know what I would do with 32 or 64 cores, TBH, other than the LULZ).

    Yes, you are the only one. In fact, I was proven wrong on price and frequency to a degree, but the IPC gain more than made up for it so that my performance estimate was about right (I estimated a bit on the high side, but not so much as to be wrong IMO, although some may differ with that).

    Just hold your horses and watch for reviews. There's the same hype on Intel's side, with their claimed IPC being challenged by many on multiple grounds, including 6700Ks being clocked to 3.5GHz for the comparison while using 2133 memory with high CL.

    This is why it is always best to wait for independent reviews.

    Instead of thinking of the nefarious reason, think of the laziness factor! Intel was known for ecosystem development (there are many ways to challenge the effects of this, but without a doubt, they have done this). In addition to working with hardware partner vendors, they also poured a ton into R&D to develop tools to make their partner's lives easier. If I was a lazy man (which I am), I would use an easily and readily available tool at my disposal rather than making my life difficult and more work.

    That is the strategy surrounding giving away IP to create open standards (something AMD has done, but is making more headway with recently, as well as something Intel recently did by giving an interconnect IP to DARPA). It can also be seen with project Athena, where Intel is helping to eat some of the partners' development costs, which then benefits Intel through creating a standard which may not be compatible with AMD's implementation without a large cost of redesign. It's a fair, competitive tit for tat in that way. Now, when Intel purposely made their compiler sandbag any non-Intel CPU, that was ruled under antitrust to be unfair competition.
    I saw this but find the NAND optimization excuse to be slightly dubious.

    Also, it isn't just about more cores = more better. It is looking at the system power consumption and TCO. Intel's chips have a TDP of 340 or 350W. AMD's have a requirement of 180-250W. AMD's power pulled is much more consistent with their TDP than Intel's. Intel's system also REQUIRES a water cooler to use, whereas AMD's can be air cooled, although many supercomputer implementations have decided to create their own water blocks for the upcoming chips.

    That means, overall, your power budget is going to be considerably higher with the Intel solution, which needs to be considered.

    Where Intel really takes the win is workloads that are still bandwidth-limited even with 8-channel memory; Intel has 12 channels per socket. But those workloads are now few and far between, and their number will shrink as both companies integrate HBM, or some similar standard, in the future.

    There is also the question of IF AMD could even get one of those 2P 48-core per chip servers at the time of their presentation (or ahead enough to test in lab to then use during the presentation). Intel launched them in April. I'm not in the market for a system like that, so I do not know whether they are actually widely available at the moment or not, or if there is an order lead time. Very likely there is a 4-6wk lead time from order to delivery, which if released in late April means AMD would not have had a sample in hand to test BEFORE going live with their presentation. And it isn't likely Intel would seed AMD with a review system like they would do with someone at Phoronix or ServeTheHome, whose reviews help to drive sales.

    That is why I'm taking Intel's statements in this regard with a grain of salt. If they did have the machine and Intel won, they'd have pushed the power consumption part instead, showing a small performance difference but a LARGE power draw difference. Hell, with the 8180 chips, some places saw the CPUs drawing over 500W per chip in some workloads.

    So I'd argue TCO is going to come into play, and at some point that power usage calculation will suggest that over the long haul the Rome chips will provide a better platform for many uses, not to say Intel's chips will not also sell well for certain uses.

    And there is still the consideration of per-core licenses. They are everywhere! Because of that, AMD considerably closing the single-core performance gap will help. Last December, one company that was seeded both Intel Cascade Lake-SP and AMD Rome said they were impressed with Rome, but for their needs they had to go with Intel for single-core performance. I haven't seen an update on that story, but it is something that needed mentioning.
     
    hmscott and ole!!! like this.
  7. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    The 48 and 56 core parts are not consumer available; they are for select OEM systems, if and when they become available. So for the consumer market, AMD is only comparing against Intel's consumer-marketed CPUs. Where is this being sneaky? I don't know about most people, but that is apples to apples in that respect.

    Also, they are touting a CB score of 7,000+, but we have yet to see a TR3! I would love to see a 32-core TR3 with a CB R15 score of well over 8,000; I can always dream.
     
    Vasudev, hmscott and ajc9988 like this.
  8. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    I thought that Intel shot themselves in the foot with that server part presentation.
    Sure, they got the supposed performance lead... but at 400W (according to Tom's Hardware)... whereas AMD's EPYC apparently tops out at 240W... that's about 40% lower power consumption for AMD.

    Oh and, lest we forget, the AMD system is not susceptible to those rather pesky security flaws (the patches for which drop Intel's performance by quite a large margin... enough to put AMD in the lead regardless of how you look at it) - and as far as I understand it, security is a big thing in servers.

    Plus, the ROME system could be a lot cheaper.

    ROME has efficiency and security on its side.. and as for performance, we need to wait for the final release to gauge the numbers better (given that a pre-production sample is not a finished product), but even if Intel did somehow manage to retain that 23% better performance with those 'optimizations' (whatever they might be), I don't think it would justify the security flaws and massive power consumption (especially in the midst of climate catastrophe).

    As I said, I think Intel shot themselves in the foot with that presentation and maybe jumped the gun.

    I suppose they needed to come out with 'something' to try and counter Zen 2, but it seems as though they are overly 'jumpy'.
     
    Vasudev and hmscott like this.
  9. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    AMD would be foolish to offer their systems for substantially cheaper, except maybe to account for the actual performance deficit. ;)

    Those security flaws are taken care of as we know, and are accounted for by both AMD in their tests and by Intel (otherwise, AMD would be the one shouting next).

    So, correcting wrong assumptions and coming out with a better apples-to-apples comparison for currently existing products, while using known and published optimizations, is now jumping the gun? Yeah, no.

    The only thing AMD has is most likely lower prices (again: foolish - this market has $$$$$$$$$$$$$$) and possibly lower TDP.

    Agreed that third-party testing of both systems is the only way to solve the 'who has more performance' question.

    Even if we already know that today. ;)

     
  10. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Doing the math: for the same performance, the 400W Intel CPU needs 160W more than the 240W Epyc, which makes the Intel CPU about 1.66x as power hungry as the AMD Epyc for equal performance.
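
    A quick sketch of that math, plus what the 160W gap means over a year of 24/7 DC duty (the $0.10/kWh rate is an assumed figure, and this ignores the extra cooling load on top):

        intel_w, epyc_w = 400, 240

        print(f"Intel draws {intel_w / epyc_w:.2f}x the power")  # -> 1.67x

        extra_kwh = (intel_w - epyc_w) * 24 * 365 / 1000         # per socket-year
        print(f"extra energy: {extra_kwh:.0f} kWh/yr, "
              f"~${extra_kwh * 0.10:.0f}/yr at $0.10/kWh")       # ~1402 kWh, ~$140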

    Even if the Intel system cost the same, the added power and cooling costs would put the Intel solution out of the running.
    Yup, by trying to look competitive, Intel proved beyond a doubt that their DC solutions are overpriced, under-performing for the cost, and too big of a "?" for security.

    Given past experience, those kind of gotchas could disqualify Intel from getting accepted into the Evaluation Cycle required for purchases.
    Usually there is a set of Qualification Benchmarks that test all of the hardware throughout the solution - generally configured as per the client request, but we don't always test every corp application installed and configured unless specifically requested by the internal client.

    The Evaluation Acceptance Benchmarks are supposed to catch all of the requirements so we don't need to clone a complete application instance for testing for each client.

    But with the variable performance caused by Intel security mitigations showing up in application-specific results, we can't predict end results based on similar tests; we would need to do the actual application testing, installed and configured, before we have results we know we can rely on for the internal customer.

    What might look good tested on a standalone system with local storage might get much worse results when tested as finally configured in the delivered array of networked systems.

    That would then involve network / storage configuration support, including setting up network and application load balancing - which is often done on the same application system builds rather than dedicated network hardware, which would also be affected by Intel's security mitigations performance losses.

    This goes on and on: every system built with Intel hardware that is involved in the final client build would need to be built, configured, and tested together to find all the possible gotchas that would occur if we didn't pre-test the whole system running in place in the client's end configuration.

    To do complete application testing, we would multiply the time required for the evaluation, requiring client resources to verify the Evaluation build before validation using client-made benchmarks (that wouldn't go over well either); this would also require other departments to do their installs (Application, DB, Security, etc.) as is typical for the application build. That would be a costly time / resource nightmare.
    It would cost far more than it's worth, given the wide track of internal clients queuing up for Evaluations.

    Intel's higher-than-"normal" power and cooling requirements might also require building resource changes outside the original estimates used to build out the DCs. Those additional resource requirements, costly in expense and time, would also work against choosing an Intel solution.

    Beyond AMD's awesome performance improvements, AMD needed all of Intel's mistakes to come together at once like they have in order to get this kind of shot at replacing Intel in the DC's.

    The only thing Intel has going for it now is inertia. That plays a large role in hardware selection, and it takes internal clients pushing hard for alternatives - and that takes time, months and years typically, unless the client puts together the whole acceptance package themselves, doing the work and producing the results. I've seen that happen as a way to get through and around pushback from the in-place inertia to stay with current solutions.

    This time though the DC people are gonna be fed up with Intel mitigations headaches, and many will end up on the side requesting alternatives to Intel - which gives AMD a shot. Hopefully AMD will be able to respond with the resources needed to help clients optimize applications on AMD Epyc solutions.

    And hopefully AMD will stay out in front longer this time, giving AMD a real shot at gaining back 50%+ of the DC before Intel clocks back in with competitive solutions.

    Intel will need to come back without Security related performance losses, and with far better power / performance / cost before Intel could stop AMD from taking away market share.
     
    Last edited: May 31, 2019
    ajc9988 likes this.
  11. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Note: the warning is for Ryzen and Threadripper systems with RAID, and the download link is for AMD-hosted Threadripper X399 drivers only. You'll want to find your motherboard's RAID drivers at AMD, or updated versions from your vendor, as well as get Qualcomm WiFi and Bluetooth driver updates installed, before attempting the Windows 10 update.

    Heads up if you have a Windows 10 RAID installation: update your RAID drivers before the next Windows 10 update, or the update can freeze your system in the middle of the update.

    " Note We recommend that you do not attempt to manually update using the Update now button or the Media Creation Tool until a new driver has been installed and the Windows 10, version 1903 feature update has been automatically offered to you." - Microsoft

    See articles and source links for details:

    Windows 10 May 2019 Update Causing Issues For Some AMD Ryzen PCs, Fix In This item
    by Hilbert Hagedoorn on: 05/31/2019 10:25 AM | source: microsoft.com | 40 comment(s)
    https://www.guru3d.com/news-story/w...s-for-some-amd-ryzen-pcsfix-in-this-item.html

    "A new Microsoft Windows update, a new series of problems, the QA levels of Microsoft are just astonishing. The May 2019 update has problems for users with AMD processor, as well as showing problems with Bluetooth and WiFi. The problems are not as bad as when the 1809 update was released but still rather annoying.

    Installing the update can cause AMD Ryzen and Threadripper processors based system a headacke and crash because there are compatibility issues between the update and RAID drivers. Installation simply freezes on these machines as there's a compatibility issue between the RAID drivers and the update.

    There is a simple fix and bypass available, download the latest AMD driver before installing the Windows update. With a clean installation, it is useful to have a removable storage medium with the driver at hand.

    AMD Users: download and install the latest RAID driver ( found here), currently the one listed at 9.2.0.105.

    Qualcomm WiFi and Bluetooth modules also seem to suffer from disruptions after update 1903. Here as well, the solution is to first update the drivers before installing the Windows update."

    Windows 10, version 1903 and Windows Server, version 1903
    Known issues
    https://docs.microsoft.com/de-de/windows/release-information/status-windows-10-1903#452msgdesc

    " AMD RAID driver incompatibility
    Microsoft and AMD have identified an incompatibility with AMD RAID driver versions earlier than 9.2.0.105. When you attempt to install the Windows 10, version 1903 update on a Windows 10-based computer with an affected driver version, the installation process stops and you get a message like the following:

    AMD Ryzen™ or AMD Ryzen™ Threadripper™ configured in SATA or NVMe RAID mode.

    “A driver is installed that causes stability problems on Windows. This driver will be disabled. Check with your software/driver provider for an updated version that runs on this version of Windows.”

    To safeguard your update experience, we have applied a compatibility hold on devices with these AMD drivers from being offered Windows 10, version 1903, until this issue is resolved.

    Affected platforms:
    • Client: Windows 10, version 1903
    • OS Build 18362.116
    Status: Mitigated (KB4505057, May 21, 2019)
    Last updated: May 23, 2019 09:28 AM PT
    Opened: May 21, 2019 07:12 AM PT
    Workaround: To resolve this issue, download the latest AMD RAID drivers directly from AMD at:
    https://www.amd.com/en/support/chipsets/amd-socket-tr4/x399

    The drivers must be version 9.2.0.105 or later. Install the drivers on the affected computer, and then restart the installation process for the Windows 10, version 1903 feature update.

    Note We recommend that you do not attempt to manually update using the Update now button or the Media Creation Tool until a new driver has been installed and the Windows 10, version 1903 feature update has been automatically offered to you."

    I only included that particular Known Issue; as always, check the whole list of known issues before installing a Windows 10 update.
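
    One practical note on that "9.2.0.105 or later" requirement: a plain string comparison gets dotted versions wrong ("9.2.0.99" sorts after "9.2.0.105" as text), so compare the fields numerically. A minimal sketch, with made-up installed versions:

        def version_tuple(v):
            return tuple(int(part) for part in v.split("."))

        REQUIRED = version_tuple("9.2.0.105")

        for installed in ("9.2.0.99", "9.2.0.105", "9.3.0.38"):
            ok = version_tuple(installed) >= REQUIRED
            print(installed, "OK to update" if ok else "update RAID driver first")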
     
    Last edited: Jun 1, 2019
    ajc9988 and Vasudev like this.
  12. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    AMD charging the same as Intel for their server lineups?
    Not likely.
    If history is anything to go by, they will undercut Intel in all brackets by offering same or higher performance, better security and superior efficiency at a lower price.

    Why would it be foolish?
    The data center market may be loaded with cash, but you will most likely have 'die-hard' Intel fans in that arena who will probably want the best bang for the buck while getting everything at the same time (and AMD would need to wean those people off Intel by offering a competitive solution).

    Offering an EPYC ROME system with 64 cores and 128 threads at 30%-50% lower price than Intel would get AMD a substantial market-share in the data center.
    It would effectively deal a crippling blow to Intel because AMD's yields are better, and they have a far better product when all things are taken into account.

    What I expect is that Zen 2 on 7nm will effectively have the same prices as Zen 1 (not Zen+) while moving the bracket higher (offering a 12c/24t CPU as the top consumer part). Zen+ introduced a cheaper version of improved Zen 1, which was expected due to node maturity... so I suspect that EPYC Rome will follow this (with Zen 3, or basically whatever comes on 7nm+, improving performance by a further 10%-ish while possibly slashing power consumption and dropping prices to Zen+ levels).
    Although I don't know if the prices of Zen 2 have even been finalized... so we could still see changes there.

    You do realize that a company can remain highly profitable even when they're offering their products at a significantly lower price (aka not gouging customers for cash), right?

    As for the security flaws being taken care of on Intel... are they?
    I haven't heard/read anything about the server lineup being retested with those security patches applied and what performance impact they had on the Xeons.

    It also raises the question of just what kind of 'fine tuning' AMD didn't do on the Intel system. Part of me thinks it's possible that AMD compared their ROME results with fully patched Intel Xeons, whereas Intel's version of 'fine tuning' would include disabling those patches. But then again, when Intel didn't bother to 'fine tune' AMD's system for comparative purposes by pairing it with higher-frequency RAM or tightening the RAM timings to max out Zen, I don't recall AMD complaining about it [though admittedly, my memory is sketchy on this, so please correct me if I'm wrong]... if anything, it was everyone else that pointed this out.

    Plus, AMD is not as likely as Intel to be susceptible to upcoming cyber-security issues.
    Intel needs patches to get around those issues, and the patches impact its performance... AMD doesn't.
    Intel's CPUs experienced about a 15% drop in overall performance due to those patches... whereas AMD suffered only 3% (for patches it didn't even need in the first place) - that's a 5x differential in AMD's favor, and I suspect that these performance drops may be more apparent in the data center than in the consumer lines.

    From a business point of view and increasing issues of cyber-security playing a part... I'd be hard pressed justifying an Intel system that could be fraught with all kinds of security problems down the line which would require constant patching that would subsequently drop performance, and on top of that would require bigger/more expensive cooling to get virtually same or lower performance as the AMD system which chugs down on 40% less energy (which cuts power bills by a nice amount) and isn't as susceptible to cyber-attacks.

    As for knowing who has more performance... well, no, we don't know that, for the simple reason that AMD still hasn't released their CPUs into the market.
    There's no telling what AMD may or may not have changed in their lineups between when the benchmarks in question were taken and release... that's why I said that pre-production samples shouldn't be taken at face value as representative of how the final product will behave (though it's just as valid to think that the pre-production numbers are the final numbers we will see - but then again, ROME is not going to be released just yet).
    For that matter, we don't even know if the 240W TDP on AMD's end for ROME is the final and maximum TDP.
     
    Last edited: Jun 1, 2019
    hmscott likes this.
  13. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    It looks like PCIE 4.0 on pre-x570 is dead...

    AMD_Robert
    Technical Marketing 15 points· 1 day ago
    "This is an error we are correcting. Pre-X570 boards will not support PCIe Gen 4. There's no guarantee that older motherboards can reliably run the more stringent signaling requirements of Gen4, and we simply cannot have a mix of "yes, no, maybe" in the market for all the older motherboards. The potential for confusion is too high.

    When final BIOSes are released for 3rd Gen Ryzen (AGESA 1000+), Gen4 will not be an option anymore. We wish we could've enabled this backwards, but the risk is too great."

    AMD's Robert Hallock: No PCIe 4.0 support on 300- and 400-series motherboards - Sweclockers sweclockers.com
    Submitted 1 day ago by DonVicati
    https://www.reddit.com/r/Amd/commen...pcie_40_support_on_300_and/epn2c83/?context=3
     
    ajc9988 likes this.
  14. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    You can never get new hardware features via software anyway.
    New boards are for people who want high-end PCIe 4.0 SSDs that hit 5GB/s.
     
    hmscott likes this.
  15. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, as I recall between AM2/AM2+ and AM3 you could get CPU's that supported both DDR2 and DDR3, so you could plug the new CPU into your old AM2+ motherboard and it would support the DDR2 memory.

    Then when you wanted to upgrade to DDR3 memory and a new AM3 motherboard that supported it, you could move your CPU over and it would drive the DDR3 memory.

    Of course you couldn't fit DDR2 in the same slot as DDR3, and I recall some motherboards had slots for both DDR2 and DDR3, so owners could migrate their current RAM and new CPU onto the new motherboard and later upgrade their RAM to DDR3 on the same motherboard.

    Wow, newegg still has a few examples of this on their site for sale!:
    https://www.newegg.com/p/pl?d=am3+ddr2+motherboard

    C68 Desktop Computer Motherboard Support for AM2 940/AM3 938 DDR2+DDR3 Memory Mainboard for AMD
    https://www.newegg.com/p/2MG-002F-00002

    Of course there will be another migration coming soon(?) for DDR5 memory, and maybe even PCIE 5.0 - although that's less likely to happen soon. :)
     
    ajc9988 and Vasudev like this.
  16. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,080
    Likes Received:
    3,275
    Trophy Points:
    281
    So far the AMD Ryzen 3000 is looking good; there are a few benchmarks floating around for them, especially Int performance/ST/gaming, and it looks like even the 3600 will be a good hit. The DRAM 3200 JEDEC certification is cool too; however, per Gamers Nexus, that will be limited to 2 DIMM slots only, not all 4, so that issue still exists. Plus the B-die is rumored to die off, though some say new 10nm B-dies are in production (check the comments). And the price to performance is great, from the looks of the 3800X beating a 9700K in ST as well (stock performance, not OC; of course the 9700K will OC like crazy).

    But for me the biggest gripe is the ultra-hot PCH of the X570 and the PCIe switching.
    One - There are basically zero PLX motherboards out there. None of them have a PCIe 4.0 to 3.0 switch or a lane split; we get 28 lanes on the mainstream platform (24 usable, due to the x4 link from the CPU to the chipset, same as X470) - because of AM4, maybe? That's how Threadripper is positioned above it, with more PCIe lanes from the CPU and PCH, and then their ultra-flagship EPYC. Everyone got the 10G LAN, which is cool, and all that USB 3.1 Gen 2, and some still retain PS/2 (for Win7, maybe?). But no ASMedia USB controllers this time, so no direct Win7 support, and the lack of PS/2 means it's not an easy task. As for the I/O: PCIe 4.0 NVMe SSDs actually using that speed get ultra hot (the main reason I personally don't like NVMe over SATA SSDs, because of longevity), and beyond that the lanes are simply wasted. All that PCIe 4.0 is a sheer waste if you plug in a PCIe 3.0 GPU - the same lane count runs at half the per-lane speed, and the same goes for NVMe 3.0 SSDs and everything else. So basically you are paying for a new standard which doesn't have any products yet: the market is saturated with Intel, and even AMD's own platforms up to X470 have no 4.0 devices as of now - maybe in the future? - and even an RTX 2080 doesn't saturate 3.0 x8 lanes. So ultimately the number of devices is as limited as on the previous X470 or even the Z390 platform; basically it's the 4.0 SSDs only for now (no idea how useful those ultra-hot SSDs are to the masses). I may be wrong on a few things.
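
    The per-lane numbers behind that "wasted lanes" complaint, for what it's worth (raw transfer rate times the 128b/130b encoding both gens use) - this is also why a 4.0-to-3.0 switch or an x8 split would recover them:

        # GB/s per lane: GT/s x 128b/130b encoding efficiency / 8 bits per byte
        def gbs_per_lane(gt_s):
            return gt_s * (128 / 130) / 8

        gen3, gen4 = gbs_per_lane(8), gbs_per_lane(16)
        print(f"Gen3 x16: {gen3 * 16:.1f} GB/s")  # ~15.8 GB/s
        print(f"Gen4 x8 : {gen4 * 8:.1f} GB/s")   # same bandwidth, half the lanes
        print(f"Gen4 x16: {gen4 * 16:.1f} GB/s")  # ~31.5 GB/s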

    Two - the hot PCH. Before Computex, I thought there would be high-end boards without fans; massive disappointment. The only passive chipset is on the Gigabyte X570 Aorus Xtreme, with a massive metal heatsink instead of the trash plastic everyone else uses, which is super cool, plus a good dose of audio with the ESS9218 (though no idea about the implementation of that DAC chip), and it's $600. Every other OEM has fans. And to those saying the X570 runs fine under normal load: nope, it's always hot.

    Here's the official word from ASRock themselves (Computerbase.de)

    AMD should have solved this PCH issue and the 3.0 switches. There is so much potential in 4.0 being 2x the bandwidth of 3.0 - we could have gotten so much I/O from the Matisse platform, rivaling X299. But Broadcom gulped down the whole of Avago, who made the PLX chips, and raised their price to the moon at BS levels of ~$100 (yeah, they are that expensive; perhaps they will only run on server boards). Shame. And 5.0 with DDR5 is slated for 2021; dunno if Nvidia or Intel will even launch anything on 4.0, so fewer 4.0 devices again. Wish EVGA had an X570 (50 boards total, and ASUS has a dozen of them already), but I guess their Intel/Nvidia contracts limited them hard; I bet they would have made the best X570, with passive cooling and the best design, construction and longevity.

    I'm waiting for the X570 gaming/OC performance versus the 9900K on Z390, the new R0-stepping 9900KS, and the final ringbus 10C Comet Lake compatibility on Z390 (especially the Z390 Dark). (Also, check the X570 Aorus Xtreme page: they show XMP 4400+ DRAM on the AMD platform, but we don't know how this new Matisse scales with DRAM, since the large cache + redesign makes it less reliant on the DRAM.) All I want is Win7, so that these machines will live a long, long time; with that PCH mobo fan on X570, or AIO cooling on any of the CPUs, they won't. So fingers crossed for a new NH-D15S from Noctua as well, with improved DRAM clearance. Hopefully this new Matisse can run Win7. Plus, the 9900K may be matched but not beaten by the 3800X, and GN has mentioned the 16C CPU runs over 300W, to note.

    Good times, I guess. Intel will be forced to innovate now; their new Ice Lake 10nm is having good IPC gains. Although the desktop S SKUs are absent, and we don't know how this new 10nm performs: from their old slides, 10nm can't beat 14nm++ - they need 10nm+ at least, and the clocks too.

    E3 show is where we wait for more info until the embargo lifts :)
     
    Last edited: Jun 1, 2019
    hmscott likes this.
  17. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    You are making too much out of the PCH "fan" feature. It's a good thing, not a bad thing. I wish previous PCIE 3.0 PCH and SSD's had active cooling features on all motherboards.

    The added PCH power and cooling is due to M.2 NVMe PCIe 4.0 usage - not necessarily an all-the-time thing. I wish AMD had routed forced air cooling from that fan (or another fan) through the M.2 bays to cool what I am going to assume are very hot-running PCIe 4.0 SSDs, given Corsair's PCIe 4.0 SSD with the giant heatsink and fins.
    I hope the vendors all get smart and set the fan curve for the PCH fan so it's not running except when there are high-performance PCIe 4.0 drives active - perhaps it might be needed for high-end PCIe 3.0 drives too? Something to test.

    What I would look for is the fan control for the PCH fan, and set it to an optimized setting so it only runs when and if necessary.
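
    Something like this is the curve I'd want to see (a generic sketch; the temperature breakpoints are made up, not from any vendor's BIOS):

        # Fan off until the PCH is actually warm, then a linear ramp.
        def pch_fan_duty(temp_c, off_below=60, full_at=85):
            if temp_c < off_below:
                return 0.0  # silent at idle / PCIe 3.0-only loads
            if temp_c >= full_at:
                return 1.0  # flat out under sustained PCIe 4.0 NVMe load
            return (temp_c - off_below) / (full_at - off_below)

        for t in (45, 65, 75, 90):
            print(f"{t}C -> {pch_fan_duty(t):.0%} duty")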

    I've also heard (I tried to find the reference but haven't yet) that the Threadripper 3.0 PCH will also need a cooling fan, which suggests TR4 gets PCIe 4.0 (or PCIe 5.0?), and that the TR4 PCH is rated to draw more power - I think it was 15W for TR4 vs 11W for the AM4 boards.

    Maybe the TR4 PCH extra power is for more PCIE 4.0 lanes driven?, or the for added load of PCIE 5.0?
     
    Last edited: Jun 2, 2019
    ajc9988 likes this.
  18. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    The first-gen DDR5 user base will be small, since DDR4 has matured enough and can OC up to 4600MHz or more.
    Bandwidth and lower power are the only highlights of DDR5.
     
    ajc9988 and hmscott like this.
  19. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Well guys, up to 8 cores will be one Zen 2 die, and above that will be 2.
     
    tilleroftheearth, ajc9988 and hmscott like this.
  20. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    ajc9988 likes this.
  21. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,806
    Trophy Points:
    331
    The fan does make me cringe. It brings me back to the Socket A era, more specifically my experiences with the ABIT AN7. Let's hope these fare better, lol...
     
  22. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Intel did a similar thing with DDR3/4. Huge boondoggle as I understand it.

    Here, it is about signal integrity. If you cannot maintain the signal with the heightened requirements, you cannot guarantee the data speeds, etc.

    DDR5 isn't scheduled for consumers, as I understand it, until 2021 for both companies. PCIe 5.0 is NOT coming to consumers ANY TIME SOON (I'll re-emphasize this below).

    This is a NOTHING BURGER. Literally, even Intel boards have different supported speeds based on DIMM population, and almost all Intel boards rate only the 2-DIMM speeds. Complain all you like, but this is found on all x86 boards, so there is no reason to lay it at AMD's feet and not Intel's.

    I know because I ran my 6700K with 4x8GB DDR4 at 4000MT/s@CL16. That was back when that was hard and competitive overclockers had to bin boards to get 4000+. So that is a non-issue.

    Now, if your point was we need to wait and see performance on ram, I agree. If it is how it sounds, a complaint that dual DIMM speeds are higher for official support than quad DIMM rigs, then I respond that is normal.

    These are complaints for the sake of complaining. Let's take a deeper dive. The chipset was originally designed to use 8 PCIe Gen 4 lanes to the PCH. When they did this, that is where you got the 15W TDP on the full chipset. They cut that down due to heat and power draw, although rumor has the full version coming to the TR boards.

    ASMedia has been a bane for security. Yes, they are a major chipset player, without a doubt. But only a couple of years ago I was still hearing complaints about them.

    Now, the simple fact is more bandwidth on the lanes requires more power to run. There is an argument that Intel and Asmedia could have done it better at lower power. There may be truth to that. But what is also true is that the power consumption from either of them would have gone UP from the current 5W chipsets. The only way to get the power consumption down would be to fab it on 7nm TSMC/Intel 10nm. Now, Intel doing it on 14nm would likely be lower power than GF 14nm, but at that point we are squabbling over a couple Watts.

    As to USB controllers, that doesn't mean anything. You know that I could install the AMD USB controller drivers for my 1950X on Windows 7. It gave me an annoying message, but still installed. Also, Windows 7 dies in January. Deal with that. You may not like it, but it is dead. Beyond that, unless you have a volume license, you get NO SUPPORT. Considering all the security vulnerabilities, primarily on Intel CPUs, that will make that OS sad pretty quickly.

    If your concern is over competitive benchmarking, wrappers for the benches to verify AMD CPUs are using HPET are in the works. Also, we do not yet know if these Zen 2 chips fix the RTC problem or not.

    As to saying the lanes are a waste, the same thing was said from PCIe 2 to 3. Next year, Nvidia will have PCIe 4.0 cards. Still you could call it a waste until programmers implement the use of higher bandwidths in their programming. But considering the design and implementation on the PS5 to allow PS4 games to use the new hardware, including bandwidth, without recoding the game from ground up, similar tools may be available to keep the amount of work down to use it. Also, people keep their systems more than 1 year, on average, meaning that when PCIe 4 cards are ready, they slot in. So it is one of those arguments that doesn't often matter, IMO.

    Now, I'm pretty sure the PCIe 4.0x16 slot splits to x8. We'll have that covered on more boards in the future, but without a PLX for it, the GPUs on PCIe would be stuck on x8. Once again, you have to wait for new tech.

    They have new NVMe 4.0 SSDs available for launch, so to use it, you have to buy a new product. That is a fact of life.

    They also added Thunderbolt 3 on a couple boards, which is nice IF you use it.

    Saying the number of devices is limited is said with EVERY new PCIe gen.

    A slow-moving fan that ramps up means what, exactly? If it runs at low speed (creating fans that turn fully off rather than idling at 20% is harder, as seen in ALL CASE FANS) with low noise that is likely imperceptible over the 120/140mm fans you have in your case, how is that a negative? Maybe you are not old enough to remember boards from the '90s and 2000s, where chipset fans were fairly common. But it really isn't the deal you are making it out to be.

    In fact, chipset waterblocks, and full-cover blocks that included the chipset and north and south bridges, were common at one point. Same with boards that ran a heatpipe to the chipset. Currently, you point out only one premium board ran a heatpipe. Why? Because other costs got in the way. To go to PCIe 4, they had to start with higher-quality PCBs, which is why 8-layer PCBs are common this gen. Then you have the beefed-up VRMs to guarantee power for up to a 16-core chip. That adds cost. Daisy chain memory topology is fairly inexpensive compared to implementing T-topology, at least as I understand it; AMD is optimizing for daisy chain, which is why the 4-DIMM supported speed is lower, whereas T-topology lowers the 2-DIMM speed but helps with 4 DIMMs. Most consumers only use 2 DIMMs and 16-32GB of RAM, so that really isn't an issue. When you add all of that up, I see why they went with a simple fan. I just hope they bought premium quiet ones; otherwise I see many replacing them with Noctua fans.

    Also, from leaks, RAM speed STILL matters. Specifically, the new BIOS options already show you can control whether the ratio of IF to memory clock is 1:1 or 1:2. That means you may have to cut the IF speed in half to hit the higher memory speeds. But IF bandwidth was also doubled and the latency decreased. So the real question is whether the best balance is a 1:1 ratio, or a 1:2 ratio with high-speed memory. The cache just reduces how often it needs to make a memory call.
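
    In clocks, the trade-off looks like this (DDR4-XXXX is the transfer rate; MEMCLK is half of it, and FCLK either matches MEMCLK or runs at half of it) - the speeds below are just example settings:

        def clocks(ddr_rate, ratio):
            memclk = ddr_rate / 2
            fclk = memclk if ratio == "1:1" else memclk / 2
            return memclk, fclk

        for ddr, ratio in ((3600, "1:1"), (4400, "1:2")):
            memclk, fclk = clocks(ddr, ratio)
            print(f"DDR4-{ddr} @ {ratio}: MEMCLK {memclk:.0f} MHz, FCLK {fclk:.0f} MHz")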

    People have tried making the memory latency into the primary problem. Although it is a contributing factor, it may not be as important as people have suggested. Wait for the reviews and you will see where the new performance is.

    Once again, Win7 support is dead, so that won't last long. And on what basis do you say that using a chipset fan or AIO shortens the machine's lifespan? Now, if the MB manufacturer cheaped out on the chipset fan so that it dies, then sure, it will run hot. It is up to you to buy a board that doesn't have a cheap component on it, or to replace it with an aftermarket fan. Same with an AIO for the CPU. Those are critiques of those manufacturers that you are trying to foist on the CPU company, which is absurd.

    Did you see GN also note that with engineering samples on LN2, it reaches 6GHz? Now, why that is important is a 4.5GHz all core on those chips is about the same as a 9900K all core 5GHz OC. So, if final silicon can get it to 6.4GHz, we could see new records for 8-core CPUs in multi-threaded, potentially (or at least be right there in the game).

    As to Ice Lake 10nm IPC, there are questions about its authenticity, from not having all security mitigations applied, to people not seeing that IPC uplift when comparing it to their Skylake and Whiskey Lake chips. Ian Cutress at AnandTech said he will be testing the IPC claims from both companies, so we shall see.

    Also, Ice Lake is 10nm+. Intel's slides show no transistor performance uplift until 10nm++, which is Tiger Lake next year.

    Did you see that Ice Lake and Tiger Lake use FIVR? The same that many blamed for Hot-well running so hot. So we have to see if that is also in play.

    But absolutely correct, E3 is when at least one AMD rep said they can talk about overclocking, so....

    PCIe 5.0 will definitely need extra cooling, but should only be in servers (unless AMD uses the Epyc I/O die on the TR series, which would be nice for workstations).

    As to the 15W variant, that is the full fat, meaning AMD will be laying 8 lanes rather than 4 lanes to the PCH. That means a LOT more bandwidth for components hanging off the chipset. Moreover, on that platform, the PCIe main lanes all hang off the CPU, so unlike on mainstream where it matters, it matters less on workstation boards, but will allow for faster I/O to be on the chipset with less bottleneck, which is a huge win.

    It also will likely have more Thunderbolt 3 on the workstation boards, which is a nice addition, unless they have USB 4.0 ready by then.

    For PCIe 5.0, I'm really excited to see the server implementations of that and I believe Mellanox's new 600Gb/s connectors! Imagine how much you can beef up a company's infrastructure with that! Or, at least in the US, if they did an infrastructure development of fiber across the country bundling multiple connectors at that speed, including updating our backbones. That is what PCIe 5 has me excited for, literally being able to revolutionize networking.
     
    hmscott likes this.
  23. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    That really is the question: whether it was implemented properly!

    Another person said they had fears because they remember their X58 board, where it seemed like it was going to melt the board!

    I don't think it is going to be that bad. But, many reviewers have thermal imaging cameras, so we should have an idea before we buy on what type of heat output it has and under what load conditions (because reviewers are extremely excited to test the new PCIe 4.0 NVMe drives).

    Considering they will likely have a modified variant of the chipset on the PS5 and the new Xbox, and that the PCIe 4.0 is likely involved with the new storage tech the PS5 is using, I'd say it should not be TOO bad or run so hot that it is a problem. (edit: then again, they could have the storage straight off the CPU and not off the chipset, but who knows).

    I could be wrong on that, but...
     
    hmscott likes this.
  24. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,806
    Trophy Points:
    331
    Sadly I will disagree on this one point. AMD is way behind Intel when it comes to memory speed/compatibility. I've never had issues running 4 sticks of memory at their rated speed/above in any Intel dual/quad channel motherboards for the past few years, and I've had quite a few. Sadly with AMD and their 1700x and 2700x I've had struggles getting 4 sticks to run at even close to 3000MHz. I've actually tried 5 different x470 motherboards until I settled on this X470 Taichi because I could get 4 sticks up to 3200MHz. Sadly since then I've removed two sticks because it was still unstable at times. I know you are talking about official support, but AMD has to make up a lot of ground here IMHO.
     
  25. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Here's a question: On Intel motherboards, is it easier to get higher clocks with 2 DIMMs or 4 DIMMs? The answer will always be two, especially on dual-channel systems. Even on quad-channel, hexa-channel, and octo-channel platforms, having fewer DIMMs makes it easier (single rank, single DIMM per channel). That has to do with the ability of the IMC to drive a certain number of ranks overall, which is why dual-rank DIMMs don't often clock as high as single-rank ones. If you put dual-rank DIMMs in both slots of each channel, you wind up with 4 ranks per channel, or 8 ranks overall in a dual-channel setup, compared to 2 ranks total for 2 single-rank DIMMs in dual channel, or 4 ranks if all 4 slots are filled.
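
    The rank arithmetic, spelled out for a dual-channel board:

        def ranks(channels, dimms_per_channel, ranks_per_dimm):
            per_channel = dimms_per_channel * ranks_per_dimm
            return per_channel, channels * per_channel

        for dpc, rpd, label in ((1, 1, "2 single-rank DIMMs"),
                                (2, 1, "4 single-rank DIMMs"),
                                (2, 2, "4 dual-rank DIMMs")):
            per_ch, total = ranks(2, dpc, rpd)
            print(f"{label}: {per_ch} ranks/channel, {total} ranks total")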

    I'm not saying that AMD does not have room to improve on their memory controller. Hell, I'm running 4x8GB single rank in quad channel at 3466@CL14 15 15 right now. I used to have 3600@CL14 stable before an AGESA update changed that.

    With Intel, when I got the 4000@CL16 on my Skylake, that was dual channel with the same single rank DIMMs populating all 4 slots. The MB was rated for only dual channel two slot 3866 for my Maximus VIII Extreme. Moreover, my Asrock X399 Taichi is rated up to 3600+ (granted, I think that was lower at launch and before second gen).

    So although I agree with your premise that the IMC needs improvement, understanding the impact of the motherboard topology when filling a dual-channel board with single-rank DIMMs is important, along with the unequal trace delay when DIMMs are set up in daisy chain versus T-topology. I took the complaint more as being upset that MBs run 4 DIMMs slower than 2 DIMMs, which is found on competitor boards as well (going from the description given). You are right to point out that I did not discuss the impact of IMC quality. I did discuss the impact of the IF-to-memory ratio setting and how that may need to be changed to reach higher speeds.

    But, considering board manufacturers have increased the single rank dual channel speeds for this generation, I think they have improved it significantly (not to say there isn't more work to do). I just didn't discuss it (due to instead focusing on the other elements in the chain).
     
    hmscott, Papusan and custom90gt like this.
  26. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,806
    Trophy Points:
    331
    Right, it is easier to get 2 DIMMs running at higher speeds than 4 (especially single rank), but it's never been a limiting factor on any of the Intel setups I've owned/tweaked for friends. My experience with dual-channel AMD platforms has been less than stellar. The same 4 sticks that easily hit 4266MHz CL17 on dual-channel Intel platforms (8086K, 9900K) have been near impossible for me to get stable on AMD, even after spending days tweaking. Basically I gave up because it just wasn't fun like it was on my Intel systems. My previous MSI X470 Gaming Pro Carbon wouldn't even POST with 4 sticks above 3000MHz, regardless of timings or settings. I'm anxious to see what ground AMD made up with X570, and I'm excited to try it. I just hope it's significant.
     
    Papusan, ajc9988 and tilleroftheearth like this.
  27. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I still haven't tried newer than skylake, although I hear it was much improved and easier to hit those higher speeds.

    I have two kits of 2x8GB 4133 Trident Z Samsung B-die 19-21-21. I could never get that to run 4133 even in two DIMM configuration on my skylake 6700K.

    So it was a limiting factor a couple of years back. I got good performance at 3733@CL14 on that Skylake chip, which, due to lower latency, outperformed the 4000 speed in many, but not all, tasks (and was more stable than all four DIMMs at 4000).
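
    The latency math that explains it: first-word latency in ns is the CAS count divided by the actual clock, which is half the DDR transfer rate:

        def latency_ns(ddr_rate, cas):
            return cas / (ddr_rate / 2) * 1000

        for rate, cas in ((3733, 14), (4000, 16), (4000, 19)):
            print(f"DDR4-{rate} CL{cas}: {latency_ns(rate, cas):.2f} ns")
        # 3733 CL14 at ~7.50 ns beats 4000 CL16 at ~8.00 ns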

    Now there is something to be said that AMD is now getting over 4000, yet Intel is pushing 5800 in extreme overclocking of ram.

    But, overall, we'll have to see how the IMC came out, especially with them now being able to bin the I/O chips separate from the core dies.
     
    custom90gt and Papusan like this.
  28. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    you guys are talking about diminishing returns here, especially on an intel system. intel systems have a better IMC and memory support, and you also get less performance from upping the memory speed. AMD's new design with a load of cache is there so it doesn't have to go to memory all that often; along with the IPC boost, it is more than enough to match, if not beat, most intel systems.
     
    bennyg, ajc9988 and hmscott like this.
  29. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    MSI CEO: Even Low-End AMD X570 Motherboards Will Be Expensive tomshardware.com | Jun 8, 2019

    The arrival of AMD's Ryzen 3000 series CPUs brings more multi-threaded heft to the mainstream desktop and sweet new technologies like PCIe 4.0. Unfortunately, according to what we learned in an interview with MSI CEO Charles Chiang and from other vendors at Computex 2019, the platform will also bring higher pricing for the next series of AMD motherboards. As a result, X570 motherboard pricing could be similar to Intel's expensive Z390 motherboards, if not higher. In fact, even the lowest-end X570 boards could cost more than most previous-gen X470 boards, though Chiang stressed that pricing decisions are still not finalized.

    That's a big change from AMD's traditional role of having significantly less expensive motherboards than Intel. When asked about the source of the higher pricing, Chiang said, "Technology wise, PCIe Gen 4 will contribute a lot of cost on the motherboard, and AMD right now they intend to sell X570 chipset for higher pricing."
     
    ajc9988 and hmscott like this.
  30. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Buy from ASUS/ASRock/Gigabyte, etc. That's the good part about desktop.
     
    Papusan and hmscott like this.
  31. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    And you think they won't increase prices? :) Aka offer subsidized prices? That's not the way it works, brother.

    "With better products, comes a desire for higher margins, and a change in direction for a company that was basically forced to almost cut itself out of the market in terms of profits with its previous, non-competitive CPU designs"
     
    ajc9988 likes this.
  32. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    If they increase the price, we can always wait, no problem.

    Plus, Zen 2 will be a new product; who knows what issues it might come with.
     
    ajc9988, hmscott and Papusan like this.
  33. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    But a company can remain highly competitive and profitable without gouging its customers.
    Looking at things long term, charging a lot of cash for a piece of hardware can easily backfire, especially in the current economic climate with stagnating wages, automation taking over everywhere, and people losing their jobs as a result.
    People just can't afford these things... I certainly wouldn't fling myself into unnecessary debt for a piece of hardware.
    So, how does a company that gouges its customers expect to go on?
    Eventually, this mode of operation will collapse, because people will either hold out until prices drop (if that happens), or the economy will squeeze most of them, which in turn creates fewer consumers (though more consumption certainly isn't good for the environment, given that we're in the midst of a climate catastrophe... we need to get away from the cyclical consumption model and create a sustainable way of doing things - which exists... but most companies aren't looking towards it).

    Can't help but think that if big companies like Intel and AMD were more conscious about these things, they would pair up with OEMs to completely harvest older laptops for their raw materials and build new ones from the old.
    You spend less energy and fewer resources, and people can spend a LOT less on new technology... best of all, this would reduce the need for mining new resources, because gains in technical efficiency allow us to do more with less (though OEMs, along with other companies, would in that case need to design hardware that uses fewer resources - because they can).
     
    hmscott likes this.
  34. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The effects of that are often folded into IPC. For example, on tasks that require memory calls, having double the L3 cache means the chip doesn't have to go off-chip as much, which means less idle time and fewer delays, increasing speed. Memory speed is more about the Infinity Fabric. Increasing the memory speed alone would have a minimal effect, as on Intel's platform, were it not for the IF. Think of overclocking the memory on an AMD system as doing, at the same time, two overclocks that are separate things on Intel: 1) the memory (obviously), and 2) the cache/fabric clock. Even on Intel's systems, overclocking the cache (meaning the uncore), or the mesh speed on HEDT chips, can give a noticeable performance increase (edit: depending on the task). With AMD, as you increase the memory speed, you also overclock the IF, which increases bandwidth and decreases latency.

    So a direct comparison of Intel to AMD on memory overclocking would require, IMO, also examining the effects of a cache overclock alongside the memory overclock. Most people on Intel systems tie the cache overclock to the core overclock, not the memory overclock, usually trying to keep the cache speed within 300-500MHz of the core frequency on mainstream, whereas mesh frequency is clocked lower (not reflective of bandwidth, as it is different tech compared to an uncore or IF) and gets overclocked on its own. IF, like the uncore, is overclocked relative to another clock, except it is tied to the memory frequency rather than the CPU frequency.
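
    If it helps to see that concretely, here's a rough sketch in Python (illustrative numbers, and assuming the simple case where FCLK tracks the real memory clock 1:1 - not a claim about any particular board or BIOS):

    # Rough sketch of why a memory OC on Ryzen is two overclocks at once:
    # the DRAM clock and the Infinity Fabric clock move together (1:1 here).
    # Numbers are illustrative, not measurements.

    def mclk(ddr_rate):
        """DDR4 transfers twice per clock, so the real memory clock is rate/2."""
        return ddr_rate / 2

    def if_clock(ddr_rate, ratio=1.0):
        """Assume FCLK tracks MCLK 1:1; `ratio` models a divider."""
        return mclk(ddr_rate) * ratio

    for rate in (2933, 3200, 3600):
        print(f"DDR4-{rate}: MCLK={mclk(rate):.0f} MHz, IF={if_clock(rate):.0f} MHz")

    # Going DDR4-2933 -> DDR4-3600 also lifts the fabric from ~1466 to 1800 MHz,
    # the part an Intel uncore/mesh overclock would have to do separately.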

    Just wanted to tease out a bit on the different data fabrics and how they relate to overclocking.


    I found it interesting that Steve from GN mentioned that MB mfrs told him they are moving more volume of AMD boards than Intel boards (these boards are mostly in the DIY market segment, although sometimes used by boutique computer building firms, whereas the big OEMs create their own boards). Because of that volume, they want to increase quality. But it goes even deeper than this.

    Two big changes are coming with the X570 boards. First, PCIe 4.0 has stringent signal integrity requirements. That means cheap boards with, like, a 4-layer PCB are out. I'm not even sure 6-layer PCBs will be used. So you have most manufacturers putting out closer to 8-layer or better PCBs, which is a HUGE jump in PCB quality, and that comes at a cost. Although less important on mainstream boards, server and HEDT boards will have to use components (redrivers/retimers) to boost and condition the signal on the traces further out in order to meet the PCIe 4.0 spec. Those surface-mount parts cost money per chip to implement, increasing costs (as mentioned, less important for X570 boards, but they will come into play in other market segments).

    For the second change, MB mfrs are beefing up the VRMs SIGNIFICANTLY on these boards. For the Giga flagship, they have a 16-phase PWM controller (not an 8-phase one using parallel stages or doublers) with 14 phases for the CPU using 70A smart power stages, meaning it could handle HEDT under extreme overclocking EASILY (while running ridiculously cool). This increase in VRM quality does cost money for the better components, which means higher board cost.
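
    To put rough numbers on why that VRM runs so cool, a quick back-of-the-envelope (only the phase count and 70A rating come from the spec above; the voltage and CPU draw are assumptions):

    # Back-of-the-envelope on the quoted VRM: 14 CPU phases of 70A smart stages.
    # Purely a theoretical ceiling; real limits are thermal/transient, and the
    # voltage and CPU draw below are assumptions, not measurements.

    phases = 14
    amps_per_phase = 70                        # per the quoted spec
    vcore = 1.35                               # assumed heavy-OC core voltage

    total_amps = phases * amps_per_phase       # 980 A theoretical
    max_watts = total_amps * vcore             # ~1323 W theoretical

    cpu_watts = 250                            # assumed extreme-OC CPU draw
    per_phase_amps = cpu_watts / vcore / phases

    print(f"Ceiling: {total_amps} A / {max_watts:.0f} W")
    print(f"At {cpu_watts} W each phase carries ~{per_phase_amps:.0f} A of {amps_per_phase} A")
    # ~13 A per 70 A stage is loafing, which is why it runs ridiculously cool.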

    As additional information on the changes, AMD made the chipset themselves this time, and that chipset, which supports PCIe 4.0, will likely cost more due to its features and additional bandwidth, even if it runs at 11W. They are also standardizing on a daisy-chain memory trace topology, meaning you may be better off using dual rank DIMMs in two slots for dual channel than populating four slots with single rank DIMMs. This DOES need testing to be sure, though single rank in two slots should provide the highest memory overclock. Wait for testing for confirmation on 4 DIMMs SR versus 2 DIMMs DR.

    Then, due to more volume, you are seeing MB mfrs add in premium features on these boards, like Thunderbolt 3.0. Those premium features require hardware components and those cost money.

    So, yes, the boards will cost more moving forward. But one could easily argue that the boards are now of equal or better quality than their Intel counterparts (and for some boards, the higher quality is required, as seen in the PCIe 4.0 discussion). That means the days of going AMD to save money on a budget board may be over, but you ARE getting a higher quality product for your money as well.

    Also, this is not directed at you; rather it is meant to share more information on the topic with the community and give context to the additional price. If any price gouging is happening this generation on the AMD side, I'd say it is AMD itself. I complained specifically about AMD sliding the price deck compared to the rumors, which makes the 3800X less competitively priced against the 9700K and does not hit the 9900K in the way I had hoped. That allows Intel to regroup and may keep AMD shareholders happier, as Su has said that across the entire lineup they have margins at 50% or above, thereby closing in on the margins that Intel and Nvidia have on their products. The problem I have with that is they do not have the mind share yet to do this and need to grow their market share faster, as Intel isn't sitting still. If the roadmaps are to be believed, then theoretically servers will get the Ice Lake-SP chips next year. Beyond that, a couple years later, Intel will be doing chiplets with 2.5D or 3D integration. AMD has the capability of going from an MCM to an active interposer with 2.5D integration right now, but doesn't due to cost. By not hitting hard now to claw back market share, instead seeking additional margins, AMD gives Intel an opening to regroup, takes less revenue from Intel during that time frame, and could limit its own growth to a degree.

    Now there is also the argument that AMD has all but won the desktop market, and there is a nugget of truth there. They decimated Intel's dominance in HEDT. They are outselling Intel at about 2 to 1 in the DIY market. OEMs are making more desktop offerings, including commercial desktop offerings, with AMD's chips (still not as many as with Intel, though), etc. But they still only had about 18% market share for desktops, expected to grow.

    Mobile (meaning laptops here, not cell phones/tablets) is a much larger slice of the pie, so to speak. AMD only has about 12% of this market, and using Zen+ and Vega on the mobile 3000 chips won't change that much. Intel's chip shortage did change it. But analysts suggest a high end of 18% for how much AMD can grab, while Intel has started increasing volume production in this segment as the shortage eases. That means Intel is willing to give up the desktop to defend the mobile segment. It was a choice by Intel, driven by its manufacturing shortage, not something forced by AMD.

    Server is where AMD is making large inroads, but it is the segment Intel is fighting the hardest to maintain. Yes, Intel still holds the performance crown here, but for servers, power consumption matters in TCO. For consumers, overly focusing on wattage is not the best way to select components, as was shown by JayzTwoCents. But when talking deployment of thousands of chips in a datacenter, consuming 50%+ more power for less than 10% more performance will quickly balloon the cost of ownership, while not providing enough extra production to make up for those costs. This is without addressing security concerns as well. That is compounded by their server chips costing more than AMD's chips. This is likely why Intel is trying to go from the small core count 10nm designs directly to the server offerings, to get the power savings needed to be more competitive. This is the highest margin segment in computing, so losing market share here hurts the most.
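
    To put the TCO point in rough numbers (ratios lifted straight from the claim above, not from any benchmark):

    # The TCO point in rough numbers; ratios taken from the claim above,
    # not from any measurement.
    perf_ratio = 1.10     # "less than 10% more performance"
    power_ratio = 1.50    # "50%+ more power"
    print(f"Perf per watt: {perf_ratio / power_ratio:.2f}x")   # ~0.73x
    # Roughly 27% less work per joule, multiplied across every node and every
    # hour of a datacenter's life, before even counting the extra cooling.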

    That is why some have speculated that AMD will have constrained supplies and so is using more of the best 8-core chiplets in its server chips: there is more demand there, and by pricing the mainstream higher, they are less likely to get a shortage on desktop chips while keeping enough supply to push more of the high margin server chips. It is a sound theory, but more information would be needed to confirm its validity. Also, as it comes to production, it should be mentioned that Nvidia inked a deal with Samsung to use their 7nm process next year, primarily because Samsung undercut TSMC on price to get the deal done. But that also means that next year TSMC will have MORE capacity that needs filling, meaning AMD can grab up that capacity, so long as demand is there, for both their CPU and GPU lines (this year it didn't matter, because Nvidia was using the 12nm process while AMD was competing with Apple, Qualcomm, Huawei and others for 7nm capacity). So, as TSMC increases their 7nm line capacity, AMD will have one less major player to compete with for capacity, which is good news for AMD. Especially since, with the trade war, Apple is exposed on the 20% of their iPhone sales that go to China, and Qualcomm is exposed in the event either company gets put on China's unreliable entities list. So, depending on how the trade war goes, TSMC could lose 7nm orders, freeing further capacity for AMD while they seek growth. The trade war analysis is another post in and of itself.

    On the comment about less energy, that isn't exactly true. It can take a LOT of energy, and a lot of chemicals, to recycle old electronics and reclaim the rare earth minerals from dead electronics. Although the US has increased its recycling and reclamation efforts in the past decade, much of the reclamation required shipping the electronics to China for processing. So you have the CO2 from transporting the electronics, then you have China refusing to take some goods due to the trade war (specifically plastics), which resulted in us illegally shipping them to Malaysia, which recently realized what was happening and banned those shipments. The US doesn't know what to do with its trash and recyclables because we NEVER built out our domestic capacity. Hell, some dumps in the US have literally been on fire for nearly half a century and we still haven't put them out!

    Also, reclamation does not bring down the cost. Having to process the chemicals used in reclamation to comply with environmental regulations has HUGE costs, which get folded into the price of the reclaimed minerals. But even though it costs more, that does NOT mean we shouldn't be doing it. It is just something that I needed to correct. This issue is a large one, and going through the ins and outs would take a much longer post.

    Please see my description above on the additional costs of the boards and why these MB mfrs are not necessarily gouging on some of their lines (on others they obviously are, but...).

    You also bring up good points on the slowing global economy, the stagnation in wages, the effects of automation, etc. I've ranted on those topics at length, and I think we have a lot of agreement there. But even with that, we must first ground the analysis in what is actually driving the increased costs before we look at the surrounding circumstances of the market. Why? So that we understand first what the bill of particulars is for a product (the raw cost to produce the boards plus labor), then see the added margins on the boards, then look at how the market will receive the product, both on demand analysis and the global economic climate.
     
    Last edited: Jun 8, 2019
  35. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    Earlier, I was amazed by Intel's low TDP and hated AMD's high TDP. But now I don't care about power savings, because they get in the way of extracting max performance.
     
    Papusan, tilleroftheearth and ajc9988 like this.
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    For energy consumption, what matters more is large scale energy infrastructure change rather than individuals' use of power. As I mentioned, power consumption is less important for consumers, as it amounts to something like $50 over 5 years, or $10/yr - truly worth the price if you are running one or a handful of desktops. It does matter when you go to thousands of machines with CPUs pulling 250-560W+, where that $10 becomes more like $20-30 per CPU of difference, which then gets multiplied by 1,000-10,000 systems deployed, meaning around $20,000 per year or more for 1,000 systems, or $200,000 for 10,000 systems, which is not an insignificant amount. So large corporations should care, but small businesses and end consumers, not so much.
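
    A quick sanity check on that math (the wattage deltas and electricity price are assumptions picked to mirror the figures above):

    # Sanity check on the power-cost math; deltas and $/kWh are assumptions.
    PRICE_KWH = 0.10                               # assumed average $/kWh

    def yearly_cost(delta_watts, hours_per_day):
        return delta_watts / 1000 * hours_per_day * 365 * PRICE_KWH

    desktop = yearly_cost(100, 3)                  # 100 W delta, light daily use
    server = yearly_cost(30, 24)                   # 30 W delta, 24/7 duty cycle

    print(f"Desktop: ~${desktop:.0f}/yr per CPU")  # ~$11/yr
    print(f"Server:  ~${server:.0f}/yr per CPU")   # ~$26/yr
    for n in (1_000, 10_000):
        print(f"{n:>6} servers: ~${server * n:,.0f}/yr")   # ~$26k and ~$263k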

    Plus, once you OC, you blow past the efficiency curves on power draw anyway, so the rated specs kind of go out the window for consumers trying to get extra performance.
     
  37. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    That's mostly for cancerous prebuilts or laptops, bro. For DIY and desktop, the lower the TDP the better. Given Intel's new TDP is only measured at PL1, that's horrible.
     
  38. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    A British chancellor just recently stated openly that 'it would cost too much to save the planet'.
    I don't really care how much it 'costs', because 'money' is not important (it's an artificial invention by certain humans who simply 'agree' it has greater value than the paper money used in Monopoly or for writing... in itself it's utterly worthless and backed by nothing, since it's created out of nothing/debt... besides, you cannot breathe it, you cannot eat it, you cannot wear it [to any measurable extent that would be of use], etc.).
    We have the resources and technology to save the planet (and in the span of a decade, to boot).
    Unless it's done, we won't have a livable planet to continue on - it's that simple.

    Ecologically speaking, I would prefer we focus on reclamation of raw materials from old electronics and reclaiming rare earth minerals from dead ones, because it's a SOUND course of action... the process has been relatively streamlined by now, and with the advent of facilities which do this and run on renewables, for example, the argument about using 'more energy' becomes moot... besides, the process CAN be improved upon.
    And furthermore, I don't need to remind you that the percentage of things actually being recycled is LESS than 10%... even things actively slated for recycling aren't actually being recycled but are thrown into landfills or the oceans (which is contributing to massive die-offs in the oceans).

    Besides, we DID create a molecular synthesizer a few years ago, allowing us to make matter from base elements in the first place... logically, we CAN also reverse this process.

    Point being, we cannot afford to continue with needless extraction of raw materials from the Earth itself when we have everything in the landfills already... we actually have a ridiculous amount in landfills... so much, in fact, that the majority of it could be returned to the Earth to repair the damage we caused, leaving a fraction of what's there for our needs.

    We need to move away from the notion of what something 'costs'... and focus on whether we have the technological capability, the know-how and the resources to make something happen in the shortest time frame with minimal impact on the environment (to which the answer is pretty much always 'yes').
    At this point, 'money' is an interference - even more so when people decide to choose it over their own lives (as certain people in power apparently do).
     
  39. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Although this has a "previous generation" Zen+ CPU + a 1660 Ti, it's the $1100-range laptop most people will want to purchase; not everyone wants a $2k-5k gaming laptop, and this one has great gaming FPS results and is a solid entry level gaming laptop.

    It's likely going to be a while before AMD releases the Ryzen 3000 / Zen 2 version of this entry level gaming laptop with Navi, and even longer before laptop vendors actually deliver them to market. Until then, besides the full desktop Ryzen CPU / GPU models, these are great inexpensive entry level gaming laptops.

    You don't need to go broke spending 2x-5x as much as this $1100 AMD / Nvidia hybrid laptop to game.

    Gaming benchmarks start @ 12:10

    ASUS TUF FX505DU Review - Best Value Gaming Laptop?
    Jarrod'sTech
    Published on Jun 10, 2019
    The ASUS TUF FX505DU gaming laptop is offering great value for the performance you get in games. In this review we’ll find out how this strange hybrid with AMD Ryzen 7 3750H CPU and Nvidia GTX 1660 Ti graphics performs to help you decide if it’s worth it.


    ASUS TUF FX505DU Thermals - How Hot Is It?

    Jarrod'sTech
    Published on Jun 6, 2019
    The ASUS TUF FX505DU gaming laptop has a AMD Ryzen 7 3750H CPU and Nvidia GTX 1660 Ti graphics, an unusual combination, but just how hot does it get? In this testing we’ll take a detailed look at performance, see if there are any thermal or power limits, and find out what can be done to improve gaming performance and temperatures. I’ll investigate undervolting, overclocking, and show you how hot the laptop gets while gaming and under combined CPU and GPU stress test with different settings, let you listen to total system fan noise, test with a cooling pad and see hot spots with a thermal camera.

    ASUS TUF FX505DU (Ryzen 7 3750H/GTX 1660 Ti) Gaming Benchmarks - 19 Games Tested
    Jarrod'sTech
    Published on May 26, 2019
    The new ASUS TUF FX505DU gaming laptop is a bit different, it’s got an AMD Ryzen 7 3750H CPU with Nvidia GTX 1660 Ti graphics, but just how well does it perform in games? In this testing I’ve benchmarked 19 different games at all setting levels to show you how well it runs, and compared it against some other gaming laptops to see how it stacks up.
     
    Last edited: Jun 10, 2019
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    AMD seems to be cutting it close to the wire again, with less than 24 hours' notice for the live stream info. I signed up for this notification, but haven't received anything about it yet:
    AMD E3 Next Horizon Gaming Live Stream Notify signup.JPG
    https://www.amd.com/en/events/e3

    I'll post it when I get it, but I may be doing other things tomorrow when it comes through, so you might want to sign up as I have and post the notice when you get it.

    Likely AMD will put up a live stream on YouTube just before 3:00pm PT, and the same on Twitch, with an AMD webpage offering embedded YouTube playback as well, perhaps at the link I gave above.
     
    Last edited: Jun 10, 2019
  41. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,694
    Trophy Points:
    331
    https://www.amazon.com/Acer-Predato...17&s=gateway&sprefix=acer+pred,aps,152&sr=8-4

    $100 more gets you a far higher quality laptop with a better 144Hz 3ms screen and an aluminum chassis top and lid. You get a far faster, more capable CPU with 6 cores / 12 threads that stomps the 3750H. It's also a better-looking device, hands down. The AMD laptop should be priced around $900 IMO.
     
    Papusan and tilleroftheearth like this.
  42. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,694
    Trophy Points:
    331
     
    Papusan and ajc9988 like this.
  43. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    I have to agree, I don't like the ASUS pricing.
    AMD is a lot cheaper by default... it shouldn't cost this much.
    They did a similar thing with the GL702ZC by pricing it at $1600/£1600... which I think was unfair/unrealistic... it should have been closer to £1200/$1200.

    At least the Acer Helios 500 with the 2700 and Vega 56 cost £1700/$1700, a minor increase in price over the GL702ZC for about a 10% increase in CPU performance and about a 35% increase in GPU performance... plus I was able to get my Helios 500 with Ryzen/Vega for £1500/$1500 on a discount... so I ended up paying less for it than I did for the GL702ZC.

    Are the OEMs playing fast and loose with AMD and pricing it too high?
    Dell tried to pull a similar thing before, charging extortionate prices (on par with better equipped Intel systems) for AMD APUs whilst crippling their performance with sub-par hardware.
     
    ajc9988 likes this.
  44. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Some new slides from AMD E3 2019, with the new AMD Radeon RX 5700 (vs the 2060) @ $379 and AMD Radeon RX 5700 XT (vs the 2070) @ $449, available on the shelf July 7th:
    AMD E3 2019 8.jpg
    AMD E3 2019 3.jpg
    AMD E3 2019 6.jpg
    AMD E3 2019 4.jpg
    AMD E3 2019 2.jpg
    IMHO, the Twitch live stream is "more fun"; chat on YT is disabled... it's starting now!

    The AMD Twitch Live Stream is up now:
    https://www.twitch.tv/amd

    AMD YT Live Streaming too:
     
    Last edited: Jun 10, 2019
    Vasudev, Papusan and ajc9988 like this.
  45. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Lisa Su had 2 more things... the AMD Radeon 5700 XT 50th Anniversary Edition (2070 Ti/2080 competitor?) - with a performance bump! - it also looks like there are LS initials "signed" after Radeon on the GPU shown, @ $499 (available only direct from AMD's store), and last but not least, the Ryzen 9 3950X with 16 cores / 32 threads @ $749!

    OC'ing the Navi GPUs was mentioned several times in the presentation... so besides the 50th Anniversary Edition, I would expect 3rd party GPU vendors providing multi-fan cooling / water cooling to step up with even higher performance models. I wonder how high the "standard" blower models will OC? :)

    Also, the new Navi GPUs come with 3 months of MS Xbox Game Pass, which will include Gears 5 early access!
    AMD E3 2019 9.jpg
    AMD E3 2019 10.jpg
    AMD E3 2019 11.jpg
    AMD E3 2019 12.jpg
     
    Last edited: Jun 10, 2019
    Vasudev, Papusan and ajc9988 like this.
  46. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I just got done BBQing for my dad's B-day, so I didn't get to chime in yet.

    Notice, the TR2 16-core was $900. This 16-core is $750. When the first Ryzen and TR launched, the 1800X was $500 while the 1900X was $650. Seems the value placed on the extra PCIe lanes and the two extra memory channels has not changed, as the $150 difference still seems to be there. We'll have to wait for TR3 to confirm that, as they may increase the cost slightly due to PCIe 4.0, but nice.

    Did anyone get the scores on the LN2 OCs taking the world records on 16-core?

     
    Vasudev, hmscott and Papusan like this.
  47. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, here are the scores:
    CB15: 5434
    CB20: 12167
    GB4 MT: 65499

    Old world records:
    CB15: 9960X 5320
    CB20: 7960X 10895
    GB4 MT: 7960X 60991

    Percent beat:
    CB15: 2.1%
    CB20: 11.7%
    GB4 MT: 7.4%
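
    For anyone wanting to check the deltas themselves (scores exactly as listed above):

    # Quick check of the record deltas, using the scores listed above.
    records = {
        "CB15":   (5434, 5320),
        "CB20":   (12167, 10895),
        "GB4 MT": (65499, 60991),
    }
    for name, (new, old) in records.items():
        print(f"{name}: +{(new / old - 1) * 100:.1f}%")
    # CB15 +2.1%, CB20 +11.7%, GB4 MT +7.4%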

    I do wonder if the scheduler fix mentioned in the AMD press event was running on the systems used for the LN2 showing. AMD has mostly matched the performance of CPUs put out 1-2 years ago that also have more memory bandwidth, although it should be noted that CB is not memory intensive and can mostly operate out of cache, IIRC.

    Still, this is on a mainstream platform, and at $750 you are paying less than half the cost of a new 9960X, which costs $1700. I really wish they had a TR version so I could upgrade (patience).

    Decent article at PCWorld:
    https://www.pcworld.com/article/340...aiming-to-topple-intels-gaming-dominance.html
    upload_2019-6-10_20-3-36.png
    MCE was turned off for their testing.
    On the Windows Scheduler:
    upload_2019-6-10_20-7-39.png
     
    Vasudev, hmscott and Papusan like this.
  48. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    The most expensive mainstream chips ever released (talking about the last 10 years)? More cores and faster, that's very good. But $750 is a new direction for mainstream :) And AMD managed this great achievement with sunshine and thunder.
     
  49. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Definitely agree.

    I did complain about the price of the 9900K at $480 (though due to the shortage it sold at $530-580 at first). But it did perform about like a 10-core Zen chip would (20+% faster in productivity). Intel really took the sting out of that price point. Now AMD brings a 12-core at that price that could do an OBS slow encode at 58 frames per second while gaming at 1440p, then drops a 16-core for $750. I didn't like AMD's 1800X price point of $500 (most didn't, hence why the 1700 was the CPU to buy for 8 cores at that time).

    I'm really excited to see where this is going. So long as software optimizes for AMD, this could get real good in the next couple years. Intel has 10 cores, which, realistically, won't hit like that 16-core will. But has anyone checked the performance difference between the 9th series Xeon chips and the Cascade Lake-SP Xeons yet? If nothing else, it is worth a couple hundred MHz, meaning Intel will strike back with Cascade-X this fall. Now, with the 3950X out at $750, if it performs as well as the 9960X when water cooled, then Intel may lower per-core prices on the Cascade-X lineup (consumer wins! :)). Intel will also drop the binned 9900KS, but will that move with the 12-core and 16-core Ryzens out by then? Then again, Intel will likely keep the gaming crown while overclocked (and they are bringing back the OC protection warranty program and releasing their auto-OC software, which doesn't void the warranty). I think with the pricing, it will depend on the task between the 9900KS and the Ryzen 9 3900X. This actually helps normalize Intel's pricing a bit (did not see that coming).

    Really, now that the 16-core has dropped (or will be released for sure soon), I really see an increase in market share for AMD, but I also think a lot more people will buy it for streaming and content creation (so more people starting up streams or web shows).

    Makes you wonder, when Intel finally starts doing chiplets, where will core counts be on mainstream, and for what price?

    Also, here is the productivity comparison. Granted, this is the 12-core at stock versus the 9900K at stock. I'd like to see OC vs OC in productivity here:
    upload_2019-6-10_21-22-33.png

    Also, AMD is claiming they have reached 5100MHz @ CL18-21-21 overclocked on air. But above 3733, which AMD said is the optimal memory-to-IF setting, it switches at some point from 1:1 to 2:1 on the memory clock to IF clock ratio, or something like that. Hopefully the ratio can be set manually, because I'd want to try to hit 4000 with 1:1 mem to IF.
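
    Here's how I read the ratio behavior, sketched out (an assumption until reviews confirm how the BIOS actually handles it):

    # My reading of the ratio claim (assumed, pending reviews): at or below
    # DDR4-3733 the fabric runs 1:1 with the memory clock; above that it
    # drops to 2:1, so the fabric can get SLOWER as the DRAM gets faster.
    def fclk(ddr_rate):
        mclk = ddr_rate / 2
        return mclk if ddr_rate <= 3733 else mclk / 2

    for rate in (3600, 3733, 3800, 4000, 5100):
        print(f"DDR4-{rate}: MCLK={rate / 2:.0f} MHz, FCLK={fclk(rate):.0f} MHz")
    # DDR4-3733 -> FCLK ~1866, but DDR4-4000 -> FCLK 1000 under a hard 2:1,
    # which is exactly why I'd want to force 1:1 at 4000 if the BIOS allows.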
     
    Last edited: Jun 10, 2019
    Vasudev, hmscott and Papusan like this.
  50. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Both the 12-core and 16-core AMD Ryzen CPUs compete with (perform the same as) Intel HEDT CPUs costing 2x as much, bringing previously unattainable performance within reach of the average consumer. Both are completely new price points for consumer CPUs.

    Within the same release of Ryzen 3000 / Zen 2 products, AMD provides 6-core and 8-core CPU price points delivering more performance at lower cost. AMD will also fill out the rest of their CPU / APU product family at lower and higher price points with the new Zen 2 technology as time progresses.

    For half the price, AMD consumers can now obtain levels of CPU performance previously reserved for overpriced Intel HEDT solutions, and now they can fully utilize the high core count processing.

    With AMD / Microsoft software / driver / OS optimizations, consumers can effectively use that high core count Ryzen 3000 performance in their day to day work.

    All in all, I thought AMD gave a nice, reserved yet upbeat presentation, delivered by a company that knows it's got the winning hand. AMD delivered the Ryzen / Navi new product news in a way that generates a positive and friendly atmosphere.

    I'm hoping and expecting that real world testing and OC'ing of AMD Ryzen and Navi will deliver the joy we hope for and expect. Only time will tell.

    And, as Gamers Nexus reminded everyone, DON'T preorder. Wait for independent test results, from a wide range of POVs and a wide range of X570 motherboards. There are bound to be positives and negatives to any new product, and waiting for the early kinks and their solutions to be found will reduce your stress and increase your joy when you finally bring the new Ryzen and Navi components home.

    If you haven't watched the whole AMD presentation or press info, or other detailed retellings in articles covering the announcements, it's important to do so, so you know about all the little benefits and improvements coming with Ryzen 3000 / Zen 2 and Navi: Anti-Lag, RIS (Radeon Image Sharpening), CAS (Contrast Adaptive Sharpening) and also DSC (Display Stream Compression), which come from open-source FidelityFX.

    AMD E3 Overload: 16 Core Ryzen, Two Navi RX 5700 GPUs, Benchmarks, Architectures & More
    Hardware Unboxed
    Published on Jun 10, 2019
    "AMD E3 Overload: 16 Core Ryzen, Two Navi RX 5700 GPUs, Benchmarks, Architectures & More"

    RADEON NAVI Full E3 Analysis: RX 5000 is QUALITY over Quantity, and Lisa is gonna make AMD Rich!
    Moore's Law Is Dead
    Premiered 4 hours ago
    "There is much more to unpack with Navi than most people seem to realize, let's get into not just what Navi is now - but also how cheap it may get in a price war, and why it may be an ok deal."

    Were the new AMD GPUs worth the wait? - Navi specs Revealed
    JayzTwoCents
    Published on Jun 10, 2019
    "AMD has finally revealed the specs of their new RDNA Architecture featuring NAVI GPUs... but was it worth the wait? We also talk about the performance of AMD 3900X vs Intel 9920X... things are getting good!"

    What is Anti-LAG? (RX 5700 and RX 5700 XT Specs and Feature Detail)
    Tech YES City
    Published on Jun 10, 2019
    "Here from L.A in the USA Dr. Lisa Su announced the new AMD Navi architecture with 2 new cards coming in July (I believe July 7th launch with CPUs). On top of these two new cards there is a heap of new features coming from a software side. Today we explain Anti-Lag / RIS (Radeon Image Sharpening), CAS (Contrast Adaptive Sharpening) and also DSC (Display Stream Compression), which comes from FidelityFX, which is open-source."

    Ryzen 3000 + Navi news Blowout @ E3!
    Level1Techs
    Published on Jun 10, 2019
    "level1 news from Beverly hills CA, regular news tomorrow midnight."
    AMD Radeon Anti-lag.jpg
    AMD Radeon Anti-lag 2.jpg
    AMD Navi upgrade from Vega 56.jpg

    AMD's Plan to Destroy Intel - 16-Core 3950X + RX 5700 XT Details!
    Paul's Hardware
    Published on Jun 10, 2019
    "AMD's Plan to Destroy Intel - 16-Core 3950X + RX 5700 XT Details! At E3 this year,. AMD dropped some huge announcements, including pricing and some performance numbers for their new Ryzen 3000-series CPUs as well as their Navi-based Radeon RX 5700 and 5700 XT. Also - launch dates!"

    Remember that these first Navis (Navi 10) are already beating Nvidia's mid-high products - the 2060 (5700), 2070 (5700 XT), and 2080 (3rd party / 5700 XT LSE?) - for less money. Imagine what higher-performance Navis with 2x the CU count will do next year... along with filling out the lower price tiers too (Navi 10/20).

    And AMD is still selling its Ryzen 1000 (Zen) and Ryzen 2000 (Zen+) CPU lines and its Polaris / Vega GPU lines at a nice discount. AMD is offering both budget and performance products now - buyer's choice. Take your time to decide; you can end up living with your choice for a long time. :)
     
    Last edited: Jun 11, 2019
← Previous pageNext page →