The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled together by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    That is the hearts-and-minds/brand-loyalty issue. This is why AMD has tried to meet deadlines and get closer to standardized cadences: to signal to the market that they are reliable (more for OEM integrations). They have capacity, but not demand. If the leak is true, I have zero doubt AMD can ramp up production, especially as they plan for double-digit growth.

    In yesterday's LTT video, only one vendor recommended an AMD build be considered. With benches on these new chips, that could start changing.

    Sent from my SM-G900P using Tapatalk
     
    hmscott and ole!!! like this.
  2. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Day by day brings new perspectives on the Zen 2 and Navi rumors... best to ignore them and wait for actual product announcements and releases.

    News Corner | Our Thoughts on AMD 7nm Zen 2, Navi Rumors and Nvidia Titan RTX
    Hardware Unboxed
    Published on Dec 7, 2018


    16 Core & Over 5GHZ ? Ryzen 3000 Series & Navi Specs Supposedly Leaked
    RedGamingTech
    Published on Dec 5, 2018
     
    Last edited: Dec 10, 2018
  3. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    16 cores @ -21c OUTSIDE = 4.4 GHZ?!?
    Timmy Joe PC Tech
    Published on Dec 7, 2018
    At -21 celcius here in Canada, I thought we should FREEZE THREADRIPPER! 4.4 ghz cinebench overclocking, how far will it go?!?!?
     
  4. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Too bad he sucked at the score. Yes, he went for a higher clock, but I've done [email protected]. But I've considered doing something similar when it drops to -10C here sometime this winter.

    Sent from my SM-G900P using Tapatalk
     
    hmscott likes this.
  5. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    He got 3678 as a high score; that's pretty darned good. You can't remember your exact "3600" score either? :)
    3878 best score.jpg
     
    Last edited: Dec 8, 2018
    ajc9988 likes this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I don't need to remember it, it is on HWBot. It is just another score I've done that is higher than others running faster than me. Usually people push higher memory while at a lower core frequency than me (one or two guys) to beat my scores, or they use a 100-200MHz higher core frequency with worse memory timings than me to score higher. Few have beaten me on the 1950X, though (maybe a handful). So forgive me if I'm unimpressed with his score in sub-zero ambient temps. Until Jay and Steve at GN started really going at it, I wasn't too impressed with the OCing of YouTubers in general, excluding the likes of buildzoid, Der8auer and so on.

    Now, with that said, if he could get a decent memory clock stable with decent timings at 4.4, I think his score would be much higher and much more impressive. So, to be clear, this isn't complaining about his parts; it is wanting to push him to be better at OCing.

    Sent from my SM-G900P using Tapatalk
     
    hmscott likes this.
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    We both know success is a byproduct of putting the time in, and condemning their efforts at growing their abilities works at cross purposes with encouraging people to continue to put in the effort.

    They aren't always producing videos to show the end result; they are producing videos for people to share and learn from their process of growth, a brief glimpse at the milestones between destinations.

    The journey is far longer and far more interesting than that fleeting moment of arrival at the destination. :)
     
  8. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    If only there were some software that could take full advantage of it. M$haft Win10 is still single-threaded, rofl. They oughta trust that a lot more consumers have better hardware, especially for their server OS.
     
    ajc9988 likes this.
  9. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    AMD Adding New Vega 10 & Vega 20 IDs To Their Linux Driver
    Written by Michael Larabel in Radeon on 7 December 2018 at 05:09 PM EST.
    https://www.phoronix.com/scan.php?page=news_item&px=AMD-New-Vega-10-20-PCI-IDs

    "While we are looking forward to AMD's next-gen Navi architecture in 2019, it looks like the Vega family may be getting bigger soon.
    Hot off finishing up the Radeon RX 590 Linux support as their new Polaris refresh, it looks like another Vega 20 part may be in the pipeline as well as multiple new Vega 10 SKUs.

    Friday afternoon patches to the company's RadeonSI Mesa and AMDKFD/AMDGPU kernel drivers reveal some new PCI IDs. On top of the five "Vega 20" PCI IDs already part of the Linux driver, a 0x66A4 ID is being added. So far AMD has just announced the Radeon Instinct MI50 and MI60 accelerators as being built off Vega 20 with no consumer parts at this time. As with most new product generations, it doesn't necessarily mean AMD will be launching 5~6 Vega 20 products, but sometimes PCI IDs are reserved for pre-production hardware, the possibility of expanding the product line in the future, etc.

    On the Vega 10 front meanwhile they are adding six new PCI IDs... The new Vega 10 PCI IDs being added are 0x6869, 0x686A, 0x686B, 0x686D, 0x686E, and 0x686F. These Vega 10 PCI IDs are new and not part of the previous batch of Vega 10 parts supported by the Linux drivers. The only other references I could find to these PCI IDs were that a macOS Mojave update recently added in these IDs too and then some of these IDs having just been part of GPUOpen's listings of GFX9 parts.

    The Linux patches today only add in these new PCI IDs with no other changes. It also looks like no other changes will be required for any new products as these patches are also CC'ed for back-porting to the stable branches of the Linux kernel and Mesa.
    So it's looking like some new AMD Vega products could be coming down the pipeline in the new year."

    AMDGPU Driver Gets Final Batch Of Features For Linux 4.21
    Written by Michael Larabel in Radeon on 8 December 2018 at 06:45 AM EST.
    https://www.phoronix.com/scan.php?page=news_item&px=AMDGPU-Linux-4.21-Final-Feature

    "A final pull request of new feature material for the AMD Linux graphics drivers was submitted on Friday for the upcoming 4.21 cycle.

    The AMDGPU updates for Linux 4.21 from earlier pull requests is already quite notable especially with finally adding FreeSync/Adaptive-Sync support but there is also AMDKFD compute support for Vega 12 and Polaris 12, Adaptive Backlight Management, various other Vega improvements, more xGMI / Vega 20 enablement, and more.

    With this final AMDGPU Linux 4.21 pull request to DRM-Next there is now also:

    - Tracing support within the AMDGPU Display Core "DC" code to help with debugging.

    - The 4.21 cycle with it is bringing initial documentation on DC.

    - xGMI hive reset support.

    - The AMDKFD driver can now limit video memory over-commits.

    - There is DMA-BUF support for the AMDKFD compute code.

    - The TTM memory management code now supports simultaneous submissions to multiple engines (will be interesting to see if there's any benefit to performance).

    The complete list of changes for this latest feature pull can be found via this mailing list post. The Linux 4.20 kernel won't be released until around Christmas at which point the Linux 4.21 merge window will open up, but DRM-Next cuts off its feature merging a few weeks before that point to ensure the code has time to stabilize."
     
  10. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
  11. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Talk about a fanboy! Two marketing people and Raja leave (Raja made the connections at Intel when he sold Intel on using AMD's GPU in their products) and you claim it's the end of the world. They also hired Shrout to shill for them by finding ways to rig benches to show Intel in the best light. So this isn't a brain drain. Look at Intel's shortcomings as of late, which I mentioned months ago. Hell, Intel's 10-core for the second half of 2019 is a 14nm part. Think about that. Intel is planning a 14nm potential flagship for mainstream around the time they allegedly will release 10nm mainstream parts. Does that make sense? Only if Intel is planning on missing 10nm for holiday 2019. But notice how you don't bring that up, instead showing Intel picking up another marketing guy from AMD. Kind of funny, don't ya think?

    Sent from my SM-G900P using Tapatalk
     
    Last edited: Dec 10, 2018
    hmscott likes this.
  12. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    Why so touchy/upset? It's just an article from the web saying that the guy prefers Intel over AMD. And it seems to follow a pattern. Nothing more than that. From what you say... Intel is only interested in paying wages and doesn't give a damn about getting something back. I would not run a shop/business like this.
     
    Last edited: Dec 10, 2018
  13. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    You and @Mr. Fox show your bias and cannot recognize that this is simply Intel trying to market and message themselves out of a hole. You had tons to say about the 10-core Intel without discussing that a 14nm part in Q3, alongside 10nm by the holidays, makes no sense unless Intel is making contingency plans for expected failure. Just like people trying to make a big deal of Intel saying 7nm is on track, when there was an article on EETimes about Intel ceding EUV and not planning on it until 2021, and a recent report saying Intel is investigating moving 7nm up, while all Intel says now is "on track."

    The point is, you first posted it in an Nvidia group where the content is irrelevant, then ran over here to post it, which matches your actions every previous time you said "look at these people leaving from and for Intel," even including Keller, who worked at Tesla in between and was hired by Intel to fix their ship, and who said he still didn't have plans for them after being there awhile, which is a bit surprising.

    So three people, all from RTG, only one of whom was ever an engineer (the other two being marketing people), got poached, out of however many work there, and we are supposed to think the sky is falling. Instead, it looks like one hire to run the GPU division, and then marketers to try to sell their cards. Sounds normal. But it's also funny considering your complaints about AMD marketing in the past, and Intel is hiring those very people. Just so odd.

    Sent from my SM-G900P using Tapatalk
     
    bennyg likes this.
  14. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    As it is today, there's nothing from AMD if you run a laptop. Calling it "your bias" is a bit weird http://forum.notebookreview.com/threads/nvidia-thread.806608/page-110#post-10831827

    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-109#post-10831449

    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-110#post-10831559
     
  15. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Only the Acer Predator Helios with a 2700, Vega 56 on the graphics card; I wouldn't have minded a 1080 offering. But, as far as DTRs, that is it at the moment.

    And what you point to is a question about Intel rumors on upcoming graphics cards. Then you replied with a marketing hire? Not seeing the connection there.

    Sent from my SM-G900P using Tapatalk
     
  16. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    It's all about upcoming Intel graphics. I can connect the dots. You need workers if you want to push out (new) products.
     
  17. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    And you stated AMD has nothing to offer. Or do you not remember that? Also, per the last report, Intel's GPU is a 2020 product. So a hire now, for a project more than a year out, although reasonable, is supposed to say what about Intel's GPU, release date, etc., especially in light of the first statement of that post?

    Sent from my SM-G900P using Tapatalk
     
  18. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    I stand by that.
    To show progression. You, like me, know very well there won't be any trustworthy rumors (or correct info) this long before release.

    From Techpowerup link "His tenure at Hardware.fr has been inspiring to us, with excellent reviews that no doubt were what caught the eyes of AMD in the first place, and Intel will definitely gain from his presence."
     
  19. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Like Intel only having a mid-tier product at release? So don't trust anything bad about them, don't trust any of the Navi rumors, and don't realize that if the I/O is moved off the GPU die, you can go multi-die GPU, potentially side-stepping the NUMA issue they said was why vendors wouldn't support multi-die (which is how they got one NUMA node per socket on Zen 2). Got it, believe nothing.

    Sent from my SM-G900P using Tapatalk
     
  20. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    I don't expect much (regarding upcoming graphics) from Intel. It's all too soon, and they haven't been in this "ballgame" before. I haven't even had an iGPU in the last decade.

    Edit. All they have messed with is this niche...
    One year later, the Intel-AMD Kaby Lake-G platform looks dead in the water
     
    Last edited: Dec 10, 2018
  21. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    https://www.anandtech.com/show/1319...c-with-cannon-lake-cpu-available-for-preorder

    Once again, not knowing that the successor was just announced. Yeah, dead.

    Sent from my SM-G900P using Tapatalk
     
  22. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Last edited: Dec 12, 2018
  23. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,629
    Trophy Points:
    931
    It cracks me up when rumors are embraced as truth. Information that is true is factual and, therefore, it no longer qualifies as a rumor. Rumors seldom serve a useful or beneficial purpose. They sometimes offer entertainment value, but frequently end in disappointment. Best approach: (1) wait and see, (2) trust but verify, and (3) observe, measure and compare before spending.
     
    Last edited: Dec 12, 2018
  24. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Ryzen Leaks - Too Good To Be True?
    AdoredTV
    Published on Dec 12, 2018
    Were my previous Ryzen 3000 and Navi leaks really "too good to be true", as reported at certain outlets?
     
    ajc9988 likes this.
  25. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Yes. Let's state cliché truisms and act as if that's intelligent. Lmao. Rumors can be surprisingly accurate; they just aren't yet established as true. Leaks are true information treated as rumors until the truth is public. Rumors are useful and beneficial in warfare and business for changing course or deploying resources. Everything is weighed and measured based on the source and other known information.

    In the end, after a release, the information is confirmed or proven false. Hints can be gleaned from the finances of the players, or from information from partners, such as motherboard manufacturers on supported features or fabs on processes, etc. But, by release, the truth is revealed, and since the ability to purchase a product doesn't occur until then, of course you are forced to wait and verify the veracity of the information. The only time it affects behavior is consumers deciding to buy now or wait, in regards to consumer products. But everything you said has no deep meaning in your context.

    Meanwhile, what I ripped you specifically for is your duplicity: participating in rumors on the Intel side, then ****ting on rumors elsewhere, as well as your liking Papusan's comment playing fanboy on the news of one person changing companies, when I explicitly showed in posts since then that his stated reason didn't match the thread context and his wording showed an alternative intention. So you then fall back to talk of speculation and rumors, when the origin was you liking a post of someone playing fanboy over true information that wasn't conjecture, in response to a question asking for conjecture. Sad!

    Sent from my SM-G900P using Tapatalk
     
    Mr. Fox likes this.
  26. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,629
    Trophy Points:
    931
    Can you provide an example of where I have actually done that? No, you cannot because I don't participate in rumors or speculation, and I am always skeptical of new tech. Am I biased toward Intel and less skeptical? Yes, of course, but only based on what products AMD releases. I look for results before making a choice, and I challenge you to show me any recent example where that has not been the case. I don't have an RTX 2080 Ti (yet) and that is, in part, why I do not. The other part is price, which is ludicrous by any measurement.

    I think you just enjoy being argumentative and ripping people because that is your personality type and it fulfills a need in your life of some sort. I'm the opposite and don't really care if others agree with me. Agreement is something I appreciate when it is there, but don't really care when it is not. That's just a difference in personality types. Using persuasion to change the opinions of others is also important to be successful in your vocation, so it is good that it fits your personality. No harm, no foul.

    Another difference in behavior between us is I don't defend brands and I am not going to champion something that doesn't meet my expectations only because the alternative seems sketchy or has business practices that I do not appreciate. I choose based on delivery of the results I am looking for and don't really care who delivers what I want. I don't give any points for good intentions, honorable mention, or for what is planned for tomorrow, but not here yet. AMD has a more difficult time getting my support and I am naturally more skeptical based on what they have for me to buy today, and their long history of not offering a product that I want. I've also been burned repeatedly by their garbage video cards, so I am also more skeptical on that basis as a consequence.

    But, you're only kidding yourself if you think it is because I "like" Intel better. We can apply the same to my position on GPUs, and AMD versus NVIDIA. I will jump ship and move to AMD when they deliver something that (by my measurements) is better and meets my expectations. I'm not interested in talk or roadmaps. I am interested in the here and now. I will not jump ship based on trust or "faith" in AMD as a brand, because those elements are missing at this point in time.

    Is it also sad that I often click "like" on your posts even when I do not agree with all of the content? The reason you do not see more "likes" from me is I do not spend much time in speculation threads and do not enjoy speculation. I do appreciate passion, and that is something you and @Papusan have in spades.
     
    Last edited: Dec 13, 2018
    Papusan and hmscott like this.
  27. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Did you talk about the 9900K before October 8th? That was the official announcement. Shall we check the 9900K thread for discussions before that date? Because that was the official confirmation, everything before that, even if accurate, was rumor and conjecture. That is my point.

    Sent from my SM-G900P using Tapatalk
     
  28. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,629
    Trophy Points:
    931
    I probably talked a little bit about it, especially if someone tagged me. Most of what I can remember saying about it is that I would be considering it as a drop-in upgrade for my Maximus X Hero (which I no longer have) and that I doubted it would be a good option for a Clevo due to thermal management limitations. Since it is basically the same product as an 8700K with two more cores and four more threads, it would be a logical next step and not really embracing a new and unfamiliar product. There would not be any reason to question much with a minor revision of the same architecture and the added core/thread count doesn't require a lot of thought. The brutality of the silicon lottery would be the biggest wildcard in that scenario. Now, I do recall saying I wasn't excited about the IHS being soldered, but that was only after we had confirmation that it was going to be. Before that was confirmed I can remember saying that I hoped it would not be. I do not remember offering any speculation about how it would overclock. At any rate, most of that is slightly different than speculation and rumors about a new product, how it will perform and how well it will overclock, etc.
     
    Papusan likes this.
  29. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It is still talking about a rumor, no matter the limited context. I just went for the low hanging fruit there. You just admitted to doing what you said you didn't do. That is my point, whether by an inch or a mile, a statement is true or false and you just said your prior statement is false.

    We all speculate to various degrees. It is part of life. We all look at results in the end and we all change our statements according to new info (or should, although this last one people sometimes don't follow). But trying to act above the fray and indignant is just silly. Instead, if you changed your style just a bit, you could be the moderating voice of reason saying rumors are nice, just don't get too excited as release is a long way away. Instead, you say you don't speculate, then participate in speculation, etc.

    Sent from my SM-G900P using Tapatalk
     
    Mr. Fox likes this.
  30. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    This is a CPU thread, is it not? We're not putting people on trial or on witness stands here for imagined transgressions, are we?
     
    DreDre, Papusan and Mr. Fox like this.
  31. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    Yeah, not fun. One single post and everything is on fire.

    AMD Ryzen 3000 Series CPUs: Rumors, Release Date, All We Know Tomshardware.com by Paul Alcorn December 11, 2018 at 1:02 PM

    Where will the dice land :D
     
  32. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Given what we see in the spec table for these new upcoming CPUs from this leak/rumour, it looks like the 8-core, 16-thread Ryzen 5 3600X at 4.8GHz boost will be the best gaming CPU of the bunch, and at $229 that would definitely be sweet. Looking forward to seeing if, when overclocked, they can push as many frames as the 9900K. I know the price point is not a fair comparison, but I do reckon the Ryzen 5 3600X will be the best gaming CPU, as I would imagine fewer 'interconnects' in comparison to the 5GHz Ryzen 7 3700X at 12 cores & 24 threads (unless it's the same number of chiplets but with some areas deactivated; I don't really know enough about this architecture?).
     
  33. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, crash course in the architecture.

    Epyc 2 removed the memory controllers from the core dies and moved them to the I/O die. People complained about the 24- and 32-core Threadrippers because they didn't have direct memory access. Now, none of the dies do, but it also means that latency is equal for all cores. Why does this matter, and wouldn't the added memory latency slow down the CPU? Well, it does add latency, which is combated with the new low-latency Infinity Fabric 2. How much did they lower the latency? No one knows yet; that is something that must be tested. But it does something else that is important: it helps with the problem of stale data. Due to the latency of not having direct memory access, the work done by the non-direct dies can become stale from waiting on memory. I suspect that this, more than bandwidth per core, was the problem on the TR2 WX chips. Either way, the changes to prefetch and the larger victim L3 cache should help combat the stale data problem. And having the same trace-length latencies (think wiring for RAM) will allow data to arrive at the same time. Double-edged sword, this one.

    Now, they previously used the same dies throughout the stack, from mainstream to server. This recent rumor from AdoredTV suggests AMD paid the extra money to design two 7nm chips. This takes some explanation. With Epyc, AMD moved all the I/O, memory controllers, and SerDes (basically, think everything on the north and south bridges) over to the I/O die, including the IF2 controller (which means they may give it its own clock, but that is my speculation). So the 7nm dies for TR3 and Epyc 2 are the same. Now, one would think AMD would use that same die and just design a second I/O controller on 14nm for the mainstream chips; after all, the 7nm core chiplets are around 73mm2 and the Zen 1 dies are around 212mm2. Moreover, 14nm designs are much cheaper to design and execute than 7nm.
    https://semiengineering.com/wp-content/uploads/2018/06/nano3.png

    But in this leak, no one that Jim at AdoredTV spoke to heard anything about an I/O chip for mainstream CPUs. This means AMD may have a second 7nm CPU, with the I/O still on the die, planned for the mainstream chips. Just looking at the cost table above, that would be a mighty big expense, while also taking away the ability to bin the I/O chips on memory performance, etc. It also reduces the usable dies per silicon wafer: on a big monolithic die, a defect has a larger chance of hitting something critical and rendering the die useless, whereas with the I/O moved off die, a defect merely makes one core defective, which can be turned off and the die thrown toward a 6-core or 4-core chip with the rest of the cores fully usable, thereby allowing for a higher effective yield.
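    To put rough numbers on that, here is a minimal sketch using a simple Poisson defect-yield model. The die sizes are the ones quoted above; the defect density is purely an illustrative assumption, not a known figure for the 7nm process:
    Code:
    import math

    DEFECT_DENSITY = 0.2  # defects per cm^2 -- purely illustrative assumption

    def zero_defect_yield(area_mm2, d0=DEFECT_DENSITY):
        # Poisson model: P(no defects on a die) = exp(-area * defect_density)
        return math.exp(-(area_mm2 / 100.0) * d0)

    print(f"~73 mm^2 chiplet:     {zero_defect_yield(73):.1%} defect-free")   # ~86%
    print(f"~212 mm^2 monolithic: {zero_defect_yield(212):.1%} defect-free")  # ~65%
    # A chiplet that does catch a defect in one core can often still be sold
    # as a 6- or 4-core part, so the effective yield of usable silicon is
    # higher still.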

    As you can see for yourself, financially it would make more sense to design a second I/O die for mainstream chips than to do a second 7nm chip, but no one has heard of a second I/O chip except for the large one produced at GF. Also, the use of the I/O chip is why Epyc and TR3 next year will have a single NUMA (non-uniform memory access) node per socket, like all Intel CPUs except the upcoming Cascade-AP with 2x24-core dies per chip, which gives it the NUMA problem TR users have been hit with (is it local memory, or does it have to bounce to the other die before going to memory, thereby increasing latency?). Now, with 12- and 16-core variants expected from AMD, if they don't have an I/O chip for mainstream CPUs and use more than one die, suddenly how they set up the memory situation matters a lot, because it could be introducing NUMA to mainstream while getting rid of it for the server and HEDT markets, which doesn't make much sense.

    So this is something to watch moving forward.

    As to the speeds: I've mentioned in the past that with Zen 1, AMD gave a max boost clock and an all-core clock. Starting with Zen+, AMD went to only giving the max boost clock (single-core max boost) and dropped the all-core boost speeds. That is fine, and Intel does the same, mostly. The reason for this is that their algorithms change what speed the all-core boost runs at depending on numerous factors, including CPU temperature, power delivery, etc. Because of this, it is likely appropriate to subtract 200-400MHz, even up to 600MHz on the 16-core chip, due to heat density on the package being higher than TR or Epyc. That is my personal guess, though, looking at prior chips to date (think of a 2700X with a max boost of 4.35GHz but 4.2GHz for the all-core boost).

    But just because that suggests AMD will be slower by frequency doesn't mean the calculation ends there. We must talk about IPC. AMD is rumored, through multiple leaks, to have an average IPC increase of around 13% (10-15% over prior AMD Zen chips, up to almost 30% on some workloads with floating point and potentially AVX). So, let's multiply 4.6GHz for an 8-core chip by 1.13 (13%), and let's multiply 5.1GHz by 1.04 (representing Intel's 4% IPC lead over Zen+, as everyone has that between 3-4%). We get 5198 instructions for AMD and 5304 instructions for Intel (this being representative of a 9900K, so long as you can cool it at 5.1GHz). At 5GHz for the Intel CPU, you get 5200 instructions. At 4.7GHz for the theoretical 3600X, you would get 5311 instructions.
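    If you want to sanity-check that arithmetic or swap in your own IPC guesses, here is a quick sketch. The clocks and IPC factors are the rumored/assumed figures from the paragraph above, not confirmed specs:
    Code:
    def relative_throughput(clock_ghz, ipc_factor):
        # frequency x relative IPC, scaled x1000 into the same arbitrary
        # "instructions" units used above
        return round(clock_ghz * ipc_factor * 1000)

    print(relative_throughput(4.6, 1.13))  # rumored 8-core Zen 2, +13% IPC -> 5198
    print(relative_throughput(5.1, 1.04))  # 9900K at 5.1GHz, +4% IPC over Zen+ -> 5304
    print(relative_throughput(5.0, 1.04))  # 9900K at 5.0GHz -> 5200
    print(relative_throughput(4.7, 1.13))  # theoretical 3600X at 4.7GHz -> 5311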

    This means that the workload landscape will change. Now, Intel's 9900K currently enjoys a 21% advantage over a 2700X, but only an 11% advantage in gaming. Looking at the above numbers, it would suggest AMD may close the gaming gap AND productivity gap with these new chips at the same core count, but offer higher core count chips. After all, MOAR CORES! LOL.

    Either way, we won't know until release, which I estimate is 3.5-6 months away, depending on whether it's a March, April, or May release (with the May release being near Computex, making it practically June).

    Meanwhile, interesting video from GN on FPS being a flawed measure of performance:

    Also, new vid on memory timing control coming to AMD?


    Hope that is a bit more helpful as a rundown of the state of these chips. In the AMD vs Intel thread, I should be posting different articles about what Intel announced at their Architecture Day, along with why it isn't really ahead if you know what each thing is and means.
     
    hmscott and Robbo99999 like this.
  34. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    You're blinding me a bit with information, but you're saying we don't yet know for sure how the consumer chips are gonna be laid out in terms of having a separate I/O chip or not. Did I infer correctly that if all consumer chips have an I/O chip sitting in the middle of the CPU chiplets, then increased chiplet count doesn't really increase latency and hence doesn't decrease gaming performance? The main point of my previous post is that I thought the Ryzen 5 3600X with 8 cores would be the best gaming CPU because I'm thinking it's just gonna be one chiplet, and therefore by definition no interconnects between chiplets are required. What's the maximum CPU core count per chiplet gonna be?
     
    hmscott likes this.
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, there is still a rumor that the large I/O die can be cut down, but there is nothing about a second I/O die.

    That means AMD may have just shrunk the die, so except for IF for inter-CCX communications (unless they went to an 8-core CCX, about which there are rumors all over), you are correct: it would limit the interconnect usage.

    Max core count per chiplet is still rumored at 8 cores. So a massive 16-core doesn't make sense unless AMD is preparing for 16-core chiplets (matching the optimal core count for an active interposer) and is using mainstream as a test bed.

    Sent from my SM-G900P using Tapatalk
     
    hmscott and Robbo99999 like this.
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    OK, realized I didn't answer fully.

    If the I/O chiplet is separate, the latency, although measurable, is partly compounded into the IPC figure. IPC varies by task, which is why tasks kept in cache are awesome on TR2 WX chips but have problems when calling on memory (it's more complicated than that, but that's roughly what is happening). So the IPC uplift means that even if there is latency from the I/O chip, it is already in the calculation, and AMD should theoretically have roughly even gaming performance with Intel moving forward.

    Sent from my SM-G900P using Tapatalk
     
    hmscott and Robbo99999 like this.
  37. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Hopefully that Ryzen 5 3600X 8-core will be just one CCX then; better gaming performance. Do you think it's likely they'd use just one CCX for an 8-core chip?
     
    hmscott and ajc9988 like this.
  38. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It makes sense to use an 8-core CCX. Rumor is they moved the two CCXs so that all 8 cores are lined up, 4 per side, with cache in the center, meaning they could use a ring bus and just use IF2 to connect to the I/O chip, if used. So there is a good chance, but that is one of the things expected to be announced at the CES keynote.

    Sent from my SM-G900P using Tapatalk
     
    hmscott and Robbo99999 like this.
  39. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Well, I'd like to see them do something to make their CPUs more gaming efficient, and from that point of view it sounds like they could be headed in the right direction. I'd like to see them on par with or better than Intel in gaming performance, plus knowing AMD they'll be priced better for us consumers. Was gonna say they'd be a contender for my next build, but I'm probably not gonna build another one for at least a couple of years, and who knows where we'll be by then!
     
    hmscott and ajc9988 like this.
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Unboxing Boxes #55: $250 X399 Board, New X299 Boards, Aorus 2070 & 2080 Xtreme & More
    "Asrock X399 Phantom Gaming 6" starts @ 14:10

    Hardware Unboxed
    Published on Dec 11, 2018
    Check prices now and support HUB:
    Asrock X399 Phantom Gaming 6 - https://amzn.to/2rqKHoi


    AMD has a NEW video card... Sorta...
    JayzTwoCents
    Published on Dec 7, 2018
    AMD launched a new GPU recently! But is it REALLY new??
     
    Last edited: Dec 14, 2018
  41. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,615
    Trophy Points:
    931
    AMD Shares Details on Radeon GM's Departure Tomshardware.com | December 14, 2018
    AMD's Radeon Technology Group (RTG) has seen plenty of shakeups this year as it has lost several key players to Intel, like Raja Koduri and Chris Hook, among others. Now Mike Rayfield, the Senior Vice President and General Manager of RTG, has announced his retirement.


    EXCLUSIVE: AMD RTG Boss Mike Rayfield Retires Amidst Chatter About ‘Disengaged Behavior’ wccftech.com | December 14, 2018

    Mike Rayfield will be officially retiring at the end of the year and David Wang is going to be taking the interim leadership position for Radeon Technologies Group while the company searches for new leadership. This would mean interesting things for RTG as the leadership dynamics change just over a year after Raja Koduri left for Intel.

    When will David Wang leave the AMD ship? :rolleyes:



     
    Robbo99999 likes this.
  42. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I just do not get it. For Intel, these guys are just there to hopefully boost the iGPU, not mainstream GPU graphics. Even assuming they can accomplish this, they have a ways to go to compete with AMD. Intel sorely needs to concentrate on its core business, CPUs, as AMD is rounding the corner.
     
    hmscott likes this.
  43. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Exactly. Intel is hitting a rough patch that only CPU innovation and investment can help Intel dig its way out of and get back to a leading position in the market. Success at 10nm and 7nm is crucial.

    For Intel making a new dGPU isn't crucial.

    Intel iGPUs / dGPUs as investments are a distraction and a financial burden with negligible payoff. Even if Intel could pull a rabbit out of a hat and leapfrog AMD GPU performance, getting past the new releases from AMD *next* year and not just the existing AMD GPUs, Intel GPUs would still land short of Nvidia GPU performance.

    What AMD splits off from Nvidia's market share is so minuscule as to be negligible to a company the size of Intel. That small low-end market keeps AMD going, but it's not going to be large enough to give Intel the ROI it needs for such a long, sustained, expensive development effort.

    Most of AMD's GPU market is based on investment gained through custom development for console companies, and then AMD uses both that and the discrete GPU market to spread the costs and profit. Intel won't have that console market to sustain the development of the low-end GPUs it could produce.

    Just like Intel's Larrabee, the new Intel Arctic Sound GPUs will underperform, be incomplete on the software side, be too costly to build on the hardware side, and arrive a year or two late to market, well behind the competition, ending up worthless to take to production.

    Intel won't have the "cojones" needed to stick with this GPU effort and sustain it for the 2-3 generations of gradual catch-up Intel would need to cycle through to gain the IP needed to generate a truly outstanding, class-leading GPU that competes on features, performance, and cost. That's a lot of years of lossage for Intel to eat before a payout happens.

    Maybe the effort will generate a co-processor die, an iGPU chiplet for CPU substrates, greater in size, performance, and power draw than the current iGPU, but to make the leap to a full discrete GPU that's competitive, I just don't see Intel able to sustain the effort long enough.

    Intel will fold their GPU effort long before it could pay off in the dGPU market.
     
    Last edited: Dec 15, 2018
  44. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Since it's being discussed as a theory, it must at least be possible to physically cut a die into four through the middle of the active logic.

    Is this a common thing?

    Any idea what kind of tolerances would have to be built in, say a big gap where the physical cut would occur, or extra logic to build in tolerance for now-incomplete links to other parts of the die?
     
  45. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    AMD Adrenalin 2019 Drivers - Benchmarked & Explained!
    HardwareCanucks
    Published on Dec 13, 2018
    AMD is launching their new Adrenalin drivers today which include performance improvements, a new game streaming feature, updates to AMD Link and a ton of other cool stuff. Let's run some benchmarks and see what kind of improvements have been made! Download the drivers here: http://bit.ly/adrenalinAMD


    The Bring Up: Episode 4: AMD Radeon™ Software Adrenalin 2019 Edition
    AMD
    Published on Dec 13, 2018
    We Bring Up: 2019 Radeon™ driver updates, a trip down memory lane and a trip up north to AMD Markham
    00:55 What is a driver?
    02:40 Cavin and Bridget take a trip down 2013 Memory Lane
    05:19 Radeon™ Software Adrenalin 2019 Edition with Terry Makedon
    05:48 Radeon™ ReLive updates
    09:21 Radeon™ ReLive VR
    11:42 In Game Replay and GIF Support
    13:38 In-Scene Editor
    14:09 Radeon™ WattMan updates
    15:59 Voice Command in AMD Link
    17:16 Built-in Radeon™ ReLive Gallery

    How To Use Radeon Adrenalin Wattman Auto Overclocking
    WccftechTV
    Published on Dec 14, 2018
    In this quick tutorial we show you how to find and use the new AMD Radeon Adrenalin Wattman Auto Overclocking Utility. Performance Results: https://wccftech.com/amd-radeon-softw...
     
    Last edited: Dec 15, 2018
  46. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It is, to a degree. You have to engineer traces that don't kill the chip when cut, leave clear setbacks along where you cut, engineer power delivery so it can be supplied to each quadrant collectively and independently, etc. To do it properly takes really good engineering, and it isn't done too often because of the pain in the butt it is.
    Dude, you just letting a troll troll?

    Literally, he is asking when David Wang will leave, calling AMD a sinking ship when the article he posted gave a viable alternative explanation, all while doing so after trying to troll in the same way yesterday. This is exactly why I left and don't visit this forum as much!

    And the reason Intel is doing it is to diversify the product portfolio. They stayed on x86 too long, missed out on mobile, have Qualcomm, the beast that they are, competing on radios in that segment, are late and over budget on the Optane product lines, see a re-emergence of AMD in CPUs, see the PC market shrinking, have Apple deciding to possibly go in-house on processors by 2020, have Amazon designing their own ARM design for AWS, have RISC-V chips made to order, have fallen behind on fab processes, etc. They didn't react to the warning signs when they should have. Now, instead of trying to predict the market, they are trying to disrupt a mature market which can bring meaningful capital because of brand loyalty and fanboys. It isn't hard to understand the change.

    It is also a plan to go after the embedded market, which kept AMD going in the hard years, trying to cut the feet out from under them; but AMD already has the two large contracts, which run something like 5-6 years. So...

    Edit: And no, they had to go to AMD to get the performance necessary for their Hades Canyon NUC, as their iGP wasn't cutting it. But aside from iGPs, GPUs are taking over the computational processing in the server segment, which means this is a huge threat to their marketability down the road. Computing at that level means that if Intel doesn't do something, AMD with superior PCIe, or ARM server chips, will eat away at them over time as the transition to GPGPU is already underway. Not only that, you have the rumors of Samsung exploring the GPU space, granted for mobile, which hints at where everyone thinks this is going. Embedded and server are why Intel is doing it, both very large markets that are growing.

    In fact, that is why I pointed to the list of people poached from AMD. If you look, most of them were AI and deep learning engineers. According to PricewaterhouseCoopers, one of the largest auditing firms in the world, using OECD methodology, 38% of US jobs, and around 30% of all jobs globally, will be replaced by AI and robotics by 2032. That is about 13 years from now. A Harvard professor estimated AI and robotics could take as much as 80% of jobs by 2050 (non-OECD methodology, though). What is driving that? GPUs, in part; AI ASICs, in another part; etc.

    So bringing it back to your question on Intel, they are playing catch up to one of the largest markets that is about to boom after missing out on mobile, etc.

    Now I'm going to repost my comments from the Anand article on Intel's stacked chips, or at least the links to the numerous sources showing why Intel is where practically the entire industry is on 2.5D and 3D integration, then not visit this site for a while, because otherwise I'm fully deactivating my account. Peace.

    Sent from my SM-G900P using Tapatalk
     
    Last edited: Dec 15, 2018
    bennyg likes this.
  47. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I am not saying AMD has issues, but why is Intel gobbling up GPU people? Other than for the sake of saying "we are gobbling up AMD talent wherever we can" to satisfy stockholders for the short term (it does make it look like they are at least doing something), no one seems to know.

    So you are both falling into their trap: arguing the actions, not the intent or consequence.
     
  48. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    Hmmm... Before poo-poo'ing or dismissing the intel GPU move, one should try to take a look 3-4 years out. Maybe it's not about gaming after all...

    "In The Era Of Artificial Intelligence, GPUs Are The New CPUs"
    - https://www.forbes.com/sites/janaki...elligence-gpus-are-the-new-cpus/#205ebba65d16

    "Intel's GPU is not what you think"


    Hmmmm...
     
    Last edited: Dec 15, 2018
    hmscott likes this.
  49. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Did you guys even read what the hell I wrote?

    "Edit: And no, they had to go to AMD to get the performance necessary for their Hyades Canyon NUC, and their iGP wasn't cutting it. But aside from iGPs, GPUs are taking over the computational processing in the server segment, which means it is a huge threat to their marketability down the road. Computing at that level means if Intel doesn't do something, AMD with superior PCIe or ARM server chips will eat away at them overtime as the transition to GPGPU is already underway. Not only that, you have the rumors of Samsung exploring the GPU space, granted for mobile, which is hitting at where everyone thinks they are going. Embedded and Server is why Intel is doing it, both with very large markets that are growing.

    In fact, that is why I pointed to the list of people poached from AMD. If you look, most of them were AI and deep learning engineers. According to PricewaterhouseCoopers, one of the largest auditing firms in the world, using OECD methodology, 38% of US jobs, and around 30% of all jobs globally, will be replaced by AI and robotics by 2032. That is about 13 years from now. A Harvard professor estimated AI and robotics could take as much as 80% of jobs by 2050 (non-OECD methodology, though). What is driving that? GPUs, in part; AI ASICs, in another part; etc.

    So bringing it back to your question on Intel, they are playing catch up to one of the largest markets that is about to boom after missing out on mobile, etc.

    Now I'm going to repost my comments from the Anand article on Intel's stacked chips, or at least the links to the numerous sources showing why Intel is where practically the entire industry is on 2.5D and 3D integration, then not visit this site for a while, because otherwise I'm fully deactivating my account. Peace."

    This is LITERALLY the reason I'm moving on.
     
  50. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    Uh. Yes. That is why I used your post.

    You just said the ppl leaving AMD were mostly "AI and deep learning engineers", correct?

    And why do you think those people were targeted? Also, didn't intel snatch up a bunch of AMD folks? Does anyone really think intel is interested in the gaming GPU market? Especially with the shrinking PC marketplace... My guess is home consoles can't be faring much better.

    However, reading my own tea leaves, the computational needs of AI and larger data centers will emerge as a new growing market. And right now it seems adding GPUs is one way to design systems meeting those computational needs. That is *my* best guess to explain the intel GPU move.

    What in the...???

    If the text in my post is causing you this much frustration, remember they're only words. I didn't attack you by calling you a fanboy, troll, senile, or even obtuse. They represent my own thoughts as part of this discussion; they are not meant to inflict emotional or mental harm, but rather to transmit ideas.
     
    Last edited: Dec 15, 2018