The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    You are tilting at non-existent windmills, and you are missing the whole picture.

    There are millions of super happy AMD users out there already, and tens of millions more coming. They are getting better performance than they've ever had and are loving it.

    I'm thrilled with the performance of the 3700X at a 15%-20% OC - it outperforms the 3800X, the next SKU up.

    Like most people, I back off the top 3700X OC for everyday use, and I don't notice the difference - a 15% OC is enough. 99.99% of the owners of any CPU run it as it is out of the box and would never even explore the extremes of their CPU.

    If AMD or Intel threw away all but the top 1% of silicon, they'd be throwing away 99% of what they produce - and you think that's the way they should all operate? Does that sound like a sane way to run a business?

    You are complaining about a completely inconsequential difference that no one needs or wants. Even when you point it out and get people excited about it, they'll eventually come to their senses and realize it's much ado about nothing useful in everyday life. And then they grow up and move on to real things that matter.

    And AMD keeps getting better from here...

    Intel? Probably not so much...for quite a while...if ever.
     
    Last edited: Jan 24, 2020
  2. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    For AMD, Silicon Lottery is a waste of time and money. SL is a waste of money for Intel CPUs as well - outrageous prices far outside the realm of reality for most people. SL's value is as a learning tool for what is really important in life and gaming.

    No normal gamer cares enough about that extra 2%-3% to pay those crazy prices.

    AMD's CPUs are consistent in performance, and while outliers like mine may show better "numbers" when OC'd, the real-life translation of this small difference is unnoticeable - that's why I run mine under the maximum OC.

    Even at maximum OC the 3700X temperature is low - around 70C at full load - and at only 5% less OC, temps drop to 50C-60C. So what's the point of wasting power on heating and cooling for a negligible performance difference?

    The same goes for OC'ing Intel CPUs: you can spend your life chasing that 1-3% difference, which is meaningless in real life and totally irrelevant to any normal person buying a CPU for actual use.

    And your benchmarking-first thinking takes you to the point of suggesting that AMD should throw away - not sell - the 99% of silicon below the top-performing 1%?

    Essentially throwing out everyone's perfectly good PC - except for yours...

    That's too far gone, man... :D
     
    Last edited: Jan 24, 2020
  3. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    I'm just gonna leave this here:

     
    hmscott and tilleroftheearth like this.
  4. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Why are you guys beating a dead horse? AMD is not an overclocking sport CPU, end of story. Of the literal tens of thousands of users here, fewer than a handful are upset about it, just like the X399 kill-off. I doubt AMD cares that much; it is spilled milk, but remember it and point it out as well!
     
    Last edited: Jan 24, 2020
  5. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,629
    Trophy Points:
    931
    Agreed. Although I am not sure that being a "normal gamer" is something actually worthy of aspiration or admiration. To wit, those same gamers are just fine with the idea of paying a lot of money for chintzy disposable turdbooks. Popularity isn't evidence of intelligence; it is more likely to be the opposite. So using that as an example might not be a good card to play.

    I do agree that the prices are truly asinine. That applies across the board to all tech; it is just more asinine the higher up you go on the food chain. A lower level of asinine helps create a perception of value. At the end of the day, we are all suckers of varying degrees, because nothing any of us pays for tech actually represents good value.
     
    Last edited: Jan 25, 2020
  6. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    I've still got a nice X58 motherboard and was thinking of taking a new AMD GPU like a 5700 XT and seeing how it would run - maybe I could get away with a cheaper 5500 XT or 5600 XT. Anyway, here's a dual-X58 build with a 5500 XT, whose dual CPUs provide 8c/16t and only drop a few FPS in his gaming tests compared to a 9900KS running the same 5500 XT:

    Dual X58 "Phantom Gaming PC" - Can 2 x X5677 Xeons and a 5500 XT STREAM at over 144 FPS...?
    Jan 25, 2020
    Tech YES City
    It's been a long time since I put together a Dual Xeon System, and thanks to Ryzen Threadripper Patches on Windows 10, the NUMA aware (dual CPU systems) are performing quite a lot better. Though what is the difference between this and a 9900KS with an ASRock Phantom Gaming 5500 XT at 1080p and 1440p? Also can this system stream fortnite whilst getting over 144 avg fps? Let's find out.
    5500 XT PGX: https://amzn.to/38BT2ZO
    H510i Case: https://amzn.to/2TWOZ6q
    X5677 (x2): http://s.click.aliexpress.com/e/_sfHGJU
    DDR3 (Reg ECC is fine too): http://s.click.aliexpress.com/e/eo9dt9sk
    600 Watt Power Supply: https://amzn.to/2uzB5w1
    Creative Audigy FX Sound Card: https://amzn.to/2usnsiu
    USB 3 add-in card: https://amzn.to/2sUxgRJ
    CPU Cooler with RGB Ring Fan: http://s.click.aliexpress.com/e/DZjawS8s
    1TB HDD: https://amzn.to/2Rns3LH
    120GB SSD: https://amzn.to/2tQgweP
    Super Micro Motherboard (Dual X58): http://s.click.aliexpress.com/e/_spr33Y


    There are still a lot of server motherboards and Xeons out there for fairly cheap that could be used to put together a cheap gaming rig. Power draw is a bit high at idle, but the lower-power 5500 XT fits in nicely.
     
    Mr. Fox likes this.
  7. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,629
    Trophy Points:
    931
    Buying used is very smart. Used tech values drop like a rock as used car values generally do, and it's always nice to let someone else take the blood bath on the depreciation. Unless you accidentally buy someone else's mess, you come out miles ahead financially. The risk is the same for buying a brand new lemon, it just costs more when you buy a new lemon.
     
  8. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Hardware Unboxed's Amazon associate sales data shows Ryzen's share growing against Intel before Ryzen 3xxx, and a far larger Ryzen share after Ryzen 3xxx released:

    AMD vs. Intel, Nvidia vs AMD: Amazon Sales Battle, What You Bought in 2019
    Feb 8, 2020
    Hardware Unboxed
     
    Last edited: Feb 9, 2020
  9. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    Intel 10nm ‘Ice Lake SP’ 14 Core Server CPU Gets First Selfie On SiSoft Sandra, Over 54% IPC Improvement


    http://forum.notebookreview.com/threads/get-desktop-server-now-or-wait.831921/#post-10991104

    It's official: Intel's high-performance 10nm is finally here. After what seems like years of waiting, Intel's high-performance 10nm processors have entered final testing stages and the tell-tale leaks are beginning to sprout everywhere.

    That said, I was surprised to see just how much performance this CPU was able to squeeze out. The Intel Whitley Ice Lake SP CPU ES sample scored a multi-core score of almost 28,000 points - which is a very impressive result for a 12-core part. I am also fairly certain that power efficiency would have gone through the roof as well. One thing is for sure, Ice Lake is on the prowl and AMD is about to face some serious competition.
     
    tilleroftheearth and hmscott like this.
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    One catch: SiSoft Sandra has used AVX-512 since the 2016 version. That means this score may not be representative of real-world performance.

    Other than that, impressive.
     
    Papusan and tilleroftheearth like this.
  11. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    32-core (2x 16) Intel Ice Lake Server Geekbench benchmarks leak — 80% more performance than AMD Epyc Rome 7702P with half the cores? notebookcheck.com | Mar 12, 2020

    Geekbench listings of what appears to be a 32-core (16-core dual-socket) Intel Ice Lake Server processor have surfaced. Ice Lake Server appears to put up a strong show against AMD's Epyc Rome 7702P in Geekbench 4 running on Windows Server 2019 with half the number of cores. AMD, however, has the core-count advantage, which still eludes Intel owing to lower yields on the 10nm process.


    A point to be noted here is that Epyc shows way higher scores in Geekbench 4 in Linux even with a 1S setup. So, a lot of factors can influence these scores and unless we get to see an Ice Lake Server Geekbench 4 benchmark running on Linux, it is not possible to make a definite conclusion.
     
    ajc9988 likes this.
  12. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    There are also the AVX-512 optimizations, seen in some versions of Geekbench as well - and there are reasons why GB3 is still used for multithreaded results while GB4 is only used for single-threaded.

    Sent from my SM-G975U1 using Tapatalk
     
    Papusan likes this.
  13. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    Gigabyte skips AMD, updates entire Aorus and Aero lineup with Intel 10th gen Core i7/i9 Comet Lake-H notebookcheck.net | 2020/04/02

    Now that Intel has officially lifted the curtains off mobile Comet Lake-H, OEMs everywhere are free to announce their refreshed 2020 models. Taiwanese manufacturer Gigabyte is already on board as nearly all of its current models will be updated with 10th gen Comet Lake-H parts ranging from the hexa-core Core i7-10750H to the octa-core Core i7-10875H and even the unlocked Core i9-10980HK.
     
  14. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    Intel reverses the trend in Steam's processor usage survey by snatching a significant chunk of AMD's overall share notebookcheck.net | April 4, 2020

    Intel has witnessed a surprising reversal of fortune in Steam’s Hardware & Software Survey for March in terms of processor usage. While the chipmaker has been losing share against AMD consistently month after month, the latest data from Steam shows that Team Blue has now actually regained a significant amount of ground.
    Intel has been slipping in regard to processor usage in Steam’s hardware survey for many months, and it would have been reasonable to presume that March would be no different. However, the latest information published by the gaming platform tells a different story, as Intel has enjoyed a considerable percentage change for the month, rising from 78.2% to 81.25%.

    Source(s)
    Steam
     
    tilleroftheearth likes this.
  15. Ed. Yang

    Ed. Yang Notebook Deity

    Reputations:
    86
    Messages:
    751
    Likes Received:
    199
    Trophy Points:
    56
    MSI was the "first" gaming laptop maker to adopt a Ryzen system in their product line with the Alpha 15... and it seems not to have fared well with some buyers out there, which cast a negative impression on Ryzen systems' gaming performance. Gigabyte, being a competitor in the gaming field, will of course have a laugh and call out MSI's "stupid choice" that drags them behind in the sales race.
    In some markets, especially in Asia, gamers are tilted toward what Intel can give them after years of market dominance. Hence, some early adopters expected AMD's return to knock Intel down in a single punch with the 3000-series Ryzen CPUs, which is, IMO, the wrong way to look at it... The key factor in gaming performance always falls on the GPU type, RAM size second, with the CPU coming last!

    So far we're seeing Ryzen CPUs enter the mid-range of the gaming arena, such as the Zephyrus, not just the entry-level TUF Gaming range from Asus. We've yet to see any announcement of Ryzen entering Acer's mid-range Triton line. By the time AMD gets adopted into the Helios family... you'll probably see Gigabyte launch their entry products with Ryzen.

    Great news here if Lenovo's plan is happening... on their intermediate LEGION range.
    https://engnews24h.com/lenovo-legion-r7000-the-notebook-with-the-apu-amd-ryzen-4000-is-coming/

    I became a laptop user in the late XP era with a Turion TL50, which accompanied me through that dreadful Vista period... which, luckily, I chose not to upgrade to...
    http://www.notebookreview.com/notebookreview/dell-inspiron-1501-review-2/
    ...then into Win7 with Athlon2 P320...
    http://translate.google.com/transla...-lh520/&hl=en&langpair=auto|en&tbb=1&ie=UTF-8

    The Ryzen platform has given me high hopes (...and excitement) for AMD products!

    If only laptop makers would make user-upgradeable CPU gear...



    ...AMD adventurers could play around with different Ryzen CPUs and get more assessment readings!
     
    Last edited by a moderator: Apr 5, 2020
  16. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
  17. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    @Deks, where is the source that proves what he's claiming about Ian?
     
  18. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    tilleroftheearth likes this.
  19. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    AMD best-buds, TSMC, designed an 'enhanced' 5nm node for its future Ryzen chips
    https://www.pcgamer.com/amd-zen-4-specific-5nm-enhanced-node/#comment-jump

    Wow... it's actually really interesting that TSMC decided to design an enhanced 5nm node just for AMD.
    I wonder what prompted this development.

    Overall, it still looks like things are on schedule for Zen 4 and 5nm to appear next year (2021) or at the latest in early 2022.
     
  20. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Just a bit off topic: Folding@Home has started working on the coronavirus. Might be a good time to show off your TR power - I know I have. FYI, the NotebookReview team number is 213698.
     
    ajc9988 likes this.
  21. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
  22. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Intel cheerleading:
     
  23. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    Adios, Wraith? Why AMD's XT chips signal doom for a key Ryzen selling point
    So why do Ryzen 3000 XT CPUs even exist?
    pcworld.com | Today

    Nobody outside of AMD can answer for sure, and there are no doubt many reasons. But I suspect some part of it may be because the company is considering launching some of its next-gen Zen 3 processors without bundled Wraith coolers, in Intel-like fashion. By launching the Ryzen 7 3800XT and Ryzen 9 3900XT at the same full $400 and $500 price points as the Ryzen 7 3800X and 3900X, but without including Wraith, AMD could be preparing enthusiast expectations early.
     
    jc_denton, Rage Set, Vasudev and 4 others like this.
  24. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I'm more interested in the rumor about an 8-channel memory HEDT part. But you are correct, this will dump that talking point - which I always saw as more of a total build cost/price argument than anything; I'd buy other cooling regardless of manufacturer anyway.

    So they are likely pushing those box coolers only to the x600 through x700X SKUs. But no longer should any AMD fanboys argue "but meh cooler."

    Sent from my SM-G975U1 using Tapatalk
     
  25. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,080
    Likes Received:
    3,275
    Trophy Points:
    281
    Intel "Alder Lake" CPU Core Segmentation Sketched

    This is a really BS move from Intel; my take on it...

    Big.LITTLE for desktop - the reason is pretty clear: their x86 uArch innovation stagnated along with their lithography node R&D. The Rocket Lake leaks already give us hints - Ring Bus. RKL has an odd HT vs. physical core design, and the only explanation that comes to my mind is that they don't have Ring Bus scaling with their post-Skylake architectures, so they are relying on fewer cores with high clock-speed scaling and ST performance at the cost of HT performance, probably to keep themselves relevant in gaming. But this is going to hit them in SMT performance again. With consoles going to Zen 2-based CPUs and more people buying high-core-count parts, this is not good at all; AMD's SMT is already very strong. Ryzen 4000 will probably decimate Intel Z400 and Z500, especially since the Z500 doesn't have the damn Gen 4 lanes from the chipset (per the RKL leaks, the DMI version is still 3.0). Horrible, since X570 did it a year back, and MSI's top Z490 board is a frigging $700 without any chipset Gen 4 lanes/IO, same as B550, damn it.

    This doesn't have any damned benefit in desktop LGA processors. Even in the Alienware Area-51m series or Clevo P870DM-series LGA notebooks, nobody gives a fck about the damn big.LITTLE like phones do - there, the li-ion battery power draw increases rapidly with the higher-performance cores in the ARM SoC, along with a ton of other dedicated modules for RF/GPU/memory etc. Maybe their mobile parts might benefit, but at the loss of powerful cores it's still a hogwash, when AMD's BGA processors are beating Intel's BGA lineup on perf/efficiency - a loss-loss unless the ST performance of the 8 physical cores is higher along with those 8 or 4 HT cores (RKL has HT disabled on 4 cores, per rumors).

    This requires a lot of OS work AGAIN. AMD's NUMA processors already saw a lack of adoption - even AMD abandoned them; X399 didn't get support for TR3000, and afaik only Milan moved more parts to the powerful cloud service providers like AWS. Apple probably also decided that, with their R&D cash sunk into the A-series ARM processors, an OS rewrite plus so much first-party sponsored software (Adobe products especially) would be a huge waste of money - especially when Mac sales are only 10% of their profit and macOS is under 10% worldwide marketshare. Better to spend it on their own x86-to-ARM translation and A-series SoCs, since many of their users are into ultra-thins-and-lights and don't care about the BGA BS - they don't even care about owning their own hardware, too dumb or indifferent to realize even the damned keyboard/battery is sealed to the chassis. But again, people have to suffer for this BS: developers and users alike (Win10's nonsense SAC bugged trash releases).

    This doesn't paint a good picture, as Intel doesn't seem to have any confidence in their lineup; this looks like another temporary band-aid on LGA1700. I hope AMD doesn't chase this BS and stays true to their desktop-performance x86 leadership. TBH, this won't make it to Xeon for sure - having cheap, crappy cores on a Xeon means server OS/software/HW changes, and NO ONE wants to do that. Especially when AMD is piledriving and steamrolling with their EPYC and Ryzen CPUs in both server and consumer DIY.

    Edit

    Intel, the juggernaut - first FinFET tech on the planet, always top OC performance and gaming - is now pathetic. They need another Andy Grove.

    AMD: no OC, no fun for enthusiasts except memory, and it needs the OS to be updated due to ever-changing HW (the irony being that Intel is now in the same spot); also, Zen is pitted against Intel's grandma Skylake on 14nm++ and still can't beat it in gaming.

    Even worse, both chase BGA BS in notebooks and cripple them even further day by day.
     
    Last edited: Jul 14, 2020
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    A couple of quick corrections to your post:

    1) Intel is using fewer cores because the architecture is wider, so to speak. These changes are not a problem on a smaller node, but when you utilize them on larger nodes, they run hotter. It is an engineering thing; they are not trying to go backwards. Further, although not getting the full 25% of Tiger Lake with the Willow Cove backport, they will get around 20%. If the math serves, they would basically have an 8-core that performs as well as their 10-core (rough sketch below). In apps that do not scale past 8 cores and 16 threads, this will be a boon.

    2) AMD didn't abandon nodes; Microsoft was crap at scheduling beyond two nodes. Because Microsoft couldn't fix their scheduler, AMD engineered around it, although for the 64-core chips, due to Windows limitations, you are still forced to use 2 NUMA nodes. What AMD did abandon is having local memory access on only two of the four chiplets, which in part caused a stale-data issue. They addressed that through equal latency per CCX. It added latency in some cases but caused an overall speed-up. The next latency reduction will come from making the entire 8 cores of the chiplet a single CCX going forward, removing the need to go to the I/O die as often and reducing the penalty of needing more than 4 cores.
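
    On point 1, a minimal back-of-the-envelope sketch (my own arithmetic, not Intel's figures; it assumes the ~20% IPC uplift translates linearly into throughput at equal clocks):

        # Rough math on the Willow Cove backport claim above (assumption:
        # per-core throughput scales linearly with the ~20% IPC uplift
        # and clock speeds stay roughly equal).
        cores = 8
        ipc_uplift = 0.20
        effective_cores = cores * (1 + ipc_uplift)
        print(f"{cores} backported cores ~ {effective_cores:.1f} older cores")
        # -> 8 backported cores ~ 9.6 older cores, i.e. close to a 10-core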

    So I wanted to address those points, generally.

    Sent from my SM-G975U1 using Tapatalk
     
  27. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,680
    Likes Received:
    5,059
    Trophy Points:
    531
    So, did AMD release Threadripper Pro in order to have the same market segments as Intel? Is Threadripper Pro their answer to LGA3647? Or is Threadripper Pro a poor man's Epyc? I'm trying hard to understand AMD's decision to produce Threadripper Pro.
     
    Vasudev likes this.
  28. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, let's talk through the decision and why it is not a poor man's Epyc.

    1) it has roughly the specs of a "P" Epyc, denoting a single socket. Generally, the single-socket Epycs are cheaper.

    2) it has the 8 memory channels of Epyc and 128 PCIe lanes of Epyc.

    3) its single-core speeds blow the doors off of Epyc, but are slower than Threadripper's. The reason, which few in the tech press picked up on, is that the extra PCIe PHYs and Infinity Fabric, along with double the memory controllers, generate more heat. As such, clock speeds are lower than Threadripper's.

    4) it has the same 280W TDP as Threadripper, higher than the 225W of Epyc.

    5) it cannot overclock.

    6) it has the "Pro" series features, whereas Epyc has IPMI and Threadripper has nothing.

    Now, all of this adds up to something close to the speed-optimized Epyc CPUs, roughly targeted at finance and HPC. But it drops a couple of server features, supports only 2TB instead of 4TB of memory, and is single-socket only.

    As a workstation, though, there were times Intel's 6 channels of memory helped keep the cores better fed on bandwidth, and Intel's 1.5TB of memory support on their Xeon workstation chips was way more than the 256GB on Threadripper.

    But single-core speeds were also a pain when trying to use the Epyc "P" series as a workstation driving commercial-grade Quadros and uber-fast storage and networking solutions. They could do it, but sometimes you need that single-core speed.

    So Threadripper Pro is the answer to Intel's workstation CPUs and tries to marry the best of all worlds.

    For example, with 8-channel memory, the 32-core has as much bandwidth per core as an Intel 8-core mainstream CPU, assuming the same speed and equivalent timings. That is twice the memory bandwidth the current 32-core has, and the 64-core Threadripper is dying for more memory bandwidth.

    Put 8 channels on the 16-core, and you now have the per-core memory bandwidth that dual-channel quad cores like the 6700K and 7700K had. For applications that need bandwidth, that is huge!

    Then, you obviously have a larger memory capacity for large data sets, up to 2TB, with support for UDIMMs up through the most recent registered server memory standards. That is great for a workstation for science or for rendering.

    Does that help explain it better? Personally, I would love that 16-core unlocked to overclock. Imagine it water cooled with all 8 DIMMs populated at 3600+ on the RAM, 1:1 IF, around 170-200GB/s of memory bandwidth, cranking away...
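
    To sanity-check those bandwidth numbers, a minimal sketch (my arithmetic, using DDR4's 8 bytes per transfer per channel; the 170-200GB/s real-world range is an estimate sitting below the theoretical peak):

        # Peak theoretical DDR4 bandwidth: rate (MT/s) x 8 bytes x channels.
        def ddr4_peak_gbs(mts: int, channels: int) -> float:
            return mts * 8 * channels / 1000

        print(ddr4_peak_gbs(3600, 8))  # 230.4 GB/s theoretical for 8ch DDR4-3600,
                                       # so ~170-200 GB/s real-world is plausible

        # Channels per core, which is what "keeps the cores fed":
        print(8 / 32, 2 / 8)  # 0.25 vs 0.25 -> 8ch 32-core matches a
                              # dual-channel mainstream 8-core per core
        print(8 / 16, 2 / 4)  # 0.5 vs 0.5 -> 8ch 16-core matches the old
                              # dual-channel quad cores (6700K/7700K)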

    So it does seem weird at first, unless you are someone who needs it. This gives Intel's $4,000 Xeon W no quarter - nowhere left to hide. It fits that specific market. And hopefully AMD gives the entire Threadripper line specs like these going forward, with a single DIMM per channel. It is a great thought, especially after memory bandwidth doubles with DDR5.

    They still have to work on memory latency issues, which will help. By removing the 4-core CCX issue and making a single 8-core CCX per CCD, Zen 3 no longer has to go to the I/O die to reach the other 4 cores. L3 latency goes up from 40ns to 47ns, but each core's access goes from 16MB per CCX to the entire 32MB on the core die without a round trip to the I/O die.

    Also, Zen 3 allows choosing the best two cores to be further away from each other. This means less concentrated heat allowing for higher boosts with 2 cores active.

    But that is what's coming. Hope that helps explain Threadripper Pro and a little of Zen 3.

    Sent from my SM-G975U1 using Tapatalk
     
    Rage Set and Papusan like this.
  29. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    I think Intel wants to make a point to shareholders that they can do an Intel+ARM-style hybrid core for better battery life and performance at expensive prices. Apple won't back out of taking ARM mainstream, since Intel's BGA parts have only 5% improved IPC in some apps and come with a huge TDP, needing a newer, thicker heatsink with or without a vapor chamber.
     
    Ashtrix likes this.
  30. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,680
    Likes Received:
    5,059
    Trophy Points:
    531
    Bro, it was a rhetorical question. I have absorbed all of the technical specifications through AMD's sales material. I question why this wasn't available alongside TRX40 at launch. Honestly, I would have chosen the 3975WX instead of the 3960X, which serves as a server for me.
     
    Vasudev, Papusan and ajc9988 like this.
  31. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Got ya. And yeah, it should have been available from the start.
     
    Papusan likes this.
  32. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    ajc9988 and Rage Set like this.
  33. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Oh, I was very on top of that. Unfortunately, Lenovo says available this fall. If true, it's even sadder.

    I would like to see the sTRX80 with Zen 3 by then or Q1 2021.

    But that is the only thing that might entice me during my Zen 4 wait. After screwing TR owners, then trying with mainstream and having to back-pedal, they are finally talking about a product I want - I just want it unlocked.

    More likely it is waiting on a 7nm I/O die for an unlocked chip. Power consumption is, at least in part, why clocks drop from TR to these parts.
     
  34. W.D. Stevens

    W.D. Stevens Notebook Enthusiast

    Reputations:
    0
    Messages:
    27
    Likes Received:
    4
    Trophy Points:
    6
    I've been looking to do a complete overhaul of my system for a little while, and with AMD seeming to run rings around Intel lately, they seem to be the way to go. I'm going to get a nice display, build a desktop, and get a laptop that hopefully still has a good amount of power behind it so I can do 4K video editing on the go if I need to. I've heard that video editing is actually one area where Ryzen may not be the best option, as editing programs use a combination of the CPU, Intel integrated graphics, and the GPU to make timeline scrubbing and playback smoother as well as shorten render times.

    Does this become less noticeable the higher-end a CPU you have? For instance, if I were to get a 3900X for my desktop, would that essentially negate the advantage of having Quick Sync on an Intel CPU? I guess I could get a laptop with a 10875H so I have power and speed on the go as well, if something like the 4800H or 4900H (assuming more laptops with them are announced later in the year, as I'm guessing/hoping they will be) is going to make those specific tasks perform worse?
     
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Do not grab Intel for this. QuickSync with Intel's integrated graphics literally produces lower-quality output - look up Puget Systems' analysis on this. Empirically, do not ever use QuickSync if you want quality.

    A 3900X or 3950X generally does well, and with the recent core-count optimizations in Adobe, along with standalone graphics card usage, there is less reason to pick up Intel if you are focused on graphics processing. In tile-based rendering, Intel cannot compete with AMD.

    The only place Intel has left is latency-sensitive single-threaded tasks - primarily photo editing (not video editing) and gaming.

    If you can wait for Zen 3 in Q4, do so. Otherwise, you'll be able to upgrade the platform once sometime down the road.

    If we're only discussing laptops, AMD's high-end mobile parts are not paired with Nvidia's high-end GPUs. Outside of laptops with AMD's desktop CPUs in them - which also seem limited to a 2070 pairing, so the claim that the mobile parts miss out on high-end GPUs due to fewer PCIe lanes can't be the whole story - I'd say an Intel laptop with a 2080 is likely good.

    I haven't reviewed the numbers on GPU acceleration in Adobe in the months since they added it. And Resolve is its own thing. I don't know whether you are using different software from those.

    Just, if you do get an Intel CPU, do not use quicksync!
     
  36. W.D. Stevens

    W.D. Stevens Notebook Enthusiast

    Reputations:
    0
    Messages:
    27
    Likes Received:
    4
    Trophy Points:
    6
    Oh, yes, I should definitely say I would never use QuickSync for anything proper, but it certainly comes in handy when I need something shot out quickly - a reference to send to someone, or a quick conversion of a film or TV show whose codec, I only realised once I had already made food and sat down, doesn't play properly with my TV. That does still leave the question of ease of playback and timeline scrubbing, where it is certainly handy to have a bit of extra performance at the expense of a bit of quality. But I will definitely look up the Puget analysis. I didn't know there was one, so thanks for the pointer!
     
    ajc9988 likes this.
  37. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Yes. With Adobe now allowing dGPU encoding, I'd recommend moving the offload to that. Linus Tech Tips has shown part of it.

    https://www.pugetsystems.com/labs/a...on-in-Adobe-Media-Encoder---Good-or-Bad-1211/

    https://www.pugetsystems.com/labs/a...64-NVIDIA-Hardware-Encoding-Performance-1723/

    The second one shows using an Nvidia GPU. Now, September is when the new Nvidia GPUs drop, so if you can wait, please do. Otherwise, I'd recommend picking up a cheap card like a 1660 to get you by, since its NVENC and NVDEC silicon is the same generation as a 2080 Ti's. GPU memory may be a problem - and because you are doing 4K editing, that video memory is likely an issue - but this is a stop-gap until Ampere in September or October. It is buying as low on the scale as possible while keeping the same hardware-acceleration capabilities. Faster storage can help, but it could be painful at times.
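
    If you go that NVENC route, here is a minimal sketch of what the offload looks like (my example, not from the videos above; it assumes an ffmpeg build with NVENC support, and the file names are placeholders):

        # Offload an H.264 encode to the GPU's NVENC block via ffmpeg.
        # Per the point above, a 1660's encoder silicon is the same
        # generation as a 2080 Ti's, so the output should match.
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "input.mp4",     # placeholder source clip
            "-c:v", "h264_nvenc",  # hardware H.264 encode on NVENC
            "-b:v", "20M",         # bitrate target; adjust for 4K
            "-c:a", "copy",        # pass audio through untouched
            "output.mp4",          # placeholder output
        ], check=True)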

    Still, it is better than dropping $1200 now and another $1200 in a couple of months. Also, Nvidia is a bit scared of AMD's new cards, so prices may come down, even if only slightly.

    And as you can see, even the AMD 64-core doesn't slow down dGPU hardware-accelerated encoding. So don't think you still need Intel because of latency or some other nonsense.

    EposVox on YouTube has a couple of videos from a while back testing Resolve on AMD and Intel with Nvidia and the 5700 XT.

    https://www.pugetsystems.com/labs/a...formance-AMD-Threadripper-3990X-64-Core-1662/

    I'd say look closely through their benchmarks. Gamers Nexus has also done some good testing, but for Adobe, most hardware review outlets have adopted Puget Systems' benchmark of Adobe products, which has greatly standardized the testing.

    But Adobe finally offering dGPU hardware acceleration makes QuickSync much less of a consideration, even for your use case. This is why Intel has moved to being only a laptop recommendation from me. Also, if you're using Windows, I'd recommend not getting the 64-core, because the value isn't there - 32 cores tops until Windows fixes the forced 2 NUMA nodes, etc. You just don't get a lot of scaling with the 64-core in the tests I've seen. For server use, it's a different story. You might get more information on when a 64-core would work for you by reaching out to Wendell at Level1Techs.

    There is also the Threadripper Pro workstation from Lenovo if you're looking for a pre-built. With 8 channels of memory and support for ECC and up to 2TB of memory, it really can do 8K rendering if you ever need it.

    So it is hard to fully recommend without more detail, but this should give you the resources to make an informed decision.

    Edit: also, I use a 1660 with hardware acceleration for my Plex Pass subscription and it works well. There is also a driver mod if you need more streams than the artificial cap meant to force you into a low-end Quadro.
     