The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm/4
    You are ONLY referring to the SPEC benchmarks in this article. Here's a question: why don't they compare these CPUs in the regular workloads later in the article? BECAUSE it is a mobile part and would be CRUSHED in a comparison against the 9900K and 3900X.

    It's like you don't understand context AT ALL!

    I've said it is a good CPU design. I've sung the uarch praises. But it is still just a mobile CPU.

    I even said WHERE Intel still beats AMD. But your ignorance of multithreading in this day and age is abhorrent. It is misleading.

    Since your opinion disagrees with EVERY REVIEWER of the hardware OUT THERE, I highly doubt you are the one who is correct on this. That doesn't mean that Intel's products are bad, just that you are WRONG.

    Intel even admitted their issues in the internal memo. Do I need to post that here? I can. Yet you would ignore it.

    Also, your opinion doesn't mean much in light of hard data showing otherwise in numerous reviews. With single-digit leads where Intel leads in many workloads, while losing at everything multithreaded at each price point, you are not getting "balanced" performance from Intel.

    Instead, it is about knowing your workloads and use cases. AMD wins some, Intel wins some, yet all these reviewers are still saying BUY AMD. Why? Maybe because the blend of workloads and pricing makes it the better buy. In fact, with depressed pricing, first- and second-gen Ryzen are being recommended A LOT! Jayz has a standing recommendation to wait on the 3000 series until Black Friday after prices fall, because AMD's CPUs discount faster than Intel's, meaning that you can get a good deal by waiting 3-9 months. AND he still recommends AMD for many things! Imagine that.

    Then you have Google rumored to be saying BYE to Intel, having motherboards created for Rome and Milan server CPUs from AMD. Is Google wrong? I doubt it. How about MS? Or Baidu? Or Tencent? Or HPE? Or Cray (now owned by HPE after the Frontier deal was finalized, a bid which Intel LOST)? Or ....

    The point is, say what you want. But the fact that you acknowledge ZERO use cases for AMD CPUs at this point shows you are biased in such a way that your opinion means nothing. I acknowledge where AMD has weaknesses, recommend Intel for certain workloads, and recommend Intel on platform costs (like me telling Ole to grab the 9700K or telling Jaybee to grab the 9900K for their laptops instead of buying the desktop Ryzen 3000 platform, due to overall acquisition cost and lifespan).

    You have ZERO nuance to your statements.

    Edit: Here are some videos standing for my proposition:









     
    Last edited: Aug 2, 2019
    saturnotaku likes this.
  2. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    As it is, AMD's performance advantage will not convert the fanboys or the shills of the CPU world. As it is, there are a lot of maybes and what-ifs that have to extend well into the future. Whenever you hear "you just wait and see," just smile, give them a pat on the head, and move on.
     
    Last edited: Aug 2, 2019
    hmscott and ajc9988 like this.
  3. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I'm not afraid to stand up for my own conclusions. Patronizing remarks and useless words and videos won't sway the facts of what I've seen either.

    I'm much more knowledgeable than either of you give me credit for. I buy what works for today, not for some future lala land where 8+ cores will magically be required overnight.

    The only reason the masses should buy AMD is because it's cheaper. The 'experience' is vastly over-rated.

    Personally, I'd like to thank them all. Just the push Intel needs to give the rest of us what we can actually use that's wholly and 100% better than what we have now.
     
    ole!!! likes this.
  4. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    A true expert never has to stand up for their own conclusions, as they are constantly learning and are holding to or changing those conclusions as knowledge is gained.

    I NEVER assume to know more than the next guy; I am always looking for an education.

    I only upgrade where an overall 15% or greater enhancement is involved, whether between platforms or within one. Saying it is finally somewhat better so you have to upgrade is slightly insane.

    Yes, Intel has been surpassed, and as they say, the ball is now in your court, so entertain us!
     
    hmscott likes this.
  5. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    i agree on this. no doubt intel held technology back due to lack of competition, but the people who had that idea and led the company back then have now been replaced, at least the CEO. its great to see intel finally step out of that dormant state and give something worth the hype, and sunny cove is the first glimpse of that, then tigerlake, then those archs on the matured 10nm process.

    if AMD can give similar things to what intel can, then im sure most intel users have no issue going AMD. there are just some things that are different.

    @ajc9988 if someone's workload is all multithreading, like video/audio or photo editing, amd is definitely a better choice, by far. cheaper 12-16 cores that are more power efficient, with better SMT performance. on the consumer side, though, those legacy programs aren't getting updated at all, so until we find multithreaded replacements for them, intel is definitely my go-to choice. (throttlestop plays a big part of it too, cause I can do so much with just 1 piece of software)
     
    tilleroftheearth and hmscott like this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    First, don't assume that because Intel has multiple microarchitectures (uarchs) sitting back and being refined, they were holding back. They were FORCED not to release them because they screwed up the 10nm process for which those uarchs were designed. There is a difference! One would allow them to flood the market. Is putting out a single mobile chip flooding the market? Is their internal memo, trying to spin Intel's problems, holding back tech? The answer to both is no!

    This is also why I sing the praises on the uarch side. Those engineers did great. But if you look at old roadmaps, Intel thought EUV lithography would be ready by around 2015 or 2016. Its not being ready was theorized as the primary reason for holding off on 10nm. So we got refresh after refresh on 14nm, something Intel had never done before, and in doing so they gave up the process lead to TSMC. Do you think Intel purposely gave up the process lead to TSMC? Or allowed all other fabs to catch up to or go beyond their densities on purpose? No, they were polishing their turd sandwich.

    This is why I hate that talking point: that Intel was just holding back. It is a BS spin on their problems. It's like a bully on a playground saying they were holding back when a smaller kid cleans their clock. It is often farcical.

    Now, pointing to mismanagement is fine. We know one of their top uarch engineers has directly stated that in 2016, when it became clear 10nm would take forever, Intel should have started work on backporting the new uarch to 14nm. We know they didn't listen, because we still only have Skylake refreshes on 14nm. That says management lacked foresight.

    Moreover, Intel is NOT out of its dormant state. It won't be for years. The 10nm process has limited fab capacity. They barely got Ice Lake yields high enough to commercialize a product, and did so through much turmoil over gutting their 10nm plans. They changed a lot of what was supposed to be used on 10nm just to be marketable. Next year, because the other market where they are losing the market share that affects their profitability most is being assailed, they will have 10nm server chips. Out of server, mobile, and desktop, desktop is the smallest share of the pie. Intel is playing triage. They are trying to stop the bleeding in the most profitable segments, and that is NOT desktop!

    This is why Intel tried saying AMD manipulated the comparison at Computex by pitting 64-core Rome against 28-core Xeons. Intel claimed they should have used the 56-core behemoth. AMD retorted: show us where to buy one on the market (good luck finding them). Intel said they didn't use the AVX-512 optimized code. AMD retorted that it did use that code and set it up the way it is explained on the open source page. This was covered in the Wendell and Ian Cutress interview.


    We could see them struggling when they showed off the 28-core Xeon at Computex the year before, using a water chiller to get 5GHz.

    My point is that just because a company is late to market doesn't mean they were holding back because of a lack of competition. It doesn't mean they were milking the market, even though Intel did with high prices for years. Instead, look at the totality of the circumstances to get to the conclusion.

    Also, you misunderstand what mixed workflows are. How you evaluate which processor is better for which purpose is NOT how you describe it.

    For example, AMD is stronger in multithreaded, specifically heavily multithreaded, workloads. We know Intel is better at single-threaded and lightly multithreaded workloads. I know very few who would argue with that characterization. (Small note: many software packages, like Photoshop, are actually more single-threaded, partly due to the nature of how they function and partly because Adobe hasn't optimized their software enough in a LONG time. But single-threaded and lightly multithreaded is tiller's workload, which is why Intel still generally outperforms on his workloads (not hating, just a fact); whereas software like Premiere is now seeing AMD outperform Intel, because it is more multithreaded.)

    So we then need to look at how the person uses their PC. Do they run a heavy program solo, or do they work on it while it renders, converts, encodes, decodes, exports, etc.? Do they run multiple programs at the same time? How much time is spent on which task (how much gaming time on the machine versus productive app time, or crossover where they game WHILE having the CPU crunch in the background)? Those questions define your workload.

    You then approximate the time spent on each task and break it down into percentages. Once you know that, you look for reviews of similar tasks to approximate the performance you would expect in your own tasks. But you need a little foresight here: not just looking at your current workflow, but at what you may want to change your workflow to with a change in equipment, if that hardware is capable of the change.

    Then you can better examine what the benefits or drawbacks of a platform are. If you think all consumer software is single-threaded, you would be wrong, as many of those workloads scale out to at least 4 cores, if not more at this point. Sure, you need to consider whether you have a legacy program that is still not made for heavy multitasking or modernized to fully utilize the hardware. But those programs are becoming fewer and fewer, especially as new versions come out, or new software appears that performs better at the same task (note: not all that is new is gold; sometimes the old programs better fit into your workflow, switching would take retraining resources, or the new programs lose features the old software had).

    All of this needs to be taken into account. After that, plus breaking down the percentage allocated to each task, plus examining changes in workflow and software used, you weight what is most important in those breakdowns, because sometimes a small task may carry enormous weight, like encoding a video, even if the machine is used for gaming the rest of the time. You care more about what makes the money than about the enjoyment of one's leisure time. Then, after you have weighted it, you examine which product, overall, will best fit your needs.
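    The weighting procedure described above can be sketched as a simple calculation. This is purely illustrative; the task names, time shares, importance multipliers, and benchmark scores are all hypothetical numbers, not real review data:

    ```python
    # Hypothetical illustration of the workload-weighting idea described above.
    # All task shares and scores are made-up numbers, NOT real benchmark data.

    # Fraction of machine time spent on each task (sums to 1.0)
    time_share = {"video_encode": 0.2, "gaming": 0.7, "photo_edit": 0.1}

    # Extra importance multiplier: paid work may outweigh leisure time
    importance = {"video_encode": 3.0, "gaming": 1.0, "photo_edit": 2.0}

    # Relative per-task scores for two hypothetical CPUs (higher is better)
    scores = {
        "cpu_a": {"video_encode": 130, "gaming": 95, "photo_edit": 90},
        "cpu_b": {"video_encode": 100, "gaming": 100, "photo_edit": 100},
    }

    def weighted_score(cpu):
        # Weight each task by (time share * importance), then normalize
        weights = {t: time_share[t] * importance[t] for t in time_share}
        total = sum(weights.values())
        return sum(scores[cpu][t] * w for t, w in weights.items()) / total

    for cpu in scores:
        print(cpu, round(weighted_score(cpu), 1))
    ```

    With these made-up numbers, cpu_a comes out ahead overall even though cpu_b wins the task that dominates raw time (gaming), because the small encoding task carries a heavy importance weight, which is exactly the point being made above.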

    If it is a pure gaming rig, nothing else on it, including no encoding for streaming, then Intel is STILL the go-to. If you use Photoshop, Intel is still the go-to. These are NOT disputed. But if gaming is a secondary purpose, it becomes a question of what is enough performance for gaming while being best for the primary task. AMD closed the gaming margin a fair amount, but is still behind. But it curb-stomps Intel on many modern multithreaded tasks. This is why you have to look at it holistically. Then come platform features and costs, etc.

    Buying a tool like a computer is no simple matter, and marketing at any company makes it HARDER, rather than easier, especially guerrilla marketing. People also choose what is familiar, because they know what to expect, which is seen in your last statement. As you can see, I weigh a lot more. We all do, really, even if not consciously.

    All I can do is give a better framework to help people make decisions. The decision is still theirs. And the consequences of those decisions are theirs (like those poor saps who bought the 7700K over a 1600X, 1700, or 1700X based on reviews at the time with zero foresight; see the 1800X vs 7700K video above, where the 1800X was basically the same performance as the 1700X, re-examining the optimization changes since then and the performance benefits, even in games, over the past two years).
     
  7. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    Core for core, overclock included? Nope. AMD still lags behind. I talk about unlocked chips and taking advantage of that... See e.g. 8-core chips, oc'd 3800X vs. oc'd 9900K. AMD will still perform better if they add the extra needed cores (one step up: the 3900X with 50% more cores). Nothing has really changed. It will change the day AMD offers better overclocking than they have offered with the Ryzens. But as we know, they max out clocks already. Not that Ryzen is a bad chip. It's just what we have today.

    AMD Ryzen 7 3800X @ HWBOT
    Intel Core i9 9900K @ HWBOT
     
    Last edited: Aug 3, 2019
    tilleroftheearth likes this.
  8. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I do not care about a core-for-core comparison, as AMD is not marketing a core-for-core offering. Now, if they offered the same core count at the same prices, your argument would stand, but as you mentioned, they do not and won't for a while.

    Edit: your argument holds even less water if there is a TR3 offering 32 cores at a lower price point than Intel's 16-core i9-7960X.
     
    Last edited: Aug 3, 2019
    jaybee83 and ajc9988 like this.
  9. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Intel has been surpassed? Hardly. I'm not Intel, and I'm here to learn too, not to entertain anyone.

    The most important thing I've learned is to trust oneself. I'm not gaining anything by lying to myself. But others have much to gain by doing so.

    I'm not here to be cool, to run with the pack, or just to feel accepted.

    To me, facts are always real-world results (they just can't be faked, and they include all of the minutiae, without any navel-gazing, circular analysis, or the future actions/predictions that other approaches require).

    I read many hours a day on any and all subjects I care about; learning is not an issue for me.

    However, I do take offense when I am told what to 'know'. :rolleyes:

    Especially when direct experience shows otherwise (which some here seem to be failing to take into account).

    I upgrade the (many) workstations I am responsible for when the total work involved will be recouped by the pre-tested performance increases within a given time period (usually less than a year, and sometimes less than 3 months), and not before.

    Buying an AMD platform right now to even test in my environment is like suggesting we save on 'electric lights' and use candles again instead.

    There are so many things that AMD still needs to get right (today). And not in some future time-frame I'm willing to wait (and bet) on.

    If/when the O/S and the software I use and the workflows I have established are 'many-core' enabled (meaning greater than 8C/16T for the majority of my workstations), then an AMD platform will need to be fully tested. But I have no doubt that by that time, Intel will still be offering the better alternative then too.

    The ball is and has been in AMD's court for a very long time now. They are still learning, and they have a long way to go to be a no-brainer replacement for anything Intel has offered in the recent past.

    It's not a matter of a superior 'score' in whichever benchmark you want to see improved.

    Rather, it's a matter of the entire experience that a new platform gives you (or, in AMD's case; should give you) instead.

    AMD, you've shown up for a couple of seasons and made some great plays. Let's see you win a championship for once too.

    (Yeah, 'once' is correct here, because even when they were obviously the stronger performer ~14 years ago, Intel platforms were still more reliable, ime.)


     
  10. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Not a single expert out there would agree that Intel has not been surpassed. This was not true before Zen 2, but it is now. That you know better than everyone in the world that matters, well, I bow to your superiority, and that isn't a smirk I am wearing (I swear).

    Now, I know others, even though AMD is faster, will hedge on switching over. They should do it slowly and cautiously, if at all. I can respect it if Intel just works for them despite the cost etc. and they want to resist change; that may work for them for a while, or even forever if no upgrades are needed. But to outright ignore the truth, well, for them I lose all respect, even as a novice hobbyist.
     
    ajc9988 likes this.
  11. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I want to note here that Tiller's work involves Photoshop, and Intel still dominates performance in that software specifically. So he may be correct regarding HIS workflow, which isn't representative of the MANY workflows that exist. And he very well may be closeted, not examining any other workflows out there (the head-in-the-sand type).

    Just wanted to give context to his statement.
     
  12. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Far from a novice. Not looking for anyone's respect. The so-called 'experts' out there have their own agenda which I care nothing for.

    Scores and theoretical advancements may show AMD surpassing Intel (which I agree with on most points), but going back to that statement over and over again misses the point I'm making.

    On most consumer workloads, AMD's 'best' is still not ahead of Intel's 'best'. Not 'scores', not theoretically, not biased, no fanboyism either. That is what AMD users/owners have told me and I've seen first hand too.

    The experts out there can keep spouting their opinions. I will believe my own eyes and ears. I have nothing to gain from showing AMD is worse if it's better.

    I'm not simply resisting change either. I'm the first to jump when an obviously better product is presented and I'm able to use it fully (my previous posts in these forums attest to this).

     
    Papusan likes this.
  13. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You are giving no context because I am not comparing my workloads in the statements I made above.

    If you are not following the conversation fully, please refrain from commenting on my behalf, thank you. :rolleyes:

     
  14. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Oh, I am. But ONLY when narrowed to your specific workloads does your statement in any way resemble the truth. I have to either narrow the universe to make your statement true, or it is false.

    If a multithreaded workload is being used, especially since you buy at price points, the 3900X crushes the 9900K in many of those workloads.

    If you are trying to do a core-to-core comparison, ignoring the price differential, you are not making a decision on price to performance. You are ALSO ignoring the workloads where even the 8-core AMD chip beats the Intel chip. Thereby, you are ignoring information that is out there, and not explaining how limited you are on the knowledge that everyone else seems to have gleaned casually from following developments.

    So, I was trying to be nice rather than calling you a liar. Sorry about that.
     
  15. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You are just, as usual, personally attacking me. How can you answer for me here?

    You have your agenda, obviously.

    Stop spreading lies and stop pretending you know what I want to say better than I can.

     
  16. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    When someone has no evidence of the truth, other than to say trust them that, despite the evidence, what they say is true, well?
     
    ajc9988 likes this.
  17. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Did I ask anyone to trust me? Go and find your own platforms to compare against each other. Like I have. :rolleyes:

    I asked almost two years ago for someone to show me a comparison/article of real-world consumer workloads that benefit from an AMD 'many-core' system. Still waiting. Sigh...

    Do you know why there isn't such a thing? Because AMD isn't superior where it counts for the masses. ;)

     
  18. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    upload_2019-8-3_10-4-35.png
    upload_2019-8-3_10-4-58.png
    upload_2019-8-3_10-5-36.png
    upload_2019-8-3_10-6-4.png
    upload_2019-8-3_10-6-26.png
     
  19. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    How are any of those synthetic 'scores' meaningful to a normal consumer's real-world workload? Lol...

    Really grasping at straws here... must be the desperation setting in from believing in fluff pieces by 'experts'. ;)
     
  20. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    upload_2019-8-3_10-7-19.png
    upload_2019-8-3_10-7-48.png
    upload_2019-8-3_10-8-46.png
    upload_2019-8-3_10-10-29.png

    I even included some showing that certain instruction sets in that software will favor Intel if used. I didn't show all the ones Intel won, nor the ones where the 9700K crushed the 9900K and the new Zen 2 CPUs. But you asked for real-world workloads.
     
  21. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, you really ignore that Cinebench is based on a real rendering engine? Or that Corona, V-Ray, and Luxmark are all commercially available rendering engines? Or that the Java and website-design benchmarks are used by professionals in those fields? Or that people use similar benchmarks for encryption or for 3D particle physics? Or that a butt-ton of consumers use Handbrake transcodes for their personal libraries?

    Are you obtuse? To those that use those things, it IS a real, not synthetic, workload!

    Or do I need to show Blender? Or PCWorld or GN excoriating Intel for not calling those real workloads, when the purpose of such a benchmark is to sell the underlying product it is built from? Are you really going to ignore that?

    upload_2019-8-3_10-19-55.png
    upload_2019-8-3_10-21-15.png
    Here is your workload:
    upload_2019-8-3_10-22-12.png

    upload_2019-8-3_10-23-27.png
    https://www.pcworld.com/article/3405567/ryzen-3000-review-amds-12-core-ryzen-9-3900x.html?page=4
    https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/8

    Fine, in the past, you referenced Puget Systems. What do they have to say on it?
    upload_2019-8-3_10-32-7.png
    https://www.pugetsystems.com/labs/a...9th-Gen-Intel-X-series-1529/#BenchmarkResults
    upload_2019-8-3_10-33-18.png
    https://www.pugetsystems.com/labs/a...9th-Gen-Intel-X-series-1538/#BenchmarkResults
    https://www.pugetsystems.com/labs/a...9th-Gen-Intel-X-series-1535/#BenchmarkResults
    https://www.pugetsystems.com/labs/a...-2-Intel-9th-Gen-Intel-X-series-1537/#Results
    https://www.pugetsystems.com/labs/a...-2-Intel-9th-Gen-Intel-X-series-1536/#Results
    All of those say it depends, or that AMD's new CPUs are the clear winner at the price point!

    So please tell me that those are "fluff pieces" by "experts." I'd love for you to say that.
     
    Last edited by a moderator: Aug 5, 2019
    jaybee83 likes this.
  22. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    All of them show that 12 cores do matter (50% more cores help amazingly). Could you find 3700X/3800X vs. 9900K, all highly oc'd? (Both 8-core chips, and unlocked.) Most tests with Ryzen contain a high overclock for AMD but not for the 9900K. It is often shown only at stock clocks or with a 95W cap.

    Just look at Hwbot and CBR-20 etc.
     
    Last edited: Aug 3, 2019
    tilleroftheearth likes this.
  23. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    1) Most production machines are NOT overclocked, meaning if he wants real-world production information, overclocking does NOT MATTER!

    2) You are focused SOLELY on the benchmarks, which is YOUR use case. Intel is still better for benchmarking. Period. I say the same for gaming in general. If those are your sole use cases, go Intel.

    3) It isn't necessarily an OC if it is the out-of-the-box behavior of the CPU. Are you telling me Intel couldn't do an algorithm that boosts the same way AMD's does and scales dynamically on power and temp?

    So stop with the crap on that. It means NOTHING on the point I was making!

    Edit: Also, on the 12-core versus 8-core thing: here, we are talking about buying at price points, and at that price point, that IS the proper comparison.
     
  24. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Keep bringing up unrealistic mass-consumer examples. It really proves how obtuse and blindered you are. :rolleyes:

    Forget about me with my 0.00001% case workloads.

    Forget about all the video-kiddies that need to render moar - they're what, 1% at most of the computer users of the world?

    I'm still asking to see the case where a normal/typical consumer gains by going to the currently half-baked AMD platforms.

    For the millionth time, I've conceded that in those rare cases, AMD should be considered and even chosen for those whose workflows use many cores. Stop harping on that.

    Give me new info, or concede that AMD hasn't been able to touch Intel for those consumers since circa 2005... sheesh! :D

     
  25. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    I just ask what will benefit me overall. I prefer to run unlocked chips oc'd, whether in benches or real life. The same goes for you. You don't buy a binned chip to run stock clocks. Neither do you if you got an OK chip.
     
    tilleroftheearth and ajc9988 like this.
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Are you ignorant? Seriously, those links SHOW what is commonly USED by professionals in purchasing decisions. They even break down under which use cases IN the software each CPU performs better, instead of just overall. Puget Systems designed the standard Photoshop test used by EVERY MAJOR OUTLET!

    It also showed Premiere and DaVinci results in those links, if I'm not mistaken, and DaVinci Resolve is exploding with the YouTube community right now.

    If you cannot understand things and need to be spoon-fed, I could do it, but you'd have to pay me to be your wet nurse.

    Those are not rare cases. So just move on!

    Funny how you present ZERO EVIDENCE to support your statements, yet I gave many.

    And for you, and most of the OC community, Intel is the way to go. I can admit that.

    In fact, with Ryzen, even if you buy a binned chip to get higher boosts, you still need lower temps to scale it. Also, many top OCers said the new Ryzen chips are fun to bench, while also saying Intel is still top for that (specifically referring to LN2 overclocking and overclockers).

    But, aside from that, overclockers are still a minority of all purchasers. I think a couple of years back, only a minority of buyers OC'd their K-series chips, ignoring MCE.
     
    Last edited by a moderator: Aug 5, 2019
  27. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    I don't only talk about that. What will give me the overall best performance (real life) with a highly OC'd 8-core... AMD or Intel chips?
     
  28. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    But that makes no sense, because that 8-core Intel chip costs the same as the AMD 12-core. You want core-for-core comparisons solely for OC and benching. If you truly wanted the overall best performance, the 12-core BEATS the 8-core at the same price point.

    So, you either want the best performance per dollar, which is AMD with the 12-core, or you want the best CPU in a given core-count category, and the 8-core Intel 9900K is currently that CPU. There are some cases where AMD's 8-core wins, but not many. And it is also priced more than $100 less than Intel's chip.
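    The price-point comparison above boils down to simple arithmetic. The scores and the $499 price in this sketch are made-up placeholders for illustration, not real benchmark results:

```python
# Illustrative perf-per-dollar comparison at the same price point.
# Scores and the $499 price are hypothetical placeholders, NOT real data.

def perf_per_dollar(score: float, price: float) -> float:
    """Return a simple performance-per-dollar ratio."""
    return score / price

# Hypothetical multithreaded scores at a shared ~$499 launch price:
intel_8c = perf_per_dollar(score=2000, price=499)   # 8-core chip
amd_12c = perf_per_dollar(score=3000, price=499)    # 12-core chip

# At equal price, the higher multithreaded score wins the value comparison.
print(f"8-core:  {intel_8c:.2f} pts/$")
print(f"12-core: {amd_12c:.2f} pts/$")
```

    At a fixed price, whichever chip posts the higher score in your workload is trivially the better value; the argument only gets interesting when prices differ.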
     
    hmscott likes this.
  29. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,035
    Messages:
    11,278
    Likes Received:
    8,814
    Trophy Points:
    931
    Next-gen Spectre exploits will cut that IPC percentage boost in the near future.
     
    hmscott and ole!!! like this.
  30. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    The AMD chip has Intel beaten badly. I can tell you, even with my 1950X, as a normal user, it destroys the lower-core-count Intel systems, even though the cores are not equal. Under my workloads, individual cores rarely go over 50% load, but there is always a ton of extra cores for adding tasks like compression/decompression, encoding, light gaming, and file transfers, all at the same time. This even though my individual cores are only 75% as fast as Intel's.

    The 3900x and upcoming 3950x and core enhanced TR3 offerings have me very excited.
     
    ajc9988 and hmscott like this.
  31. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Ice Lake is already patched, though. Even if not all of it, it's much more secure than what we've got now, and I don't even patch mine right now lol
     
    Vasudev, tilleroftheearth and ajc9988 like this.
  32. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I find it amazing: benchmarks up to 12nm showed Intel ahead, and they seemed to matter to the Intel fanboys. Now that Zen 2 is showing AMD overall in the lead, those same fanboys are dismissing those same benchmarks. Once they have a glimmer of hope, though, those benchmarks selectively become relevant again.
     
    Last edited: Aug 3, 2019
    saturnotaku, ajc9988 and hmscott like this.
  33. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You do know normal users are not professionals, correct? Your ignorance is showing here, not mine. :rolleyes:

    Yeah, the need for a wet nurse isn't mine here.


     
  34. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I've stated since joining the forums here that benchmark 'scores' mean less than squat to me.

    Still waiting for the AMD fanboys to show AMD superiority for the masses in consumer workloads. Yeah, real-world, not 'scores'.

     
    Last edited by a moderator: Aug 3, 2019
  35. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    There are no such measurements other than benchmarks of real-world apps. You should know this; if not, I am not sure what it is you claim to read up on.

    On the same note, show what workload you use that favors Intel. Can't, huh? You and others always relied on touting the superior numbers and how workflow was faster on Intel; now, though, the numbers all favor AMD.

    If your workflow is at max with what you have, then stick with it. Wasting money is never a good idea. Changing over just to get a 1-second advantage in a system response, where the response only happens once every few minutes, is not worth it.

    When I worked for a telco, we would load 5-10 pages from proprietary servers while working with customers. Since most pages loaded while we were talking, even before the data was needed, a faster system would not have improved workflow speed.
     
    saturnotaku and hmscott like this.
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    You do realize Puget Systems built customized, real workflows for apps like Photoshop, Premiere, After Effects, DaVinci Resolve, etc., right? That you can download their Photoshop workflow for the "benchmark," follow their instructions, and compare your hardware to their numbers? That you can weight each element in it by how often you use it in your own workload?
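    The "weight each sub-test by how often you use it" idea can be sketched like this. The task names and numbers are hypothetical placeholders, not Puget Systems' actual benchmark categories or scores:

```python
# Sketch of combining per-task benchmark results with usage-based weights.
# Task names and all numbers below are hypothetical, for illustration only.

def weighted_score(subscores: dict, weights: dict) -> float:
    """Combine per-task scores using user-supplied usage weights."""
    total_weight = sum(weights.values())
    return sum(subscores[task] * weights[task] for task in subscores) / total_weight

# Hypothetical per-task results for one machine:
subscores = {"filters": 95.0, "export": 110.0, "liquify": 80.0}

# A user who mostly exports weights that task heavily:
weights = {"filters": 1.0, "export": 5.0, "liquify": 1.0}

print(round(weighted_score(subscores, weights), 1))
```

    The same sub-scores can rank two machines differently for two users once each applies their own weights, which is the whole point of publishing the per-task breakdown instead of a single overall number.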

    I mean, I feel like you need a guardian ad litem with the levels of ignorance being shown here.

    And once again, you back up your assertions with nothing.
     
    saturnotaku and hmscott like this.
  37. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    *chuckles* these fresh new cpu flamewars in the forums are a clear sign that we, as consumers, seem to have more choices now than before when it comes to CPUs. thats all that is important, no matter how you choose in the end. :)

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    ole!!!, ajc9988 and hmscott like this.
  38. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Yep, everyone wins. I just re-did some FP/int IPC differences from the Tiger Lake leak; seems like a 9-12% boost over Ice Lake, and Ice Lake is like 15-17% over Skylake.. pretty crazy if you ask me. Intel gives ~30% in 2 years, while over the last 9 years they only gave 20-25%.
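    As a quick sanity check on those numbers: successive IPC gains multiply rather than add, so the two generational ranges compound like this:

```python
# Compound the leaked generational IPC gains: Tiger Lake over Ice Lake
# (9-12%) on top of Ice Lake over Skylake (15-17%).

def compound(*gains: float) -> float:
    """Compound successive fractional gains into one overall gain."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total - 1.0

low = compound(0.09, 0.15)   # 1.09 * 1.15 - 1 = 0.2535
high = compound(0.12, 0.17)  # 1.12 * 1.17 - 1 = 0.3104
print(f"Tiger Lake over Skylake: {low:.1%} to {high:.1%}")
```

    So the 9-12% and 15-17% ranges compound to roughly 25-31% over Skylake, which matches the ~30% figure at the high end of the range.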
     
    jaybee83 likes this.
  39. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    And you do not realize that I have a much better system in place for testing current/new hardware against my workloads: my exact workload itself. :rolleyes:

    A mass-market consumer won't even know to go to that website to download, configure, and properly run that benchmark, which is still very narrowly focused on productivity, not on consuming digital media. :rolleyes:

    You are in dire need of learning to respond directly to questions, not just blindly pushing your own blatant agenda no matter what anyone else's response is.

    Personally, I don't feel a need to compare my systems to anyone else's, therefore any third-party 'benchmark' is effectively null to me. The only criteria for me are that the workflows I have been using a variation of for the last decade+ are faster on any new potential platforms than on any of my previous platforms. Period. There is no ambiguity here. There is no need to translate a 'score' to what may or may not be happening inside my workflows. And there is certainly no need to try to adjust the benchmark to at best crudely mimic my workflows to get a 'score' of my own. That is doing it backward and doing it wrong too.

    You may feel that this works for you. Good luck to you. You have a lot to learn still. Being able to parrot specs, uarch changes, et al. isn't anything special in this day of cut and paste. Even the tech writers do that on every single article they produce...

    But once again; I am not talking about me here. I'm asking about the benefit of an AMD platform for the great unwashed masses. In those cases, for the consumers that buy computers because they need to, (to be online, play iGPU capable games and spend their lives on social media), there is no benefit (performance or otherwise) of buying an AMD system except price/budget.

    And even then, it depends on the specific system(s) discussed, with anything before the most current Ryzen platform being a waste of money (and not any great 'savings,' as many retailers are offering now to get rid of those old junk systems) compared to Intel offerings (even at slightly more cost, in some cases).

    You know something? Spoon-feeding you the same information over and over is getting very tiring. But others have seen your modus operandi now; they will continue to quietly ignore your impressive-looking but low-quality, information-void posts.

    The promise of multi-core can only/mostly be taken advantage of by a very small percentage of people, who are either content creators or video editors (for the most part).

    For the rest of the population, the promise of multi-core is overshadowed by the technological decisions made to create those chips in the first place, which hinder the other, more important and more common compute modes everyone else uses every single day. Something Intel continues to excel at.

    If AMD and Intel were to switch labels, how many would be crying that Intel was paying off the 'industry' as a whole to push their useless multi-core agenda on the masses that don't need, can't use and never will use that kind of focused computing?

    I'm sure it would be the same people that currently find AMD as their personal savior but are blind to the hypocrisy they are propagating to the less informed consumers out there.

    For completeness and for the last spoon-fed line I will offer; for those that do use workflows and programs that benefit from multi-core processors, AMD is a huge boon. I've conceded that many times already. Let it rest.

    But just because the Saturn V rocket is the pinnacle of human/payload transportation, it doesn't mean that every kiddie needs one in their backyard to go to the park to play with their friends too. :rolleyes:

     
  40. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Well, somebody had their Wheaties this morning. So you flip-flop on professionals having a use for the chips, then say consumers don't. Yesterday it was no use for the chips for anyone. Glad you came to see the light.

    You have ZERO information in your posts. You show ZERO comparative data. You also do not seem to have the simplest idea of deployment processes, where a professional can look at the data presented, which then leads to setting up a test bed to check against their own workflows and their own optimizations before purchasing decisions on wide-scale deployment are made. That means you are purposely being bull-headed on the topic WITHOUT adopting industry-standard deployment practices. If anything, it just shows how inept you can be.

    If you weren't so, well, you, I also would have pointed to the deficit of AMD's new processors in After Effects, where, if you are using that in conjunction with other software, AMD may not be the best purchase decision; or that if you already have the 9900K, you likely would not gain enough by upgrading this round unless you need the scaling that the 3900X provides. If buying from the ground up, that is AMD at just about every price point.

    But, since you have no idea on processes, test beds, etc., as is obvious by your statements and tone, it really is a waste of time trying to explain it to you.

    I've spoon-fed you information for years now. I have presented actual data, citations, etc. You provide statements about "your workload" while not sharing that workload fully, except in small bits and pieces. That is fine, except no one has a full frame of reference, which makes your allusions to it less than meaningless.

    As for multi-core, you are wrong. That is what the HU video on the 1800X and 7700K was all about.

    Everyone knows Intel purposely limited us to quad cores on mainstream. Everyone ALSO knows that every time we have added cores (dual core, quad core, etc.), there is a group that always says the same thing: "you can't use those cores, and there is no purpose in buying a CPU with them." But then why did Intel immediately change their position and increase core counts? Why has software changed so quickly to utilize those extra cores, even on Intel platforms? It's because people like you lack insight and foresight.

    It's also why, when UserBenchmark changed its weighting to increase the ST share, making the results on their website ABSURD to anyone with any knowledge AT ALL, both the AMD and Intel communities said it was ignorant.

    And considering the pricing of those benefits and those cores, who gives a flying rat's butt about "need"? You are now saying don't buy it, buy a lower core count for the same price, which, once again, was bad advice 2 years ago and will be bad advice today. If it wasn't bad advice, then explain why Intel is scrambling to put out multi-die chips using EMIB. I'll wait. Yeah, that is because Intel already knows the only way around the issues is AMD's solution, which is why they hired Keller.

    Core counts are going to increase for the next 5 years, most likely. We will get 128 cores on server in the next 2 years, most likely. We will see the 12-core and 16-core chips on mainstream decrease in price and increase in popularity, along with code optimizations to use them better than now. We will see HEDT move further, with more cores and cheaper pricing as well. Do you really think, considering AMD spanked Intel on the server front, that it will not happen again on HEDT, with Intel's up-to-24-core Cascade Lake-X against AMD's Zen 2? Seriously, AMD will have their improved 32-core squaring off against Intel's 16- to 24-core price points, which Intel cannot touch. Or the 64-core HEDT AMD Threadripper versus the 28-core Cascade-X OC Xeon. It's a bloodbath, especially at that level, and especially having already seen the 64-core Rome eat the 2x 28-core Xeons alive.

    Seriously, I'm glad you put on your big boy pants this morning to respond! But you still grossly lack content in your statements. But at least you are bold! LMBO!!!
     
    hmscott likes this.
  41. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Are you serious? You can't even read coherently. There's no point trying to talk to you at all.

    If the below is what you get from my responses, I'd be better off talking to a vegetable instead. :rolleyes:

     
  42. Ionising_Radiation

    Ionising_Radiation ?v = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    @tilleroftheearth, I am honestly quite curious—what exactly is your workflow? What are your workloads such that you find Intel CPUs to be more reliable (and how do you quantify reliability?) and faster?

    Also, to clarify, it hasn’t been 13 years—more like 8, when AMD introduced its Bulldozer architecture.

    AMD was every bit as competitive, and its Athlon and Phenom platforms in the previous decade were every bit as powerful (there’s a reason why 64-bit drivers for devices on Windows are prefixed with AMD64) and reliable (perhaps more so, given Pentium’s overheating and efficiency issues) as Intel’s Pentium.

    Intel’s platform (to use your choice of word) today is so plagued with security issues (never mind performance) that I would think twice before recommending it to anyone doing mission-critical work.

    But that all aside, I am curious about what you do with your computers.
     
    hmscott and TANWare like this.
  43. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Tbh, before Ryzen came along, AMD CPUs hadn't been competitive since the K8 Athlon. Phenom I was a flop, and while Phenom II had significantly higher clocks/cache and gave more physical cores at the same price as Intel, it still lacked IPC compared to Core 2, let alone the Core i series.

    This is pretty funny though seeing Bulldozer regress 5 years to pre-K10 levels:

    [image: benchmark chart, not archived]
     
    ajc9988 and jaybee83 like this.
  44. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Sorry, but Ice Lake does not impress me: [embedded video not archived]
     
    hmscott likes this.
  45. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    lmao, a cancerous regression in IPC pulls back performance, then they hand it back to consumers and claim improvement /s. Where did you get this graph?

    CB15 shows one of the smallest IPC improvements in ST workloads. Then there's that 3D particle benchmark; I'd like to see the ST improvement on that software for Zen 2 vs Zen+, and Ice Lake vs Skylake.
     
  46. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    I didn’t see any performance numbers posted in this video.
     
    tilleroftheearth likes this.
  47. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Here, they increase IPC and lower clocks (sound familiar?): [embedded video not archived]
     
    ajc9988 likes this.
  48. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    The video I replied to was, in short, worthless.

    Probably posted before, but this is more enlightening than the video I referred to: https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm

    Turd chips, I know, but these are only the first chips from the new arch. And notebook manufacturers will cripple it one way or another anyway. Results will be all over the place.
     
    ajc9988 likes this.
  49. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Agreed. I did not originally post this, as I thought everyone would go digging up the numbers themselves and would not look at it as me trying to be biased.
     
    ajc9988 likes this.
  50. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Yeah, the AnandTech article's summary at the end basically points out that the single-digit performance lead might be wiped away if Whiskey Lake-U had the faster memory that ICL-U has. I'd also point out there is a reason Comet Lake is coming to mobile as well (likely as the high-performance mobile chips), which suggests ICL-U is already toward the higher end of what they will achieve in overall performance.

    The same thing was said about Zen 1: turd chips, but only the first chips from the new arch.

    But Intel's architectural decisions were spot on; the process itself, though, seems broken beyond repair. That's likely why they are doing this now, with server next year, and why they didn't have a desktop variant on any roadmap.

    In deference to Intel, though, they have 2 more core architectures either in development or already completed. They just can't port them to 14nm because of the size of the cores, etc.

    That is why I put Intel at a disadvantage to a degree until 2021-22, when Intel goes 7nm. So long as that process is not broken or delayed, it will be quite competitive.

    Another note: Intel still has the performance crown on mobile. AMD is trying to catch up, but using a generation-old core and graphics in their APUs and mobile APUs is really dragging them down in that segment, where their growth was primarily due to shortages of Intel chips and delays in delivery. That is why we are seeing more ARM and AMD offerings this cycle. AMD needs to get more serious about that market segment. They are making great inroads on desktop and HEDT, but to really win mobile, they need to start pairing the current CPU and GPU together rather than last year's, or find a way to move up the release of the newest cores and iGPU after launching those products on desktop.
     