The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    @ajc9988 don't bother man. Not worth the time and energy...
     
    ajc9988 likes this.
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Then stop peddling false information!
     
    don_svetlio likes this.
  3. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    exactly my thought.. ty

    @ajc9988 naw, it ain't false. so from the last couple of posts, this is where we disagree: the claim that skylake-x has lower IPC than ryzen. if it's so much based on software, i guess skylake-x being slower than skylake/kabylake is definitely not because of the new cache rework or the mesh design, NOT. to say it's based only on software is simply absurd, really

    right, you have proved nothing, so keep on talking lol.
     
  4. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    You have to look at your workload and where the system bogs down. For my usage, single thread is loading simple web pages, office apps, etc. Nothing to overload even the 4c I have now. The system is fast and crisp, but where I do get bogged down is multicore encoding and other such tasks. So what I need is a TR or the like that will demolish those tasks much quicker. As it stands, with the clock issues, an 18c Intel does not sound appetizing.

    Now if your workload has single threads tying your system up, by all means get a 5GHz Intel whatever.
     
  5. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I said IPC varies by task; you said it does not. Every source in the world references the scheduler, the instruction set, cache utilization, memory utilization, and other factors that affect IPC BY TASK! It is more than just the scheduler, and if you cannot understand the variance, then you are spouting ignorance!

    " Calculation of IPC[edit]
    The number of instructions per second and floating point operations per second for a processor can be derived by multiplying the number of instructions per cycle with the clock rate (cycles per second given in Hertz) of the processor in question. The number of instructions per second is an approximate indicator of the likely performance of the processor.

    The number of instructions executed per clock is not a constant for a given processor; it depends on how the particular software being run interacts with the processor, and indeed the entire machine, particularly the memory hierarchy. However, certain processor features tend to lead to designs that have higher-than-average IPC values; the presence of multiple arithmetic logic units (an ALU is a processor subsystem that can perform elementary arithmetic and logical operations), and short pipelines. When comparing different instruction sets, a simpler instruction set may lead to a higher IPC figure than an implementation of a more complex instruction set using the same chip technology; however, the more complex instruction set may be able to achieve more useful work with fewer instructions.

    Factors governing IPC:
    A given level of instructions per second can be achieved with a high IPC and a low clock speed (like the AMD Athlon and early Intel Core series), or from a low IPC and high clock speed (like the Intel Pentium 4 and, to a lesser extent, the AMD Bulldozer). Both are valid processor designs, and the choice between the two is often dictated by history, engineering constraints, or marketing pressures. However, high IPC with high frequency gives the best performance."

    Read the Bolded, Italicized, and Underlined portions!
    In fact, it includes architecture as a possible reason, but the reasons go beyond that, as certain architectures can perform certain tasks better. Look at ASICs!
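
    To make the quoted formula concrete, here is a minimal sketch; the per-task IPC numbers are invented for illustration, not measurements of any real chip:

    ```python
    # instructions/sec = IPC x clock rate (cycles/sec), per the quote above.
    # The per-task IPC values below are hypothetical, chosen only to show
    # that one chip has no single IPC number -- it varies with the workload.
    def instructions_per_second(ipc, clock_hz):
        return ipc * clock_hz

    clock_hz = 4.0e9  # a 4.0 GHz part
    per_task_ipc = {"integer-heavy": 2.0, "cache-friendly": 2.6, "memory-bound": 0.9}

    for task, ipc in per_task_ipc.items():
        gips = instructions_per_second(ipc, clock_hz) / 1e9
        print(f"{task}: {ipc} IPC x 4.0 GHz = {gips:.1f} G instructions/s")
    ```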
     
  6. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    yes, workload does matter. but to simply say it's all based on tasks and workload, that is just silly. to have it all work out we need 1. hardware, 2. software. it's based on these. from that point on, the design of the hardware and the optimization of the software will dictate how good it is. the hardware alone also has its own capability, regardless of software.

    i dont even feel like arguing with ajc anymore.
     
    hmscott and tilleroftheearth like this.
  7. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I already said ASICs are designed for specific tasks. What the IPC normalization and the multiple games showing even performance demonstrate is that, on average, there is no IPC advantage, even with Intel's speed increase. Now, if your tasks are optimized for Intel over AMD (meaning the majority of your games and programs still show the IPC advantage in that specific software), then Intel is the way to go. But speaking of Intel having a blanket 7% IPC advantage is not proper at this point, as the IPC advantage has disappeared with optimizations for the architecture. That shows you are ignoring facts on the ground and REAL WORLD PERFORMANCE, something you and tiller constantly point to in order to push Intel. Why do you ignore real world performance now?
     
    Deks likes this.
  8. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    double post
     
  9. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Most current reviews seem to put the i9 lineup's IPC on par with or BEHIND AMD's Ryzen.
    Intel's i9 had a 500 to 700 MHz clock advantage when overclocked (at 4.5 to 4.7 GHz) compared to AMD at a 4.0 GHz OC, and STILL AMD came out identical for the most part in gaming while consuming LESS power.

    So I don't think the IPC claim on Intel's side will hold even with higher clock speeds. And even if it does, you MIGHT gain at best 5% performance over AMD for DOUBLE the cost and a much higher power draw.
    Moreover, overclocking the i9s thus far did NOT prove to provide any substantial performance enhancement over AMD.
    At a nearly 20% overclock past AMD's current maximum, you'd expect a bigger difference, and yet there has been maybe a 10% difference, or none.
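
    (The clock gap, worked out from the figures above as a quick back-of-the-envelope check:)

    ```python
    # Intel's cited OC range vs AMD's cited 4.0 GHz ceiling.
    amd_ghz = 4.0
    for intel_ghz in (4.5, 4.7):
        gap = (intel_ghz / amd_ghz - 1) * 100
        print(f"{intel_ghz} vs {amd_ghz} GHz: +{gap:.1f}% clock")
    # +12.5% and +17.5%; if the observed gaming gap is ~0-10%, the extra
    # clock is not translating into proportional performance.
    ```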

    Also, I doubt that Intel will be topping performance charts anytime soon if its current i9 is any indication.

    Might I ask why you are insisting on going with Intel, considering you would be getting a virtually identical or worse performing system while paying DOUBLE the money?
    Isn't it more sensible to go for the cheaper solution, or to spend the same amount of money on a top-end ThreadRipper 16c/32t, which would provide FAR MORE performance than your current 8c/16t Ivy or even an 8c/16t i9?

    Going with Intel as a new purchase right now really seems... ill advised.
     
    hmscott and don_svetlio like this.
  10. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I think that is going a bit far. What works for one person does not for another. Now, it may not be the optimal system for most, but, as I've said, to each their own.
     
    hmscott and ajc9988 like this.
  11. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah; I already pointed out the 'needs' for an Intel platform today. Server-type workloads need not apply (nor does worrying about a few watts more for even a handful (or two) of systems) - again; that was already addressed by my post and is not relevant.

    Anyone doing commercial-level video or audio rendering - or anything similar (compute-wise, and whether they get paid for it or not) - is already using 12C and higher platforms.

    There is no single core (and single thread) processor today that is worth talking about. And neither is a platform using only a single core/thread (the O/S makes its own demands as well as the (single) program the user may be working with).

    Both platforms will see optimizations over their lifetimes - of course. That isn't what I was pointing out though.

    What I'm pointing out is that the hardware that will be available to the masses in a short time now still has to have the software developers fully behind it to utilize those high core count processors. By the time that happens (and if that happens with current 'consumer' level software)? Those future HCC platforms will make what is introduced today (by both sides) effectively obsolete.

    Buying the proper platform for what actually works today for the intended consumer workflow(s) is much more rewarding in the long run than betting that your consumer level workflows will be optimized to work in essentially server level platforms with such a high performance deficit compared to what is available today. This is not a point that can be disputed.

    Finally; we are not discussing 'all tasks'. ole!!! and I were pretty specific on where Intel still dominates.

    Your post is puzzling because while it seems like you are addressing the points I made; all you're really doing is saying the same thing but from the other (AMD's) perspective.

    The issue with AMD's perspective is that software isn't there yet; for the vast majority of consumers. ;)

    The delayed rewards here come from using an Intel platform that is much more effective today; and then buying the HCC platform that will have been proved to be the more effective solution in the hopefully not too distant future.

     
    Last edited: Jul 24, 2017
    hmscott and ole!!! like this.
  12. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It is better to stop trying to convince him. He's made his choice. But correct him on giving false information to the community on the IPC or clock speed argument.

    Sent from my SM-G900P using Tapatalk
     
  13. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Times are changing, and with the increase in streaming and access to HCC, building one system can become more economical than two, thereby negating your argument on low-hanging fruit.

    This is false if looking over the time of ownership. If you wait, the upgrades will come. Ryzen made game designers and other software engineers start looking at n-core and n-thread optimizations that scale, rather than fixed-set optimizations for quad, hexa, and octo core setups. Mainly, because they were given a clue that 8-core is going mainstream, they now have to support higher scaling; and having it scale without tuning for each core count (which was easier when it was only a couple of chips) changed the game! This isn't guessing, it is a certainty. The question is timeline. Many mainstream programs saw quicker optimizations because of Ryzen. Certain professional software will sleep on it (looking at you, Adobe)! So I will dispute it, as it is ignorant to say, especially in light of industry changes. You ignored all other factors to try to attack one, then threw out the argument. That isn't how this works.
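
    The difference between fixed-set and n-core optimization, as a minimal sketch (the worker function is a hypothetical stand-in for real engine work):

    ```python
    import os
    from concurrent.futures import ProcessPoolExecutor

    def work_chunk(chunk_id):
        # Hypothetical stand-in for one unit of engine/render work.
        return sum(i * i for i in range(200_000))

    if __name__ == "__main__":
        # Fixed-set tuning would hard-code 4 workers, wasting a 16-core chip.
        # n-core scaling sizes the pool to whatever the machine reports.
        workers = os.cpu_count() or 4
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(work_chunk, range(workers * 4)))
        print(f"{len(results)} chunks across {workers} workers")
    ```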

    God, you are ignorant! This isn't about sides! I already said that if the majority of your programs are still on Intel's side, and will remain so for your timeline of ownership, then it makes sense. You are using Intel PR talking points with this last one. Now I'm not just calling you a cheerleader, I'm calling you a paid troll!
     
  14. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    The minute someone responds with indisputable facts; you start name calling. Sigh...

    Piling on loads of barely relevant 'facts' doesn't change the final outcome.

    For most consumer workloads, a HCC (10C and higher...) platform today and for the immediate future holds only promise; not benefits. You're welcome to bet with your $$$$ - but you're not doing others any favors by promoting the platform most suited to you when their workloads more than likely don't match yours.

    IPC 'scores', 'proof' via gaming videos, and other attempts to undermine the fact above are but feeble attempts at defending spending equal if not more $$$$ for equal - and in most current and immediate-future workloads, less - performance.

    I'll repeat again: those that already have software that can leverage many cores (>6C/12T) are already using platforms that can do so. What AMD has promised those users is more cores, more cheaply. Great. No argument at all.

    But; for the highest percentage of the consumer segment that runs normal consumer/prosumer programs, getting those extra cores comes at a high cost: lower real world performance with the O/S and software available today. Period.

    The usage cases you quote are not as widespread as you seem to think. Nor are your expected benefits of HCC platforms going to come as quickly either for the same majority of users I'm talking about.

     
    ole!!! likes this.
  15. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    What indisputable facts? You provide assertions already shown wrong with real world results that have been posted in the forums in the last week, meaning your assertions are false. You have ZERO facts.

    As to the "most consumers", things change quickly once the public gets its hands on it. You saying that is like saying the toaster oven isn't suitable for consumers when it first came out, or the pressure cooker, or the slow cooker, or any tech. Use is found quickly in the market by the consumer, not by your assertions! I've already detailed numerous times when Intel is better and when AMD is better, not looking at my woarkloads, but facts on the ground. If a person is considering a 10-12 core Intel, then it is perfectly fair to compare that performance to the 12 and 16 core AMD chips. If they are limited to 4GHz, as assumed, and still perform as well in single core comparing Intel at 4.7 and AMD at 4, then the extra cores help when multithreaded and win with 50% of the uses at single core. This isn't hard to understand to then recommend AMD chips at similar prices but that will give better performance over the life of the chip, improving with time. Once again, do you not understand facts? I gave the caveat on specific programs utilizing it better, then lean on Intel due to use. But that is becoming a minority with Ryzen optimizations.

    What are you talking about, spending more money for equal performance? AMD COSTS LESS THAN INTEL! I will say it again: it depends on your program, and IPC varies from task to task. Because of that, all you can do is average across different tasks on the platform to get the normative IPC, which one person has done, showing that AMD Ryzen is now equal to or better than Intel, even at single-threaded tasks in some cases. Know your programs and uses, and buy for that. You make blanket BS statements saying Intel is absolutely better, but that is a false statement.

    This lower real world performance is based on what facts, exactly? Because you have not shown it, and what you have shown is Adobe products that still favor a quad core. Seriously, you are tiring. What Ryzen showed is that it varies software to software, which gets back to IPC varying by task and software optimizations, which gets back to you making generalizations that no longer apply.

    You only cite programs where a quad core wins, yet you accuse me of citing only use cases that are not widespread. I took low-hanging fruit as an example and presented it as low-hanging fruit, not to mention that streaming and e-sports are among the fastest-growing uses in the consumer sector; and you accuse me of narrow use cases, mister PS and LR, which are just as niche by comparison as streaming. You are not pointing to anything on the growth of consumer uses, instead only pointing to current uses of the hardware. That shows how asinine this conversation is!
     
  16. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    i play 1 game, i dont care about fps as long as its playable. i care about how fast my firefox loads, how quick my video finishes encoding, how quick my ram disk loads and how snappy my system is. those are what i call real world performance, and they may well differ from yours or the reviewers'. gaming benchmarks? pfffff


    broadwell is around 3% faster than haswell in an average ST workload
    skylake is around 3.5% faster than broadwell with the same test method

    I think ryzen is slower than broadwell at the same clock in ST workloads; i use cinebench R15 ST as my base reference point. if you're telling me skylake-x is slower than ryzen or broadwell, i'd disagree. if you say skylake-x is slower than those in SOME specific areas because of the new mesh/cache redesign, i'd agree. simple as that.
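
    (compounding those cited numbers, just as arithmetic - nothing here is measured:)

    ```python
    # Compound the cited generational ST gains (illustrative arithmetic only).
    haswell = 1.000
    broadwell = haswell * 1.03     # ~3% over Haswell
    skylake = broadwell * 1.035    # ~3.5% over Broadwell
    print(f"skylake vs haswell ST: +{(skylake - 1) * 100:.1f}%")  # ~6.6%
    # So if Ryzen sits near Broadwell per clock (the claim above), it trails
    # Skylake by roughly 3.5% in this Cinebench R15 ST frame of reference.
    ```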

    what people claim is that skylake-x has lower IPC than ryzen. so vague, so inaccurate.


    took the words right out of my mouth. you are very good at phrasing, but i'm not so good at putting my thoughts into words, so i just give up trying to explain. i guess they wouldn't be my words to begin with haha
     
    tilleroftheearth likes this.
  17. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    why do we need to point things out about the growth of uses by consumers? if they wanna know what's best for their own scenario, they can figure it out.

    and also, we did. in fact, i mentioned MULTIPLE TIMES that the majority of software i use is legacy software that will never get updated/optimized; there are reasons for my intel purchase. i even further stated that if not for those reasons, along with storage performance, i would not have gone intel this time around. yet, ignoring all those things i've listed, you simply bring up pricing this, pricing that. LMAO i know intel costs more, and it doesn't change the fact that it's still faster in ST scenarios.

    the things you claim we're not doing are actually what you didn't do yourself. stop glorifying AMD, dude. i'm a pro-AMD fanboy, but i can spot bs when i see it and make good choices for myself.
     
    tilleroftheearth likes this.
  18. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Before, you presented the higher single-thread and IPC numbers for gaming. We don't know your specific game. You say for browsing, but the difference between Ryzen and Intel in browsing would be minimal at best. Video encoding depends on whether you use GPU acceleration; if not, the equivalently priced 16-core will likely encode faster (mileage varies by the program used and software optimizations). The load on the ram disk will need to be tested AFTER the 1950X has been reviewed, as it has not been yet, and extrapolating from a Ryzen 7 with dual channel memory, without the later platform changes, is irresponsible. Snappiness, same. Those are real world variables, most of which are either a wash or unknown currently. So it doesn't make sense.


    This I agree with, EXCEPT FOR the bolded area. Yes, writing the software to better utilize the cache structure will give some performance gains, but little else can be expected for the architecture. Generally, though, I do agree with this statement, so long as IPC is limited to specific tasks, as you did with cinebench R15.



    This is you two trying to pat each other on the back to marginalize someone you cannot bulldoze. An echo chamber doesn't mean you are right.
     
  19. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    This is a BS statement. You are making generalized statements of performance. Because of that, you are presenting false information as if it applied to all use-case scenarios. I told you I'm not trying to convince you to change your decision; I'm correcting false statements. Your generalized statements say Intel holds it in all use cases. Others and I have disproved that. Do what you want with your money, but limit your claims to your uses. State those uses and stand strong in your decision. I don't give a crap what you buy. I've said Intel has a place. I'm not glorifying AMD; I am stating that this round, AMD has it. I spell out where Intel is still king, but also update that as optimizations roll in. Can't you understand that? Oh, wait, you can't, because you say Intel is still king.
     
  20. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    These indisputable facts:

    See:
    http://forum.notebookreview.com/thr...399-xeon-vs-epyc.805695/page-86#post-10566979


    Which many (all?) HCC advocates here simply ignored.

    I am not doing the online-rag thing of comparing ~10C to ~10C, or $$$ to $$$, or any other misconstrued comparison.

    I am comparing the actual workloads most real people have, using real software, today. HCC from anyone's camp right now makes no sense at all.

    Your analogy is faulty too. The consumer cannot make a HCC platform better on their own; the software developers (and specifically the developers of the software they actually use...) need to do that. This won't take days, weeks or months. Years and years will pass - if it is even possible for those serial workloads.

    AMD costs less than Intel when you compare 'cores'. AMD is more expensive in the performance that can be extracted, today.

    I am tiring you because my workloads/workflows mimic Adobe products, which also happen to thrive on lower core counts with high-frequency/low-latency monolithic cores? And you're tired of trying to defend the indefensible position that this also mimics most consumer workloads today? Sorry.

    The growth of the consumer isn't defined by what they do with the hardware. All they can do is install the software they need and run it.

    When/if (I'm hoping towards when too...) all software can be made to take advantage of HCC platforms AND those same platforms don't give anything up to lower core count platforms with more speed... then we'll be on the same page.

    Right now? I'll buy and recommend the platform that will give today's software (and single threaded reality) a real productivity boost. Not spend real $$$$ hoping that some developer somewhere can help me in the indeterminate future.

    My gut feeling says: if there were a way to utilize more cores in today's most prevalent workloads, we would have seen it already.

    If AMD proves to be the catalyst that shows the above can be done, great. But in the meantime, it doesn't change which platform is undoubtedly the best one today for the majority of the 'consumer'/'prosumer' worlds - including me.

    Regardless of the facts that you like to extrapolate from hardware we both haven't seen yet (at least; not in our respective workloads).

     
    ole!!! likes this.
  21. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    'Ill advised' was meant in terms of the price/performance ratio (I should have clarified that).
    You are correct that what works for one person won't necessarily work for another, but when the majority of tests indicate an AMD system would provide LARGE cost savings while offering the same performance at lower power draw vs the overclocked Intel system, what conclusions are you left with?
     
    ajc9988 likes this.
  22. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Ok, enough fighting! Eventually, let the workloads and benchmarks lie where they will. Intel is in trouble where 95% of users and their workloads are concerned. The other issue is, for the 5% that might perceive a slight difference: is it worth the extra cost? If the page loads in the blink of an eye, do we care whether it is loaded just before the eye opens or is ready well before we can see it?
     
    Rage Set and ajc9988 like this.
  23. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Insults back and forth, etc., will again not be tolerated, and I am not picking apart arbitrary trolling in the posts, unless you guys just want to shut it all down now!!!!!!!
     
    Papusan and ajc9988 like this.
  24. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    @TANWare - I'm honoring what you asked, but can you do something about these two guys? Trying to correct their statements is exhausting.

    Edit: Just read your post. Sorry and thank you!
     
  25. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    all in a few weeks' time, benchmarks will start flowing out. too bad no one is gonna be testing turbo boost 3.0 :( im very eager to find out how it works and how much it benefits me. say 8c at 5ghz vs 16c at 4ghz, but with 2 cores hitting 5ghz when needed, that type of scenario.
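
    (a toy model of that exact scenario - the split between 2-core-bound work and fully parallel work below is made up, purely to frame the question:)

    ```python
    # Toy model: part of the job runs on at most 2 cores, the rest scales
    # across all cores. Units are arbitrary "work per GHz-core-second".
    def total_time(two_core_work, parallel_work, cores, two_core_ghz, all_core_ghz):
        return two_core_work / (2 * two_core_ghz) + parallel_work / (cores * all_core_ghz)

    scenarios = {
        "8c @ 5.0 GHz": (8, 5.0, 5.0),
        "16c @ 4.0, 2 cores boosting 5.0": (16, 5.0, 4.0),
    }
    for name, (cores, boost, allc) in scenarios.items():
        t = total_time(100, 400, cores, boost, allc)
        print(f"{name}: {t:.2f} time units")
    # With this split, the 16c part wins the parallel phase and ties the
    # 2-core phase -- which is what Turbo Boost Max 3.0 is selling.
    ```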
     
  26. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Agreed, TB 3.0 could come in very handy with legacy apps.
     
  27. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Stop trying to correct. If there is a flaw in their logic most will see it for what it is. Again let the BM's and workflows lie where they will. Once the various silicon hits the streets word will get out soon enough.
     
    Papusan, tilleroftheearth and ajc9988 like this.
  28. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Today's workloads won't change for most consumers - even once the silicon hits. That may change once developers have enough time (my prediction: years) to rework their programs though (of course).

    To buy something today (e.g. Betamax) because it is theoretically superior? Hindsight says that's not good enough.

    When (and if) the programming/software has caught up in gen2 or even gen3 of what AMD has started with a bang this year; then it will be go time. Wishing it to become so over the expected lifecycle of any platform you can buy today or in the near future is not looking at the overall picture very objectively at all.

     
    ole!!! likes this.
  29. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Actually, no. If we went by this philosophy, the advent of quad, and even octa, core CPUs would still be just a notation in the pages of computer development. You need an operational base of hardware for the software to support. It is the chicken before the egg; I mean, why develop for something that does not exist in the wild?
     
    ajc9988 likes this.
  30. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    It also promotes a chip monopoly: developers would never see more purchases of the other platform, so they would not develop for it, forcing it to die off due to lack of adoption, then allowing monopolistic pricing, etc. Wait, where have I seen this before? I think it had something to do with an antitrust lawsuit over actions a decade ago, but I'm not sure...

    Also, software is already being optimized for it and has made large leaps since March. So, by this logic of waiting years (which won't be necessary for most consumer products, but may be needed with slow companies like Adobe), no one should buy it until then - and then it will hopefully be dead because no one adopted it.

    Also, Beta was empirically superior. That is NOT theory, it is fact. But it was also priced too high, and the cheaper standard won out. Meanwhile, when you look at Blu-ray and HD-DVD, the superior platform won out (Edit: both were priced similarly, with BR being slightly more expensive but offering 40% more capacity). When you look at Intel versus AMD a decade ago, the inferior product won because of illegal market activity. Now, AMD may have the superior product AND the lower price. If true, that means (looking at the historical record) that AMD should win out, unless Intel acts illegally. Hmmm. Thanks for the history lesson!
     
    Last edited: Jul 24, 2017
  31. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    I think what some of us are trying to say is that with all the problems of the i9 x299, all the promise of ThreadRipper and Epyc releasing so soon, and the months of delay waiting for the top i9 CPUs to release, it's a good idea not to rush in and buy X299 build parts on first availability; wait a couple of months to see how it all shakes out.

    Many things can change and swap back and forth several times between Intel and AMD CPUs over the next 6 months. :)
     
    Papusan, don_svetlio and ajc9988 like this.
  32. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    IDK if I'd call it false information, maybe information colored through Intel-filtered glasses :)

    So far he's more wishful than falseful in his posts. Thanks for that @ole!!!
     
    don_svetlio likes this.
  33. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I agree 100% with your statements below.

    What I don't agree with is paying a performance penalty in the meantime, for today's workloads, to support something that may or may not happen in the near or even medium-term future for most workloads. But I'm glad others feel differently and will provide this HCC 'base' that developers need to make HCC platforms as suitable as they can be for even mere mom/pop consumers too. It will take many more years though - even with the hardware appearing 'overnight', as some here think.

    See:
    http://www.anandtech.com/show/11549...iew-the-new-single-thread-champion-oc-to-5ghz

    See:
    http://www.computerworld.com/articl...essors/cpu-architecture-after-moores-law.html


    What I get from the above is that gaming has stopped being a viable 'bm' to consider - unless you happen to be playing the exact game on the exact platform with the exact O/S, driver, etc. revisions... (I really 'hate' it when they 'standardize' on a bm'ing platform that quickly falls out of relevance to what the rest of the world is actually doing/using).

    I also get that HCC platforms are for very specific workloads and are a very small minority of users (at least today). Those that have those workloads already had options before; with AMD, they now have more and cheaper options. Doesn't mean that it is the best choice for everyone though... not by a long shot.

    I believe the hardware vendors got pushed to create more than one core, so long ago, by the software developers. Maybe there just isn't any more push? (It has been at least twenty years of HCC promises that have for the most part gone unfulfilled - from the developers' side - there were many options even a decade ago when they could have optimized and worked on their projects just as well as today, yet here we are with single core performance still being the determining aspect of most consumer and workstation user workloads.)

     
    ole!!! likes this.
  34. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    The fact that Beta was superior isn't the point. The point is that if you had bought it, you would still have had to buy the other.

    What software, and what large leaps, have been made since March of this year with regard to a HCC platform? Please don't show me silly synthetic 'scores'. Show me real world workflows and workloads affected.

     
    ole!!! likes this.
  35. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    7700k vs. 7740x vs. 1800x vs. 7820x
    What's The BEST CPU for Gaming with 1080 Ti's in SLI?
    Long intro, benchmark results start at ~8:15...
     
    Last edited: Jul 24, 2017
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    "Show me data that I know hasn't been put into any reviews and that I can marginalize as a fixed workload even if using a specific program." Everything, to varying degrees, is synthetic, especially in reviews. What I can show you is the improvement of FX stomping on SB due to the optimizations for Ryzen, but that is in games, so doesn't match your criteria. I could show you games that added scaling, but that is a before and after look, as many reviews do not have the before and after. I could point to PCMark 10, but all they have is the performance from pre-ryzen optimizations, not the full journey, something literally fixed within a couple weeks of release. But none of this meets your qualifications, because reviewers DO NOT SHOW THAT INFORMATION! What you point to is software with preset image and settings run on the same program with windows, all best suited for quad cores.

    And I believe my context made it clear: I showed Beta lost on price, not performance. You had a $300+ Beta deck going against a market flooded with VHS at about $100. The fact that VHS had enough performance while coming in at a drastically reduced price is why Beta lost. You intentionally tried to use Beta as a way to show not to bet on new tech, but then failed to disclose why it failed. Here, you have Ryzen with enough performance - more relative to its price point, and likely the same compared to a 16-core costing 70% more - and yet you chose the Beta analogy? Do you not get what you said? I think you hoped no one would actually look at the underlying reason Beta failed, instead trying to say don't bet on something you don't know. Considering the programs can be used on both CPUs even if not optimized, that also breaks down the Beta analogy you made. Do you really want me to keep going?
     
    Last edited: Jul 24, 2017
  37. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    tiller's point is rather clear. server and enterprise is where all the money is; they can get optimization very quickly. for the consumer side of things, it really comes down to ease of use, UI, and sets of features*; rarely is it about how fast one piece of software is over another, unless the difference is massive and slow enough to inconvenience the user.

    @ajc9988 take avx for example: "Intel demonstrated a Sandy Bridge processor in 2009, and released first products based on the architecture in January 2011" - it came out in 2011, and it's 2017, so we're going into the 7th year. just how many consumer programs used by non-techies use avx? we can probably count them on two hands, really. so as you can see, not a lot uses avx, which has been out forever. will the consumer side get optimization? sure, but how long?? is waiting 1 yr acceptable? what about 5 yrs after you have purchased ryzen? what about longer?
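
    for what it's worth, part of why adoption lags is that software has to detect and explicitly opt in to AVX code paths. a minimal sketch of just the detection step (Linux-only, reading /proc/cpuinfo; other OSes need a different mechanism):

    ```python
    def cpu_has_avx():
        """True if /proc/cpuinfo lists the 'avx' flag (Linux only)."""
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags"):
                        return "avx" in line.split()
        except OSError:
            pass
        return False

    print("AVX supported:", cpu_has_avx())
    ```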
     
    Papusan, tilleroftheearth and hmscott like this.
  38. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    I Switched to Ryzen for Gaming & Editing. Here's Why.
    ( Choosing x299 7900x vs AM4 1700x )
     
    Last edited: Jul 25, 2017
    ajc9988 likes this.
  39. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    So, besides the rant on the Beta analogy (on which you have some good points, granted) - you have nothing to show me to support your statement that programmers suddenly saw AMD's high core count platforms and knew they could suddenly make legacy programs better. (Just what was it that stopped them from doing the same with Intel's offerings?)

    All the performance improvements you claim still don't beat Intel's offering (see the AnandTech article I linked to above), in an overall sense.

    If the PCMark 10 (?) 'scores' were literally fixed within weeks, why can't you show me exactly? And if you can't show me, how can you claim they were fixed? ;)

    Creating or remodelling existing software to be more parallelized isn't something that can be pulled out of a hat. Even if some parts of the workflow are parallelizable, it usually doesn't affect the whole process (i.e. the 'workflow') to a great degree.
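
    That intuition is essentially Amdahl's law: the serial fraction of a workflow caps the whole-job speedup no matter how many cores you add. A quick sketch with illustrative fractions:

    ```python
    # Amdahl's law: speedup = 1 / (serial + parallel/N).
    def amdahl_speedup(parallel_fraction, cores):
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for p in (0.50, 0.75, 0.95):
        print(f"{p:.0%} parallel: 8c = {amdahl_speedup(p, 8):.2f}x, "
              f"16c = {amdahl_speedup(p, 16):.2f}x")
    # Even a 75%-parallel workflow gets only ~3.4x from 16 cores; the serial
    # remainder dominates, which is the point about whole-process gains.
    ```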

    From the AnandTech article:
    Yeah; WinRAR again - but notice Agisoft too. Even with double the cores for AMD, Intel is still the one to beat. At triple the cores, the tables get turned. But at that processor level (i5 and Ryzen 5) - I think we can both agree nobody is interested in those results here.

    More cores is great in a general sense - even better if double the cores come at less $$$$ too - like AMD is now offering us.

    But that is greatly offset by real world workflows and workloads from some of the best minds in the programming world that simply cannot use >6C/8C platforms effectively today. Parallelization isn't a right or a given in all workloads or even most workloads today.

    It is inherent in the data and the manipulations needed to achieve the expected outcome.

    As I alluded to before; beginning ~20 years ago I had the same conversation about workstation class computing with more than a handful of cores. Nothing has changed in between then and now.

    Read the two links I provided originally in the post above. Not everything is synthetic - even in reviews. The 7z result is a canned bm 'run'; even if I used 7z, I would not use the internal bm to compare to my current systems... WinRAR is actually doing real work, including using the storage subsystem just like anyone would use it. Same for the Agisoft real world software test run. No benchmark can compare to just using the software (full install) exactly as it was supposed to be used - with real world data used and created too.


    So once again; I'm showing you why my stance is logical and my conclusions correct.

    I welcome any response explaining why I'm not - backed with actual proof this time.


     
    Last edited: Jul 25, 2017
  40. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah. :)

    If I got a 16C/32T platform or higher - I would hire the talent needed to extract the maximum performance from that investment.

    That is the programming talent to make my workloads fly on the hardware I chose - not the talent to put it together (nor O/C it, btw...).

    Like many articles have stated about Epyc; the companies that buy those types of systems and need them by the truckload already have that talent, within the company. More cores does equal more performance for them.

    But for mere consumers (I'm including myself in this aspect...) that mostly buy off the shelf software? Small sliver of a chance of that happening 'in house'.

    Why can't MS make its O/S fully core-unlimited? Because right now; it can't. I believe most consumer software is in the same boat.

    But, no problem; here are the 'chickens'... let's see if they can make some eggs too. ;)

     
  41. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    THIS^^^
     
    ajc9988 and hmscott like this.
  42. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Last edited: Jul 25, 2017
    Papusan and hmscott like this.
  43. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    temp00876, Papusan, ajc9988 and 2 others like this.
  44. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Don't be so hasty!!! We still have the 14, 16, and 18 core variants!!!

    In other news, have you seen that the VRM heatsinks on the X399 boards now have a heat pipe to offload some of the heat over to the area by the I/O?
     
    hmscott and don_svetlio like this.
  45. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    I know, right? I'd be tempted to think that X399 isn't being rushed out the door like X299 :confused:
     
    ajc9988 likes this.
  46. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    My concern is the 300 MHz off the boost per dual core added. At 14 cores we would be at 3.7 GHz, at 16 cores 3.4 GHz, and at 18 cores 3.1 GHz. I am not sure that at those speeds the 18 core will be a TR killer. Prior to Skylake-X they had the low-core-count, high-clock space to themselves, but now even that landscape is changing.
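
    (spelling out that extrapolation - the rumored 300 MHz off the boost per two cores added, anchored at the 12c/4.0 GHz the figures above imply; speculation, not spec sheets:)

    ```python
    # Rumored scaling: -300 MHz boost per 2 cores added beyond 12c @ 4.0 GHz.
    base_cores, base_ghz = 12, 4.0
    for cores in (14, 16, 18):
        boost = base_ghz - 0.3 * (cores - base_cores) / 2
        print(f"{cores}c: ~{boost:.1f} GHz boost, {cores * boost:.1f} core-GHz")
    # 51.8 -> 54.4 -> 55.8 core-GHz: each step up in cores buys less and less
    # aggregate throughput, hence the doubt about an 18c "TR killer".
    ```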

    It seems that, to have competitive machines out there, they have taken one step forward and two or three backwards at the same time.

    Edit: if you give much salt to CPU-Monkey, it has the 1920X TR and the i9-7920X as pretty close.

    http://www.cpu-monkey.com/en/compare_cpu-intel_core_i9_7920x-759-vs-amd_ryzen_threadripper_1920x-757
     
    Last edited: Jul 25, 2017
  47. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I wouldn't necessarily go that far. The Xeon Gold 6154 is an 18 core chip that reaches 3.0. That should mean the Skylake-X parts will be 3.2-3.7, probably around 3.3-3.4. But an all-core overclock likely will not beat AMD all-core at these numbers. With AMD also having better multithreading, it looks like those will not get much market share, with the 10-14 core chips being the point where it makes sense, if needed. 14 is limited, but may have a place considering you can usually get 200-400MHz over boost, unless the HCC chips get clocked closer to thermal limits.
    The X399 boards likely got the redesign that was planned for X299 after the problem was found there, so don't judge too heavily yet.

    Sent from my SM-G900P using Tapatalk
     
  48. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, the x399 boards are getting the benefit of longer testing and of finding the x299 flub-ups.

    It's still worth checking the VRM cooling on whatever x399 board you want to get, to make sure they got it right.

    I couldn't attend the AMD webcasts; did anyone see the x399 presentations? Anything significantly new besides the VRM cooling?
     
    Last edited: Jul 25, 2017
    ajc9988 likes this.
  49. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The motherboard presentation happened this morning, but with limited detail. The MSI board was shown off, which is new and has a massive VRM heatsink instead of the heat pipe to the I/O like the other boards (ASRock, Giga, Asus). ASRock has two boards; the rest have one each. Pre-orders start in 24 hours. Let's hope it isn't a mess like it was for Ryzen 7 and that they actually made enough boards for the platform!
    https://videocardz.com/71100/msi-showcases-x399-gaming-pro-carbon-ac
    https://videocardz.com/71145/gigaby...per-motherboards-soon-available-for-preorders
    http://wccftech.com/amd-asrock-msi-gigabyte-x399-motherboard-ryzen-threadripper-cpus/

    I didn't quickly find a slide like this from Gigabyte, and we've posted Asus recently (although I'll look again for Asus).
     
    hmscott likes this.
  50. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Two things: where is that ROG Zenith board, and what are the prices? I need to know how little I will have left. :(
     
    Rage Set and ajc9988 like this.