@ajc9988 don't bother man. Not worth the time and energy...
-
don_svetlio In the Pipe, Five by Five.
-
-
@ajc9988 naw, it ain't false. So from the last couple of posts, this is where we disagree: Skylake-X has lower IPC than Ryzen. If it's so much down to software, then I guess the reason Skylake-X is slower than Skylake/Kaby Lake is definitely not the new cache rework or the mesh design, right? NOT. To say it's based only on software is simply absurd.
right, you have proved nothing, so keep on talking lol. -
You have to look at your workload and where the system bogs down. For my usage, single thread is loading simple web pages, office apps, etc. Nothing to overload even the 4C I have now. The system is fast and crisp, but where I do get bogged down is multicore encoding and other such tasks. So what I need is a TR or the like that will demolish those tasks much quicker. As it stands, with the clock issues, an 18C Intel does not sound appetizing.
Now if your workload has single threads tying your system up, by all means get a 5GHz Intel whatever. hmscott, jaug1337, don_svetlio and 2 others like this. -
" Calculation of IPC[edit]
The number of instructions per second and floating point operations per second for a processor can be derived by multiplying the number of instructions per cycle with the clock rate (cycles per second given in Hertz) of the processor in question. The number of instructions per second is an approximate indicator of the likely performance of the processor.
The number of instructions executed per clock is not a constant for a given processor; it depends on how the particular software being run interacts with the processor, and indeed the entire machine, particularly the memory hierarchy. However, certain processor features tend to lead to designs that have higher-than-average IPC values; the presence of multiple arithmetic logic units (an ALU is a processor subsystem that can perform elementary arithmetic and logical operations), and short pipelines. When comparing different instruction sets, a simpler instruction set may lead to a higher IPC figure than an implementation of a more complex instruction set using the same chip technology; however, the more complex instruction set may be able to achieve more useful work with fewer instructions.
Factors governing IPC
A given level of instructions per second can be achieved with a high IPC and a low clock speed (like the AMD Athlon and early Intel's Core Series), or from a low IPC and high clock speed (like the Intel Pentium 4 and to a lesser extent the AMD Bulldozer). Both are valid processor designs, and the choice between the two is often dictated by history, engineering constraints, or marketing pressures. However high IPC with high frequency gives the best performance."
Read the Bolded, Italicized, and Underlined portions!
In fact, it includes architecture as a possible reason, but the reasons go beyond that, as certain architectures can perform certain tasks better. Look at ASICs! -
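To make the quoted relationship concrete, here is a minimal sketch in Python; the IPC and clock figures are made-up placeholders, not measurements of any real CPU.

```python
# Instructions per second is roughly average IPC multiplied by clock rate,
# as the quoted passage states. All numbers here are illustrative only.

def instructions_per_second(ipc: float, clock_hz: float) -> float:
    """Approximate throughput from average IPC and clock frequency."""
    return ipc * clock_hz

# A high-IPC/low-clock design and a low-IPC/high-clock design can land in
# the same place, which is the quoted passage's point.
design_a = instructions_per_second(ipc=2.0, clock_hz=3.0e9)  # 6.0e9 inst/s
design_b = instructions_per_second(ipc=1.5, clock_hz=4.0e9)  # 6.0e9 inst/s
print(design_a, design_b)
```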
I don't even feel like arguing with ajc anymore. hmscott and tilleroftheearth like this. -
Deks likes this.
-
double post
-
Intel's i9 had a 500 to 700 MHz OC (at 4.5 to 4.7 GHz) in comparison to AMD at a 4.0 GHz OC, and STILL, AMD came out identical for the most part in gaming while consuming LESS power.
So, I don't think the IPC claim on Intel's side will hold even with higher clock speeds. And even if it does, you MIGHT gain at best 5% performance over AMD for DOUBLE the cost and a much higher power draw.
So far, overclocking the i9s has NOT provided any substantial performance advantage over AMD.
At a nearly 20% overclock past AMD's current maximum, you'd expect a bigger difference, and yet there has been maybe a 10% difference, or none.
Also, I doubt that Intel will be topping performance charts anytime soon if its current i9 is any indication.
Might I ask why you are insisting on going with Intel, considering you would be getting a virtually identical or worse-performing system while paying DOUBLE the money?
Isn't it more sensible to go for the cheaper solution, or spend the same amount of money on a top-end ThreadRipper 16C/32T, which would provide FAR MORE performance than your current 8C/16T Ivy Bridge or even an 8C/16T i9?
Going with Intel as a new purchase right now really seems... ill-advised. hmscott and don_svetlio like this. -
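Here is the back-of-the-envelope arithmetic behind that claim, as a rough sketch; the clocks come from the post above, and the scaling assumption (performance proportional to clock at equal IPC) is a simplification, not how real workloads behave.

```python
# Compare the quoted clock advantage against the observed performance gap.
# Assuming performance scales linearly with clock at equal IPC is a
# simplification used only to frame the argument.

intel_clock_ghz = 4.7   # quoted Intel OC
amd_clock_ghz = 4.0     # quoted AMD OC

clock_advantage = intel_clock_ghz / amd_clock_ghz - 1
print(f"Clock advantage: {clock_advantage:.1%}")         # ~17.5%

observed_gap = 0.10     # the ~10% gap claimed in the post
implied_per_clock = (1 + observed_gap) / (1 + clock_advantage) - 1
print(f"Implied per-clock difference: {implied_per_clock:+.1%}")  # negative
```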
-
tilleroftheearth Wisdom listens quietly...
Yeah; I already pointed out the 'needs' for an Intel platform today. Server type workloads need not apply (nor does worrying about a few watts more for even a handful (or two) of systems either) - again; that was already addressed by my post and is not relevant.
Unless anyone is doing commercial level video or audio renderings - or anything similar (compute wise, and whether they get paid for it or not); 12C and higher platforms are what they're already using.
There is no single core (and single thread) processor today that is worth talking about. And neither is a platform using only a single core/thread (the O/S makes its own demands as well as the (single) program the user may be working with).
Both platforms will see optimizations over their lifetimes - of course. That isn't what I was pointing out though.
What I'm pointing out is that the hardware that will be available to the masses in a short time now still has to have the software developers fully behind it to utilize those high core count processors. By the time that happens (and if that happens with current 'consumer' level software)? Those future HCC platforms will make what is introduced today (by both sides) effectively obsolete.
Buying the proper platform for what actually works today for the intended consumer workflow(s) is much more rewarding in the long run than betting that your consumer level workflows will be optimized to work in essentially server level platforms with such a high performance deficit compared to what is available today. This is not a point that can be disputed.
Finally; we are not discussing 'all tasks'. ole!!! and I were pretty specific on where Intel still dominates.
Your post is puzzling because while it seems like you are addressing the points I made, all you're really doing is saying the same thing but from the other (AMD's) perspective.
The issue with AMD's perspective is that the software isn't there yet, not for the vast majority of consumers.
The delayed rewards here come from using an Intel platform that is much more effective today, and then buying the HCC platform that will have been proven to be the more effective solution in the hopefully not too distant future.
Last edited: Jul 24, 2017 -
Sent from my SM-G900P using Tapatalk -
-
tilleroftheearth Wisdom listens quietly...
The minute someone responds with indisputable facts, you start name-calling. Sigh...
Piling on loads of barely relevant 'facts' doesn't change the final outcome.
For most consumer workloads a HCC (10C and higher...) platform today and for the immediate future holds only promise; not benefits. You're welcome to bet with your $$$$ - but you're not doing others any favors by promoting the platform most suited for you when their workloads more than likely don't match yours.
IPC 'scores', 'proof' via gaming videos and other attempts to undermine the fact above are but feeble attempts at defending spending equal if not more $$$$ for equal and in most current and immediate future workloads: less performance.
I'll repeat again: for those that already have software that can leverage many cores (>6C/12T) they are already using platforms that can do so. What AMD has promised is that for those users; they can get more cores, more cheaply. Great. No argument at all.
But for the highest percentage of the consumer segment that runs normal consumer/prosumer programs, getting those extra cores comes at a high cost: lower real-world performance for the O/S and software available today. Period.
The usage cases you quote are not as widespread as you seem to think. Nor are your expected benefits of HCC platforms going to come as quickly either for the same majority of users I'm talking about.
ole!!! likes this. -
As to the "most consumers" point, things change quickly once the public gets its hands on the hardware. Saying that is like saying the toaster oven wasn't suitable for consumers when it first came out, or the pressure cooker, or the slow cooker, or any new tech. Uses are found quickly in the market by the consumer, not by your assertions! I've already detailed numerous times when Intel is better and when AMD is better, not looking at my workloads, but at facts on the ground. If a person is considering a 10-12 core Intel, then it is perfectly fair to compare that performance to the 12- and 16-core AMD chips. If those AMD chips are limited to 4GHz, as assumed, and still perform about as well in single-core comparisons with Intel at 4.7GHz and AMD at 4GHz, then the extra cores help when multithreaded and win in about half of the single-core uses. It isn't hard to understand why you would then recommend the AMD chips at similar prices: they will give better performance over the life of the chip, improving with time. Once again, do you not understand facts? I gave the caveat that if your specific programs utilize Intel better, then lean Intel based on your use. But that is becoming a minority case with Ryzen optimizations.
What are you talking about, spending more money for equal performance? AMD COSTS LESS THAN INTEL! I will say it again: it depends on your program, and IPC varies from task to task. Because of that, all you can do is average across different tasks on the platform to get a normative IPC, which one person has done, showing that AMD Ryzen is now equal to or better than Intel, even in some single-threaded tasks. Know your programs and uses, and buy for that. You make blanket BS statements saying Intel is absolutely better, but it is a false statement.
This "lower real world performance" is based on what facts exactly? Because you have not shown it, and what you have shown is Adobe products that still favor a quad core. Seriously, you are tiring. What Ryzen showed is that it varies software to software, which gets back to IPC varying by task and software optimizations, which gets back to you making generalizations that no longer apply.
You only cite programs where a quad core wins, yet you accuse me of citing use cases that aren't widespread. I took low-hanging fruit as an example, presented it as low-hanging fruit, didn't even mention that streaming and e-sports are among the fastest-growing uses in the consumer sector, and you accuse me of narrow use cases, mister PS and LR, which are just as niche in comparison as streaming. You are not pointing to anything on the growth of consumer uses, only to current uses of the hardware. That shows how asinine this conversation is! -
Skylake is around 3.5% faster than Broadwell with the same test method.
I think Ryzen is slower than Broadwell at the same clock in ST workloads; I use Cinebench R15 ST as my base reference point. If you're telling me Skylake-X is slower than Ryzen or Broadwell, I'd disagree. If you say Skylake-X is slower than those in SOME specific areas because of the new mesh/cache redesign, I'd agree. Simple as that.
What people claim is that Skylake-X has lower IPC than Ryzen - so vague, so inaccurate.
tilleroftheearth likes this. -
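As a rough sketch of the per-clock comparison described above: divide a single-thread benchmark score by the clock it ran at to get a crude points-per-GHz figure. The scores and clocks below are placeholders, not measured Cinebench results.

```python
# Crude per-clock comparison: normalize a single-thread score by clock speed.
# The numbers are illustrative placeholders, not real measurements.

def score_per_ghz(st_score: float, clock_ghz: float) -> float:
    return st_score / clock_ghz

cpus = {
    "Broadwell-E (example)": (150, 3.6),   # (ST score, clock in GHz)
    "Ryzen (example)":       (160, 4.0),
    "Skylake-X (example)":   (190, 4.5),
}

for name, (score, clock) in cpus.items():
    print(f"{name}: {score_per_ghz(score, clock):.1f} points/GHz")
```

The pitfall being pointed out is that a single figure like this hides workload-specific effects (cache and mesh changes, memory latency), so "Skylake-X has lower IPC than Ryzen" is only meaningful per workload.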
And also, we did. In fact, I mentioned MULTIPLE TIMES that the majority of the software I use is legacy software that will never get updated/optimized; there are reasons for my Intel purchase. I even further stated that if not for those reasons, along with storage performance, I would not have gone Intel this time around. Yet, ignoring all those things I've listed, you simply bring out pricing this, pricing that. LMAO, I know Intel costs more, and it doesn't change the fact that it's still faster in ST scenarios.
The things you claim we aren't doing are actually what you didn't do yourself; stop glorifying AMD, dude. I'm a pro-AMD fanboy, but I can figure out BS when I see it and make good choices for myself. tilleroftheearth likes this. -
-
-
tilleroftheearth Wisdom listens quietly...
These indisputable facts:
See:
http://forum.notebookreview.com/thr...399-xeon-vs-epyc.805695/page-86#post-10566979
Which many (all?) HCC advocates here simply ignored.
I am not doing the online rag thing of comparing ~10C to ~10C, or $$$ to $$$ or any other misconstrued comparison.
I am comparing the actual workloads most real people have using real software, today. HCC from anyone's camp right now makes no sense at all.
Your analogy is faulty too. The consumer cannot make a HCC platform better on their own; the software developers (and specifically the software developers of the software they actually use...) need to do that. This won't take days, weeks or months. Years and years will pass - if it is actually possible for those serial workloads.
AMD costs less than Intel when you compare 'cores'. AMD is more expensive in terms of the performance that can be extracted, today.
I am tiring you because my workloads/workflows mimic Adobe products which also happen to thrive on lower core count but high frequency/low latency monolithic cores? And you're tired of trying to defend the indefensible position that this also mimics most consumer workloads today? Sorry.
The growth of the consumer isn't defined by what they do with the hardware. All they can do is install the software they need and run it.
When/if (I'm hoping towards when too...) all software can be made to take advantage of HCC platforms AND those same platforms don't give anything up to lower core count platforms with more speed... then we'll be on the same page.
Right now? I'll buy and recommend the platform that will give today's software (and single threaded reality) a real productivity boost. Not spend real $$$$ hoping that some developer somewhere can help me in the indeterminate future.
My gut feeling says that if there were a way to utilize more cores in today's most prevalent workloads, we would have seen it already.
If AMD proves to be the catalyst to prove that the above statement can be done; great. But in the meantime; it doesn't change which platform is undoubtedly the best one today for the majority of the 'consumer'/'prosumer' worlds - including me.
Regardless of the facts that you like to extrapolate from hardware we both haven't seen yet (at least; not in our respective workloads).
ole!!! likes this. -
You are correct that what works for one person won't necessarily work for another, but when you have the majority of tests indicating an AMD system would provide LARGE cost savings while offering the same performance at lower power draw vs. the overclock on the Intel system, what conclusions are you left with? ajc9988 likes this. -
OK, enough fighting! Eventually, let the workloads and benchmarks lie where they will. Intel is in trouble where 95% of users and their workloads are concerned. The other issue, for the 5% that might perceive a slight difference: is it worth the extra cost? If the page loads in the blink of an eye, do we care whether it is loaded just before the eye opens or is ready well before we can see it?
-
Insults back and forth etc. again will not be tolerated, and I am not picking apart arbitrary trolling in the posts, unless you guys just want to shut it all down now!!!!!!!
-
@TANWare - I'm honoring what you asked, but can you do something about these two guys? Trying to correct their statements is exhausting.
Edit: Just read your post. Sorry and thank you! -
I'm very eager to find out how it works and how much it benefits me. Say 8C at 5GHz vs. 16C at 4GHz, but with 2 cores hitting 5GHz when needed, that type of scenario.
-
-
Papusan, tilleroftheearth and ajc9988 like this.
-
tilleroftheearth Wisdom listens quietly...
Today's workloads won't change for most consumers - even once the silicon hits. That may change once developers have enough time (my prediction: years) to rework their programs though (of course).
To buy something today (e.g. Betamax) because it is theoretically superior? Hindsight says that's not good enough.
When (and if) the programming/software has caught up in gen 2 or even gen 3 of what AMD has started with a bang this year, then it will be go time. Wishing it to become so over the expected lifecycle of any platform you can buy today or in the near future is not looking at the overall picture very objectively at all.
ole!!! likes this. -
ajc9988 likes this.
-
Also, software is already being optimized for it and has made large leaps since March. So, by this logic of waiting years (which won't be necessary for most consumer products, but may be needed with slow companies like Adobe), no one should buy it until then, by which point it will presumably be dead because no one adopted it.
Also, Beta was empirically superior. That is NOT theory, it is fact. But it was also priced too high, and the cheaper standard won out. Meanwhile, when you look at Blu-ray and HD DVD, the superior platform won out (Edit: both were priced similarly, with BR being slightly more expensive, but offering an extra 40% capacity). When you look at Intel versus AMD a decade ago, the inferior product won because of illegal market activity. Now, AMD may have the superior product AND the lower price. If true, that means (looking at the historical record) that AMD would win out, unless Intel acts illegally. Hmmm. Thanks for the history lesson! Last edited: Jul 24, 2017 -
Many things can change and swap back and forth several times between Intel and AMD CPUs over the next 6 months. Papusan, don_svetlio and ajc9988 like this. -
So far he's been more wishful than false in his posts. Thanks for that @ole!!! don_svetlio likes this. -
tilleroftheearth Wisdom listens quietly...
I agree 100% with your statements below.
What I don't agree with is paying a performance penalty in the meantime, for today's workloads, to support something that may or may not happen in the near or even medium-term future for most workloads. But I'm glad others feel differently and will provide the HCC 'base' that developers need to make HCC platforms as suitable as they can be, even for mere mom/pop consumers. It will take many more years though - even with the hardware appearing 'overnight' as some here think.
See:
http://www.anandtech.com/show/11549...iew-the-new-single-thread-champion-oc-to-5ghz
See:
http://www.computerworld.com/articl...essors/cpu-architecture-after-moores-law.html
What I get from the above is that gaming has stopped being a viable 'bm' to consider - unless you happen to be playing the exact game on the exact platform and the exact O/S, driver and etc. etc. revisions... (I really 'hate' it when they 'standardize' on a bm'ing platform that quickly falls out of relevance to what the rest of the world is actually doing/using).
I also get that HCC platforms are for very specific workloads and are a very small minority of users (at least today). Those that have those workloads already had options before; with AMD, they now have more and cheaper options. Doesn't mean that it is the best choice for everyone though... not by a long shot.
I believe that the hardware vendors were pushed to go beyond a single core long ago by the software developers. Maybe there just isn't any more push? (It has been at least twenty years of HCC promises that have for the most part gone unfulfilled - from the developers' side - there were many options even a decade ago when they could have optimized and worked on their projects just as well as today, yet here we are with single-core performance still being the determining aspect of most consumer and workstation user workloads.)
ole!!! likes this. -
tilleroftheearth Wisdom listens quietly...
The fact that Beta was superior isn't the point. The point is that if you had bought it; you would still have to buy the other.
What software and what large leaps have been made since March of this year with regards to a HCC platform? Please; don't show me silly synthetic 'scores'. Show me real world workflows and workloads affected.
ole!!! likes this. -
7700K vs. 7740X vs. 1800X vs. 7820X
What's The BEST CPU for Gaming with 1080 Ti's in SLI?
Long intro, benchmark results start at ~8:15...
Last edited: Jul 24, 2017. Papusan, jaug1337, don_svetlio and 1 other person like this. -
And I believe my context made it clear. I showed Beta lost on price, not performance. You had a $300+ Beta deck going up against a market flooded with VHS at about $100. The fact that VHS had enough performance while coming in at a drastically reduced price is why Beta lost. You intentionally tried to use Beta as a way to show not to bet on new tech, but then failed to disclose why it actually failed. Here, you have Ryzen with enough performance, more for the price, and likely the same as a 16-core costing 70% more, and yet you chose the Beta analogy? Do you not get what you said? I think you hoped no one would actually look at the underlying reason Beta failed, instead trying to say don't bet on something you don't know. Considering the programs can be run on both CPUs even if not optimized, that also breaks down the Beta analogy you made. Do you really want me to keep going? Last edited: Jul 24, 2017 -
@ajc9988 take AVX for example: "Intel demonstrated a Sandy Bridge processor in 2009, and released first products based on the architecture in January 2011" - it came out in 2011, and it's 2017, so we're going into the 7th year, and just how much consumer software used by non-techies uses AVX? We can probably count it on two hands, really. So as you can see, not a lot uses AVX even though it has been out forever. Will the consumer side get optimization? Sure, but how long? Is waiting 1 year acceptable? What about 5 years after you have purchased Ryzen, what about longer? Papusan, tilleroftheearth and hmscott like this. -
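Part of why that lag happens: a program that wants to use AVX has to detect it at runtime and ship a fallback path for CPUs that lack it, which is extra work developers often skip. A minimal, Linux-only sketch of such a check (other platforms expose this differently):

```python
# Rough illustration of a runtime AVX check with a fallback path.
# Linux-specific: reads the CPU flags from /proc/cpuinfo.

def cpu_supports_avx() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return any("avx" in line.split()
                       for line in f if line.startswith("flags"))
    except OSError:
        return False  # unknown platform: take the conservative path

if cpu_supports_avx():
    print("AVX code path available")
else:
    print("Falling back to a scalar/SSE code path")
```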
I Switched to Ryzen for Gaming & Editing. Here's Why.
( Choosing x299 7900x vs AM4 1700x )
Last edited: Jul 25, 2017ajc9988 likes this. -
tilleroftheearth Wisdom listens quietly...
So, besides the rant on the Beta analogy (on which you have some good points, granted) - you have nothing to show me to support your statement that programmers suddenly saw AMD's high core count platforms and knew they could suddenly make legacy programs better. (Just what was it that stopped them from doing the same with Intel's offerings?)
All the performance improvements you claim still don't beat Intel's offering (see the AnandTech article I linked to above), in an overall sense.
If the PCMark 10 (?) 'scores' were literally fixed within weeks, why can't you show me exactly where? And if you can't show me, how can you claim they were fixed?
Creating or remodelling existing software to be more parallelized isn't something that can be pulled out of a hat. Even if some parts of the workflow are parallelizable - it usually doesn't affect the whole process (i.e. 'workflow') to a great degree.
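The usual way to frame that point is Amdahl's law: if only part of a workflow parallelizes, the whole-workflow speedup is capped no matter how many cores are added. A small sketch with a made-up parallel fraction, purely to illustrate the shape of the curve:

```python
# Amdahl's law: overall speedup when only a fraction of the work parallelizes.
# The 50% fraction below is an arbitrary example, not a measured workload.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.50
for n in (4, 8, 16, 32):
    print(f"{n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
# With p = 0.5 the speedup can never exceed 2x, however many cores you add.
```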
From the AnandTech article:
More cores is great in a general sense - even better if double the cores come at less $$$$ too - like AMD is now offering us.
But that is greatly offset by real world workflows and workloads from some of the best minds in the programming world that simply cannot use >6C/8C platforms effectively today. Parallelization isn't a right or a given in all workloads or even most workloads today.
It is inherent in the data and the manipulations needed to achieve the expected outcome.
As I alluded to before; beginning ~20 years ago I had the same conversation about workstation class computing with more than a handful of cores. Nothing has changed in between then and now.
Read the two links I provided originally in the post above. Not everything is synthetic - even in reviews. The 7z result is a canned benchmark 'run'. Even if I used 7z, I would not use the internal benchmark to compare against my current systems... WinRAR is actually doing real work, including using the storage subsystem just like anyone would use it. Same for the Agisoft real-world software test run. No benchmark can compare to just using the software (full install) as it was meant to be used - and with real-world data used and created too.
So once again, I'm showing you why my stance is logical and my conclusions correct.
I welcome any response to the above of why I'm not - backed with actual proof this time.
Last edited: Jul 25, 2017 -
tilleroftheearth Wisdom listens quietly...
Yeah.
If I got a 16C/32T platform or higher - I would hire the talent needed to extract the maximum performance from that investment.
That is the programming talent to make my workloads fly on the hardware I chose - not the talent to put it together (nor O/C it, btw...).
Like many articles have stated about Epyc; the companies that buy those types of systems and need them by the truckload already have that talent, within the company. More cores does equal more performance for them.
But for mere consumers (I'm including myself in this aspect...) that mostly buy off the shelf software? Small sliver of a chance of that happening 'in house'.
Why can't MS make its O/S fully core-unlimited? Because right now, it can't. I believe most consumer software is in the same boat.
But, no problem; here are the 'chickens'... let's see if they can make some eggs too.
-
-
http://www.tweaktown.com/news/58549/intel-core-i9-7920x-12c-24t-4ghz-140w-tdp-1199/index.html
So, Intel 12-core boost is 4.0GHz. The 7900X had 4.3. Speculate accordingly.
Sent from my SM-G900P using Tapatalk. Last edited: Jul 25, 2017 -
don_svetlio In the Pipe, Five by Five.
-
In other news, have you seen that the VRM heatsinks on the X399 boards now have a heat pipe to offload some of the heat over by the I/O? hmscott and don_svetlio like this. -
don_svetlio In the Pipe, Five by Five.
ajc9988 likes this. -
My concern is the 300MHz drop in boost per two cores added. At 14 cores we would be at 3.7GHz, at 16 cores 3.4GHz, and at 18 cores 3.1GHz. I am not sure that at those speeds the 18-core will be a TR killer. Prior to Skylake-X they had the low core counts and high clocks to themselves, but now even that landscape is changing.
It seems that to have competitive machines out there, they have taken one step forward and two or three backwards at the same time.
Edit: if you give much weight to CPU-Monkey, it has the 1920X TR and the i9-7920X as pretty close.
http://www.cpu-monkey.com/en/compare_cpu-intel_core_i9_7920x-759-vs-amd_ryzen_threadripper_1920x-757 Last edited: Jul 25, 2017. Papusan, temp00876, ajc9988 and 1 other person like this. -
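That extrapolation in code form, purely as the speculation it is: the only quoted data points are the 12-core part at a 4.0GHz boost and the 7900X at 4.3GHz, and the 300MHz-per-two-cores slope is a guess, not an Intel specification.

```python
# Speculative boost-clock extrapolation from the post above: assume boost
# drops ~300 MHz for every two cores added past the 12-core/4.0 GHz part.
# This is a guess for discussion, not an Intel specification.

base_cores, base_boost_ghz = 12, 4.0
for cores in (14, 16, 18):
    boost = base_boost_ghz - 0.3 * ((cores - base_cores) // 2)
    print(f"{cores}C -> ~{boost:.1f} GHz boost (speculative)")
```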
Sent from my SM-G900P using Tapatalk -
It's still worth checking the VRM cooling on whatever X399 board you want to get, to make sure they got it right.
I couldn't attend the AMD webcasts, did anyone see the X399 presentations? Anything significantly new, besides the VRM cooling? Last edited: Jul 25, 2017. ajc9988 likes this. -
https://videocardz.com/71100/msi-showcases-x399-gaming-pro-carbon-ac
https://videocardz.com/71145/gigaby...per-motherboards-soon-available-for-preorders
http://wccftech.com/amd-asrock-msi-gigabyte-x399-motherboard-ryzen-threadripper-cpus/
I didn't quickly find a slide like this from Gigabyte, and we've posted Asus recently (although I'll look again on Asus).hmscott likes this. -
Two things: where is that ROG Zenith board, and what are the prices? I need to know how little I will have left.