AMD has revealed more information on the Ryzen family of CPU microarchitectures, and it appears AMD might be ready to fight Intel like in the old days. AMD is also close to launching its consumer Vega GPUs to counter Nvidia's next-gen GPUs.
Coming Soon:
Consumer HEDT Platform: Ryzen Threadripper, Up to 16c/32t
Enterprise Platform: Epyc, Up to 32c/64t
Consumer Vega GPUs
Released:
AMD Ryzen 7 1800x
WORLD'S FASTEST 8-CORE DESKTOP PROCESSOR
- 3.6 GHz clock rate; 4.0 GHz with boost
- Extended Frequency Range (XFR) in the presence of better cooling
- 95 Watt TDP
AMD Ryzen 7 1700x
MERGING THE WORLDS OF ENTHUSIAST, GAMER, AND CREATOR
- 3.4 GHz clock rate; 3.8 GHz with boost
- Extended Frequency Range (XFR) in the presence of better cooling
- 95 Watt TDP
AMD Ryzen 7 1700
WORLD'S LOWEST POWER 8-CORE DESKTOP PROCESSOR
- 3.0 GHz clock rate; 3.7 GHz with boost
- AMD Wraith Spire cooler
- 65 Watt TDP
From Arstechnica.com
The first Ryzen-based part will be the Summit Ridge family of high-end desktop chips. Summit Ridge chips will feature an 8-core 16-thread CPU—which is a true 8-core chip using simultaneous multithreading instead of the much maligned clustered multithreading of Bulldozer—with the top-end part sporting a base clock of 3.4GHz. While that isn't as speedy as Intel's latest quad-core desktop parts (the i7-6700K has a base clock of 4GHz), it is more than competitive with Intel's 8C/16T Broadwell-E processor, which has a base clock of 3.2GHz.
AMD's Press Release
Anandtech's look at Ryzen
Ryzen will be a four-year architecture, according to an AMD exec...
See more at Extremetech.
More information will appear as it is announced.
-
I'll believe it when I see it. Many years ago I used to buy only AMD, but post-Athlon 64 they've done nothing but poo their pants.
Which is a long time coming tbh.. The main thing is that people need to show the faith and take the plunge.. If the price is right, Intel are in big trouble.. Their focus on buffing iGPUs has meant that they took their eyes off the ball, and that is a major mistake..
tilleroftheearth Wisdom listens quietly...
For me, the competition from AMD would be most welcome (but... I can't wait for Intel's response...).
A CPU (plus the RAM it is tied to) is all about productivity for me - I have never seen an AMD platform that is more productive than an Intel platform - ever (for my workflows/workloads). Even when the 'scores' posted by AMD based systems kicked Intel all over the playground.
Clock speeds mean nothing - especially between competitors' product lines. If AMD was serious about giving Intel a real trouncing... they would team up with MS and get their products optimized for a Windows workflow.
Giving more cores, more MHz and more 'value' by being less expensive is not what a complete platform option looks like to me. More actual productivity is the bottom line, and that is what AMD will need to prove for me to sing their praises - even if I don't buy their products in the end.
Their biggest failing for me is not so much the absolute performance offered (I do have demand-variable workflows, after all...). No, their biggest failing is performance per watt. Intel is in the 22nd century compared to AMD on this front. I hope this is where AMD concentrated most of their skills (because they're not catching up in absolute performance/productivity - so far).
Have you read anything about Ryzen before composing this post? From the last couple of sentences it seems that the answer is "No".
-
I agree. AMD needs to move away from APUs and start delivering what made them so valuable: a competitive CPU at an amazing price.
So, we still don't know top single-core performance and don't know top overclock. We know that Ryzen beats a 6900K at efficiency and speed for encoding with Handbrake and with Blender (which can be optimized, which is why many discounted it). One report claimed that the 6900K was set to dual-channel memory instead of quad-channel, so I need more info there. It beat the 6900K both with and without boost enabled, with the AMD CPU locked without boost. They both had equivalent air cooling. Meanwhile, 36% of 6900Ks can do 4.4GHz at 1.36V (per Silicon Lottery). We don't know the number of PCIe lanes on Ryzen. We don't know overclock per voltage. It should beat the 6700K in well-designed multithreaded applications, but will lose in single-threaded ones or those optimized for dual core only. What is interesting is the auto overclock that scales with cooling (manual should be better) - a rough sketch of the idea is below. We don't know the new effect of the prefetch for the cache. Etc.
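Since AMD had not published how XFR actually works at this point, here is a purely illustrative sketch of how a boost that scales with cooling could behave. Everything numeric below (temperature threshold, ceiling, base clock) is an invented placeholder; only the 25 MHz step granularity comes from AnandTech's coverage linked later in the thread, and none of this is AMD's actual algorithm:

```python
# Toy model of a cooling-scaled boost (XFR-style). NOT AMD's algorithm:
# the threshold, ceiling, and base clock are invented for illustration.
# Only the 25 MHz step size matches reported Ryzen behavior.

BASE_CLOCK_MHZ = 3600   # hypothetical base clock
XFR_CEILING_MHZ = 4100  # hypothetical max with very good cooling
TEMP_LIMIT_C = 75       # hypothetical thermal threshold
STEP_MHZ = 25           # reported boost-step granularity

def next_clock(current_mhz: int, temp_c: float) -> int:
    """Step the clock up while thermal headroom exists, down otherwise."""
    if temp_c < TEMP_LIMIT_C and current_mhz < XFR_CEILING_MHZ:
        return current_mhz + STEP_MHZ   # cooler chip -> higher sustained clock
    if temp_c >= TEMP_LIMIT_C and current_mhz > BASE_CLOCK_MHZ:
        return current_mhz - STEP_MHZ   # running hot -> back off toward base
    return current_mhz

# While the chip stays under 75 C it keeps stepping up in 25 MHz
# increments; once it runs hot it steps back down toward base clock.
clock = BASE_CLOCK_MHZ
for temp in [60, 62, 65, 68, 80, 82]:
    clock = next_clock(clock, temp)
    print(f"temp={temp} C -> clock={clock} MHz")
```

The point of such a design is simply that a better cooler raises the sustained clock without any manual tuning, which is also why a careful manual overclock should still beat it.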
Yeah, there are a lot of unknowns, but it looks mighty impressive when you take into account the current gap, or rather abyss, in performance. Hopefully it scales well. After all, it shows comparable performance at lower power consumption, so I really hope that translates directly into overclockability. Then again, AMD usually tune their stuff to the sweet spot of power/performance. We'll see soon enough.
Robbo99999 Notebook Prophet
Yeah, this is brilliant news about AMD, it really looks like AMD are gonna produce a chip that is equal to or better than what Intel has available! Here's a link to another article about Ryzen: http://www.hardwarecanucks.com/foru.../73981-amd-ryzen-cpu-preview-zen-matures.html
I've already built my desktop with a 6700K a few weeks ago; it would have been good to have experienced some of the Ryzen technology first hand, but I should be OK for a good few years with the 6700K, I reckon.
tilleroftheearth Wisdom listens quietly...
You guys are comparing 'scores'. I'm comparing actual productivity in my actual workflows.
AMD simply cannot be the 'performance per dollar king' if it can't give me better/more productivity than what I already have 'now'.
Keep chasing the 'scores'. They'll lead you nowhere...
You don't point out that the prefetch might be misaligned; you point toward nothing. Blender and Handbrake aren't used for hwbot.org benching on their own (or at least not for points, although Blender is incorporated into the RealBench benchmark). So please contribute with analysis or go. Being hesitant and giving reasons is fine. Attacking without information just puts you in an unbecoming light! I tried to help you by showing what is unknown and the better way to challenge the chip, yet you did not take the hint. Above, I did it again in this post. So stop talking or learn how to express your reservations!
tilleroftheearth Wisdom listens quietly...
This is a conversation, is it not? I'm entitled to reply as I see fit, correct?
I made my observations, and I posted my basis for those observations - I am not making assumptions from zero info - the info I have is quite sound as of today. Tomorrow, if/when AMD shows me something new to test (and I actually do test...), then I may change my mind.
Right now, nothing has changed. As a few have already expressed in this very thread.
I'll say it again; I don't care about 'scores', or the internal details for that matter (because I can't change anything even if I do research and 'know' them - the details, I mean...).
All that counts in the final analysis is what makes me more productive than where I'm at now.
You can argue with my style of writing/responding - but you shouldn't be too concerned about my 'attacking' anything or anyone - we're just talking about tools that can be had for mere $$$ to help me do my work better, easier, faster and even with less energy used...
I don't work with the unknowns... I work with what I know. Workflow, Workloads and finished jobs (mine, of course). Nothing constructive will come from theorizing about what AMD may bring out or how it will implement things. I'll let you guys talk.
What I offered was a glimpse into what is important in my world with regard to the topic at hand. You don't necessarily need to agree with that.
But you also can't tell me how to respond either.
-
You spoke many times without reading the articles on the changes made, and you still speculate that it is **** at workloads when that is unknown. That is not a conversation or arguing from a position founded in facts; it is relying on prejudice to shape an opinion before enough information is available to form one on fact! Hence, speculation and bigotry.
Everyone else here is hesitantly optimistic based on what is known while acknowledging what is not. This thread was made to share information, not to praise nor denigrate AMD products, which you immediately did and continue to do unjustifiably. Hence, if you are not adding to the conversation based on disclosed facts, you are derailing and trolling the thread. Those violate forum rules and thereby are not allowed.
As to your aspersions that others have done so, go check. You and I have been the most critical of Ryzen in this thread. Note the difference in style: mine points to unknowns, biases, facts known about the product, why we should be hesitant about some of those facts, etc. Yours is to talk down AMD based solely on experiential data, not pointing to historical events, and not incorporating new information that may change the situation upon full coverage and release. You do not point toward what specific workloads you would like to see tested, what looks promising, or even what they have done with the chip architecture that was a mistake. You point to something amorphous - your projects - without stating what types of workloads you are performing and would need to see improvement in before you give consideration. You focus on yourself, do not add to the discourse, and do not even share enough specifics so that when information relevant to you appears we can share it with you. That is thread crapping!
As such, please contribute or move along. Either say you don't have enough info, or address the information presented, or even the inaccuracy of the information and why.
@tilleroftheearth, check this out to compare your setup to the one recently used at the press event:
https://www.google.com/amp/www.pcwo...-cpu-compares-to-amds-ryzen-for-free.amp.html
http://www.anandtech.com/show/10907...nvme-neural-net-prediction-25-mhz-boost-steps
This had a really good look at the platform
http://hothardware.com/news/amd-to-...-ryzen-more-zen-architecture-details-revealed
tilleroftheearth Wisdom listens quietly...
You're not listening to me. I don't do 'BMs'. That isn't what keeps a roof over my head and food on the table.
I don't run Blender in my workflows and probably never will.
Doesn't matter what my current platform's Blender 'score' is - it is irrelevant to my workflows and would be a waste of my time to run it.
Why? Because it can't be used to predict where my actual productivity will end up if/when I move to an AMD platform...
-
tilleroftheearth Wisdom listens quietly...
Sure, sounds/looks impressive. Even when I read it earlier this week.
Doesn't mean anything yet. (That has yet to be proved in my workloads, not BM's or theoretical musings).
tilleroftheearth Wisdom listens quietly...
Yeah, just like the article states, I too am optimistically awaiting RYZEN’s release from AMD.
The difference is that I can comfortably wait for Intel's answer to RYZEN and will more than likely choose to do so.
As I've stated; a pure performance advantage isn't enough for me to jump platforms. The interaction between all parts (me, O/S, software and required output) is the balance I strive for.
-
You, in these last two comments, are closer to joining the discussion, as Blender doesn't represent your workload, meaning encoding is likely out. Keep it general as to what types of workload, but give more of an idea.
Me, yes, I love number chasing, but don't confuse that with me not understanding specialization. I work with spreadsheets, databases, and research. I push RAM usage while doing so, meaning I'm extremely disappointed to see only dual channel on this chip. I do video encoding and tinker with video content as a hobby, but I have friends that do it as a job. They discuss with me which hardware they should get, hence why I watch the market and recommend based on needs and availability (and on whether the software can utilize the hardware fully, so they don't break the bank for what they don't need or won't use efficiently).
I bill in tenth-of-an-hour intervals for my work and do not like to discount based on waiting on a computer. I understand what you are getting at. But you directly said in your first post in this thread that it was not as efficient as Intel's 6900K, when Ryzen used 4 or 5 watts less draw than Intel (which has more mature power-efficiency firmware) while outperforming it on time for an encoding task (see the rough energy math below). Now, will that translate to your workload? You say no. But your statement ignored facts already demonstrated. You could point to the contradictory statements on whether quad channel was enabled on the 6900K and whether that would impact encoding tasks with specific hardware. That is reasonable. But that isn't what you did. You came to put down the chip. That is what I take issue with.
Btw, Blender, although used for comparison in this case, is not just a synthetic show of performance. It uses OpenCL, which can translate to other uses for the chip - something to be remembered.
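For what it's worth, the energy arithmetic behind that efficiency claim is easy to sketch. The wattages and runtimes below are placeholders (only the roughly 4-5 W delta was reported, not exact figures), so this illustrates the method rather than reproducing the demo's data:

```python
# Back-of-envelope energy math for an encoding run:
# energy (Wh) = average draw (W) x runtime (h).
# All numbers below are placeholder values, not measured demo data;
# only the ~5 W draw delta reflects what was reported.

def energy_wh(avg_watts: float, runtime_s: float) -> float:
    """Energy consumed over a run, in watt-hours."""
    return avg_watts * (runtime_s / 3600.0)

# Hypothetical runs: Ryzen draws ~5 W less AND finishes sooner.
ryzen = energy_wh(avg_watts=95.0, runtime_s=54.0)
intel = energy_wh(avg_watts=100.0, runtime_s=59.0)

print(f"Ryzen: {ryzen:.2f} Wh")
print(f"6900K: {intel:.2f} Wh")
print(f"Ryzen uses {100 * (1 - ryzen / intel):.0f}% less energy for the job")
```

Lower draw multiplied by a shorter runtime compounds, which is why finishing faster at lower wattage is a double win on efficiency.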
tilleroftheearth Wisdom listens quietly...
I'm not ignoring any new data. The 'data' presented is meaningless to me. Sigh...
To give you an extreme example of where I'm coming from:
It is circa 1900 and I've been using my horse and buggy to sell and deliver eggs - for decades now. Along comes a car salesman who gives me the 'facts' about the new thing called a horseless buggy and offers me great incentives to buy one.
I review the needs of my egg business: my 'workflow' is within a few miles of the farm, my customers want their eggs 'today fresh', the horse is young and the buggy newly reconditioned. Trading that for a horseless buggy would save a few minutes a day (when it works...), but it could also mean falling back on the old methods - or even missing some of my deliveries - if/when the horseless buggy breaks down and needs repairs that I may or may not be able to do (right then...).
After considering all the options the new tech will offer me - and the possible setbacks too - including the ability to expand my delivery radius (if I can also proportionally increase my egg output...) and the very real need for a 'backup' for those inevitable misfires the new tech will bring... I decide that I, as a sole entrepreneur who runs all aspects of my business, cannot take a chance on the new tech based on what the car salesman just told me.
If and when I am able to see how applicable the 'theory' was to someone else's 'real world' use, I could change my mind. But right now?
I have to wake up at 4:30 AM to get my eggs in order.
To further clear things up for you: my 'style' is to give a perspective that not many may share.
But it is a perspective that I have found to be more accurate in answering the real question: is it better for me?
This works regardless of whether different architectures are compared or not (you're getting stuck on things that don't matter in the long run). And regardless of which node or compiler is used too (don't forget: you can only use the tools you have available - even if they're bad...).
Finally, I'm not doubting the accuracy of the information presented so far. But my point is that this new 'info' cannot possibly sway me to hope - when AMD has proven to me time and again that such hope is almost always unfounded.
You're welcome to read the links you post as you see fit. Your opinion doesn't bother me one bit, really.
But, I read with a more critical eye and also for a deeper comprehension. And I know I'm not wrong; because when I am, I just admit it and move on (with a new understanding of things...).
If you can see past all that and know that 'I' am wrong? Well, what are you doing posting here?
-
Now read my statements over, especially the post before this. I said directly that you have not detailed the workload you do. Further, depending on your current system's performance, upgrading too soon means you don't get enough of a performance boost to justify your acquisition costs, meaning it would be worthless. Don't beat around the bush with these analogies; speak directly and plainly!
Further, if you examine my previous statements, I point to Skylake-X, a platform set to last, as far as the chipset goes, until beyond 2020 before the platform is replaced, whereas by 2018 or 2019 (per new information) AMD plans to release an AM4+ board, meaning we don't know which chips will continue to run on AM4 boards or what features we will miss out on. There is a point.
Instead, you use prose to try to establish an unrelated point that could be said in two sentences: "With my current hardware, cost-acquisition considerations and acquisition timeline, it does not make sense for me to consider Ryzen as a move to improve my current needs for work. As such, although I always cheer and hope for competition within the x86 chip manufacturing industry so as to bring down costs, this release is irrelevant to me at this time."
This being the case, why the **** are you commenting in the first place? You add no information and solely focus on you, not the topic of the thread.
LMAO this thread. Maybe tilleroftheearth works for the CIA. :3
OverTallman Notebook Evangelist
Watching this thread, partly because I'm looking for a cheap long-lasting build in 1-2 years, partly to watch how this fight goes
Meanwhile...
-
tilleroftheearth Wisdom listens quietly...
Hey guys, relax. No fight here. I'm not being condescending and asinine. I'm just misunderstood by ajc9988 and possibly a few others. No skin off of my nose. It is not my loss if the points I'm trying to make are glossed over.
As for my workflow? That has been detailed many times here on this forum - I'm not going to do it again (use search).
Bottom line? Not only AMD and Intel, but anyone else too, has to prove themselves against my established workflow (no - that is not what ajc9988 said in the post just above... read carefully...) - each and every time I have a chance to upgrade my platforms.
The thing with Intel, though, is that for the last decade or so I have been able to depend on what a new product from them will give: yeah - real-world productivity improvements. Whether those are enough for anyone else is open to debate, but for me, those incremental improvements over time have made four-, three-, and even two-year-old platforms obsolete (in actual use, depending on the workload I'm doing).
Like I said before; continue debating the 'scores' and marketing fluff (on all sides). I don't have time for that. -
I don't know you nor your work, nor do I care. We have talked in passing on the forum or participated in threads together, but in that time I have not gleaned what your work is. We don't always participate in the same threads. That is fine. But to expect someone to research you, when you are only talking about yourself and leaving ambiguous the details needed to connect your posts to the topic of the thread, misses the point of having a thread focused on one topic (hint: "it ain't you babe").
As to the rest of your post on companies proving themselves for given workflows: can you speak in anything other than generalities and truisms? That is true of everyone who isn't firmly in brand-loyalty mode. To the majority of professionals, brand doesn't matter; performance and reliability do. Because of that, it doesn't need to be said, as everyone understands it!
Intel's performance over the past decade is known. That is historical data which may or may not hold true for the 6-month period between the shipment of Ryzen and the shipment of Skylake-X. Once again, this ignores new facts and the topic of discussion. It speaks generally, then degrades into experiential data applicable solely to you (see hint supra). This has been covered in my prior posts, so please reference them if it needs rehashing.
Finally, people were discussing performance under set conditions of two processors performing the same tasks in an opencl platform and encoding workloads, referencing the only current available data there is to discuss. We also were discussing the relevance and potential performance implications of the new architecture and platform, as well as detailing unknown information that would be needed for purchase decisions, which may or may not be available this month, but that will most likely be available within 44 days.
We all understand the nature of puffery from corporations. We aren't discussing that, mostly. In fact, it took 3+ articles, each containing different tidbits, to paint the picture I did above when discussing the topic of the thread. In fact, this thread has devolved into my attempt to show you that you are off topic and have contributed ZERO new information. If you cannot see your own narcissism, I've got a mirror for you (at least it will keep you busy instead of derailing a thread).
tilleroftheearth Wisdom listens quietly...
You know, everything you say about me can be equally said about yourself, right?
As to your discussing performance under 'set of conditions of two processors performing the same tasks...'...
My comments are on point. Those hand-picked tests by AMD are not applicable to anything else in the real world. Unless, as I've already pointed out, that is exactly what you do with your platforms (run Blender).
I don't. I don't know anyone else that does either. Therefore, replying from experience, I wrote what I wrote.
Why can't you just accept I wrote a perfectly reasonable and on topic post?
And btw, from what I have seen here (now and over the past 8 1/2 years or so...), speaking in generalities and truisms is needed - most here are not professionals, and reminding them of the real end goals needs to be repeated as often as necessary. After all, new members/readers show up every day.
-
As to the topic, the main complaint about using Blender is that it can be a heavily optimized OpenCL application. If you perform certain types of research utilizing heavily customized OpenCL code, then it is relevant. It is harder to explain away the H.264 and H.265 encoding, yet that is still limited to the market segment to which they want to sell.
In other words, your saying that you are speaking to people new here who do not know shows two things: 1) you insult their intelligence, and 2) you are trying to imprint bias from historical data that may not be relevant to the current product because, as both you and I have said, THERE ISN'T ENOUGH INFORMATION TO FORM AN OPINION! Yet you have already formed an opinion without information, showing BIAS and PREJUDICE! Those new readers deserve to understand that too!
-
tilleroftheearth Wisdom listens quietly...
Doesn't matter what I state; you show total disregard for facts. This also isn't a conversation anymore (too bad) - see previous sentence.
For everyone else that cares about the actual implications of AMD's new chips:
Here are my relevant facts on this topic:
In my first post in this thread, I hinted that Intel's response is highly anticipated - by me and by all others who value productivity rather than meaningless 'scores'.
I don't just think I have the best platform today (Intel); I know I do, because that is what my constant comparisons to the previous platform I was using led me to. Regardless of whether that previous platform was also Intel, btw.
Comparing what is coming in Q1 2017 from AMD to what came in Q2 2016 from Intel is not exactly comparing apples to apples... That is why I stated 'waiting for Intel's response' - at that point we can see who is still the performance/productivity and efficiency king.
Any competitor will hand-pick the 'scores' that show their newest offering in the best light - and here, AMD chose Blender. I have no problem with that (manufacturers will do what they will do...). But my point was and still is: unless someone is using that exact (less than a minute... sigh...) workflow, it is meaningless. And it is meaningless to me and to millions of others who want to know which processor is better for their workflow/workloads. This isn't just my opinion - this is FACT NUMBER ONE when decoding marketing mumbo-jumbo.
(Just because processor 'A' beats processor 'B' at running program/workflow/workload 'x', doesn't mean it will do the same when compared when running program/workflow/workload 'y').
Overall, everything I've read about Ryzen so far is simply AMD catching up to Intel on how to do processors '2017 style' instead of being stuck circa 2005. I sincerely applaud their efforts, but surpassing Intel when Intel hasn't released what's coming for 2017? I think that is more than a little far-fetched.
What Intel has been delivering, consistently for me, is the right balance of what I'm looking for. Even if AMD gives more performance, that in and of itself is not enough to persuade me to move to an AMD platform (on its own). No matter how inexpensive AMD's platform may be (the cost isn't in the parts - the true costs are in the long-term ownership/setup/maintenance/stability and reliability of the platform - and AMD has no track record here with me...).
I want the fastest systems possible for the lowest (absolute) cost for my most demanding workflows/workloads (multiple workstations = less-than-halo products for each one...). That means an i7 QC with 64GB RAM as a minimum (~$300+ CPU). It doesn't mean O/C'ing capabilities or other 'extras' such as more cores (needlessly) or ECC SDRAM - unless they come for 'free' ($1K+ CPUs).
I want the most responsive setups possible (that is why I run almost everything I have with 16GB of RAM or more, QC i7 CPU's and 1TB SSD's OP'd by at least 33%) running the most productive O/S possible (Win10x64Pro). And I want the above whether I sit at my workstation to chew through a few thousand RAW images in a day or, try to balance my books, read my emails or browse the 'net on one of my 'lesser' machines too.
See:
http://forum.notebookreview.com/threads/why-arent-cpus-getting-faster.797265/page-5#post-10371922
(The above link shows an overview of how I perceive Intel).
AMD has tried many times in the last decade to pull itself up by its bootstraps. And many times it has failed (at least when compared to what it boasted about...).
Given all the above and more, I can make a highly educated guess that AMD will probably not deliver what I need in Q1 2017. And it almost certainly won't deliver better than whatever Intel's response will be - whenever that happens (announced and/or delivered sometime in 2017).
Why am I waiting for Intel's response? Because I can.
Take care.
Merry Christmas/Happy Holidays to all! A Prosperous and Joyous New Year to one and All!
'till
-
@tilleroftheearth
Your posts do read and sound like it's Intel or nothing for you. There's nothing wrong with that, especially since their tech 'works' for you and your endeavors. I myself lean more towards Intel, and that's because business dictates it. I have a wonderful business relationship with Intel, but I would be the first person to tell you they have become quite complacent. AMD could give them a kick with Ryzen.
If Ryzen is the comeback for AMD, I'm all for it. Would I integrate AMD's products into my business work stacks? No. Intel is heavily embedded into my company's products/services. Would I give AMD a chance in my personal computing? Absolutely. Intel has some tricks up its sleeve in the coming years, but it would be fool's gold to doubt AMD and its ability to offer a competitive product. -
I'm skeptical. I'm waiting to see if they can deliver a good mobile processor at all to compete with Intel's i7 QMs, which are cheap and really are DTRs in a compact, low-power-draw, money-saving form.
-
@Mr. Fox, could you help educate @FluffyPuppy without biting? I'm not sure if I can... This involves the claim that mobile quad processors are DTRs in a low-power-draw, money-saving form...
-
Even if it only offers 90% of what the fastest intels do at the time of release it is a win for me.
I got so tired of upgrading Intel systems. The motherboards only last two cycles, prices are inflated, and they cut corners with the TIM between the heatspreader and the chip die, etc.
Competition again is a win for everyone, so let's hope AMD offers good competition.
But from what I see, these chips are especially good for professionals who need compute power - more cores for faster rendering, etc. Especially in that field, Intel was just ruining the pricing. -
My main issue with Ryzen is that AMD has once again dropped the ball before launching the product. The primary problem here is the lack of quad channel memory support. I haven't seen the specifics of the chipset (which is important, just like the raw performance of the CPU) but what I have seen is everybody seeming to forget the relevance of good RAM. To break things down, good RAM:
- reduces CPU load in many scenarios, like games
- helps with things like encoding and rendering, especially in real-time
- improves overall performance and consistency of a system, including lower minimum framerates in games
Anybody who wants the best is still going to grab Intel's enthusiast line. It doesn't matter if Ryzen is clock-for-clock as good as Skylake-E (since they'll both basically be out soon), or if they both overclock to 4.8GHz and use the same TDP. Skylake-E will use quad (or, rumor has it, even hexa) channel memory, and it'll surpass Ryzen where it matters (rough bandwidth math below). Also, people are going crazy about TDP using the most power-hungry CPU architecture in a long time: Broadwell-E. Skylake-E is going to use MUCH less power and will instantly kill AMD's power-efficiency "advantage". Extrapolating Skylake into Skylake-E is where you'd find this making sense. Broadwell and Haswell were both very power hungry, and that was magnified in their -E lines... they're not good lines to use to tout "efficiency" against.
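As a quick sanity check on the channel argument, theoretical peak DRAM bandwidth scales linearly with channel count. A minimal sketch, assuming DDR4-2400 and a standard 64-bit (8-byte) bus per channel:

```python
# Theoretical peak DRAM bandwidth:
# channels x transfer rate (MT/s) x 8 bytes per 64-bit transfer.
# Real-world throughput is lower, but channel count scales it linearly,
# which is the point being made about dual vs. quad channel.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

print(f"Dual channel DDR4-2400: {peak_bandwidth_gbs(2, 2400):.1f} GB/s")  # 38.4
print(f"Quad channel DDR4-2400: {peak_bandwidth_gbs(4, 2400):.1f} GB/s")  # 76.8
```

Twice the channels means twice the headroom for bandwidth-hungry work like real-time encoding, which is exactly where a dual-channel Ryzen would feel the pinch against a quad-channel Intel HEDT part.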
The only way AMD is going to "win" over Intel is if their CPUs clock higher than Intel's, run cooler than Intel's, and have enough IPC that the higher clock speeds still provide benefit. (A 5.2GHz Sandy Bridge is beaten by a 4.7GHz Skylake; clocking that high is no benefit in that instance, and AMD needs to avoid such a scenario. If their 5GHz equals Intel's 4.8GHz, then hitting 5GHz on average against Intel's 4.8 on average is pointless; they need to average 5.1, 5.2, 5.3GHz to take the competition - see the arithmetic after this post.) And they'd need to be priced so far under Intel's competitive offerings that the majority of users who don't care about the "ultimate" in performance and are fine with "slightly second best at a huge cost reduction" are very happy. And I don't know if they're going to win over enthusiasts like that, or take on all counts.
But if they're going to compete in the enthusiast line and with a pricetag to match... it's not gonna end very well.
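To make the clock-vs-IPC point concrete, here is the arithmetic as a tiny sketch. The ~20% Skylake-over-Sandy-Bridge IPC figure below is an illustrative assumption, not a measured value:

```python
# Effective single-thread throughput ~ relative IPC x clock speed.
# The 1.20 IPC ratio for Skylake vs. Sandy Bridge is an assumption
# used for illustration, not a benchmark result.

def effective_perf(ipc_rel: float, clock_ghz: float) -> float:
    """Relative single-thread throughput: relative IPC x clock."""
    return ipc_rel * clock_ghz

sandy = effective_perf(ipc_rel=1.00, clock_ghz=5.2)  # 5.2 GHz Sandy Bridge
sky = effective_perf(ipc_rel=1.20, clock_ghz=4.7)    # 4.7 GHz Skylake

print(f"5.2 GHz Sandy Bridge: {sandy:.2f}")
print(f"4.7 GHz Skylake:      {sky:.2f}")  # ahead despite the 500 MHz deficit
```

So clock parity with Intel only matters if Ryzen's IPC is at parity too; with lower IPC it would need a clock-speed lead just to break even.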
I did mention the dual channel before, but did not expound on it, which I thank you for.
I agree with your points, but then refer you to my point on relative vs. objective wins. A relative win can mean gaining only a fraction of the market, so long as that fraction increases and enough margin is built in.
But I'm a little selfish. I dislike Intel, thoroughly, for the way they've handled the last few generations of processors. They need to be put in their place. I wanted them to absolutely lose almost their entire market from the point of Ryzen's launch onward. I want people to laugh at those who buy Intel because they wasted their money for nothing. I had hoped for this in the GPU market toward nVidia, but that didn't happen; Zen was my hope for the CPU market.
Meanwhile, they still must get over the 2018 debt hurdle to stay in the game, and catching up may have done that (even though we all wanted a win). Right now, TSMC, GloFo, and Samsung may reach 7nm before Intel over the next three years (IBM as well). Then, China's RISC chips are shrinking fast too. This means there is a chance this was a catch-up point and the race starts again in or after 2018, as 10nm is the new wall...
But AM4+ had better have quad-channel support, though. PCIe 4.0 could be added on the new CPUs for AM4+, and things could be better.
As for the debt hurdle, I think playing it safe is the worst way to go about these things. What they need is solid marketshare so they can put more R&D into their drivers and such. Ryzen is basically nearly out; more R&D into it is not going to gain much. I figure they had best make their products as good as they can, especially the unreleased and upcoming ones. Push into it heavy and reap the rewards; don't just try to live on a lifeline as long as possible, you know?
As for Intel, they've already laid out and designed at least the next two CPU archs. No huge changes are going to jump into them. If they've been planning 8% worth of IPC improvements over Skylake via Kaby Lake and its two successors, that almost certainly won't change. Which is why I want them to seriously lose marketshare. Make them actually work at a decent successor.
AM4+, if done like prior generations, can run prior CPUs, but my concern is that AM4 boards will not likely be able to use Zen+ chips. They have said all chips will use the same socket moving forward, so I may be wrong, but it needed to be said. Meanwhile, Intel guarantees the HEDT platform can support two generations. As such, there is a concern about not capturing the market of people wanting a multi-generation motherboard (granted, even Intel's boards get multiple revisions in that time, including boards revised to incorporate new features).
But quad channel, I agree, is a necessity!
As to R&D, I agree. Take, for example, what they did with Mantle. It created an open lower-level driver API, which big green laughed at. It was then adopted by Khronos as Vulkan and largely adopted by Microsoft with DX12. This, with their asynchronous compute developed for x86 and modified for use by the GPU, gave them a leg to get back into the fight, although most programmers want an AMD guru on site to optimize the code the way Nvidia does it, something AMD cannot yet afford (although they are now creating videos and seminars to teach how to optimize for their hardware). If nothing else, they are headed in the right direction, which is important.
First AMD Ryzen Review Leaked – Aggregate Performance 46% Faster Than FX-8370 with 93 Watt Power Consumption
http://wccftech.com/amd-ryzen-review-leaked/
"What is probably one of the biggest leaks of this year has just occurred. A review that was apparently published (in paper form) before the NDA lifted has leaked out from a trusted French PC hardware guru and reveals real-world performance testing of the Zen based Ryzen processor. The clocks of the Ryzen CPU used in this testing are the same as all the leaks we have seen so far, namely an all core turbo of 3.3 GHz (with a base clock of 3.15 GHz) although Lisa Su has stated in the New Horizons event that the final clocks will be 3.4 GHz+." -
2017 is literally DO or DIE for AMD; they'd better get their act together, and so far it looks good! -
What matters to us is gaming performance, and there it is only comparable with the i5-6600K. That Intel processor consumes 33W less power, which means more heat and fan noise during gaming. No thanks!