It looks like the Ryzen 3000 / Zen 2 tsunami has hit Japan...
AMD CPU market share in retail stores reaches 68.6% in Japan!!! (Lots of great information in the expanded article)
https://www.reddit.com/r/Amd/comments/citz2d/amd_cpu_market_share_in_retail_stores_reaches_686/
libranskeptic612 [S] 22 points 8 hours ago
"...The other message seems to be how fast this has happened - like ~20% to 68% in ~no time."
Pulse_163 283 points 10 hours ago
"Those anime mascots were really worth it"
BEAVER_ATTACKS 106 points 7 hours ago
"ryzen cpus come with a free anime waifu in japan"
diamondrel 67 points 6 hours ago
"I'm moving to japan"
disastorm 18 points 4 hours ago
"no joke but mine actually came with a hat and a ryzen t shirt (I'm in Japan).
Not sure if the promotion was store-specific or if amd gave out shirts to the other stores as well, but I wonder if the marketing here might be different than in other places."
Waifu, AMD...
https://www.reddit.com/r/Amd/comments/89iiwm/my_waifus_have_arrived/
-
I guess different strokes... I'd find that to be a major turn-off, and if it came bundled with something like that it would raise questions in my mind about whether I had actually made a good decision. I do realize it is a cultural thing. That's probably why we don't see it here.
Personally, I'd rather just have it overclock like a banshee on acid rather than come bundled with silly kiddo gimmick crap I have no use for. A generic black cap or a black t-shirt would be OK. Or a big black mouse pad with a logo on it would be OK. A 25-50% discount on motherboards, GPUs, water cooling parts or a free delid kit would be more useful and enticing.
Or, even better, just sell it for less with no gimmicks. Something like an OEM/system builder package. No cooler. No freebies. Just the bare CPU in a plain-looking no-frills box and a $50-100 price discount would be best for most enthusiasts in the US. I don't even care about a decal or case badge. But, I know some folks really get off on those. I always found them annoying and an extra nuisance when they come pre-applied on laptops and desktop cases. Just a physical form of inconvenient "bloatware" that has to be dealt with along with the digital bloatware. -
It's not an either-or deal - gifts or overclocking headroom - and it's not an AMD thing, it's a Japanese gift-giving thing.
The Ryzen CPUs are cheaper - a better performance value than Intel CPUs. AMD CPUs are now the preferred purchase over Intel CPUs in Japan, to the tune of 68%+ of sales going to AMD. That's the main takeaway.
The traditional bundling of gifts with products in Japan is part of their gift giving culture, and many fans of Japanese culture around the world enjoy the gift giving and receiving too.
Japan Guide - Giving Gifts
https://www.japan-guide.com/e/e2004.html
-
A 4.3GHz OC is pathetic, to say the least, and 7nm+/6nm won't change that. Gonna have to wait till 5nm and see what TSMC is capable of.
On the other hand, Intel isn't exactly enticing with its 10-core 300W CPU. Both camps have nothing super attractive... why can't we have the best of both sides? -
Seen the Silicon Lottery binning? https://siliconlottery.com/collections/matisse
4.2GHz and $810. Maybe you get 4.4 with 5/6nm?
Or an amazing 4.5GHz. See... performance will increase with more cores, as usual.
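One way to read a binned price list like Silicon Lottery's is the premium per guaranteed 100MHz over the slowest bin. A rough sketch (only the 4.2GHz / $810 pair comes from the post; the other bins and prices are hypothetical placeholders):

```python
# Hypothetical binned SKU list as (guaranteed all-core GHz, price USD).
# Only the 4.2GHz / $810 entry is taken from the thread; the rest are
# made-up figures for illustration.
bins = [(4.0, 500.0), (4.1, 620.0), (4.2, 810.0)]

base_ghz, base_price = bins[0]  # cheapest/slowest bin as the baseline
for ghz, price in bins[1:]:
    headroom_mhz = (ghz - base_ghz) * 1000
    premium_per_100mhz = (price - base_price) / (headroom_mhz / 100)
    print(f"{ghz:.1f}GHz bin: ${premium_per_100mhz:.0f} per guaranteed 100MHz")
```

With numbers like these the top bin costs noticeably more per guaranteed MHz, which is the usual shape of binned pricing.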
-
4.4 will not happen with 6nm because it's pretty much the same as 7nm. We might see 4.5 on 5nm with acceptable voltage. If I'm gonna get 4.5GHz, that IPC better be damn impressive, something like another 10% over Zen 2.
-
These are clearly still not very good products for overclocking enthusiasts. They are for people that want something that costs less and delivers good stock performance. I have a hard time getting excited about that as well. Not my cup of tea and not a good fit for my hobby. But, it would be OK for something that isn't used for that, like a work computer or something just used for casual entertainment, like playing games.
-
Again, why do raw GHz numbers matter? AMD GHz != Intel GHz. Different microarchitectures, different processes. For that matter, Ryzen+ GHz != Ryzen 2 GHz.
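The apples-and-oranges argument can be put as a one-line toy model: delivered performance is roughly IPC times clock, so a chip with higher IPC can win at a lower clock and raw GHz comparisons across architectures tell you little. A minimal sketch (the IPC figures below are made-up placeholders, not measured values):

```python
# Rough model: single-thread throughput ~ IPC * clock (GHz).
# The IPC values here are hypothetical placeholders for illustration only.
def effective_perf(ipc: float, clock_ghz: float) -> float:
    """Instructions retired per nanosecond, in this toy model."""
    return ipc * clock_ghz

chip_a = effective_perf(ipc=1.3, clock_ghz=4.3)  # higher-IPC part, lower clock
chip_b = effective_perf(ipc=1.2, clock_ghz=4.6)  # lower-IPC part, higher clock

# Despite a 300MHz clock deficit, chip A comes out ahead in this model.
assert chip_a > chip_b
print(f"A: {chip_a:.2f}, B: {chip_b:.2f}")
```

Real workloads complicate this (memory latency, caches, boost behavior), but it shows why "4.3GHz vs 5.0GHz" on its own settles nothing.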
-
I think it varies by individual. If you don't care about that, then it doesn't matter at all. But, when your hobby is overclocking, then it matters a ton and a product that doesn't do that well isn't a good product to use. It's the wrong tool for the job so to speak. When everyone gets the same cookie-cutter experience because the product isn't good at overclocking and the playing field is mostly leveled apart from minor tweaks here and there, then it sucks if that's your hobby. There is not even much point in the product being called "unlocked" when that is the case. Everyone getting a trophy for participation always sucks no matter what the situation is.
-
Exactly. The word "unlocked", and AMD's talk of "overclocking" their chips, doesn't fit here.
-
The real focus is not necessarily the clock speed number as much as what you can do to put a great deal of distance between yourself and others that own the same CPU by overclocking it. When you are limited in your ability to excel against others that own the same CPU due to the clock speeds not having much wiggle room, then that is what sucks and makes the product boring to own. With unlocked Intel CPUs you can put a massive difference between yourself and others that own the same CPU by overclocking the snot out of it. That is what makes owning them fun when your hobby is overclocking.
-
Then why not clock the Ryzen CPU at 1GHz and call it a day, because GHz don't matter, right?
AMD already overclocks it to the max right out of the box, pretty much. No more joy there.
-
And, when that is your hobby it is about the only thing you care about. Without that, there's no hobby left and nothing left for you to care about. You join the ranks of the mediocre click-and-run masses with a cookie cutter PC. It's OK when you're OK with that, but it's not OK when you're not content with status quo.
-
I'm not so sure Intel will give the middle finger to overclocking the way AMD does. And Intel still makes their own chips, unlike AMD, and will keep doing so in the coming years. How high they will OC we still don't know. The new architecture and chips aren't out yet. Declaring there's no OC now isn't the way to put it.
-
electrosoft Perpetualist Matrixist
AMD > Intel Stock
Intel > AMD OC'd
AMD >= Intel IPC
I have always OC'd my systems since my PowerMac 7100/66 @ Drexel in the 90's. It's just something I do to eke out a good chunk of performance. I get a thrill tweaking my systems. I get giddy when temps plummet. I still remember the sheer joy of overclocking my Celeron 300A to 450MHz and just going, "sheesh!"
But I do realize, just looking at the mass of family, friends, and gamer guildies I know, that I am the massive exception, not the rule; Joe Basic User is the rule, and the market reflects that, unfortunately for myself and others who do enjoy poking and prodding around in their systems.
Most users just want to unbox, load their software and go. If they encounter a hiccup, they call customer support to sort it out.
This is the reality of modern consumer computing.
With Intel falling behind a bit, as an OC'er, my biggest fear is Intel suddenly pushing their chips to the wall from now on to stay ahead of AMD and just like that their enthusiast OC headroom is greatly diminished too.
-
I'm sure most of us got into computers either because it's a hobby, or because we were tired of dealing with garbage customer support, went DIY, and self-taught. For me it's a mix of the latter and being tired of slow computers.
As for OC, it wouldn't really matter, tbh. If Intel were doing something similar to what AMD does and their chips could do 5.3GHz, they'd have SKUs at 5 or even 5.1GHz. The 9900K is a good example of that.
All that matters is the silicon and IPC; we want a 4.8-5.2GHz variant with very good IPC versus the current standard, like Coffee Lake or Zen 2. I mean, Zen 2 already beats Coffee Lake in terms of in-core IPC; it's just the CCX latency, as well as the added memory latency, that kills off the advantage it has over Intel. -
The Intel 9900K, even overclocked, is in general not faster than the 3900X in most applications; maybe in gaming. In most applications AMD is faster, but there are a select few that Intel is still slightly better at.
Most overclockers, such as myself, did so because we just wanted more power. Now that I can have that with Zen 2, I do not miss having to tweak the system. Just make it work, and make it fast.... -
Of course beating 50% more cores is a hard task. How would it be with equal cores?
-
Don't know but they are not and with further CPU releases the difference will be even greater.
-
Yeah, AMD had no choice other than to put out 16 cores later this year. As usual. They had to do something similar to compete when Intel pushed 4/6 cores as well. They will struggle with an equal number of cores. That time will come.
-
I agree. I have always been in a niche with most things. I derive satisfaction from that, and from distinguishing myself through the use of personal effort, knowledge, trial and error, creativity and skill. It doesn't matter to me what the masses think or do because I am in a niche and only care about me and others like me. (Of course that statement is made in the context of the hobby, not that I don't care about other people in more holistic terms. That's clearly not the case and obvious to those that know me.)
The thing with overclocking is winning. As is the case with most sports, winning doesn't mean that you are undefeated. Competition includes different categories. You compete against like hardware (identical part, different part with same core/thread count, etc.) for high scores and against dissimilar hardware for overall performance high scores.
If AMD CPUs all run about the same, or very close to it, with very little headroom to work with, and most max out only a little better than stock, it is kind of hard to distinguish yourself in that niche. It will be more difficult to win against identical hardware and more difficult to win against similar hardware that has much greater capacity for overclocking. It's nice when you can get almost as much performance at a lower clock speed due to good IPC and owning efficient hardware, but that doesn't equal a win unless you have a score that places higher within that particular category.
And, things start to get boring and frustrating real fast when there is no headroom to work with and the changes take tremendous effort with a small return. That also happens once I max out what I own at any given time, but it takes a while to get there when you have more headroom to play with, and the fun might last a little bit longer before you hit the wall. And, when you do max it out it is usually substantially more powerful than what you started with. Then it's time to buy new or different hardware to push to its functional limits.
I just find it difficult to get excited about something that is not going to respond well to overclocking efforts. That doesn't mean the product is no good, or that it is weak or impotent. It only means that it is not well suited to what I want to get out of owning it. And, if I am not going to get what I want from it, at that point there is no good rationale for me to entertain the idea of purchasing it.
The day may come that I lose interest in what is on offer from all of them and find another hobby. If Intel doesn't have the ability or motivation to stay ahead of AMD indefinitely, and AMD only has the capacity and/or motivation to mass-produce cookie-cutter products with minimal overclocking headroom, the writing is on the wall. That hobby is going to die and I won't want to spend my money on it any more. So, both brands will ultimately be losers alongside the rest of us that care about overclocking. I won't purchase high-end parts for playing games, or office productivity, web browsing and email. I'll buy whatever can get the job done in an ordinary way at the lowest cost possible. -
Single-core / multi-core (same count of cores), AMD is faster against a same-frequency 9900K, so that settles it.
As AMD is faster at a lower frequency than Intel at a higher frequency as well, Intel at top OC and much higher power usage is currently able to be a sliver ahead in some tests this generation. That's down to how games are coded today.
The next generation of PC / console games, optimized for AMD / Navi / Vega, will leave Intel behind.
The power draw and thermals required for Intel to barely catch up, match, or barely stay ahead - even with the price drop - aren't worth it.
It's time to hop off that 14nm dinosaur.
-
Felix_Argyle Notebook Consultant
AMD does not need 50% more cores to beat 9900k, just take a look at this:
https://www.techpowerup.com/review/amd-ryzen-7-3700x/6.html
This is the 3700x, with the same number of cores as the 9900k.
Of course, in other programs results will be different but this is a valid example that matters to people who use those 3D rendering programs. -
It's nice that AMD is finally able to compete with Intel stock versus stock. It has been many years since that has been true. (Like since first generation Athlon days, so as long or longer than some members of this forum have been alive or old enough to know how to use a computer.) I don't think anyone here is arguing that their products are bad in any way. Clearly, that is not the case. We're only going back and forth talking about overclocking capabilities. Dismissing that is wrong and if the industry stops caring about that it will ultimately bring harm to everyone. Homogenization and making everything common and ordinary always has disastrous results in the long run. (Look no further than the notebook rubbish that is available to choose from for an example of what it looks like to be able to have whatever you want as long as it is a piece of garbage.)
And, assuming they remain financially solvent it will only be a matter of time until Intel comes back with something stronger. If they do not that will also be bad for everyone. They have never been good sportsmen that play nice, and we can hope that they never will. They'll be out for blood, and we want AMD to do no less. We definitely don't want AMD to take Intel's place as the only good option. We've had way too much of that with Intel and NVIDIA for way too long already. We also need AMD to step up their game on high end GPUs. They're just barely playing footsies on the cheap stuff right now. We need them to open a can of whoop-ass on NVIDIA on the high end enthusiast products like they have Intel on CPUs in order for things to get interesting in GPUs as well. -
Felix_Argyle Notebook Consultant
Sorry, I missed the "overclocking" part. I guess Intel is still faster when counting in overclocking, but personally I don't care about that anymore. My days of overclocking a Celeron 300A on an Abit BH6 mobo are long gone; it's just not fun to do anymore, so all I care about is stock CPU performance ;-)
Yes, I agree. I would definitely like to see more GPU options and more universal feature implementation for them. Right now, if I want to play a game with ray tracing effects like Metro or the upcoming Cyberpunk 2077, my only option is Nvidia GPUs, since game developers use Nvidia's proprietary method for ray tracing because Nvidia was first to integrate hardware for that and provide an appropriate API to developers. AMD should really step it up and encourage developers to use non-proprietary APIs for that, so you can make the game look exactly the same on both AMD and Nvidia hardware. -
Actually, 7nm+ takes a larger redesign, whereas 6nm has 5 instead of 4 layers of EUV and takes direct ports from 7nm, which 7nm+ does not. In many ways, that makes 6nm cheaper to switch to than 7nm+, but it came AFTER AMD was already planning for 7nm+. Then 5nm takes another redesign, although less of one coming from 7nm or 6nm than from 7nm+, if I understood the Anand or other articles on the topic correctly.
Further, as the nodes get smaller from here, there is NO reason to think frequency will EVER increase again, because thermal densities make it unlikely. The only way around that is to make the HPC libraries considerably less dense than the node allows, which harms transistor-count scaling, trading away the benefits of a smaller node (more transistors for IPC gains and going wider on components) just to chase frequency, which will likely keep overall performance from scaling higher.
Take Ice Lake-U as an example. Intel went slow and wide for a reason. You can't even do over 4GHz, yet cutting off 1GHz of speed didn't matter, as performance is roughly the same between the generations.
So give up hope on frequency and start looking for better indicators as times are a changing.
So, your complaint is that instead of getting 300-500MHz difference between chips, it is a 200MHz spread? Also, the speeds listed on SL are at around 1.3V, meaning that it can go faster. There is a lot more to this. Also, read comment to Ole for why the nonsense of more speed isn't likely.
That makes NO sense! There is no reason to think the frequency will go up from here. Instead, it will likely continue to decrease, at least until you get to graphene or certain 2D materials.
Bull. The LN2 crowd actually said they were fun to bench. If anything, it isn't fun if you are not chilling or sub-zero. There is a difference.
These chips scale with temp alone, meaning that they could be quite fun, and there will be variance, to a degree, depending on how you cool your system.
It also pushes for more refinement in other areas, like mem, OS modding, etc. Overclocking has NEVER just been about core clocks. That is simply the easiest, low hanging fruit and you know it! You have spent time overclocking ram, on modding the OS and finding peak settings, testing drivers for the best performance, etc. None of that changes. Same with honing the cooling system.
So, it is NOT quite as simplistic as you make it sound.
Now, with this gen, there is the CB and CBB for some chips, which sucks, and the IF has been exposed to not scale well with cold, in fact causing the SOC to need a huge voltage to allow scaling past -80C. Many Intel chips this gen have a CB around -100C, also, although this varies chip to chip, etc.
Just sounds like to have your fun, you may need to go from chilled to single stage!
See last comment. If you cannot stand out with the same materials as everyone else, instead needing a better tool to make you stand out, then you are the reason for the death of overclocking, not the tool!
I have been working with the 1950X for a while. Even with the first gen Ryzen, the ability to stand out was there, if you worked for it. But, it took more WORK to do it. If you cannot accomplish it or cry about it, who is the real problem there, the tool or yourself?
It's like giving everyone a Bugatti, then saying there is no way to stand out. Guess what? When an experienced OCer uses that chip, there will be ways to stand out, just like a professional racer in the Bugatti versus regular drivers. The cream ALWAYS rises. Kneecapping others to feel better IS a participation trophy, just like Intel leaving so much room on the table.
This is a false analogy. If you, with your experience, cannot find the daylight between you and a novice, maybe you need self-reflection then, rather than blaming the hardware. You still have to get the mem right, you still have to optimize the OS, find the right drivers, and even mod the cooling system to allow better scaling, or go to chilled water, which you have, or single stage, or more exotic cooling.
There is more to overclocking than what you are suggesting.
This is why I'm explaining exactly WHY you are incorrect to see it this way, and also how to go further with the change that is happening.
Is it really a middle finger? Instead, maybe you should make suggestions on HOW they can give more tools to the end users than currently available to allow for better tweaking of the hardware to give more performance.
Intel gave the middle finger to the OC community long ago with the need for pay to play. That easily was hurting the community as much as this.
Who cares? Intel put out the Linux testing to optimize the power curve for a software overclock not too long ago. That means Intel will likely design a hardware version before too long. That means you must adapt or die. In other words, learn how to differentiate your performance through means other than more clocks, given the silicon lottery and electron bleed.
I disagree COMPLETELY! Give me double the IPC of a current Zen 2 chip at 2.5GHz and I'll take that all day over a 5.2GHz 9900K! It would crush it, assuming software optimizations.
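The arithmetic behind that claim works under a rough perf ~ IPC times clock model. A quick sketch (the 1.1 relative-IPC figure is a hypothetical stand-in for Zen 2's claimed in-core IPC lead, not a measurement):

```python
# Toy throughput model: perf ~ relative_IPC * clock_GHz.
# Relative IPC of 1.0 = the 5.2GHz chip; the 1.1 figure below is a
# hypothetical stand-in for the claimed Zen 2 in-core IPC lead.
baseline = 1.0 * 5.2             # high-clock chip
doubled_ipc = (2 * 1.1) * 2.5    # doubled-IPC design at only 2.5GHz

# Even at less than half the clock, the doubled-IPC part comes out ahead
# in this model; the margin grows with any IPC lead above ~4%.
assert doubled_ipc > baseline
print(f"{doubled_ipc:.1f} vs {baseline:.1f}")
```

Whether it "crushes" depends entirely on how big the IPC lead really is and on software actually exploiting the wider design, which is the hedge in the original post.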
The latency is an issue, which is why I want AMD to jump to an active interposer and move the I/O components to that interposer, along with 16-32GB HBM2/3 on the chip, with 1Tbps bandwidth, while having the lower latency of the change to an active interposer. But that is at least a year off, if not two or three.
And Intel has nothing to respond with. So, supposing the 3900X is a sign of what is to come with threadripper, with it showing the bandwidth was NOT the issue on the 2990WX, rather stale data, scheduler issues, etc., there is a good chance the new 32-core threadripper at $1700, give or take, will even compete or beat Intel's $3000 28-core, while the 64-core Threadripper will have NO INTEL ANALOG, meaning Intel will have NOTHING for the high-end desktop for multicore performance crown. NOTHING!
Intel is working on doing the same thing, using chiplets, although they plan on EMIB rather than IF to start (they really should just go active interposer, but likely costs even for them).
Except right now, Intel wins on IPC on ONLY their mobile parts, then next year with IPC on the Ice Lake server parts. Desktop was abandoned, based on information known to date.
Intel will NOT have high frequency likely from 10nm and 7nm. But it will have IPC gains. So, there are going to be changes, even on the Intel side.
Are you afraid you might have to work harder? Because that is what it sounds like! The tools do not make the man, and when it comes to overclocking, you, as are many in this forum, are a hell of a man, so to speak. As such, this talk seems contradictory to me.
My only fear on Intel is that they will fail to get 7nm operational in a timely manner. That will cause it to go from the big three fabs to the big two. GF already bowed out of cutting edge at 7nm. That left TSMC, Samsung, and Intel. If Intel stumbles on 7nm like it did 10nm, they will have no choice but to fab CPUs at either TSMC or at Samsung. THAT is my only worry.
Intel's GPUs should be alright, and their CPU uarch to go wide and increase IPC, even with the process screwing the frequency, is fine. Intel has two more gens of uarch pretty much finished while it was waiting for EUV. Deciding to skip EUV for 10nm was a mistake, so was not backporting the design from 10nm.
But, Intel will be just fine.
Meanwhile, because Intel jumped in the game, Nvidia is now going to push ARM with Nvidia as an alternative. So, that could get interesting. -
That isn't EXACTLY true. Only a couple games use Nvidia's implementation, whereas the overhauled game engines actually use MS DXR, which is a more general implementation. Those game engines mean when Intel and AMD implement raytracing, it will not just be Nvidia exclusive still. That is what Nvidia would like you to think, but isn't true in the slightest.
-
That's OK. Not everyone does. And, some have never done it at all, don't know how and don't know what they're missing. Some believe it is a terrible thing to do and have listened to too many lies about how bad it is. If you're not into it now, stay away because (as you might remember from the past) it is highly addictive and once the hook is set it is really hard to shake loose from the overclocking bug. It's almost the only thing left I care about any more now when it comes to computers.
-
I didn't say clock rate is irrelevant. I merely said that comparing it across architectures is irrelevant. Apples and oranges.
-
As you are already aware by virtue of your astute power of observation, I have worked for it... hard, and for a long time. I haven't seen any truly massive gains with Ryzen. Sure, you can stand out from others with the same hardware. But, none that I have seen look like massive gains. I never said it can't be done, but no... I don't want to have to put forth a ton of effort for a minor gain.
Yes and no. Working harder is fine when the harvest is bountiful. I don't mind that at all. I'm just getting tired of everything being a huge pain in the butt with only tiny gains to show for the effort. That's why I have ditched laptops. Too much effort and too little reward compared to the amount of time and effort required. Now, putting forth a ton of effort to get a ton of gain? Sure. I like that. Who doesn't like a nice return on an investment of time or money? It's hard to get either one now. Dividends are too small in both arenas.
If you haven't checked already, you would notice that I haven't benched my HEDT setup in like 2 months. And, for exactly the same reason I am not excited about an AMD CPU. I've gathered all of the low-hanging fruit and pummeled the crap out of the vast majority of my competitors that are not using LN2 or DICE cooling. But, I've reached the end of the road based on the resources at my immediate disposal, and I'm not going to spend a lot of time and energy, or more money, competing against myself for the sake of eking out 5 points here, 10 points there. I don't enjoy it when the benefits are smaller and less meaningful, and if I don't enjoy it, why bother. But, I did work my butt off to get to this point and loved every minute of it.
All this means is that I don't want to do that. It doesn't mean that I can't or that somebody else can't. Only that I am not willing to fuss with it at this time. And, I probably won't want to later if the gains are small. And, if the gains are going to be really small after a lot of effort from here on out with overclocking AMD, Intel, NVIDIA or anything else, I am probably going to find a new hobby. Just because that's how I roll. -
I remember that Celeron 300A. That was amusing.
But "my biggest fear is Intel suddenly pushing their chips to the wall from now on to stay ahead of AMD" is a bit much, and I do have a problem with it. You're asking them to deliberately sandbag their chips for everyone else to leave you more headroom. These days, with advances in power and thermal management, chip vendors can do a lot more to maximize the performance of their chips, which is really, really good for everyone who's trying to get work done with their computer (or, for that matter, game). That's not a matter of "consumer" computing. That's something server vendors do too -- I worked in the systems organization at Sun and Oracle for a good many years, and while I was on the software side of the platform house, I saw all the thermal and power management work that was done to get the best performance out of the system. Which is something that enterprise customers care about. A lot.
From my standpoint as a user, the more that Intel and AMD beat each other up over performance, the better off I am. I really don't want them deliberately leaving performance on the table so that a few aficionados can claim bragging rights. I admit I wasn't happy about Intel locking multipliers and such to deliberately limit my performance for the sake of market segmentation, but that's not what's happening here. Quite the contrary. AMD in particular is saying "we're not going to stop you, but we want to get our customers as much performance as we can".
There is still room to OC the Ryzen 2 processors. You just have to try harder. What was that about someone achieving 5.2 GHz all-core on a prototype Ryzen 3950X? So it can be done; you just have to be more creative about it (and be willing to spend more money on it).
But as for Intel and AMD pushing each other to achieve better stock performance: bring it on, I say. -
Felix_Argyle Notebook Consultant
Yes, I know there is a more generic way to use ray tracing, but game developers like CDPR are not using it, and AMD is not doing enough to convince them to. That's what I meant. Perhaps AMD should throw more money at bribing developers, or maybe they should also add dedicated hardware to their GPUs for better ray tracing performance so developers would want to support it, I don't know. All I know is they aren't doing enough to convince developers to use more generic methods. And it's more than just CDPR - I believe the next Watch Dogs game will also have Nvidia's ray tracing, and perhaps some other games will. I will gladly switch to AMD GPUs as soon as I see support for those effects from the game developers which matter to me (I don't really care if EA games will have support for it, but for other games I do). -
I highlighted "more difficult". Winning a championship is difficult, yes? Just means that you have to up your own game.
-
Yes, that is true. But, there is a wildcard. In addition to upping your own game, you also have to hope that you don't get screwed too hard in the silicon lottery, LOL. No amount of effort, knowledge or talent is going to fix crappy silicon. LN2 can cover a multitude of sins, but the short straw will show itself there as well. I've been pretty lucky in this area, but I don't know how much longer I can count on that. I did have a couple of really poor quality 8700Ks, but the third one was the charm. Considering how many years I've been at it, probably luckier than most.
-
Sorry, but that dog just won't hunt.
Ever watch a bike race, or a motor race, or a footrace where the winner wins by inches by dint of just working harder? Winning a sprint by inches after 100 miles of hard racing is exactly that -- working hard, really hard, for just a tiny bit of performance. But that vanishingly little increment of performance -- 1 foot over 100 miles works out to 0.0002%, or 10 KHz out of 5 GHz -- is the difference between winning and...not. As @ajc9988 said above, find some other way to tune your system.
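For anyone checking it, the arithmetic in that analogy holds up:

```python
# Checking the sprint-finish arithmetic from the post:
# a 1-foot winning margin over a 100-mile race, scaled to a 5GHz clock.
FEET_PER_MILE = 5280
race_feet = 100 * FEET_PER_MILE      # 528,000 feet
margin_fraction = 1 / race_feet      # winning margin as a fraction of the race

# ~0.0002% of the race distance, as the post says.
print(f"{margin_fraction * 100:.5f}%")

# Scaled to a 5GHz clock, that margin is on the order of 10 kHz.
print(f"{margin_fraction * 5e9:.0f} Hz")
```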
That has always been true. Sorry, no out there. -
Nah, it's definitely worse now than I have ever seen it before. By far. It's a lot more common to end up with a piece-of-crap CPU or GPU. An unfortunate sign of the times. Computers are only one example. Quality is something that some consumers still value, but most manufacturers seem to care less and less about.
Then it's time to shoot the dog and stop hunting.
Sorry. Not interested in that. I guess I am too spoiled by years of victories that paid bigger dividends. I don't want a small victory... too boring and not worth the effort. And, I am not talking about competition. I am talking about huge amounts of effort and toil for small gains. Head-to-head or solo... nope. I'll watch paint dry first. Equally exciting.
We can use fishing as an example as well. I love fishing. For about 15 minutes. Unless I am catching lots and lots of fish. Then I can do it all day and never get tired of it. Catching fish is fun. But, fishing isn't (to me). -
Yeah, I think you are, in part, placing the blame on AMD when it was Nvidia's lockdowns that took a lot of your fun away. I was never much of a GPU OCer; for me, too much of it came down to the card's silicon and cooling. I've also gotten bad GPUs more often than bad CPUs.
But I get that it's a lot of toil for very little.
Unreal and a couple of other engines already do hybrid ray tracing through DXR. So, the only question is whether the underlying engines adopt DXR or not, which many likely will.
Yeah, mem OCing can be paint drying, which doesn't bother me, so long as I'm getting slightly intoxicated while doing it.
On the point of quality, companies figured out that if you prevent people from repairing their products, or build them to be disposable, then people buy more. You then price certain customers out of the market for the products that can last, forcing people to buy more and more out of greed. A huge waste of resources overall, if you ask me.
-
I've never been burned by a bad AMD CPU and I have nothing against them. I switched to Athlon when Pentiums sucked. And, I switched back to Intel when they could no longer keep pace with the Intel Core processors. It was (and always will be) all about me, not about them. My opinions are driven mostly by selfish interests and my personal observations and experiences. I view Ryzen as being a fantastic high quality product that serves a definite need in a market that has been dominated by Intel. Not because Intel was terrible, but because AMD had nothing to offer for such a long time. Thank God we had Intel for all the years that AMD CPUs were slow and low-budget junk. Otherwise, everything would have been garbage. Even though it might not seem like it, I am super happy that Ryzen turned out so great, and I am super happy that Intel has very serious competition. It's way past due.
But, I love overclocking (no surprise here) and they just don't do great with that. At least not yet. But, they're making progress and there's still hope that they will. Their CPUs (Athlon and FX) used to overclock fantastically, they just didn't have any horsepower to go along with the high RPMs. So, if they can get back some of the CPU overclocking mojo they used to be famous for, they're going to be a terrible force to be reckoned with. I hope they do exactly that.
I have been burned by lots of really horrible AMD mobile GPUs experiences, so my trust level has been seriously damaged with respect to their GPUs. I am speaking here of not just poor overclocking, but a string of too many defective parts that really turned me against them as a solution for GPUs. I was an ATI fanboy once upon a time and probably overreacted to their demonstrations of incompetence due to an abnormal sense of betrayal. In contrast, I have had mostly good experiences with NVIDIA GPUs. But, they've done a lot of stuff to make me want to hate them, too. With NVIDIA the contempt is more about their constant manipulation, greed and shifty behavior than selling broken rubbish. For every awesome thing they do, there are probably one or two sucky things they do.
Amen to that. It's pretty sad.
-
The interposer and HBM may solve some latency issues; we'll have to see it in real practice. Or maybe AMD can just have 2-4GB of L4 all together acting like super fast RAM. They may have space to fit it with another shrink.
As for frequency, TSMC will probably stay on 5nm for a while, not just 2 years but maybe 3 or more, so we might see it mature like Intel's 14nm and get some frequency boost. I'd welcome a generous IPC boost on Zen 4/5 at 4.8GHz.
Or give me 3x the IPC of Zen 2 at 4GHz with 32 cores and I'm set.
It is because, even with different architectures, AMD and Intel currently have comparable IPC, so frequency does matter, especially in single-threaded workloads.
-
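That's just the usual first-order model, performance ≈ IPC × clock. A tiny sketch with made-up illustrative numbers (not measured IPC figures for any real chip):

```python
def single_thread_perf(ipc, ghz):
    """First-order single-thread throughput, in billions of instructions/s."""
    return ipc * ghz

# With comparable IPC, clock decides the single-threaded race...
chip_a = single_thread_perf(ipc=1.00, ghz=4.3)   # 4.3
chip_b = single_thread_perf(ipc=1.00, ghz=5.0)   # 5.0

# ...and a ~16% IPC edge at the lower clock roughly cancels the deficit.
chip_c = single_thread_perf(ipc=1.16, ghz=4.3)   # ~4.99
```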
Speaking of clock speeds... and, how the silicon lottery is getting worse...
https://shar.es/aX0Jup
Not all cores in AMD's Ryzen 3000-series processors can reach the boost frequency.
3x the IPC (300% faster) is not going to happen anytime soon. TBH, I would not want that; it would make Intel's offerings totally irrelevant and you would have a real hard time getting hardware.
-
To put the cost of HBM into perspective: if HBM is $12 per GB, then 16GB in 4 stacks of HBM2, enough to get 1TBps, would cost around $200 extra, but that is a significant jump in bandwidth (roughly 20x dual channel, 10x quad channel, and 6-7x octo-channel). You then just have to keep that memory fed. For 32GB in 8GB stacks, you are looking at something like $500 to have it integrated, although both of those options are limited to server and HEDT. For mainstream, you could likely only fit 1 stack of HBM2 on the I/O die (the patent they have is for HBM on a TEC (Peltier) on top of logic, which likely means the I/O die rather than the core chiplets); one stack is around 256GBps of bandwidth, still higher than 8-channel DDR4. Just for more info on it.
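The arithmetic behind those figures, roughly (a sketch; the $12/GB price and 256 GB/s per-stack bandwidth are the rough numbers above, and the per-channel DDR4 bandwidth of 25.6 GB/s assumes DDR4-3200):

```python
HBM2_USD_PER_GB = 12          # rough street price per GB (assumption)
HBM2_GBPS_PER_STACK = 256     # approximate bandwidth per HBM2 stack
DDR4_GBPS_PER_CHANNEL = 25.6  # assumed DDR4-3200, per channel

cost_16gb = 16 * HBM2_USD_PER_GB        # $192, i.e. "around $200 extra"
bw_4_stacks = 4 * HBM2_GBPS_PER_STACK   # 1024 GB/s, i.e. ~1 TB/s

print(bw_4_stacks / (2 * DDR4_GBPS_PER_CHANNEL))  # 20.0x dual channel
print(bw_4_stacks / (4 * DDR4_GBPS_PER_CHANNEL))  # 10.0x quad channel
print(bw_4_stacks / (8 * DDR4_GBPS_PER_CHANNEL))  # 5.0x; the 6-7x figure
                                                  # assumes slower DDR4
```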
For TSMC, you have a point, as 5nm is going into volume, but according to current roadmaps, we shouldn't have 3nm volume until 2022-23, depending. So, from 2020 for AMD until around 2023, supposing they stay with TSMC for the duration rather than jumping to Samsung 3nm, targeted for 2021, that is enough time to refine the process a fair amount. But we have no idea if 5nm will be at the same frequencies or not. Time will tell.
Thanks for the morning reading. I haven't flipped through it yet, but wanted to mention that, for yields, it is around 70% for TSMC 7nm. 7nm+ has the same to slightly better yields. Even Intel is having yield issues on 10nm, and had similar yield issues when 14nm started. As we go smaller, yields will get worse, which is why everyone has to go chiplets.
They're similar (on average) but not identical. So direct clock-for-clock comparison is not the full story by any means.
-
But, being clock limited is sad. Even more sad if you enjoy overclocking. I don't like making excuses for missed hardware opportunities, and dismissing the low clock speeds on the basis that performance is similar at a lower clock speed seems like a lame excuse to me. If they are similar at lower clocks, just imagine how much better they could potentially be if they were not shackled by low clock speeds. We should demand more than we are getting from all of them... Intel, NVIDIA and AMD.
-
Tom's HW seems to be biased towards Intel, and they don't seem to know that Intel BGA is garbage that rarely boosts to its advertised clocks and instead likes to get stuck at 400-800MHz from the frequency bug when it hits 100C.
I was initially skeptical of Ryzen and wasn't impressed by their marketing. You know, judal57 can't be swayed very easily and he likes Intel very much, but he switched, and I was totally surprised that his CPU can only boost to 4.3GHz yet his bench scores surpass Intel's offerings at stock or with aftermarket air coolers. For me, an achievable turbo helps since my ambient temps are insane!
-
Equal, as with the mobile chips. From your previous link.
As well. As I said in my previous post... a brand new 8-core 7nm Ryzen can't compete with an equal 8-core 9900K if you push them a bit. That's sad. And that's the reason AMD needs to push more cores on their new arch. Same as before!
-
With smaller node sizes, it will be more difficult to actually overclock hardware because of intrinsic limitations you hit at such small nodes. The transistors are very close to each other, and the heat generated can be problematic for the electronics (at least with current materials).
On top of that, yields on early nodes aren't exactly great, so you will reach a limit to how far you will be able to push the clocks up.
And besides, AMD already implemented a host of options to extract most of what it can from the existing 7nm.
I wouldn't expect higher clocks until 7nm+ (EUV) Zen 3 is out... and even then, the boost in clocks would probably be moderate at best.
Bear in mind that Intel only achieved such high clocks because they refined 14nm so many times it's become proverbial child's play... and overclocking is better there because the transistor density is not as high as on TSMC's 7nm process.
Bottom line is, even if/when Intel finally hits 10nm and below (7nm), I wouldn't expect their clocks to be really high.
This is probably what will force software to finally shift a lot more to multi-core, and hw developers to increase speeds via IPC and some other method within the uArch of their choosing.
All right: let's look at it from another perspective, to demonstrate just how arbitrary clock-for-clock comparisons really are: engine redline.
The Mazda RX-8 with 6-speed stick redlines at 9000 RPM and it generates its peak HP (232) at 8250 RPM. The Chevrolet Corvette ZR1 redlines at 6500 RPM and generates 755 HP at 6400 RPM. So does that make the RX-8 a higher performance car? Well, not quite; the RX-8 does 0-60 in 6.2 seconds; the 'Vette in about 3.0 seconds.
The point being, you simply can't compare RPMs across different engines. Wouldn't it be nice if you could open the ZR1 up to 9000 RPM? Well, maybe... but you'd likely blow the engine if you could even get it there. Meanwhile, at a "sedate" 6500 RPM the 'Vette is leaving the RX-8 in its dust... and not just briefly, considering how far down the road the ZR1 will be.
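To make the analogy concrete: horsepower = torque × RPM / 5252, so peak RPM by itself says nothing about performance. Back-solving torque from the quoted figures (a sketch; 5252 is the standard constant for lb-ft and RPM):

```python
def torque_lbft(hp, rpm):
    """Back out torque from horsepower: HP = torque * RPM / 5252."""
    return hp * 5252 / rpm

rx8_torque = torque_lbft(232, 8250)  # ~148 lb-ft at peak power
zr1_torque = torque_lbft(755, 6400)  # ~620 lb-ft at peak power

# Same lesson as CPU clocks: the lower-revving engine wins because each
# revolution does roughly four times the work.
```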
By the same token, you can't compare clock speeds across different architectures. Sure, if you could clock the 3900X at 5 GHz it would probably be even faster. But the same goes for the 9900X at 6 GHz. Or the 3900X at 6 GHz. Or whatever.
The Ryzen chips aren't "held back" by their clock rate any more than the 9900X is. They're designed to run at certain clocks, and those cannot be meaningfully compared against each other (any time I see a benchmark that equalizes clocks in the name of "fairness" or "comparing the same thing", I know it's worthless, because the people running it don't have a clue how this works).
By the measure you're using, AMD could spec the Ryzen 2 chips at 3 GHz, you'd "overclock" them to 4.2 or whatever and declare an incredible victory, when in reality everyone else would be leaving an enormous amount of performance unused. Any safe headroom above the rated clocks just means the vendor left performance on the table.
AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.