The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    First, what that shows is consistency. AMD's ability to reach a specific multiplier is very notable in that 67% hit 4.0 and 20% hit 4.1, at 1.405V and 1.44V respectively. Second, AMD does not leave performance on the table the way Intel does. For a given thermal envelope, Intel clocks all processors WAY under their top frequency. With the 7700K, 57% reach 5.0, 28% reach 5.1, and 7% reach 5.2, yet the base clock is 4.2 and the boost clock is 4.5. So Intel gives the illusion of being better in the way you mention, but what is really going on is that they purposefully leave performance on the table just so you can slap the CPU in and go. They could have shipped a 4.5 base and a 4.8-4.9 boost clock, but chose not to (especially with how hot it runs). The 6700K is similar in that regard. If everyone, with simple tweaking, can extract that performance, does it matter that the company removed that gap for all users? It is the minor tweaks that separate the top OCers anyways. Yes, it is harder to stand out in the group, but those that can shall do so anyways! That is one way to give a marketing illusion and a little satisfaction for those that can separate themselves from the chaff. So, I do not consider that a bad thing. I consider what Intel is doing as overcharging and robbing consumers that do not OC of LOTS of performance!
     
    triturbo and Mr. Fox like this.
  2. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
     
    ajc9988 likes this.
  3. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    That makes good sense as a rational response. Had Intel taken the opposite approach and not left performance on the table for those able and willing to overclock to get everything they paid for, AMD would have needed to work much harder to be relevant as well. Intel's laziness is essentially backfiring in this example, because many people are comparing stock performance and Intel did not give us their best effort. What we don't know yet is whether AMD pulled an Intel on us and did the same thing and we have not discovered it, or whether the overclocking headroom only seems to be missing due to crappy drivers and firmware. Too much is unknown right now. It's too new to predict how things might improve in the next 60 to 90 days.

    The Green Goblin pulls the same crap as Intel as far as holding out on us is concerned. They can give us a "new GPU" by changing hex code in their magic cancer vBIOS.
     
    Rage Set, triturbo and ajc9988 like this.
  4. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    This is where I believe AMD did not. Intel feeds way more voltage than needed, in part to allow the stepping of the core multiplier for the boost clock, in part because of laziness, and in part because of the volume of chips and the variations between fabs and batches. If they did a straight optimization for all 4 cores and optimized voltage to the lowest common denominator, they would get a better overall product, but less material to advertise (like using the single-core boost number to sell processors). People would then expect their lower, locked CPUs to do more, making it harder to stage clocks and make it look good. With zero competition, it makes sense to milk it. You may not see that in the next generation, as they try to open a bigger performance gap between Intel and AMD.

    But AMD did optimize voltage and has a very tight performance grouping, with 97% of 1800Xs reaching 3.9. I've seen one person have to use 1.55V to reach 4.1GHz, which means the chips are clocked right at that same ledge we see on Intel (especially the Haswell chips), beyond which small voltage jumps stop helping. But when chip companies do this, you often hear reviewers say the chips run "hot." That is not what a company wants to hear, so they clock it back to avoid that characterization. Even though we'll get improvements from cache utilization and better firmware and drivers, I'm not expecting more than 100MHz, if that, from all platform improvements unless and until a chip revision occurs. So, I'm putting that out there. Where performance will scale is in the microcode revision to allow higher-frequency RAM in a month or two. Because AMD does not separate the bclk from the platform clock, it will be limited on RAM speeds, especially until the RAM divisors and multipliers, along with the secondary and tertiary timings, are unlocked more. This isn't a big deal for enterprise, because that RAM is ECC and already within the speeds commonly used on that platform, but it will make a huge difference for consumers.

    So, I'd say expect more performance, but do not expect more overclocking headroom on frequency for the near future!

    Sent from my SM-G900P using Tapatalk
     
    Rage Set and bloodhawk like this.
  5. Althernai

    Althernai Notebook Virtuoso

    Reputations:
    919
    Messages:
    2,233
    Likes Received:
    98
    Trophy Points:
    66
    Thanks. I looked at the reviews and you are right, these do benefit from more than 4 cores -- although one often has to go to the i7-6950X to really see a difference. We're further along the road to multithreading than I thought.
     
    hmscott and ajc9988 like this.
  6. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yes, a good Lulz, that's what we all get from CIA / NSA intervention in our private data systems :mad:

    There are a number of other interesting quotes about what's in the leak at your link, thanks for that, and @Gabrielgvs also mentioned another link here:

    Are our Smart TV's Watching Us?
    http://forum.notebookreview.com/threads/are-our-smart-tvs-watching-us.801697/page-2#post-10478175

    Also, replied here to continue discussion On Topic in this thread:

    Windows 10
    http://forum.notebookreview.com/threads/windows-10.762434/page-365#post-10478207
     
  7. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,238
    Trophy Points:
    231
    Late to the party. We had a celebration over here (03.03 - Liberation day (with Ryzen release only a day before, coincidence? *cough*LiberationFromGrIntel*cough* :D )), so I was away and just skimming from time to time. So here's my view on things (it would be a bit lengthy):

    That's what the consumers seem to want though. Or that's what reviewers make them think they want (yeah, it's an odd world). Since they screwed up the general perception of how things should happen, AMD played along. After all, they are in no position to play it stubborn... yet. That's the reason for the efficiency-oriented architecture (and it sits better on the slides - a 95W CPU beats a 140W one). That's the reason it doesn't overclock well. Not quite though; in the supporting review docs, AMD claims that most CPUs should clock to 4.3GHz. Whether we actually get these clocks after optimizations remains to be seen, but I think it is obvious that the failure to do so is not (entirely) on them. Because of this:

    I was going to post something VERY similar to this (cache, manufacturers), so thank you for saving me some time. Yes, maybe AMD hasn't given them enough time (which is what the manufacturers claim), but obviously they haven't taken it seriously either. AMD is in no position to point fingers and blame them. Hopefully this will soon be behind us, everything will get sorted, and I can only hope that we'll get even beefier (HEDT-class) motherboards. There's one thing that makes me a bit nervous though: in the AMA, AMD responded that they have no plans to go beyond dual-channel memory and would focus on improving what they already have.

    Not exactly; when compared to the 7700K, it was clocked to 5, even 5.1GHz (I haven't watched/read all the reviews, still a lot to catch up on, so you'll excuse me if I missed something). That's one. Two, we all know how one gets such clocks out of it - delidding (i.e. bye-bye warranty, plus extra expenses and risks (something goes wrong - get a new CPU - profit? (yeah, but guess for whom))). Then guess what kind of cooler it needs (not the boxed one), which drives the price where, exactly? AND THEN there comes the micro-stuttering, which at least a couple of reviewers mentioned. Pushing something at full tilt can only do so much. In almost all cases the 7700K was seeing almost 100% utilization, and that's on test benches with only the game running. What about when something else is running too, which is the case for (almost) 100% of average users? Make no mistake, this is not only about Ryzen; after all, quite a few CPUs went head-to-head, and it is also valid for GrIntel's 6- and 8-core offerings. All of them claimed smoother gameplay on more cores. AND THEN comes the final nail that everyone keeps forgetting about - AM4 will be around for a while (entry-level Opterons, maybe?). What is the life span of LGA-1151? Or LGA-2011? What CPUs are we going to see (especially on LGA-1151)? AND THEN comes this as an extra bonus:



    So, it turned out that we had a gaming AMD CPU all along :D I wasn't aware, since I haven't paid attention to desktops for a while, but this sentiment (FX-8xxx = affordable, yet adequate for gaming (when clocked high enough)) is shared by quite a few people, as it seems. Of course the 7700K trumps it everywhere else, but we are talking gaming here... or aren't we anymore :D

    So far the clear winner is the 1700, since it clocks just as high (actually on one occasion it clocked 50MHz above the 1800X, but I guess that was a lucky/unlucky combo) and costs significantly less, while handling most things one can throw at it just fine. There's no MoBo and RAM to be bought without second, even third guesses. Oh well, we'll wait. I was going to wait anyway, since the R5-1600X is the CPU I'm really interested in. By then we'll have at least a dozen of these 82 MoBos worth buying (unlike the current situation) and eventually some more (newer, i.e. beefier, as well as ITX (that's a given - X300, A300)). Also the process will mature by then, so a 6/12 overclockable to 4.4, maybe even 4.5GHz (given that the 4.3GHz on the 8/16 is indeed achievable; if not, dial the predictions back 100MHz), might not be a dream. Rumor has it that it will be a $300 CPU.

    I think there was something more that I wanted to say. Oh well, probably later.
     
  8. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    It will also be interesting to see what kind of sorcery Intel pulls out of its magic butt. I suspect they will have an answer for it. It's possible they were not prepared for this kind of upset, but I think believing that might be too naive. No matter what happens with Ryzen optimization, you've got to love seeing Blue Boy backed into a corner and forced into putting their best foot forward in response to such a strong comeback from the brink of extinction. If they do not have an answer, they immediately lose credibility. If they do have a Ryzen killer hiding in the shadows, that raises the question of why they were not giving us their best all along, and why it took a competitive upset from AMD for them to want to do the right thing.

    I'm hoping we can see a similar resurrection stunt in the battle against the Green Goblin. Red can be the color symbolizing embarrassment, but it can also be associated with bloodshed. Green doesn't have a positive word picture association that I can think of... envy, greed, nausea, and tree-huggers are the things that come to mind when I think of word associations for the color green.
     
    Last edited: Mar 7, 2017
  9. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Thank you for filling in the blank spots I missed mentioning. Specifically, that microstutter happened on all of Intel's CPUs, regardless, while AMD's were smooth. This is a huge plus for AMD!

    As to dual-channel RAM, I present two arguments that feed into their analysis. First, efficiency. Their IMC is reaching the 90th percentile of the theoretical bandwidth possible for dual channel. Intel is closer to the 75th percentile, give or take, of the hypothetical maximum bandwidth on both dual and quad channel. This means AMD at just 2667MHz delivers close to what Intel gets at 3200MHz in dual channel. If this fully scales once higher frequencies are supported, 3600 on AMD would then be better than or equal to the higher offerings from Intel. So it is competitive, and the money is better spent making the product more competitive in other regards (get the most gains for the R&D spent). Second, this is the Fury X/HBM argument reborn. Many criticized that 4GB of HBM was not enough for modern games, stating 6+GB were needed. That only held true for two games; for the rest, due to increased bandwidth and throughput, 4GB was enough to provide the same benefit as the larger buffer. Here, they are relying on empirical evidence showing that quad-channel RAM, on regular tasks, including some RAM-sensitive applications, does not yield more than a 6-7% difference in performance. I posted an article in this thread showing the performance difference. Now, this can change for certain applications, but there is some truth to it. So this feeds into the decision, right or wrong.
    http://www.pcworld.com/article/2982...e-shocking-truth-about-their-performance.html
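    The bandwidth claim above can be sketched numerically. This is a rough back-of-envelope illustration: the ~90% and ~75% controller-efficiency figures are the poster's estimates, not measured values.

    ```python
    # Sketch of the dual-channel bandwidth argument: an efficient IMC at a
    # lower transfer rate can match a less efficient one at a higher rate.
    # Efficiency figures (~90% AMD, ~75% Intel) are the poster's estimates.

    def ddr4_peak_gbs(mt_per_s, channels=2):
        """Theoretical peak: channels x transfer rate (MT/s) x 8 bytes/transfer."""
        return channels * mt_per_s * 8 / 1000  # -> GB/s

    intel_effective = ddr4_peak_gbs(3200) * 0.75  # ~38.4 GB/s
    amd_effective   = ddr4_peak_gbs(2667) * 0.90  # ~38.4 GB/s

    print(f"Intel DDR4-3200 dual channel @ ~75% efficiency: {intel_effective:.1f} GB/s")
    print(f"AMD   DDR4-2667 dual channel @ ~90% efficiency: {amd_effective:.1f} GB/s")
    ```

    Under those assumed efficiencies, the two configurations land within a fraction of a GB/s of each other, which is the point being made.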

    Considering all the companies gave AMD the finger, a good way to curry favor is to allow yourself to take the hit to goodwill by being the whipping post, saving those companies from the backlash. If AMD did anything but that, they would be screwed with companies on which they are dependent to be successful. Sometimes you just take the hit!

    Finally, on the HEDT part, these chips roughly approximate that, but are on a different socket to the Opterons. Naples is the new Opteron line and will require a much larger socket (but that is because they are practically putting 4 dies on a single SOC, and it will still be an SOC).

    Intel pushed up the release of the server 3600-pin chip, moving it ahead of the other releases they usually use to hone the process. Obviously that is not the 2066 socket in the HEDT market, as they are separating the two lines, with the 2P-4P Xeons getting the larger socket and the new chipset. But I don't expect anything from them. They continue to plan to milk the market, waiting until 2021 for 7nm. Now, the tinfoil hat in me says this is to guarantee AMD doesn't fold to debt in the interim, which is fair, as then Intel's future would be uncertain! Meanwhile, they didn't expect this.

    What AMD did was say they wouldn't compete in the high-end and enthusiast markets. They then released the Carrizo update last year and put out the 480. Everyone thought that was a joke, thereby discounting Vega and Ryzen significantly. Intel didn't start to get a sense of the performance, nor did anyone else, until December. Even then, people cried foul, optimizations, etc. In fact, the Canard PC benches were on early silicon and did not show much power. They didn't expect the final silicon to have an extra couple hundred MHz, nor to be disruptive. Intel, in its hubris, ignored the potential warning signs. Then came the February numbers and the March fervor. Intel never expected demand to be that high, nor performance and pricing to be that enticing. They did not know the market for this in-between existed and did not plan on creating it until this summer, using Kaby Lake-X to step some users up to the HEDT platform, and 4Q's Coffee Lake to give a 6-core to the mainstream. What AMD did is deliver a competitive product in that market space BEFORE Intel got there. In fact, Intel, at the end of January, said they were pushing the HEDT platform release to August from the expected June. If held to, this gives them a couple of months to optimize their platform against the new releases, but it also gives AMD a couple of months of sales for the 6-core release. Whether coordinated or serendipity, it plays to both on benefits and drawbacks.

    But the fact remains, neither the architecture nor the IPC will be greatly improved during that time. They are already in production, meaning tweaking out that headroom we discussed is the easiest way to optimize in the meantime. They will win on IPC. They will likely have a higher frequency. They will jump on AMD for top performance. But the price matters! At the end of the day, BOTH platforms will last about 3-4 years for HEDT. There will be MB revisions in the interim, and no one knows whether AMD will jump to 7nm in 2019 while Intel stays on 10nm until 2020-21. That being the case, and with Intel requiring a new chipset for the other two 10nm CPUs because of reintroducing FIVR, we've got a fight on our hands which I cannot wait to see.
     
    Last edited: Mar 7, 2017
  10. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,298
    Trophy Points:
    431
    Show me how many people only use their PC to game, and you may have a valid point. Until then it's really just making a statement without evidence to ground it. Then, of those "gamers," show me how many actually overclock, and of those, show me how many overclock to the edge of stability. It seems you're arguing a niche as the mainstream. Would overclocking headroom give it more performance? Sure, I can admit that. There isn't any, however, not with this version of the silicon. As a consumer you have to determine whether this CPU meets your needs now and in the future, and base your purchase on that.

    Arguing abstracts is annoying. There isn't a predetermined function to Ryzen 7; the argument seems to be that there should be, because it's not the king of the hill, which no leak ever suggested it would be. It's a CPU that does a lot of things well, but isn't king, at a significantly lower price point.
     
    Raiderman and triturbo like this.
  11. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Edit:
    Techtrancer laid out the full voltage variance. Please ignore below. Only left for thread reference.

    @Raiderman - Please let Chew know at XS that the top chip frequency used 1.44V, but the other two categories used 1.408V, which may affect his testing/thoughts on the binning:

    As of 3/6/17, the top 67% of 1800Xs were able to hit 4.0GHz or greater.

    Passed the ROG RealBench stress test for one hour with these settings:

    • 40x CPU Multiplier
    • 1.408V CPU VCORE (Or less)
    • LLC Level 3 (Asus Crosshair VI Hero)
    Test equipment:

    • Motherboard: Asus Crosshair VI Hero BIOS 0702
    • Memory: 2 X 8GB 2400MHz 15-15-15-35
    • Cooler: Corsair H105 AIO
    • Thermal Paste: Arctic MX-4
    • Ambient temperature: 22°C
    Depending upon your setup, frequencies achieved can be ±100MHz.
     
    Last edited: Mar 7, 2017
    Raiderman, Atma and triturbo like this.
  12. Althernai

    Althernai Notebook Virtuoso

    Reputations:
    919
    Messages:
    2,233
    Likes Received:
    98
    Trophy Points:
    66
    It's not a question of giving us their best or doing the right thing -- it's one of how much risk a company is willing to accept. Think about how new CPUs are made. It takes several years to come up with a substantial redesign (even with the best people in the world working on it). After that, multibillion-dollar facilities must be reconfigured to physically produce the chips, which usually takes a couple more years. And there is no guarantee whatsoever that the new chip will actually hit the projected performance targets with acceptable yields. Both Intel (with the Pentium 4) and AMD (with Bulldozer) have come up with architectures that were evolutionary dead ends and, for the latter, sometimes didn't even match the performance of their own preceding product.

    Thus, in the absence of competition, there is a strong tendency to stick to safe, proven designs. Obviously, the stagnation is not complete -- there were significant architectural changes between Ivy Bridge and Haswell, and again between Broadwell and Skylake -- but the changes are not on the same order as, say, Piledriver to Ryzen or Westmere to Sandy Bridge. Now that Intel can see that its strategy of tiny improvements over the last 6 years has allowed AMD to catch up, the people arguing for more fundamental revisions should have the upper hand again.

    In the short term (probably by summer or fall), I expect Intel's response to be the release of updated 6, 8 and 10 core CPUs at perhaps lower prices than we've seen in the past. Of course, the more interesting responses are medium to long term: they're going to have to take some risks.
     
  13. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    go to /r/pcmasterrace
    go to LinusTechTips
    go to Overclock.net and avoid the heavy modding forums
    go to pretty much any tech forum, youtube video set, or other gathering of people who claim to have good hardware.

    The number of people who will literally jump down your throats to insist that an i5 is all you ever need for any purpose that does not involve doing a lot of livestreaming or video rendering is STAGGERING. I can guarantee you that the absolute majority of users will never even hope to push an 8c/16t chip to any sort of limits whatsoever.

    I laid out some very precise statements.

    - The 6c/12t Ryzen 5 at the price point of i5 chips is a no-brainer once the issues with the cache and microcode are ironed out.
    - The 8c/16t Ryzen 7 is fine for prosumers, productivity owners, render boxes, home servers, what have you. Great CPU performance at a good price. But not for primary gamers who rely on stronger 2-6 core CPUs. There is a reason i3 CPUs beat 8350 CPUs in lots of titles... the i3 does not hold a candle to an 8350, but games won't use enough of the threads on the 8350 to draw its power out.
    - For the purpose of gaming for the massive majority of gamers, strong and fast single thread performance is king. i7-7700K holds crown at 4.4GHz out of the box, and has support for better RAM by a LARGE margin than any tested Ryzen chip.
    - For the purpose of more hardcore gamers, especially ones willing to do some basic overclocking on a chip, the upcoming Skylake-E (assumed to be named) 7800K should easily crack the 4.6GHz range without much tweaking and a good air cooler or AIO watercooler, and supports even better RAM than the 7700K does. 6c/12t with good IPC, great RAM, and a high clockspeed for games that are more single-thread heavy and it's going to be the breadwinner, especially since it will cost LESS than the 1800X.
    - For the same majority of gamers, however, an extremely good system is going to be Ryzen 5. It will be much better than any i5 will do except in single- or dual-threaded titles, but ones that are so bad will likely have no problems with the base power of said Ryzen 5 chips anyway.

    I never laid out a function for Ryzen 7. But I have said multiple times that for someone who does little else that is taxing on a PC besides gaming, it is a waste of money. Overclocking, benchmarking, rendering, streaming, using as a server, mass file transfer and large zips/unzips every day, running simulations, simultaneous video playback in non-hardware accelerated applications, large code compilations, hosting a database, etc? These are taxing beyond gaming. And considering 90% of everybody I've ever met who admits to playing games on a PC stops at League of Legends or DoTA 2 and spends their other time on Facebook, Youtube, using Netflix, or some other form of media consumption and entertainment avenue? These chips aren't for them. And of the ones that DO play more on PCs but DON'T do anything else I've listed? It's STILL not for them.

    I said who it didn't make sense to buy for. You keep insisting that because you like to run two games at once and/or do large file decompression while gaming, which really requires a heavy CPU, I'm assuming too many people don't do the same. And that's right. I am. Because people don't. Know what I used to do on my last laptop? I used to sit in TeamSpeak 3, talking with friends, playing Black Ops 2 at max graphics and 120fps, while rendering a video in Camtasia that I recorded from Black Ops 2 itself a few moments prior, while watching a livestream on the second monitor, while listening to music.

    Am I normal? Not even close. Do I feel I would need an 8-core? Please, I can break a 16-core 32-thread CPU if you hand me one, ask @tgipier. Do I ever consider other people would do the same thing I do? Not even close. It's good to understand what everyone else does so you can get an idea of the market. As it stands, the Ryzen 7 is in a bad position for a gamer. It's overkill, and the majority of people here are talking about it with respect to gaming.

    I don't understand what is so difficult to get.
     
    tilleroftheearth and Rage Set like this.
  14. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Remember, the chips are mostly finalized. You tape out 9-12 months before you hit market, with some variance due to yield. Since Intel moved the release date to August, they would have had to tape out between August last year and the November/December time frame. The architecture is pretty well set, as are many of the chips they plan on releasing in the 2H of this year. You could get slight tweaking on the early releases next year, but that is it. Any true response to Ryzen comes on anything AFTER next March. So the summer releases are not a "response!" They are planned cycle releases. But that is why you can mostly predict what is to come this year.

    Meanwhile, I thought more on the quad-channel issue most have, @tgipier , @Mr. Fox , @D2 Ultima , @triturbo , @Papusan and all other interested parties (it would take a while to tag all of you wonderful people). What if the reason for ignoring quad channel is a plan, after the APUs and when HBM2 is in more abundance, to integrate HBM2 directly onto future CPUs? Think of the bandwidth that would provide to an 8-core CPU with no integrated graphics. DDR4 is a slug by comparison, even in quad- and six-channel configurations. If next year's refresh incorporates HBM2 with 512GB/s throughput, Intel wouldn't have a chance. At that speed (and the platform wouldn't really need changes because it is an SOC), DDR4 literally would be slower regardless of the configuration for the end user. That would really put some whoopy in that cushion if AMD plans to do that (and if they don't, here's the hint, give it to them STAT)!!!
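    For scale, the gap being described can be sketched with theoretical peak numbers. The 512GB/s HBM2 figure is the one quoted in the post; the DDR4 numbers are theoretical peaks at 3200 MT/s, ignoring controller efficiency.

    ```python
    # Back-of-envelope comparison: on-package HBM2 (512 GB/s, as quoted
    # above) vs theoretical peak DDR4-3200 in 2/4/6-channel configs.

    def ddr4_peak_gbs(mt_per_s, channels):
        """Theoretical peak: channels x transfer rate (MT/s) x 8 bytes/transfer."""
        return channels * mt_per_s * 8 / 1000  # -> GB/s

    HBM2_GBS = 512  # GB/s, figure quoted in the post

    for channels in (2, 4, 6):
        peak = ddr4_peak_gbs(3200, channels)
        print(f"DDR4-3200 x{channels} channels: {peak:5.1f} GB/s peak "
              f"({HBM2_GBS / peak:.1f}x below 512 GB/s HBM2)")
    ```

    Even six-channel DDR4-3200 tops out around 153.6 GB/s of theoretical peak, roughly a third of the quoted HBM2 figure, which is why no DIMM configuration closes the gap.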
     
    triturbo likes this.
  15. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    That wouldn't work. What if you need more memory than whatever HBM2 can provide? HBM2 is limited to 16GB with 4 stacks.

    Either way, Intel is in more trouble than people think. Mostly not because of Ryzen though....
     
    D2 Ultima likes this.
  16. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Look, you still have DDR4 on the platform. That means, if you tax it beyond the 16GB running at roughly 4X the throughput of quad- or six-channel DDR4, then you hit the DDR4 penalty by having to send it to the DIMMs off-chip, but not before. How many GB of DDR4 would be needed to handle what goes through HBM2 at that speed?
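    The tiering argument above can be put in a toy model: the DDR4 penalty applies only to the fraction of memory traffic that spills past the 16GB HBM2 tier. The numbers below (512 GB/s HBM2, quad-channel DDR4-3200 at 102.4 GB/s peak) are illustrative assumptions, and a simple traffic-weighted average is a simplification of real memory behavior.

    ```python
    # Toy model of a two-tier HBM2 + DDR4 memory: effective bandwidth as a
    # traffic-weighted average. Figures are illustrative, not measurements.

    def effective_bandwidth_gbs(hit_rate, hbm2_gbs=512, ddr4_gbs=102.4):
        """hit_rate = fraction of traffic served from the HBM2 tier;
        the remainder spills to off-chip DDR4 (quad-channel DDR4-3200 assumed)."""
        return hit_rate * hbm2_gbs + (1 - hit_rate) * ddr4_gbs

    for hit in (1.0, 0.95, 0.80):
        print(f"{hit:.0%} of traffic in HBM2: "
              f"{effective_bandwidth_gbs(hit):.0f} GB/s effective")
    ```

    The point of the sketch: as long as the hot working set mostly fits in the 16GB tier, effective bandwidth stays far above what any DIMM configuration alone could deliver.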
     
    Last edited: Mar 7, 2017
  17. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    How so? What kind of trouble?
     
  18. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,298
    Trophy Points:
    431
    I don't reddit. Couldn't find numbers, but I don't reddit so...
    I don't Linus forum, but humored you and checked: a whopping 10k strong (with no mention of guest numbers)
    OCN is only about 3,000 strong on a good day including guests; otherwise it's around 400-500 online
    I consider myself an enthusiast on some measure; now compare that with those outside of the overclocking niche?

    The last time I got an i5 was the 2500K. It was fine for what it was, but I preferred the X5650 even before they became so cheap. Though I have absolutely no idea why this is being brought up.

    Again, I'll repeat myself for the last time.

    Just because Ryzen isn't the best CPU for gaming doesn't make it a bad CPU for gaming.

    I don't need to ask someone who has yet to justify his argument's fallacies to reinforce your position; I would hardly call that a reference worth calling upon.

    I guess I can't argue against your anecdotes though; it would likely be a waste of time to do so. I have my own anecdotes to argue with, since I only have 1 friend who actually plays DotA 2, and the rest play Ark, BF4, BF1, Stellaris, COH2, Rust etc.
     
    triturbo and Raiderman like this.
  19. Raiderman

    Raiderman Notebook Deity

    Reputations:
    742
    Messages:
    1,004
    Likes Received:
    2,434
    Trophy Points:
    181
    @ajc9988 chew* and others discussing silicon lottery. He has been pounding the shiat outta the 1700, and as far as OCing....Quote

    Heat is bad enough to burn me with certain CPUs installed.

    Like I said, my comparison is not by default VID. The 1700's is way lower... I'm running all chips at identical volts... well, +0.075V on the 1700 vs the 1700X and 1800X... Add 20C to my 1700 temps and those are my X-chip temps at equal volts.

    I'm not running a test at 3.8... too time consuming. I already ran 4.0; add 20C. Done.

    Default VID is 1.2x on the 1800X; default VID is 1.0xx on the 1700...

    This just further supports my findings.

    The 1700 scales with voltage on air/water and likes it way more. Despite this, heat output from the heatsink is less... screw the temp monitor software.

    The 1800X wants to be cold and refuses to scale otherwise. Yes, I got to 4.2 at who-knows-how-cold, for 40C idle, with voltage highly advised against.

    Between what I'm seeing, what the LN2 guys are seeing, plus the 1700 not liking voltage on LN2... the data is pretty compelling evidence.

    The old way will no longer work. Turbo, dynamic TDP, XFR, and AMD running a tight binning scheme have changed that method drastically.

    Even BD was different... the way I had to bin changed, and even then it was still somewhat of a crapshoot; turbo was most likely playing a factor in that... the lowest VID sucked.

    The old way worked for Deneb/Thuban...

    I've played with enough silicon to understand it. It's a gift.
     
    ajc9988 likes this.
  20. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I honestly feel that sharing memory bandwidth on the GPU is not such a good plan. It would end up worse than one might think, really. And designing a platform to pick up the end result on optional additional hardware is also not such a good idea.

    I honestly feel that AMD was copying Intel's mainstream Haswell chips in Ryzen's basic design, and simply tweaked IPC and added extra cores in the long run. Think about it:

    16 PCIe 3.0 CPU lanes (1 x16 or 2 x8 config for GPUs; a bad choice for using non-officially-supported tech. Crossfire is irrelevant; even though the bandwidth is sufficient with XDMA, you cannot force mGPU, so it's pointless).
    8 PCIe chipset lanes (though I believe this is separate from SATA/USB/etc. ports, it still matches Z87's 8 chipset lanes).
    ~3.6 GHz base, ~4-4.2 GHz OC (considering Haswell, not the Haswell refresh here).
    Dual-channel memory.

    They now claim it's an enthusiast chip, but it really isn't. I mean it has very good value for certain people, right? I'm not discounting the chip. But it's no enthusiast product, and it's by design.
     
  21. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    You misunderstand me. HBM2 can be used as memory, exclusive of any GPU. Unless it is labelled as an APU, with AMD it is a pure CPU. I'm talking about slapping HBM2 onto the CPU as RAM, using it like a fourth-level cache, with DDR4 acting as a backing store for pagefiles beyond what fits in the HBM2. Nothing to do with GPUs at all!

    Edit:
    https://www.amd.com/Documents/High-Bandwidth-Memory-HBM.pdf

    Read the list of products to incorporate HBM - CONSUMER MULTICORE CPUS
     
    Last edited: Mar 7, 2017
    triturbo likes this.
  22. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Wccf coverage of the der8auer delidding, and a new World Record set by der8auer :)

    AMD Ryzen Delided & Tested – Gold Plated Solder & Silicone Protected Caps Deliver Impressive Thermal Performance
    http://wccftech.com/amd-ryzen-delid...-silicone-protected-caps-confirmed-cheap-tim/

    "AMD’s Ryzen CPUs impressed us in many ways since their release just under a week ago by delivering the holy trifecta of performance, power efficiency and affordability. As it turns out, Ryzen wasn’t done impressing us just yet. After putting Ryzen under the knife pro overclocker der8auer unveiled another one of Ryzen’s strong suits, build quality.

    der8auer removed Ryzen’s integrated heatspreader, otherwise known simply as the IHS, to find out what kind of a job AMD did when it put these chips together. This practice is quite common among overclockers and is called delidding. That is removing the “lid” i.e. IHS off the package to expose the chip underneath."

    AMD Ryzen 7 1800X Breaks Two World Records At 5.8GHz And 5.36GHz
    http://wccftech.com/ryzen-7-1800x-overclocked-58ghz-ln2/?utm_source=wccftech&utm_medium=related

    "AMD’s Ryzen 7 1800X CPU has only been out for a few days and overclockers have already pushed it to break several world records. German overclocker Der8auer has already broken the Ryzen 8-core frequency world record with his 1800X. Using liquid nitrogen he has successfully managed to push the CPU to 5.8GHz on an insane voltage of 1.97v, officially making him the frequency world record holder on HWBot.com."
     
  23. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Then... if this is the case, how would you add it to the system?
     
  24. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    On the CPU itself!

    https://www.amd.com/Documents/High-Bandwidth-Memory-HBM.pdf

    Read the list of products to incorporate HBM - CONSUMER MULTICORE CPUS

    Edit:
    You'd have to buy a new CPU with it on the chip itself. The rest comes through microcode, firmware updates, and drivers on how to best utilize it.

    Edit 2: Can you give me a reason why it cannot be integrated for use by a CPU without a GPU on board? Just because it was originally designed to solve another problem does not mean it cannot fulfill another role. The APUs already planned will utilize HBM2. So why not use that as a starting point for incorporating it without the GPU being present?
     
    Last edited: Mar 7, 2017
  25. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I see. Then that kind of messes with the normal chips. I don't think it's a good bet; that's real market segmentation. If anything, they should make a line that supports it and have it be a large cache, like how Broadwell has 128MB of L4 eDRAM.
     
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    As I said, 16GB of screaming 512GB/s L4, with DDR4 covering anything past that. As I added above: the APUs already planned will utilize HBM2. So why not use that as a starting point for incorporating it without the GPU being present?
     
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I think 16GB of "cache" is too large to function as a cache. Caches are generally meant to be small for maximum performance. I'd say even for HBM2, 1-2GB of cache would be the highest that should be used to function as a cache.

    As for if they'll do it, I have no idea. It seems like adding cost quite a bit for little benefit.
     
  28. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    If used as a cache, yes. If used as RAM, with another level of the memory hierarchy figured out, no. Think of how the hard drive is used beyond your RAM. Now imagine making the HBM2 function like your RAM, and having the DDR4 function like the hard drive would, while still allowing the hard drive to fulfill that role for virtual memory if needed beyond the DDR4... Trying to find a way for you to visualize this...
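The tiering idea described above can be sketched in code. This is a purely hypothetical toy model (not AMD's actual design): a small fast tier stands in for HBM2, a larger slow tier for DDR4, and an unbounded dict for disk, with pages promoted on access and cold pages demoted downward.

```python
from collections import OrderedDict

class TieredMemory:
    """Toy model of the hierarchy described above: a small fast tier (HBM2)
    backed by a larger slow tier (DDR4), with disk as the last resort.
    Page numbers stand in for real addresses; sizes are illustrative only."""

    def __init__(self, hbm_pages, ddr_pages):
        self.hbm = OrderedDict()          # page -> present, kept in LRU order
        self.ddr = OrderedDict()
        self.hbm_pages = hbm_pages
        self.ddr_pages = ddr_pages
        self.disk = {}                    # unlimited backing store

    def access(self, page):
        """Return which tier the access hit; promote hot pages toward HBM."""
        if page in self.hbm:
            self.hbm.move_to_end(page)    # refresh LRU position
            return "hbm"
        if page in self.ddr:
            del self.ddr[page]
            self._put_hbm(page)           # promote on access
            return "ddr"
        self.disk.pop(page, None)
        self._put_hbm(page)               # fault in from disk
        return "disk"

    def _put_hbm(self, page):
        self.hbm[page] = True
        if len(self.hbm) > self.hbm_pages:
            evicted, _ = self.hbm.popitem(last=False)   # coldest page demoted
            self._put_ddr(evicted)

    def _put_ddr(self, page):
        self.ddr[page] = True
        if len(self.ddr) > self.ddr_pages:
            evicted, _ = self.ddr.popitem(last=False)   # demoted to disk
            self.disk[evicted] = True

mem = TieredMemory(hbm_pages=4, ddr_pages=8)
for p in [0, 1, 2, 3, 0, 1, 4, 5, 6, 7, 0]:
    mem.access(p)
```

A real implementation would live in the memory controller and OS, not application code; the sketch only shows the promote/demote flow the post is describing.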
     
    triturbo likes this.
  29. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    [citation and reasoning needed]

    Limiting the capacity of an HBM chip won't make it any faster.
     
    ajc9988 likes this.
  30. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    It's more about the amount of data stored in it. It needs to have a small area to search; otherwise, why not make a 10GB L1 cache on a CPU?

    Think about it. If I could search 100 meters in 1 second, searching a 2m area is going to be pretty fast; much faster than if I had to search 20m. Since caches function as sort of a "key" to accessing the larger pool of data (be it in RAM or on a storage drive), the faster you can check a cache the better. You're not going to hold entire programs in cache.

    RAM needs paging files to function anyway, so I don't think it'd turn out as great as you might think. Storage would not be delegated purely to pagefile duty and the actual holding of data.

    But your concept is interesting: basically adding an extra layer of memory. But the OS would be required to handle that very weirdly. I don't think it would see the light of day anytime soon, considering M$ would need to actually do something about it... and 16GB of HBM2 is not going to be cheap.
     
  31. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Another way of thinking of it: we used to have multiple levels of RAM. There were 8-bit expansion cards you could put RAM on, aside from the RAM slots. So the idea isn't new; it's actually old school. Now, M$hit would probably **** the bed on implementation, but... Also, it would add a couple hundred dollars onto the chip. So an 8-core increases to $800-$900, but has RAM that runs circles around DDR4. I'd buy it!

    Edit: For comparison, the price of 16GB (2x8GB) of 4266MHz DDR4 is $250. They are looking at putting these on GPUs costing $500+ (look at the Fury line's costs). So, slapping HBM2 on a CPU already valued at $500 retail, you are looking at a reasonable combo for faster RAM that could, in many cases, outperform buying 4266 DDR4 and this CPU combined; even if the CPU could utilize DDR4 that fast, it would be dwarfed by HBM2...
     
    Last edited: Mar 8, 2017
  32. Althernai

    Althernai Notebook Virtuoso

    Reputations:
    919
    Messages:
    2,233
    Likes Received:
    98
    Trophy Points:
    66
    Kind of. It's true that they can no longer adjust the chips coming out in the summer, but they can play with the binning to some extent. If you compare the Silicon Lottery site's listings (PGA 1331 vs. LGA 1151), Kaby Lake appears to be leaving a lot more performance on the table than Ryzen. It can also be undervolted, so perhaps one can pick out the chips that can be both undervolted and overclocked (though obviously there's a tradeoff between the magnitudes of these) and thus operate at the same power usage. They can also try playing with the thermal interface material and such.

    That said, the biggest way they can (and almost certainly will) respond is obviously price. I have a hard time seeing them sell the 8 core chips for $1K in the near future.
     
  33. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    Which is why high-level caches are sliced. The latency of a search within each slice doesn't increase with total capacity; you just add more slices. Obviously, when you add 10GB of cache, the latency will skyrocket from all the extra trace distance, or even the wider addresses, not to mention cost.

    It's unlikely HBM will be used in this manner anyway. Managing it in large pages, like RAM page files, is much more practical.
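The slicing point above can be illustrated with a toy sketch (hypothetical, not how any real L3 is wired): the address selects a slice, so each lookup only ever searches one fixed-size slice, no matter how many slices, and thus how much total capacity, the cache has.

```python
class SlicedCache:
    """Toy sliced cache: total capacity grows by adding slices, while the
    search space per lookup stays one fixed-size slice."""

    def __init__(self, num_slices, lines_per_slice):
        self.num_slices = num_slices
        self.lines_per_slice = lines_per_slice
        # each slice is an independent small dict (stand-in for one SRAM array)
        self.slices = [dict() for _ in range(num_slices)]

    def _slice_for(self, addr):
        # real CPUs hash the physical address; modulo is the toy version
        return self.slices[addr % self.num_slices]

    def insert(self, addr, value):
        s = self._slice_for(addr)
        if len(s) >= self.lines_per_slice:
            s.pop(next(iter(s)))          # evict an arbitrary line
        s[addr] = value

    def lookup(self, addr):
        # only one slice is ever inspected, never the whole cache
        return self._slice_for(addr).get(addr)

small = SlicedCache(num_slices=2, lines_per_slice=4)
big = SlicedCache(num_slices=64, lines_per_slice=4)   # 32x the capacity,
# yet lookup() still searches a single 4-line slice in both cases
```

This is why per-slice latency stays flat as capacity grows; the costs that do grow are the interconnect between slices and the die area, which is the "trace distance" point above.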
     
    ajc9988 likes this.
  34. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    As a cache, or included as ram at all? I agree with the cache, but still think it may find use as a ram, using multiple levels of ram...

    Sent from my SM-G900P using Tapatalk
     
  35. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    Both, depending on how you look at it.

    A (flat) RAM address space matching the capacity of either the DDR4 alone (HBM as an inclusive cache for the DDR4) or both combined (exclusive mode, with HBM and DDR4 both acting as RAM) is reported to the OS. The memory controller keeps track of pages and swaps hot ones into HBM on the fly.

    Xeon Phi Gen 2 does this already in cache/hybrid modes.


    Using HBM as stand-alone addressable RAM is unlikely, because they won't get support from prosumer software. Xeon Phi can get away with that; Ryzen cannot.
     
    Last edited: Mar 8, 2017
    triturbo and ajc9988 like this.
  36. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Overclocking World Record with AMD Ryzen™ 7 8-Core Processor
     
    Ashtrix, triturbo, ajc9988 and 2 others like this.
  37. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    The Zen in Ryzen: A Tale of Cache, Compute Complexes & Scheduling
    Published on Mar 7, 2017
    "...During the HotChips conference last year, it should have been clear that Zen needed the Windows scheduler to treat it differently than most consumer CPUs.

    Ryzen is actually a server/HPC CPU in disguise, and there are pros and cons to its design. Hopefully this video helps you guys better understand the Zen architecture, so you can objectively analyze the results you see in an informed manner.

    References:"
    (using Chrome right-click translate on the YouTube page)
    " eduardogunner1 5 days ago
    Is it just me, or has anyone else noticed the overall CPU usage? Ryzen did not even use half of its capacity, while Intel was already at 70%."
    BF1 side-by-side:

    Dual Xeons: https://www.youtube.com/channel/UCGYUCUBCE7uVwHSKzU8HAyA/videos

    "Pinned by NerdTechGasm
    NerdTechGasm 9 hours ago
    I see a lot of talk on low-resolution testing for inferring useful gaming performance..."
    "Reviewers often focus on low res testing because of the future proof argument. It postulates that:

    1. Future GPUs will get considerably more powerful.

    2. Therefore, current CPUs with weaker performance in current games may not be able to keep up.

    #1 is true. #2 is simply "depends".

    Depends on what? Well, let's say you have an Intel i5 now, it's great for gaming (I have a few of these systems). But if you notice, in many current modern games, it's already running max or close to max. It has nothing left to give.

    This can even be seen with the i7 7700K in some of the new games, like Battlefield 1 multiplayer for example.

    I'm a software guy. We have long dropped our reliance on pushing single threads for performance, because we saw long ago, that the Ghz & IPC progress stagnated. We simply cannot rely on faster single thread performance moving forward, so our apps & game engines have slowly but surely evolved to the new multi-thread paradigm.

    This march is ongoing, some would even say it is accelerating.

    Under this scenario, a current CPU (with more cores) that may be slightly slower in current games, will have the better odds of being faster in the future.

    You may ask, how far off is that future? It's not the right question. It comes down to the game, whether it even needs that level of CPU processing capability. If the game design and scope needs it, it will happen simply because today's software engineers are highly skilled at multi-threading code."
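The multi-thread paradigm the commenter describes can be illustrated with a toy job-system sketch (hypothetical, not from any actual engine): instead of one thread doing all the per-frame work, independent tasks are farmed out to a pool sized to the core count, which is exactly the pattern that rewards more cores over higher single-thread clocks.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def update_entity(entity_id):
    """Stand-in for independent per-entity frame work (physics, AI, animation)."""
    return entity_id * 2   # trivial placeholder computation

entities = list(range(1000))

# Old single-threaded style: one core walks every entity in sequence.
serial_results = [update_entity(e) for e in entities]

# Job-system style: the same work split across a pool sized to the core count.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    parallel_results = list(pool.map(update_entity, entities))

assert serial_results == parallel_results   # same answer, work spread over cores
```

Note that in CPython, threads only speed up work that releases the GIL; a real engine would use native threads or processes. The sketch only shows the structural shift from one long serial loop to many independent jobs.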
     
    Last edited: Mar 8, 2017
    Ashtrix, TANWare, Papusan and 3 others like this.
  38. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    Where is Intel gaining market share? Intel lost the mobile/ultra-low-voltage market, and is trying to compete with Nvidia's CUDA via Xeon Phi, but probably isn't gonna win. Desktop PC sales are stagnating.

    Better question: where is Intel winning?
     
    Raiderman, DukeCLR and hmscott like this.
  39. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    "Intel reported quarterly earnings and revenue that beat analysts' expectations on Thursday, as two of its key segments outstripped Wall Street.

    The company posted fourth-quarter earnings per share of 79 cents. Revenue for the quarter came in at $16.37 billion."

    That's where Intel is winning, profit :)
    " News Summary:
    • Revenue growth in 2016 driven by strength across the business including full-year revenue growth in Client Computing, Data Center, and Internet of Things
    • Record annual cash flow from operations of $21.8 billion
    • Solid earnings with GAAP net income of $10.3 billion; non-GAAP net income of $13.2 billion"
    Intel earnings 79 cents adjusted earnings per share vs 74 cents expected
    http://www.cnbc.com/2017/01/26/inte...-earnings-per-share-vs-74-cents-expected.html
    "Analysts expected Intel to post earnings of 74 cents a share and 15.752 billion in revenue, according to a consensus estimate from Thomson Reuters.


    The stock rose more than 2 percent in after-hours.

    "The fourth quarter was a terrific finish to a record-setting and transformative year for Intel," said Intel CEO Brian Krzanich in a release. "In 2016, we took important steps to accelerate our strategy and refocus our resources while also launching exciting new products, successfully integrating Altera, and investing in growth opportunities."

    The chipmaker beat revenue estimates for the key categories client computing and Internet of Things. Intel reported client computing sales of $9.1 billion, above a StreetAccount estimate of $8.55 billion. Sales for "Internet of Things" rose 16 percent year-over-year to $726 million, topping a StreetAccount estimate of $703.6 million.

    Revenue for Intel's data center group came in at $4.7 billion, an 8 percent year-over-year increase for the quarter. Analysts were expecting revenue of $4.78 billion this quarter, according to StreetAccount.

    Intel has made efforts to move away from reliance on the generally declining PC business. The company's focus has shifted to the Internet of Things and to its data center group, which creates chips for large computers. That's the company's most profitable segment. Yet the 2016 third-quarter revenue for the group grew less than 10 percent year-over-year, at $4.5 billion.

    Intel's guidance is flat for revenue and gross margins in the first quarter of 2017, despite strong fourth-quarter numbers. On the conference call company leaders said the sale of gaming systems in the holiday season helped their higher end chip business.

    As they are expecting a weaker PC market moving forward, Krzanich noted the company's capability to play a major part in more innovative technologies.

    "Autonomous cars for example will generate about 4,000 gigabytes of data each day," Krzanich said on the earnings conference call, "The resulting explosion of data each day is creating tremendous opportunity...our products are key to turning raw data into high value insight and information."

    Intel also gave strong guidance for its cloud computing business. Last November, Google partnered with Intel to challenge Amazon's dominance in cloud services. Intel has continually tried to restructure itself, including making 12,000 job cuts.

    Earlier this year the Santa Clara, California-based company announced plans to buy a 15 percent share of mapping software company Here Maps. The German car-maker backed company provides information for self-driving vehicles. Intel has also created other partnerships in the growing self-driving car space and has moved into the virtual reality space for live sports, too.
    Shares of Intel are up 25 percent in the last 12 months.
     
    Last edited: Mar 8, 2017
    DukeCLR likes this.
  40. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    I am talking about market share...
     
    DukeCLR and hmscott like this.
  41. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yet Intel is figuring out how to make money despite the changing markets; pretty fancy footwork o_O
    You asked where Intel is winning, and I answered.

    Please don't spin this over and over, ignoring that the Q&A is complete, and continuing into a many-pages-long, useless OT rant.
     
    Last edited: Mar 8, 2017
    Reciever and DukeCLR like this.
  42. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    By your logic, AMD sucks because their EPS is negative?
     
    ajc9988, DukeCLR and hmscott like this.
  43. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I have enjoyed (and finally caught up with) the last 500 - 600 posts on this thread and can now post (intelligently) in this thread once again. Why have I enjoyed it even though I'll probably never own an AMD system again? Because this will hopefully make Intel move decisively once more (and that is profit for me...).

    My first post here was pretty spot on, imo - especially considering it is from Dec 15, 2016...

    As tgipier, Mr. Fox and a couple of others have stated, repeatedly: MORE performance*** is what we want; period. And what AMD certainly hinted at the last three months or more... Not just more value, not just for certain workloads/workflows - 'only' - just MORE of everything Intel has been providing; everywhere. This is not what has been delivered, imo. I'll even concede 'close'; but no cigar.

    Yeah; I read all the reasons/excuses - in the end though, while they delivered much more than they have in the previous decade, they still didn't deliver what they said they would. And I'll predict in three and even six months time from today they won't have improved the situation substantially for me to change my original prediction and my current (today's) opinion of an AMD platform instead of an Intel based system.

    What my main takeaway of this AMD launch is that they'll be sticking around for a while longer and have a few more chances to 'get it right'.

    By that, I mean they should develop a CPU that is needed/wanted by the masses 'today'. They need to work with MB manufacturers to have the most stable/fastest/most feature rich (and feature working) platforms possible. Finally, they need to also work VERY closely with the O/S manufacturers to ensure that their hardware works transparently (vs. an Intel based system) with the software. This is at a minimum. If this were iterated a few times and over a few years (successfully); that is when I would take AMD seriously for my production workloads and workflows.

    As the last few pages have shown; a CPU isn't an island. Everything else needs to be in sync too. Blaming MS or MB manufacturers isn't correct either. This is AMD's baby... they need to change the diapers when their baby soils itself...

    And; agreed 100% with AMD dropping support for Win7 on their new processors - that ancient O/S was built for steam-powered CPUs - not ones with the features we need today... or the ones that will become available in the near future...

    Re-reading my post here; let me be clear that I'm not 100% negative on Ryzen (for others), nor am I gloating. Just stating the truth of what has transpired (which is what most of my deleted messages stated too; wait and see - 'scores'/'numbers'/'stats' don't paint the complete picture - and in most cases - not even a partial one...).

    My position on Ryzen is exactly where I predicted it would be; waiting for Intel's response so that I can upgrade (if my testing indicates to...) my current workstations (DT's and mobile) and get another competitive edge against my competition - and as soon as possible before my competition does. Absolute $$$$ (saved or spent) isn't the goal or the finish line for new platforms. The goal is the most productivity increase overall for my available budget at that time (not just for my personal systems; but for the whole shebang of my entire operation).

    Once again 'balance' is the word that AMD needs to achieve. Myself and many others in my position are already there and have been for a very long time; even if AMD's CPU's cost $1 today, I would not jeopardize that balance for a few mere $$$$$$ saved (I'm an ongoing concern - not here today and gone tomorrow).

    AMD, Bravo! for having the best launch of a new processor in a very long time. Continue doing so while paying attention to the points above and I could be a potential customer sometime around 2022 or so. :D





    ***Of course; I define 'performance' as 'productivity' (mine) - as that takes all aspects of a system into account and cannot be argued against. ;)
     
    Papusan, DukeCLR and hmscott like this.
  44. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,238
    Trophy Points:
    231
    They first need to get their bank balance in check before shooting for the top 1 or 2%, you know. That's why it is advertised as a gaming option, even though it's not the king of the hill (that's subjective, but let's play along). They need every single sale out there. Yes, R5s would be better suited, but those are some months away, while the 7700K sells like hot cakes.

    Because?

    And they just did that. It's the most balanced CPU in a while now: it can game, it can crunch numbers, and it doesn't break the bank. What MORE is there to balance? I mean, REALLY?!
     
    Kommando, Ashtrix, TBoneSan and 4 others like this.
  45. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    AMD finally posted the playback version of their GDC Presentation on Feb 28th

    AMD Presents Capsaicin & Cream at GDC 2017
    Published on Mar 8, 2017
    Join us for our Capsaicin & Cream live event from Game Developers Conference in San Francisco, California for the latest in Radeon graphics and game development. Learn more about AMD at GDC here: http://radeon.com/gdc-2017/. Join the social conversations with #AMDGDC17.
     
  46. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I don't see it advertised as a gaming option, rather; "The Pinnacle of Desktop Processing Power". No, no it's not...

    See:
    https://www.amd.com/en/products/cpu/amd-ryzen-7-1800x


    triturbo said: "Because?"

    Quoting myself from the post above:

    This is far from a balanced CPU, let alone a balanced platform at this point (see same points in my quote of myself, above).

    The top Ryzen isn't aimed at gamers or number crunchers - (by design) it is severely impeded in both aspects compared to what is available.

    Economical you say! :) Sure! But as I stated in my post; I need something better than what I already have, not just something for less money.

    Will a first time DT buyer find a Ryzen build tempting? No doubt. But kindergarten is a long way back for most of us here.

     
  47. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,238
    Trophy Points:
    231
    Tell me, is/was every GrIntel CPU release flawless, as you are clearly implying? Especially with a totally new architecture? Oh wait, scratch the latter; GrIntel can't do innovation.
     
    Ashtrix, ajc9988 and hmscott like this.
  48. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Never stated that. But I don't buy at release either - same as I suggested on this thread for AMD's new hotness too. ;)

     
    Papusan and hmscott like this.
  49. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,577
    Messages:
    3,845
    Likes Received:
    1,238
    Trophy Points:
    231
    Not in plain text, but your familiar tone did.

    You know what's funny: even with all the issues, it still kicked quite some @$$.

    What do you have, exactly? Tired doesn't even begin to describe how I feel every time you throw out one of your random "my workloads" or "my system(s)". You know what I think? You have an i5, want to get an i7, and now you are mad that AMD would pull the prices down, so the mortgage you took out was pointless.

    Have a nice life. You are the first person to make my ignore list.
     
    Raiderman and ajc9988 like this.
  50. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, considering it actually outperforms BW-E on productivity in multiple locations, while still having unoptimized OS features, software programming, etc., that Intel enjoys, you say that close, but no cigar? I even give a wait recommendation for ironing out issues, you give a rejection and write-off. Having information, knowing what will come, then resolving to ignore data is not open-minded, nor is this actual proper analysis. You do not point to each and every aspect which is good and bad, you instead just surmise it is all bad, but a gold star effort? Faulty logic.

    So, here you say it is all reasons and excuses. Believe it or not, a chip producer does not have full control of all aspects at launch and even Intel has more than had this issue (I'll finish this idea in the next section). In fact, this chip and Vega has been a turning point insofar as AMD has partnered and put more engineers on site to develop software compared to the past. They have offered and given more hardware to coders to optimize for their uses. So far, one company, ONE--the maker of Sniper Elite 4--has taken advantage of their GPU used to allow achievement of 100% scaling in cross-fire. The fact you do not recognize those efforts, while instead speaking as if no effort on that front has been made at all, shows you do not understand the old adage: "You can take a horse to water, but you cannot make it drink." AMD is making the effort, but you cannot force the companies to work with them so that AMD's product is able to be utilized to its full potential. This shows ignorance of the moving parts of the industry and dynamics, generally.

    Once again, refer to the horse adage. If these companies dropped the ball on Intel, it is apparent which ones do or do not work well with their product. You then would say, "X company did not do A, B, and C, meaning it cannot compete with Y company's product which did not drop the ball." The problem is, the entire industry gave the finger and their fingerprints are on each failure. You cannot make these companies actually optimize for AMD, produce the proper number of boards to meet demand, etc., when you are not on their boards, in their managements, or similar situation. You cannot blame AMD for MB shortages NOT produced by AMD. You cannot blame AMD for MB manufacturers not allotting enough time to work on firmware (which is expected to be updated, but everyone could tell with this line of products they really set the bar low for out the door).

    None of this suggests AMD did not work with these companies, provide early silicon, have their engineers work with these companies to address issues, design OS drivers, etc., yet you do, curiously, without proof. These companies get blasted if they have hiccups with Intel (which Intel receives relatively little, and people like you often act the same as Apple people, eating **** with a smile). So you create a double standard.

    Now, M$ ****s the bed with everybody, basically because they don't give a ****! They have 9 out of every 10 computers worldwide. Intel is run in most of those computers. Windows does not even always play nice on Intel processors. So, why get off your ass when a fraction of users use AMD, and even fewer use the Ryzen CPUs? Especially considering their perpetual beta. Wait until people tell you what is wrong, then sift to the messages from those with tech savvy that can describe the problem in more depth. Now, you just saved a **** ton on diagnosis. You replicate, then you fix the code. They don't give a **** about the consumer. Just because we know their crap does not absolve them from blame. Now, to be fair, there is AMD failing here as they worked to supply the driver for the OS. It is one thing if the sole problem was OS usage, but the driver and pointing out these issues discovered within hours of playing with the device by the press shows it was apparent for anyone that looked and should have been dealt with before release (or the day of, so it comes with all other updates). But that does not absolve M$ of ****!

    This shows your ignorance. In certain tests optimized for Win 10, Win 10 wins. But in general use, both synthetic and real-world, Win 7 averages a 6% performance advantage over Win 10. Even Ryzen, without proper driver support, gets a bump of over 10% on some tests. Not only that, Win 7 is still widely used and remains where the majority of productivity software sits. So denigrate it all you want, but Win 8 and 10 are **** stains by comparison, which is why many refused to take Win 10 even for FREE!

    This finalizes the ignorance. You do not point to any numbers in your post, any specific use scenarios, etc. In fact, it performs well except in games* (not arguing the point, which you've supposedly read). Also, Intel cannot respond until next year. Read my post above on tape-outs and on them cheating customers of performance for a while. Finally, I'll repeat: it ain't you, babe! I don't care if it works for you, considering you have not pointed to performance in a specific area, nor to numbers related to that area, meaning the comment can be neither confirmed nor denied except inside the black box that is your mind.

    You mention them needing to repeat this for years, etc. Here's the thing: you are basically saying you want to sit out the fight between AMD and Intel until 2021, when the new RAM and PCIe standards are introduced, and if, during that time, AMD has provided a competent product, more companies are working with it and willing to optimize for its products, its market share has grown, and it adequately addresses your workload needs, then you will consider it. If you had simply said that, in that way, I'd say cool. I have no problem with that. But then comes the question of WHY THE **** YOU ARE IN THIS THREAD DISCUSSING RYZEN IF YOU ADD NOTHING TO THE CONVERSATION, SAY "I TOLD YOU SO" WITHOUT POINTING TO ANYTHING IN PARTICULAR, BLAME ONLY AMD, INCLUDING FOR OTHERS' SHORTFALLS, THEN CONCLUDE YOU ARE NOT IN THE MARKET SO IT DOESN'T MATTER TO YOU?
     