The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I do agree! And having the 16C clocked at 4.0GHz boost! For comparison, Intel's Xeon 2698v3 had a base of 2.3GHz and a boost of 3.6GHz. The 2683v4 had a base of 2.1GHz, boost of 3.0GHz. Neither could overclock, meaning on all cores neither could do 3.6GHz, which is the base of AMD's 16-core. We'll see if that changes this round. I don't think Intel planned more than a 12-core tops for the HEDT segment. So, if they do create one, we'll see what its clocks are...
     
    Last edited: Mar 29, 2017
    triturbo likes this.
  2. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    You can get Haswell-E to stay at max Turbo by booting up with no (new) microcode. The max is ~3.6GHz for 12C+ though.
     
    alexhawker and ajc9988 like this.
  3. lctalley0109

    lctalley0109 Notebook Evangelist

    Reputations:
    132
    Messages:
    582
    Likes Received:
    616
    Trophy Points:
    106
    Here is what I have found so far for stable clocks. All clocks were done with Prime95 overnight with blend test:

    Temps may not be accurate, since Ryzen's readings are said to be off, and my office varies from about 70F to 74F. RAM is just XMP 2666 (16-18-18-35)
    4.0 @ 1.375 - 84C - Cinebench R15 - 1680
    3.925 @ 1.325 - 71C - Cinebench R15 - 1640
    3.8 @ 1.25 - 67C - Cinebench R15 - 1582
    Stock - Not really much testing but Cinebench R15 - 1570

    Too bad my board does not appear to have VRM sensors, or at least HWiNFO64 is not picking them up. Would like to see those temps.
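    As a quick sanity check on how those scores scale with clock (a sketch using only the numbers above; points-per-GHz stays nearly flat, suggesting the overclock scales close to linearly):

```python
# Cinebench R15 multi-core score vs. all-core clock, from the results above.
results = [
    (4.000, 1680),  # 4.0 GHz @ 1.375 V
    (3.925, 1640),  # 3.925 GHz @ 1.325 V
    (3.800, 1582),  # 3.8 GHz @ 1.25 V
]

for ghz, score in results:
    # Points per GHz: a rough check that the score tracks the clock.
    print(f"{ghz:.3f} GHz -> {score} pts, {score / ghz:.1f} pts/GHz")
```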
     
    Last edited: Mar 30, 2017
    Raiderman, ajc9988 and Papusan like this.
  4. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    Raiderman, ajc9988, hmscott and 3 others like this.
  5. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    This is like what I experienced the last time I bought a $2K AMD platform many, many years ago; incompatibility with my then current programs and O/S.

    This isn't 'harsh' or 'misleading'. Just stating facts (yeah; I actually read the article...).

    Of course they'll fix these issues, but if my workflow was affected by this issue and I had jumped on AMD/Ryzen blindly... any performance improvements over my 'old' Intel platforms would have fizzled into thin air in only a matter of hours or a few days (downtime)... This is the very reason that I said I'll revisit Ryzen in a few years. ;)

    Compatibility, reliability, longevity and dependability are king for a computing platform. Much more so than nominal 'performance' might initially indicate.

    Productivity isn't how fast I can produce work for a few seconds (i.e. world's best overclock...) it is how much work 'done' I can consistently produce over the course of ownership of the entire platform. A few seconds/minutes faster with relatively long periods of down time is not conducive to 'sustained performance/productivity over time'.

    In the overall scheme of things, this is a minor bug for AMD to fix (I hope!). But to think that this is a very specific use case where this 'bug' shows up is a little short sighted, ime.

     
  6. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    I think the bug is a non-starter, one of those uncaught flukes. That it will be fixed, though, is a definite plus.

    As far as optimizations, well, M$ may win again if they are only incorporated in DX12 versions. Hopefully the optimizations move forward to older engines as well.
     
    Papusan and ajc9988 like this.
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    And, there's still hope for even more optimization improvements:

    “Every processor is different on how you tune it, and Ryzen gave us some new data points on optimization,” Oxide's Dan Baker told PCWorld. “We've invested thousands of hours tuning Intel CPUs to get every last bit of performance out of them, but comparatively little time so far on Ryzen.”

    http://www.pcworld.com/article/3185...zen-can-benefit-from-optimized-game-code.html
     
    triturbo, Papusan and ajc9988 like this.
  8. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Those seconds and minutes add up to hours and days and you know it!

    But, I do agree as far as early adopters. Now, what should be remembered is Zen will be used for years to come, with improvements. This means after this time, and with larger market share coming, we shouldn't have as much lag for optimizing.

    Now we know the HEDT will have 16C/32T, quad channel, and more PCIe lanes (meaning tri or quad gpu support not limited by lanes, drivers and programs yet to be seen) while running 3.6/4.0 boost. That may be able to take Intel to task at certain tasks if their HEDT only has 12C/24T, even with better support and higher IPC. So, price and performance may not be out of your expectations for a $2K rig.

    Now, what I would recommend is to wait for Intel's release in August. With the unveil of Threadripper at Computex and a likely sale in late June to July, by August and September you'll have a clear understanding of what benefits are available at what price for your workloads. This will tell you whether your software will support the extra threads and what will suit you. With the two-CPU boards, you may even consider a 2P board and throw two 16-core chips in (a little more expensive on the build, but if your software can handle it, what is paying $2500 for 32 cores running 3.6/4.0 (costs before GPU and RAM)?). You don't get 8-channel memory, like with Naples, but you get a higher clock speed.

    My point is, your options have increased. As I've said before, I don't know your specific workloads. But seeing general performance and what is expected for both sides, more custom-tailored rigs will be possible for your workloads (edit: not saying which will be better for your workloads yet). So don't write it off yet, but also wait for bugs to be addressed and to see this year's competition. Between now and then (5 months; the Ryzen 1800X will have been out 6 months), I bet a lot of productivity software will be able to be optimized!

    Sent from my SM-G900P using Tapatalk
     
    Last edited: Mar 30, 2017
    Papusan and tilleroftheearth like this.
  9. lctalley0109

    lctalley0109 Notebook Evangelist

    Reputations:
    132
    Messages:
    582
    Likes Received:
    616
    Trophy Points:
    106
    Man, those boards just don't come along. One thing I have been seeing, though, which you may already have: Newegg has a bundle through Gigaparts with:

    1800X
    Asrock Fatl1ty x370 Professional Gaming AM4 Motherboard
    Swiftech H240X2 Prestige AIO Liquid CPU Cooler
    Swiftech AM4 Mounting Kit

    You probably don't need all that but was just letting you know.
     
    Rage Set likes this.
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, those seconds do add up over time to something tangible (just make sure they don't stop adding...). :)

    I too used to think that my latest $18K box would last for 'years to come' - now, I know better. Even the just-discussed Coffee Lake 8th-gen processors, while still at 14nm, will be impressive enough for me to jump to (considering ~20% improvement over Skylake...). And that's just a few months from now. Today's Ryzen, even with all its optimizations by then, will have been effectively surpassed by what some are calling 'old tech' (see link below). ;)

    See:
    http://forum.notebookreview.com/threads/intel-teases-mystery-8th-gen-processors-–-and-confuses-everybody.803111/


    I'm not waiting for August or any specific time period... upgrade time is not on my arbitrary schedule; it is when products are actually available to be tested and confirmed working in my production environment. :)

    Right now, my options haven't increased one bit. Ryzen is still too immature for me to consider. Intel still hasn't shown me something better that I can buy right now. And, there is no one else to play/do business with.

    But what has increased is my expectation of the possible productivity increases I'll be able to achieve after the next 12 to 18 months or so.

    Intel isn't written off by me yet - not by a long shot; XPoint/Optane (v2/v3 or later) is where the next big shift in computing is/will happen. AMD will be able to use it too, of course - but 'compatible' to me is not something I like to settle for - especially as the 'core' of my platform's heart and soul.

    My workloads/workflows have proven themselves to not follow what a single BM (or even a lot of them) might indicate; that is why I complete a full cycle (or dozen) on my daily/weekly workflows when I test new hardware/components/platforms and software/programs/drivers too. My workflows are pretty consistent and constant - seeing if the same amount of work can be done with new hardware/software is an easy thing for me to see (or not). The point being that I don't care what others (on line mags...) report with regards to their performance stat's.

    Sure, I'll read their stories - I love to read - but no buying decision is ever made (by me) by looking at graphs and bm 'scores'. I'll still run my own testing procedure (i.e. try to make money with the system I'm considering...) before giving real $$$$ for anything. ;)

    And as was seen with the AMD optimizations; not just AMD platforms are helped by those tweaks... Intel platforms also saw a performance increase too (just smaller - they were already ahead...).

    See:
    http://www.tomshardware.com/news/amd-ryzen-game-optimization-aots-escalation,34021.html



     
    Last edited: Mar 30, 2017
    Papusan and ajc9988 like this.
  11. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I must ask: Coffee Lake is a 6-core and not a full lineup deployment, from what I've read. It is Cannon Lake architecture at 14nm. So, if that is the case, I'd figure you would be more interested in Cannon Lake, for which they have pushed up the server platform. That will be squaring off with Zen 2 (or the HEDT form of it). Both will offer sizeable increases, one being a die shrink and optimizations, the other having more matured yields and optimizations. Now, on software optimizations, AMD at that point will see much better results, if this is an indication, but both sides will see improvements. So, regardless of timelines, there will be something for everyone, and more to come!

    Sent from my SM-G900P using Tapatalk
     
    Raiderman and tilleroftheearth like this.
  12. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I don't really care what it is 'classed' as (all is just conjecture at this point anyways) - productivity gained is what the end goal is.

    i.e. I didn't drive fast only when I had a 'sports' car... sometimes, the old minivan is good for some tail out moments too. :)

    We'll all (actually) cross that bridge when we get to it... - right now, I want to see XPoint v2 already... :)

    I like that 'something for everyone, and more to come'! I can live with that. :)

     
    ajc9988 likes this.
  13. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,680
    Likes Received:
    5,059
    Trophy Points:
    531
    Let me ask the guys in this thread for their honest opinion.

    Would you buy an X99 rig for a good price, which comes with 1080 SLI, a really good mobo, and a 6850K, knowing X99 is done - or buy into AM4 (or X390)? The price is $2600 for the X99, with AM4 costing a little less. Obviously, I will get future upgrades with the AMD platform(s), but the X99, again, is a really good setup. What would you do?

    EDIT: This is for my personal gaming rig. I will do some work related activities on it, but for the most part it is solely for gaming and some video projects here and there.
     
    Last edited: Mar 30, 2017
  14. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,680
    Likes Received:
    5,059
    Trophy Points:
    531
    I actually didn't see that bundle. Can you PM me a link?
     
  15. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The size of XPoint memory and it being slower than RAM turned me off, although if you need the speed for large transfers, after they increase storage size, it may be better.

    I would wait. The newer 1080s have better VRAM and cost around $500-600 each, depending on manufacturer ($1200 tops). The X390 will give 3.6/4.0GHz boost. If you can turn cores off on it, then you can do what you need for different uses. That will be $1000-1200. The motherboard will be about the same as an Intel mb, so $250-500. RAM is about the same for either board. Granted, that adds up to $3600 at most, but a lot more bang. If you want just the 8C, that is $327-500 for a 4GHz 8-core with a $250 mb, plus RAM, so cheaper than that rig with more cores. If you are gaming, the 6-core might serve you better. But some software optimizations for Ryzen may allow better scaling with more cores, if software companies take it there (looking at the game companies there). So, it depends. Also, more PCIe lanes than Intel means the 16-core could have 3-4 x16 PCIe slots (even though 8 lanes per card does a decent job already). So, personally, I'm waiting...
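    A minimal sketch of the cost arithmetic above, using the post's upper-bound estimates (the prices are the ballpark figures quoted, not real listings):

```python
# Upper-bound estimates quoted above (USD).
gpus = 2 * 600   # two 1080-class cards at $500-600 each
cpu = 1200       # 16-core HEDT chip, upper estimate
mobo = 500       # motherboard, upper estimate

subtotal = gpus + cpu + mobo
print(f"Before RAM: ${subtotal}")  # leaves room under the ~$3600 ceiling
```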


    Edit: I flipped my answers...
    Sent from my SM-G900P using Tapatalk
     
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Note: I know nothing about gaming... so please adjust the following response accordingly (depending on the games you play and their processor/platform preference). What I'm responding to is the 'work related' part to the question and the assumed obsolescence of the x99 setup. ;)


    See:
    http://www.cpubenchmark.net/compare.php?cmp[]=2800&cmp[]=2966


    Right now, there is an ~11% single core (raw) performance difference that leans towards the Intel setup. That is like getting the next gen/iteration of the platform you're considering (to me, that makes it fairly equal to the AMD Ryzen 7 upgrades possible...). What about that ~6% advantage in multicore performance for AMD? When you consider that 2 extra cores are responsible for that small benefit, I see that as an inefficient design, not a positive. Sure, 140W TDP vs. 95W TDP is also something to think about - but keep in mind how often you would keep either platform pegged at 100% for hours at a time.
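    The per-core efficiency point can be sketched with the rough figures above (assuming the 6-core 6850K vs. the 8-core Ryzen in this comparison; the normalization is illustrative, not a benchmark):

```python
# Normalized comparison from the rough figures above: Intel ~11% ahead
# single-core, AMD ~6% ahead multicore, with 8 cores vs. Intel's 6.
intel_single = 1.00
amd_single = 1.00 / 1.11        # AMD trails by ~11% single-core

intel_multi = 1.00
amd_multi = 1.06                # AMD leads by ~6% multicore

# Per-core multithreaded throughput: AMD spends 2 extra cores on that 6%.
intel_per_core = intel_multi / 6
amd_per_core = amd_multi / 8

print(f"single-core (Intel=1): AMD {amd_single:.2f}")
print(f"multicore per core: Intel {intel_per_core:.3f} vs AMD {amd_per_core:.3f}")
```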

    Another thing to consider; with such a high performance system, you're in the top 1% of consumer/prosumer computing platforms. Even if something comes out at double the performance next year; you won't lose any performance from continuing to use the setup you're now considering (and of course, if something came out at double the performance; that is when you start calculating the cost to you by staying on the older (free) platform or spending real $$$$ and getting the latest).

    In that link above, there is a ~$70 difference with the Intel being more expensive (is this in the ballpark of the price difference you're seeing?). If you keep this system at least 18 months or more, that very small price difference is effectively insignificant - especially if you're able to use the firepower of this setup days, weeks or months earlier than a competitive AMD setup.

    Another point to consider is XPoint support (100%, not just compatibility...). Not only with the AMD setup, but also with the Intel platform we're considering here. This is where I see most responsiveness/productivity/performance will be gained in the next 18 months or so... if you expand your options to the higher end Kaby Lake and newer processors; that is where I would be putting my $$$$$ today. Especially if your current setup is 'good enough' or better and allows you to wait (as long as possible) for a true XPoint powered platform.

    Right now? My vote for this particular instance: Intel. If you need to buy 'now'. Time is money and money is always worth much less than time... in the end.

    Hope this helps.

     
    Rage Set likes this.
  17. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, I'm waiting. But not for your reasons below. :D :D :D

    GPU's have very little to do with the 'raw' performance my workflows require.

    If I saved $500 per workstation - it isn't enough with all the known compromises now, nor with all the expected benefits in the near future - and from what I've seen? The savings won't be in that $500 range for a completely decked out platform...

    What I'm waiting for is Optane DIMM's - that is where my vote is going... the platforms we can realize today will seem like smartphones instead of workstations (mobile or desktops).

     
  18. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Optane dimms are slower than regular ram. The benefit might be in size and it being able to hold it if crashed. That isn't enough benefit, in my opinion. Also, unless you have 6-8pcie lanes going to an optane ssd, you cannot get over the bandwidth limitation. So, at that point, the extra pcie lanes with a raid setup can actually provide more benefit than optane. That is why it doesn't have my interest.

    Sent from my SM-G900P using Tapatalk
     
    triturbo likes this.
  19. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    The benefit is huge when viewed within a complete system; relatively small-capacity DIMMs accessing high-latency SSDs (PCIe or otherwise) is a huge bottleneck.

    High capacity Optane RAM (though slower than DIMM's) will still trounce any PCIe access to any current storage device...

    Even better? All current storage devices are half duplex... Optane RAM and SSD's are full duplex. Welcome to 2020. :)

    See:
    http://www.storagereview.com/intel_optane_ssd_dc_p4800x_enterprise_ssd_launched


     
  20. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Personally, I'd rather see HBM2 and 3 integrated on the MB with a huge ram drive of DDR4 or 5, with a raid array not limited by the connection through the chipset, which is part of the issue. Intel gave 4 extra lanes on Kaby and 8 on the upcoming HEDT setups, but it is still limited by going through the chipset, even though increased to utilize the full speed. If you have two nvme Samsung drives in raid, you have more bandwidth and lower latency. If you raid optane, you hit the cap. So, with more pcie lanes available and allowing for higher than just two in raid, you can obtain higher speeds with AMD in specific setups.

    Sent from my SM-G900P using Tapatalk
     
  21. Mr.Koala

    Mr.Koala Notebook Virtuoso

    Reputations:
    568
    Messages:
    2,307
    Likes Received:
    566
    Trophy Points:
    131
    How much capacity do you need? Depending on the workload it might make more sense to get an old multi-socket server board and take advantage of that super-cheap used DDR3 server RAM, especially if your CPU computation can scale well to many slower cores. Assuming up to 32 DIMMs on a 4P board, a quick look at eBay puts 0.5TB (with 16GB 1066 DIMMs) at ~1000USD and 1TB at ~3000USD. There are also some 2P options with 24 DIMM slots. Given the performance advantage over any SSD, it's not a bad deal.

    If you do need more than 1TB for hot data the capacity/cost ratio with DDR3 will drop quickly.
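    A quick sketch of the capacity/cost math above (the prices are the eBay ballparks quoted in the post, not current listings):

```python
# 4P board populated with used DDR3 server DIMMs, per the figures above.
dimms = 32
gb_per_dimm = 16                 # 16GB DDR3-1066 ECC modules

capacity_gb = dimms * gb_per_dimm
usd_estimate = 1000              # ~eBay going rate quoted for 0.5TB

print(f"{capacity_gb} GB (~{capacity_gb / 1024:.1f} TB) for ~${usd_estimate}")
print(f"~${usd_estimate / (capacity_gb / 1024):.0f} per TB")
```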
     
    Last edited: Mar 31, 2017
    Kommando and ajc9988 like this.
  22. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    RAID increases latency on almost any current drive available - even as it increases total throughput (mostly sequential).

    The proof that what you're suggesting is not optimal is that XPoint is here. ;)

    Latency is not to be underestimated - even at the nano second level. CPU's live far above that already... (and in the end; CPU + RAM is where all work is done, still).

    See:
    http://hothardware.com/reviews/inte...ing-3d-xpoint-memory-technology-debuts?page=2


    Also; PCIe lanes are grossly limited when compared to DIMM's...

    We are just beginning to get the use of DIMM's just now though... (for something other than RAM) :)


     
  23. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Those options look good, initially.

    Where it fails on providing the benefits and promises of fast RAM is the (s-l-o-w) drivers needed to keep that much data 'live'. Not to mention backed up to slower storage media and the added expense and time requirements of a stable and always functioning UPS setup (especially for the RAM - not just the system itself).

    A system with a quarter or a half a TB of RAM would make me happy. But not one with slow/ancient cores. Nor one which relies on fleebay to acquire the parts for either. ;)

    Seeing a 1960's muscle car fly past at its performance peak is impressive - in the '60s. Not something I would sink my money into to see it nearly 60 years later... and what you're suggesting to me is almost as many decades apart (tech wise). :)

    Might be an option for some, but I'll pass. ;)

     
  24. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    triturbo and ajc9988 like this.
  25. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Papusan, honest question:

    What advantage does going from ~60 to ~90 FPS... minimum... give in actual game play?

    Will this be effectively a non-advantage for players whose monitors are 60Hz or less?

    Players with monitors that can refresh faster than 60Hz... will they see any real advantage in game play - or will this just be eye candy?

    Great job on AMD for getting these improvements so quickly, either way. :)

     
    Papusan likes this.
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Funny, because there was an example of lower latency, even with the PCH limit, from RAID striping - not higher latency, as you suggest. Also, there are already RAM cards that allow putting PCIe NVMe cards through a RAM-based adaptor, as well as PCIe-based cards with RAM to create RAM disks and system RAM expansion.

    For my uses, optane is overpriced and doesn't give enough benefit compared to alternatives!

    Sent from my SM-G900P using Tapatalk
     
    Papusan likes this.
  27. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    In my opinion & experience, going past 60fps/60Hz is only really significant if you're playing online multiplayer first person shooters - I saw a big difference going from a 78Hz/78fps (overclocked) laptop monitor to a 144Hz/144fps desktop monitor; it enabled me to play better & more competitively. Most of the benefit of 144Hz comes from being able to pan the camera really quickly to track a fast-moving close target and to capture all the detail & cues so that you can aim effectively; you can't really do that so well at only 60Hz/60fps (just not enough density of information for the eyes & brain at those kinds of high movement speeds).
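    For reference, the refresh rates being compared map to frame times like this (a simple sketch; 1000 ms divided by the rate gives the interval between frames):

```python
def frame_time_ms(hz: float) -> float:
    """Milliseconds between frames at a given refresh/frame rate."""
    return 1000.0 / hz

# The rates discussed above: 60Hz baseline, 78Hz overclocked laptop panel,
# 144Hz desktop monitor.
for hz in (60, 78, 144):
    print(f"{hz:>3} Hz -> {frame_time_ms(hz):.1f} ms per frame")
```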
     
    Last edited: Mar 31, 2017
  28. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    I'm not a big gamer :D I can't see the big difference between ok and very good screen calibration. But 60Hz panels are yesterday's news. And new games are coming and will push hardware even further. High performance will not be less important as your tech begins to age. The bigger the better, as they say.... :eek:
     
    Raiderman and tilleroftheearth like this.
  29. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You can see the graph in post #1121 above?

    8x to 40x lower read latency than an Intel DC P3700... PCIe v3.0 x8... while concurrently doing random writes on the drive (at ~10x what any other SSD can currently do, btw).

    RAID0 with anything less will be laughable in a sustained, over time, workflow.

    To be clear; Optane as it is today isn't beneficial to me either; as-is.

    But if I was betting; Optane/XPoint is clearly going to be huge. Huge. Even over significantly more cores (AMD) or almost any other near term tech leap that I know about today.

     
    Last edited: Apr 2, 2017
    ajc9988 likes this.
  30. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    http://m.slashdot.org/story/306633
    https://www.pcper.com/reviews/Stora...e-RAID-Tested-Why-So-Snappy/Latency-Distribut


    Sent from my SM-G900P using Tapatalk

    Edit: it isn't more cores, it is the lowering of latency and the increased PCIe lanes that give the benefit. The core count and use will depend on the person. But going to three NVMe drives will give you better than two Optane in RAID, although, because of less bottleneck, it will have a bit more latency but move at faster speeds. So, there is more nuance here than just pushing Optane. Generally, I support the tech as I support phase-change memory, but it is way too early to sing its praises over new setups that will be possible. That is what I'm trying to get at...
     
    Last edited: Mar 31, 2017
    hmscott likes this.
  31. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    hmscott and ajc9988 like this.
  32. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    QD=16? Enough said.

    We're talking workstation class workloads, 1 to 4 QD... maybe getting to 8 QD occasionally.

    RAID is less responsive even when it's faster on the 'top end'. ;)

    Those scores still don't compete with the DC P3700 btw, on a sustained, over time, workflow.

    The 'more cores' comment was about Optane vs. AMD's Ryzen. Optane at this point suggests much more performance potential than anything either Intel or Ryzen can offer from a pure CPU aspect.

    See:
    http://www.gamersnexus.net/news-pc/2845-intel-optane-dc-p4800x-ssd

    So, in at least the above specific use case; the Optane SSD is 2.83x faster than the already respectable Intel 750 PCIe x4 SSD...


    There is no CPU option available that will give that kind of increase over existing CPU's. :)

     
    Papusan likes this.
  33. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Already more mature than the 2018 DDR5 is. :)

    These are not comparable; XPoint is much higher capacity (1TB RAM on a notebook, anyone?), but slower and persistent RAM. DDR5 is just faster volatile RAM (what we already have).

    These would complement each other nicely. :)

     
    Papusan likes this.
  34. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    Intel's Cannon Lake PC chip shipments may slip into next year

    "If you were expecting to buy laptops with Intel’s next-generation Core chips—code-named Cannon Lake—by the end of this year, you may be disappointed."

    "There’s a chance that shipments of Cannon Lake—Intel’s first on the 10-nanometer production process—may slip into next year."

    "So don’t expect Cannon Lake laptops during this year’s holiday season. Instead, users will be able to get PCs with 8th Generation Core processors, which are made on the 14-nm process. PCs now are available with 7th Generation Core processors code-named Kaby Lake."

    "Those 8th Generation Core laptops may be more attractive to customers. The first 10-nm Cannon Lake chips will be slower than 14-nm 8th Generation Core processors. Intel acknowledged the speeds during the manufacturing event, with a chart showing 10-nm chips catching up with 14-nm chip performance in one to two years."

    "The first Cannon Lake chips will be targeted at low-power laptops Jokebooks and 2-in-1s filth. PC makers typically need time to test the chips in laptops, so availability of the chips in mainstream PCs may drag into 2018."

     
    Last edited: Mar 31, 2017
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So not only has Intel had problems moving to 10nm, the chips will run slower (negating much of the IPC benefit), including taking a year or two to catch up. Obviously, part of it is that tightly packed transistors increase heat. So, by touting that they are more dense than anyone at 10nm, they failed to mention that the heat negates that benefit!!! LMFAO!!!

    Edit: even worse, yields are still so bad, they are having to reduce the number of transistors on top of clocking them slower because of the heat! Yet people still say they are so great! LMFAO!

    "But the first low-power Cannon Lake chips will have fewer transistors, and won't be comparable to the mature 14-nm chips with more transistors."
     
    Last edited: Mar 31, 2017
    Raiderman, triturbo and Papusan like this.
  36. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Changing an expected launch date by a few weeks, after initially announcing it just two months ago, is not a reason to state Intel is having problems moving to 10nm.

    For whatever reason, they have chosen to release the low-power parts first; that is why those parts have fewer transistors and lower peak clocks too. This is a business decision which I'm sure Intel has not taken lightly. They are shifting gears and their focus (we need to try to keep up...).

    Mature 14nm chips with more transistors are the i7 QC models... This initial batch of 10nm chips will be 'U' models for tablets and 2-in-1s... you're comparing apples to orangutans. :)

     
  37. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Wrong! For a man who loves history, you sure do not seem to know it well!

    1) Intel originally planned to do 10nm with EUV. With EUV's delay to market, they had to use the older lithography. In fact, the 1-2 year time frame coincides with the release of EUV. Funny how that works out.

    2) Intel dropped the tick-tock cadence with 10nm. Why? In part, it was yields. No secret. That information has been publicly available for years. That is why a third 14nm, then a fourth 14nm generation was planned, with only the fourth version using the same architecture as Cannon Lake (in other words, a backup in case the yields were not fixed in time).

    "The first 10-nm Cannon Lake chips will be slower than 14-nm 8th Generation Core processors. Intel acknowledged the speeds during the manufacturing event, with a chart showing 10-nm chips catching up with 14-nm chip performance in one to two years." This directly states, without comparing apples and oranges or being qualified by the author, that the speeds are lower and will take years to catch up. That speaks to heat more than yields, so it just compounds the prior yield issues.

    "In theory, the 10-nm chips should be faster than the 14-nm chips. But the first low-power Cannon Lake chips will have fewer transistors, and won’t be comparable to the mature 14-nm chips with more transistors." Either the author has no ****ing clue what he is talking about (possible, especially if he is comparing something other than low-power chips), or he is directly stating in the second sentence that when the chips are compared equally, both being in laptops, the yields are so low that the new chips have fewer transistors. Considering Intel was touting an over-25% increase in density from their process, if a low-power chip has fewer transistors than the prior generation's low-power chips, low yields are the only conclusion. If the author is comparing the low-power chip with a desktop, when he said the 14nm Coffee Lake would be in laptops, then he is switching the comparison and it makes no ****ing sense. I will assume he knows how to make a comparison (although he doesn't seem to understand that denser means more heat, so lower clocks, so maybe I shouldn't give him that), so take his words to mean what they say on their face. On their face, the maturity of a node directly correlates with transistor yield. This means yield issues!

    (To be fair to you, this sentence suggests the author is an idiot: "But Cannon Lake is expected to beat its low-power Kaby Lake predecessors—which also have fewer transistors—on performance.").

    3) When the author states the change in priority, considering Intel will not release a Cannon Lake before next summer, he fails to recognize the trend of the last 5+ years: mobile, then mainstream, then HEDT, then Xeon (with the latter two often released at the same time; originally the first two launched together, then mobile came first, followed by desktop a month or two later, to optimize yields before the server parts so those get the most mature process and the least waste; this was changed because of AMD's core counts, which scare them). As such, Cannon Lake is NOT subject to the change, unless they are moving Xeons and HEDT up, which makes little sense unless they want to significantly cannibalize Skylake-E/X out of fear of AMD. I doubt that, so I'm guessing it is Ice Lake and Tiger Lake that switch priorities in timing.

    So, with the above said, I will gladly stand by my yield and heat comments!
     
    triturbo and Raiderman like this.
  38. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    Or a better hypothesis: Intel wanted to milk 14-nm chips as long as possible. Why would/should they speed up? Zero competition. Zero incentive to go for 10-nm chips earlier than what we are now seeing.
     
  39. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The fourth one, maybe. But, as I said, 10nm was planned around EUV, which was supposed to be to market by 2015 (matching Intel's tick-tock cadence, allowing for a transition to 10nm instead of Kaby Lake). So that is why I believe it was more than milking. But your point is well taken. Meanwhile, everyone else is waiting for EUV before doing 10nm and below, except for Samsung. This is why I believe it is more a problem of available lithography: the older lithography cannot pattern features that finely, which is why everyone has had trouble with yields at 10nm. Why anyone would think Intel could defy the laws of physics, I do not know.

    Edit: Also, Coffee Lake was originally planned to be similar to Broadwell, with a more limited lineup than Haswell or Skylake. So, given the discussion of wider application, I'm guessing it is yields.

    Edit 2: https://www.pcgamesn.com/intel/intel-new-stacked-cpu-design
    This suggests 10nm is difficult to use for all elements, meaning problems!
     
    Last edited: Mar 31, 2017
    Papusan and hmscott like this.
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    It seems a bit simple-minded to put those hotter dies right next to the cool-running ones; maybe split the functional features a bit better, or wait till it all works on 10nm.

    Maybe Intel is desperate?
     
    Last edited: Apr 1, 2017
    ajc9988 likes this.
  41. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Well, this mainly stems from the thought that Intel will directly purchase Vega iGPUs from AMD rather than licensing the tech and designing it on their own process. But even the article suggested not shrinking non-essential components, which would speed up time to market: the harder-to-shrink components provide fewer gains but delay things drastically. They can still call the core a full node shrink while using 14nm in other areas.
     
    hmscott likes this.
  42. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yeah, but 22nm mixed in too?
    Intel Heterogeneous CPU design.jpg
    The GPU is 10nm, so that's supposed to be AMD?

    Maybe Intel is desperate and they can't put out something competitive quickly, so they are forced to Frankenstein a solution with various "parts"?

    To me that sounds more like a one-off rat-hole that could create more of a mess than a successful interim solution.

    The Ryzen response: Intel have forgotten how to deal with a genuinely competitive AMD
    https://www.pcgamesn.com/intel/intel-amd-ryzen-competition

    "To me, Intel's ad hoc, scattergun response to the swathes of column inches that have been written about AMD's new chips has been ill-conceived at best and irrelevant at worst. They've almost given up on the recent Kaby Lake 7th Gen Core launch by talking about the 15% performance boost you can expect with the upcoming 8th Gen and missed the point of Ryzen by seemingly bringing forward their new expensive high-end desktop platforms."

     
    Last edited: Apr 1, 2017
    Raiderman, ajc9988 and Papusan like this.
  43. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    The video tells everything :D
     
    ajc9988 and hmscott like this.
  44. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    At times like these, more cores on the same process looks like it makes a lot of sense! (See Ryzen, of course!)
     
    ajc9988 and hmscott like this.
  45. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
  46. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    :cool:This is a reply to all beginning from post 1136:

    Lol...

    I get it; Intel is the big bad wolf that is soon to be eaten by the lovable, docile AMD. Ha! ;)

    Nothing that has been stated hasn't been done before by both tech companies (and others too). Is Intel regrouping? Yeah. I sure hope so! But the 'facts' are being taken from a long line of others' opinions...

    Unless anyone here works for Intel; nice conjecture. Bravo. Intel is scrambling (I agree) but not to the degree noted by some.

    Considered logically:
    Intel is still ahead, for the moment (I am not the only one still promoting Intel platforms to my clients...).
    Why wouldn't they use every method and process they have at their disposal? While simultaneously working on their next big thing?
    While AMD has hit a home run with Ryzen (and seemingly won the game too...), it remains to be seen if they will be able to repeat that process (again and again and again).
    Intel, on the other hand, has a definite plan with their offerings (no, I'm not privy to those plans; I just see their real-world implementations like everyone else). In the end, nothing seems to be offered at random... and when the pieces do fall into place like they want, they introduce disruptive technologies that leave other companies far behind.
    Of course, there is still the business and shareholder side of things... but everyone plays that game... the actual products sold that I can use are what matters; and here, they deliver in spades (up to now).


    Like I've mentioned before; I don't pay for tech based on the theory, process node or the marketing a company does. No; I buy and recommend to clients tech that has proven in my real world use (and the client's...) to be superior. Period. In that regard, nothing has changed with regards to what I can guess Intel will deliver next. It may not be what I want (then; I simply don't buy it). But it will almost for sure be what I need in the next two or three iterations - if I want to stay competitive, productive and profitable vs. my direct competitors (who will and do buy 'actual' superior tech - not just the promise of it)...

    I think that all our opinions are possible realities for the next few years for Intel (more likely; a mix of them).

    Focusing on the worst that can happen is reading the 'history' wrong. Sure, it is one possibility.

    But in tech, the past has never been very good at predicting the future. Twist those words if you want. But the truth is that whatever 'wrongs' Intel was doing up until now; it doesn't need to continue them. Again; it may for one or more iterations - but it will also be simultaneously working on something else/better too.

    The fact that Intel indicates so far in advance that a platform will ship in 2017 Q4 or 2018 Q1 is not a negative. That is a company I can do business with. The fact that it uses 2, 3, 4 or more process dies isn't relevant in the least; the proof will be in the pudding.

    What I concentrate on is the real end goal: a platform that is more capable (performance/productive), more efficient, just as stable/reliable and priced fairly - over and above what I already have. If Intel releases a turd, I am not obliged to buy it. If Intel releases something that is lower end than what I bought from them in previous years; I can wait. Especially as there is no real competition (yeah; even today).

    If those statements/'facts'/quotes about Intel were indeed true; yeah they seem idiotic. But they are irrelevant too in how I operate.

    Am I blindly defending Intel? Nah...

    Just showing a blueprint to navigate all the conspiracy theories, biases and other ideologies that may limit others from making logical decisions for their current and future tech purchases.





    Aside:
    Today's tech started life at least a decade ago. All of it. Even AMD's Ryzen.

    There is no company that can jump into new tech with both feet (they can't afford to - not even Intel).

    In 2006, AMD released the Turion. I had to try it against my 'older' Intel Core 2 platforms. Not good. This process continued for the next decade, with each new AMD offering that seemed to promise 'more'. Ryzen is the fruit of that goal (to beat/match Intel).

    In between, I have seen my productivity increase steadily and impressively. Over a decade, that increase is positively explosive! Those 'measly' few % bumps sure do add up to a lot - especially with all the random bits and pieces Intel has sprinkled in throughout the years (while holding back the tech they actually have... :rolleyes:). :)

    In late 2009, regarding SSDs, Intel said 'wait for the next few processors' that will need an SSD to shine - they were right (~2011 time frame for me). Today, they're showing Optane (can't wait for ~2019 to get here)...

    What Intel has conveyed (and proven) to me as a customer (when all the marketing BS is stripped away) is this: we will offer products that enhance real world usage. This, they do in spades.

    If or when AMD (or anyone else) can offer the same vision and with the track record to believe it, I'll be in line with $$$$.

    Everything else kinda fades into noise... :D


    I have to repeat this again (with the context above):

    See:
    http://www.gamersnexus.net/news-pc/2845-intel-optane-dc-p4800x-ssd


    I don't know what color glasses anyone else is using... but there is nothing on the horizon (CPU or otherwise) that will give a 283% increase in productivity like the results above show.

    I almost wish that were applicable to my workflow... I could work 'hard' 2 days a week and be on the beach for 5... :p :D :cool:
     
    Papusan likes this.
  47. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Happy April Fools everyone! :D :D :D
     
    Papusan, TANWare and ajc9988 like this.
  48. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, you ignore that the industry will be on 7nm by 2019 while Intel gets there in 2021. Then the industry moves toward 5nm by around 2022; Intel gets there in 2024 at the earliest. Finally, you ignore that designs are moving to where not all components have to be shrunk. Intel's dominance rests on comparing equivalent nodes; if the competition is working at a smaller node, part of that lead is erased. During this time, blood will be spilled, as the 7nm ARM A72 will reportedly run at over 4GHz with a TDP over 100W. That means ARM is planning to eat some server space. If AMD also hits 7nm with Zen 3 in 2H 2019, along with 7nm Navi, they will definitely be a force cutting into Intel's profits. What I foresee is a "death by 1,000 papercuts," mostly related to EUV being late to market and the materials-science developments still needed. Intel were simply the hare and got to the wall first. Getting past the wall is another story.

    Now, diverting to discuss Optane is NOT how you win that argument. You stop talking about the topic and distract with a discussion of other products. I am even less impressed considering that, if Intel does do an Ice Lake/Tiger Lake E/X platform, you have to switch motherboards because they use FIVR again. So, I find myself seeing both do what they can, but Intel is blowing sunshine up a lot of asses here!

    Edit: You also ignore that Intel is now regularly pushing back releases by 1-2 quarters.
     
    Raiderman likes this.
  49. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    The problem with the 283% increase is that it depends on overloading the swap file. In most cases where that type of stress is part of an everyday workflow, the system should be built with enough RAM or other resources to stay away from the swap file in the first place.

    Now, I would agree these drives would give Ryzen and Intel high-core-count CPUs much more capability on lower-end server motherboards, etc.
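    Whether an everyday workflow really is spilling into the swap file is easy to check on Linux by reading the SwapTotal/SwapFree fields of /proc/meminfo (those field names are standard; the helper name and sample text below are my own, a minimal sketch rather than a benchmarking tool):

    ```python
    def swap_used_kib(meminfo_text):
        """Parse /proc/meminfo-style text; return swap currently in use, in KiB."""
        fields = {}
        for line in meminfo_text.splitlines():
            key, _, rest = line.partition(":")
            if rest.strip():
                # Values in /proc/meminfo are reported in kB (KiB)
                fields[key.strip()] = int(rest.split()[0])
        return fields["SwapTotal"] - fields["SwapFree"]

    # Hypothetical sample in /proc/meminfo format
    sample = (
        "MemTotal:       16384000 kB\n"
        "SwapTotal:       8192000 kB\n"
        "SwapFree:        8000000 kB\n"
    )
    print(swap_used_kib(sample))  # 192000
    ```

    On a live system you would pass open("/proc/meminfo").read() instead of the sample string; if the number climbs during the workload, the 283% scenario applies, and if it stays near zero, more RAM would make the fast drive largely moot.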
     
  50. Maru

    Maru Notebook Consultant

    Reputations:
    136
    Messages:
    235
    Likes Received:
    89
    Trophy Points:
    41
    Here's one for a 'Super' Ryzen notebook. I see a cape and mask...
    More realistically, they also have a plea:


    How can a community drum up and show strong buyer interest in Ryzen notebooks, to convince their suppliers to commit capital and good design and engineering personnel soon?

    Discuss and socially spread proposed specs, price/performance, and design proposals, as done at crowdfunding sites?

    If so, how can Ryzen notebook proposals differentiate themselves for people who aren't yet interested? Is it down to price? Or does Ryzen have some advantages beyond price for certain niches? (Such as lower power consumption due to less AVX hardware and no iGPU.)

    Most manufacturers may prefer to wait for mobile parts such as Raven Ridge with an integrated GPU, so they can power down the discrete GPU and prolong battery life. Is there a big.LITTLE-style design with two GPUs that could achieve similar power savings? (Then the GPU could be fabricated on a GPU-optimized process rather than a CPU-optimized one.)

    Or is there a large enough market of people who buy portable desktop power and don't rely on battery life? (I think the 'lunchbox' portable computer market is small, but the portable ITX market may grow; just add a portable screen like the GeChic 1503H and an external keyboard with trackpad or trackpoint.)
     
← Previous pageNext page →