The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Intel's upcoming 10nm and beyond

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Apr 25, 2019.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
  2. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
  3. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Vasudev and hmscott like this.
  4. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    Last edited: Apr 25, 2019
  5. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    I actually think that leaked roadmap is so bad it's unbelievable.

    That it starts at Q1 2018 is a bit fishy too. It might be from 2017, or a worst-case scenario based on the capability of their old broken 10nm before that was all but officially canned.

    It'd mean Intel's been misleading the market on their revised 10nm "not+" progress if we get nothing but low power dual and quad core laptop and tablet chips by the end of 2021. Even the new CEO would be implicating himself in the continuation of the lie, when upper management changeover provides the single best opportunity for uncomfortable revelations to the market (that the new guys can blame on the old guys).
     
    Robbo99999 and hmscott like this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, some more info on its origins. The roadmap is for SIPP, which is for commercial integration of new CPUs, and which lags slightly behind consumer releases. So those dates may be a quarter, or up to 6 months, later than what consumers will see. It was presented to Dell and was leaked anonymously after the presentation.

    Not necessarily. As I just pointed out, it was presented to Dell, an OEM, for commercial products. That means detailing where they have come from over the last year makes a little sense, as it shows how long the life span is for each product sold to commercial purchasers.

    And, as mentioned, it is rumored to have been decently recent. Intel refused to comment on it either way. And Intel may have thought or believed they figured it out when the statement was made and never updated the statement.

    This also gives weight to the rumor that Samsung will be producing Intel cards on Samsung's 7nm process. They do show an integrated 10nm iGPU, but that would not be nearly enough to do a large-die dedicated card, especially since they don't have large-die CPUs listed.

    Just some thoughts. Another thought is if Intel misses getting 7nm worked out by 2022-23, they may be forced to go fabless. That would mean they would have to rely on TSMC or Samsung. Now, that is a scary thought, only due to further fab consolidation. But that is too far out to predict! Also, EUV lithography WILL be available for them by then, so if they get the cobalt integration figured out, they will be right on track. Just 10nm, without EUV, seems to be DOA.
     
    hmscott likes this.
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    I really hope Intel delivers desktop / laptop 10nm CPU's that are truly a performance jump ahead, vs matching 14nm+++ features and performance.

    It also seems that the new Intel iGPU technology only comes on the 10nm ULV CPU's arriving at the end of the year, as the 14nm desktop / laptop CPU's still have the current poor-performing onboard iGPU's - maybe another reason why the KF / F CPU's are arriving in force - most buyers have no need for the iGPU on high performance desktops / laptops that have dGPU's - it took Intel far too long to figure this out.

    The low cost Ryzen APU's with onboard GPU far outpace Intel's highest performance CPU's in gaming FPS, with the 9900K + onboard iGPU being almost unusable:
    http://forum.notebookreview.com/thr...-lake-cpus-z390.811225/page-149#post-10900296

    No matter what Intel finally delivers for 10nm, 2x more than hardly anything isn't enough to make a dent in the bottom line or increase market share back from AMD for 2019 / 2020.

    Hopefully AMD will be wise and take this opportunity to build bridges with vendors to deliver more laptops and desktops with Ryzen CPU's + Radeon GPU's.

    We need a strong competitor to Intel for progress to pick up again - to continue the pickup from AMD's Ryzen push - and hopefully Intel will catch up down the road.

    The cliff-like drop-off in Intel's datacenter sales seems doubly large to me. Is the market really saturated at such a high level, or are customers holding on to already-dated 14nm technology for longer than the usual upgrade cycle - waiting for Intel to deliver on 10nm (7nm?) and for AMD's 7nm DC solutions that promise real performance and cost improvements - instead of wasting more investment in 14nm silicon with architectural security vulnerabilities?

    If 10nm CPU's don't get rid of the 14nm security vulnerabilities or the accompanying performance hits the DC's have been suffering, DC customers are going to have a good reason to jump ship to AMD, which has fewer vulnerabilities and less of a performance hit mitigating them. If performance alone won't make DC customers jump ship from Intel, then AMD price + fewer vulnerabilities and fewer mitigation lodestones together might.

    Why Intel Is Slashing Its Sales Forecasts
    Bloomberg Technology
    Published on Apr 25, 2019
    Apr.25 -- Bloomberg's Nico Grant and Sarah Ponczek break down Intel Corp.'s first-quarter results on "Bloomberg Technology."


    If DC customers aren't buying, where is that "redirected 14nm production" going? As I've said before, I've doubted this whole "Intel CPU shortage due to redirection to DC demand" tall tale.

    I think the numbers from Intel are showing there is simply no demand in the DC or consumer realms - AMD is gaining market share and Intel has fewer placements of their products in a quarter after quarter market share decline.

    Sales shortfalls in a mature company like Intel with market and production dominance are due to customers not buying product, not because production can't keep up. Intel has dropped that story now, and is starting to give a glimpse of the true situation, customers aren't buying product.

    What is going to happen when reality is no longer obscured and the truth comes out?
     
    Last edited: Apr 26, 2019
    ole!!!, Vasudev, lctalley0109 and 4 others like this.
  8. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,695
    Trophy Points:
    331
    I agree to a point, the iGPU is largely useless and Intel could have slashed it earlier if it meant they could increase yields or the number of chips that were able to be sold. That said I do like having that iGPU in there in case the day should come I need to rely on it to get back into my desktop. In the case of dGPU failure or in the brief time in between old and new GPUs it's nice having a way to still use my desktop. Though this is still a rare use case.
     
    Vasudev, Ashtrix and Papusan like this.
  9. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    I think 110% of enthusiasts would say, for that die space gimme 4 more cores instead, I'll just grab some cheap htpc grade card from ebay for display outs.
     
    ole!!!, Mr. Fox, Vasudev and 6 others like this.
  10. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,695
    Trophy Points:
    331
    I agree. But they have to give us those cores lol.
     
    Mr. Fox and Vasudev like this.
  11. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Which is completely at odds with the supposedly leaked roadmaps! Makes me think the roadmaps are fake. But if 10nm isn't coming to Intel until 2022, then that's not good for them vs AMD's purported future with increased IPC & frequency, because AMD are obviously already winning on core count & price; if they can get IPC & frequency up significantly then it'll be down to Intel to offer a response, and I'm thinking they'd need 10nm for that.
     
  12. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    No one is happy seeing their competitor eat much more of the cake. Of course they can't wait 3+ years to answer the threat. Would you, if you ran a business?
     
    ajc9988 and Arrrrbol like this.
  13. Arrrrbol

    Arrrrbol Notebook Deity

    Reputations:
    3,235
    Messages:
    707
    Likes Received:
    1,054
    Trophy Points:
    156
    IMO they only keep developing the iGPU to get a head start for when they start producing dGPUs. Technically Intel is the largest GPU manufacturer if you count iGPUs.
     
    Rei Fukai, hmscott and Papusan like this.
  14. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Exactly, keep that basic video controller outside the CPU in the chipset where it belonged all along.

    Now with Hybrid packages - now chiplets - it's a matter of multi-layer communication close to the CPU die, but off-die to maintain separate power / cooling - with fancier IHS "heat-pipes"(?) - on separate silicon.
    Intel shouldn't even be on that GPU volume tracking list, it's Intel marketing BS at its finest.

    Intel's onboard iGPU shouldn't count in a discrete GPU tracking comparison; those tracking numbers would make more sense without Intel.
     
    Arrrrbol likes this.
  15. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Actually, it is not. Ice Lake-U quad core is on the roadmap. Also, entry level Xeon on 10nm (115x socket) is as well. But who here cares about a mobile quad core or likely a low core count entry level Xeon? Also, we have to look at yield and fab lines. Yields are so bad currently that only a quad core is possible, up from a dual core. This matches the "double" comment from Swan this afternoon regarding yields since the changes, although he could have meant a 50% reduction in defect density. To put this in perspective, AMD with TSMC is already at 70% yields, which comes out to a defect density around 0.45/cm², which is enough for mass production on a new process, which will only improve with time, and would yield over 500 non-defective dies on a 7nm wafer costing $11,000 per wafer, meaning each die would be around $22 for an 8-core. That doesn't account for scavenging usable dies, which would give a higher effective yield.
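    (For illustration, a rough sketch of that wafer arithmetic. The Poisson yield model, the 300 mm wafer, and the ~75 mm² 8-core die size are assumptions added for the example, and the defect density is read as defects per cm², which is what reproduces the ~70% yield.)

        # Rough sketch of the yield / cost-per-die arithmetic above.
        import math

        WAFER_DIAMETER_MM = 300      # assumed standard wafer
        DIE_AREA_MM2 = 75            # assumed ~8-core chiplet size
        DEFECT_DENSITY_CM2 = 0.45    # defects per square centimetre
        WAFER_COST_USD = 11_000

        # Gross dies per wafer (common approximation that discounts edge loss).
        r = WAFER_DIAMETER_MM / 2
        gross = math.pi * r**2 / DIE_AREA_MM2 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * DIE_AREA_MM2)

        # Poisson yield: probability that a given die has zero defects.
        yield_frac = math.exp(-DEFECT_DENSITY_CM2 * DIE_AREA_MM2 / 100)

        good = gross * yield_frac
        print(f"yield ~{yield_frac:.0%}, good dies ~{good:.0f}, cost per good die ~${WAFER_COST_USD / good:.0f}")
        # -> roughly 70% yield, ~600 good dies, a little under $20 per die; the same
        #    ballpark as the "over 500 dies / ~$22" figures above.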

    With that said, Intel also has the IMC and IO on die, which is hard to shrink. But, Intel only has three lines to produce 10nm and just invested heavily in expanding 14nm, meaning there is a need to recoup cost and they have extremely limited capacity for 10nm lines at the moment.

    As I explained in a different thread, details on the leak say it was a SIPP roadmap presented to Dell, meaning commercial machines, not consumer machines, which lag behind by a couple of quarters. Then there is a question of how current the map is. So let's give the benefit of the doubt and pull all products in by 1-2 quarters. That puts Comet Lake at Q4 2019 to Q1 2020; Q3-Q4 2019 was the rumored window for Comet Lake anyway, with the possibility of slipping to Q1 2020.

    With that explanation, does it seem more possible now?

    Sent from my SM-G900P using Tapatalk
     
  16. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Long day at work, so can't follow all the threads of your thoughts/information there, but in your initial post where you quoted the links to the various news articles, you summarised before those links with "Bad news for people on the Intel platform: NO 10NM UNTIL 2021 EARLIEST, 2022 DESKTOP." So, you're saying back end of 2020 for desktop 10nm now, rather than the 2022 you mentioned in your earlier post?
     
    ajc9988 likes this.
  17. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Pictures can help.
    [Attached images: Mobile roadmap.png, Intel SIPP Roadmap.png]
    So, I misstated, if we're being absolutely critical.

    Above, on the client mobile roadmap, you will see Ice Lake U in a limited 2c/4c offering coming out this year - in fact, it says halfway through Q2. Then you get Tiger Lake U in a 4c offering on 10nm in Q2 of 2020. You also get, in Q3, Rocket Lake U 4c/6c with 10nm graphics. On the second image, the Client Commercial roadmap, you can see NO listing for 10nm until Tiger Lake U and Y in Q2 of 2021, both in 4c variants. Allegedly, from reports, certain Xeon E parts may get 10nm.

    So, long story short, nothing above 4c on 10nm is planned until after 2021, which places it in 2022. Intel should have 7nm close to ready around then, so it seems they are skipping 10nm entirely.

    Now, for anyone here, do you want 4C or less? Do you want a "U" or "Y" series low power chip? Otherwise, no, there is NO 10nm planned.
     
    lctalley0109, Robbo99999 and hmscott like this.
  18. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,619
    Trophy Points:
    931
    Ok, we say 2022 :biggrin: Since you are so sure.
     
  19. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Intel has given me no reason to believe in them otherwise in regards to 10nm. They have limited production lines and invested in having more production on 14nm. The roadmap allegedly comes from a couple of months ago. Although yields have gotten better, they are still low. OEMs are deciding to put out better AMD laptops and integrate AMD into more desktops, signalling weaker confidence in Intel. Intel's 14nm shortage has now been placed into Q3 of 2019. We know Comet Lake will be 14nm. So the only question is if Rocket Lake S will also be on 14nm. Either way, Rocket Lake will be end of 2020 or start of 2021. That is one full year's worth of lineup. If it is on 14nm, then it will be confirmed. And 14nm going against 7nm+ on EUV lithography in the 2020 lineup is bad enough, never mind if they have Rocket Lake on 14nm, which would then go against a likely 5nm EUV chip on the other side.

    But, with little to the contrary, I'll go with the leaks. Not only that, Intel should have 7nm finally ready for volume in 2021, meaning in 2022, they would have those chips, which may be the largest jump in performance on Intel's side in a long time, if ever, if they are going from 14nm straight to 7nm.

    Also, as I have said, that is process tech. They will continue improving architecture over that time period. But with Intel being stuck on 14nm, after being late with 14nm and then giving us Broadwell, I'd have to say yes, they are having huge issues.
     
    hmscott and bennyg like this.
  20. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Thanks, makes sense. So nothing much better than the 9900K likely until 10 core Comet Lake in Q2 of 2020? And that's still on 14nm. (I'm thinking along the lines of 'ring bus' gaming oriented CPUs). To be honest though, in terms of gaming I don't think we'll need anything better than the 9900K for a number of years! I see CPU development stagnating now somewhat for consumer/gaming needs both in terms of requirements and performance offered. I think this because we've been on 4C/8T for so long as an upper end consumer/gaming CPU, then 'suddenly' we get 8700K & 9900K where we double the number of cores. That large increase in performance after all the years of stagnation makes me think there is gonna be a lot of headroom left in these CPUs for a number of years to come. I don't see the 8700K and certainly not the 9900K becoming sub-optimal for gaming for quite a few years now. Perhaps now CPU development will go back to the boring snail pace of before like the long reign of the 4C/8T CPU!
     
    ajc9988 and joluke like this.
  21. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    nah, not unless ryzen 3000 will be an absolute disappointment. expect that stagnation only on the intel side for a while...

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    hmscott and ajc9988 like this.
  22. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Couple points.

    1) the desktop roadmap refers to SIPP. That is commercial client deployments. That lags consumer releases. That means you need to subtract a quarter or two from some of the chips to determine consumer releases. Tom's Hardware says 9 months, but I wouldn't go that far. They are counting each quarter as 3 months, but are ignoring that Intel will release Comet Lake in October with wider availability months later, always with the possibility of slipping a quarter.

    2) the adoption of Vulkan and DX12, along with Ryzen optimization advancements, means that as time goes on, newer games will scale better. DX12 has CPU optimizations to further parallelize the workload, allowing better core scaling. Although single core is still important, it does improve performance at higher core counts. Vulkan is similar. And, as we've seen with Ryzen game optimizations in certain newer titles, it has added performance where an 8-core has concrete benefits over a six-core chip. These will only continue with time.

    3) Comet Lake may have Sunny Cove architecture advancements. Looking at each revision in architecture, meaning Ivy Bridge to Haswell and Broadwell to Skylake, Intel has improved IPC around 11%. This isn't overall performance, just IPC.

    This last point is important. AMD is looking at 11-15% IPC over Zen and Zen+. Zen is around 7% lower IPC than Intel's current offerings. So that means AMD should have roughly 4-8% higher IPC than Intel, depending on workload. If Intel releases Comet Lake with an 11% IPC gain, that would put them roughly 4-7% ahead on IPC again. This means AMD really needs frequency improvements to take the single-core lead (they already win on core count and multi-threaded).
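    (A quick sketch of how those percentages stack, using only the rumored figures quoted above; compounding the ratios gives slightly lower numbers than simply adding the percentages.)

        # Relative-IPC arithmetic for the rumored figures above (illustrative only).
        INTEL_IPC = 1.00                       # current Skylake-derived IPC as the baseline
        ZEN_PLUS_IPC = INTEL_IPC * (1 - 0.07)  # Zen/Zen+ said to trail Intel by ~7%

        for zen2_gain in (0.11, 0.15):         # rumored Zen 2 uplift over Zen/Zen+
            zen2_ipc = ZEN_PLUS_IPC * (1 + zen2_gain)
            comet_ipc = INTEL_IPC * 1.11       # speculative Comet Lake / Sunny Cove gain
            print(f"Zen 2 (+{zen2_gain:.0%}) vs current Intel: {zen2_ipc / INTEL_IPC - 1:+.1%}, "
                  f"vs an 11%-faster Intel: {zen2_ipc / comet_ipc - 1:+.1%}")
        # -> roughly +3% to +7% over today's Intel IPC, and roughly 4-7% behind a
        #    hypothetical 11% Intel uplift.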

    Now power consumption to achieve the performance is where Intel will really be hurt! But that is a different story.

    Sent from my SM-G900P using Tapatalk
     
  23. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    We'll have to see what kind of game development can really challenge these 8 core 16 thread CPUs; historically games have mostly been GPU bound apart from high refresh rate gaming, but future game development may change that I suppose. I don't know much about game development, but I think the more 'assets' you have on screen at any one time, the more CPU hungry it is - I think this was highlighted by the DX12 3DMark CPU test, where DX12 showed a massively more complex scene than DX11. Yep, I think taking advantage of all these extra CPU cores will mean more complex game environments; we'll have to see if the demands scale with the development of these new CPUs.
     
    ajc9988 likes this.
  24. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    You're forgetting the x-factor. Next-gen consoles arriving in 2020 will sport 8C/16T 7nm Zen 2 CPUs, so as with every new console generation, expect a huge increase in baseline CPU requirements in next-gen games as those boxes are what drive AAA game dev.

    We already saw that happen this gen starting a few years ago, with games like BF1/BFV, Watch Dogs 2, and AC Origins/Odyssey making 4C/4T CPUs obsolete for 60 FPS. Thank god for Ryzen giving Intel the kick in the ass it needed to give us more cores on mainstream/non-HEDT CPUs.
     
    Last edited: Apr 29, 2019
  25. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yes, I remember seeing reports about those next gen consoles over on Guru3D and it led me to speculate that it'll put pressure on my 4C/8T CPU in the future! Those are probably gonna be pretty low clocked CPU cores in those consoles I'm thinking - so 8C/16T PC desktop CPUs are gonna pack more of a punch due to frequency (maybe 25% extra?), but then again they don't have quite the same optimised environment as a console, so maybe it's even stevens on that front.
     
    lctalley0109 and ajc9988 like this.
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Even with the lower clocks looking roughly like mobile CPUs, it looks like they will be adopting the CPU optimizations for DX12 or Vulkan. M$ obviously has an incentive to use their own DX12, but if Vulkan starts being used, even if a tweaked version, for PS5, it will be interesting to see where things go.

    But, even those games mentioned, many of them didn't get Ryzen CPU optimizations until Feb/March of this year.

    But, definitely gotta be happy for the kick in the pants. It will be interesting to see the Comet Lake 10-core against the Zen 2 12-core. With the rumored 15% IPC, it would still take a Zen 2 based 8-core at likely 4.7GHz all-core to match a 9900K clocked to 5GHz. The engineering samples only do 4.5GHz, so unless they have more gas in the tank, it would still take the 12-core to beat the 8-core Intel chip. And that is assuming the 4.5GHz is an all-core OC, not single-core boost.
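    (Same rumored numbers as before, just inverted to check the 4.7GHz figure: performance ≈ IPC × clock, so the Zen 2 clock needed to match a 5GHz 9900K is 5GHz divided by the relative IPC.)

        # Quick check of the "4.7 GHz all-core to match a 5 GHz 9900K" claim (illustrative).
        INTEL_CLOCK_GHZ = 5.0
        ZEN2_IPC_VS_INTEL = 0.93 * 1.15   # ~7% behind Intel, plus the rumored 15% Zen 2 uplift

        required = INTEL_CLOCK_GHZ / ZEN2_IPC_VS_INTEL
        print(f"Zen 2 all-core clock needed to match: ~{required:.2f} GHz")  # ~4.68 GHz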
     
  27. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    so even in worst case scenario, the 16 core zen 2 will likely beat everything that intel has to offer in its mainstream lineup :)
     
    lctalley0109, ajc9988 and hmscott like this.
  28. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    And they know it. That is why Red Gaming Tech has said two things: 1) AMD has no idea how to market and who the target audience is for a 16-core mainstream CPU (some workstation work needs the memory bandwidth, other needs the PCIe lanes to scale out add-in cards, like graphics cards, to accelerate processing, etc.); and 2) AMD wants to wait on the 16-core release to keep hype built up for the process and their products. Technically, a sub-point to number 2 is that they also want to respond in the event Intel's 10-core somehow beats their 12-core CPU.

    Personally, I feel any mainstream CPU at 8 cores and above, regardless of AMD or Intel, has two primary consumers: 1) game streamers, and 2) entry level to intermediate content creators, like YouTubers, hobbyists, etc. The second group depends on the amount of content produced, the quality necessary for the content, etc. As they grow, they will likely move toward HEDT builds as it makes sense. But this will help get them used to the platforms before that point. And due to more cores and higher performance graphics cards in the industry at large, there is a burgeoning number of streamers and content creators online taking advantage of the lower cost of entry.

    But, "that is like [my] opinion, man." (Big Lebowski).

    Sent from my SM-G900P using Tapatalk
     
    jaybee83 and hmscott like this.
  29. AlexusR

    AlexusR Guest

    Reputations:
    0
    I would not say 8-core CPUs are only useful for streamers (who can actually use Intel's QuickSync or Nvidia's NVENC to encode the stream, which would be cheaper and give better performance) - gaming consoles right now use 8-core CPUs, so many game developers have to learn to optimize games for maximum scaling across those 8 cores. And this will also affect PC ports of console games or games which are released both on PC and consoles, especially MMORPG games with large PvP battles where you have to do A LOT of CPU calculations for things like the position and action of each player (there can be hundreds of them in a single battle) or physics calculations for large destructible structures without relying on proprietary APIs like PhysX (for example, the upcoming Camelot Unchained MMO has its own proprietary physics engine that is not using PhysX or other existing APIs).

    Now the 16-core is a little bit excessive. Although if AMD had convinced console manufacturers to use such a CPU for the next console generation, game developers would've quickly found ways to use all those extra cores to maximize performance ;-)
     
    Arrrrbol likes this.
  30. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Can't we just look at what workloads the current 16 core CPUs show good scaling with, and make allowance for better gaming performance on non-NUMA/non-mesh memory subsystems ?

    I get why AMD are not rushing to release 12/16 core mainstream chips... they will just cannibalise sales of their own 12/16 core Threadrippers, which will force firesale prices like what we saw at EOL of the 1900X

    If Intel keep the same socket for the rest of their 14nm desktop CPUs... Maybe one day for teh gits n shiggles I'll drop a 12 core in this thing that was originally sold with a 2015 quad core ... 1151 may end up longer lived than AM4 ROFL.
     
  31. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    bwahahahahahahahahaaaaa.....im right there with you! :D
     
    Rei Fukai likes this.
  32. AlexusR

    AlexusR Guest

    Reputations:
    0
    Yes, CPU encode will still give better quality and you're right, the 16-core can be useful for that for people who don't want to go dual PC (one for playing and one for encoding). I just wonder how well the core sharing will work with games that will be trying to use as many cores as possible (though you can always set processor affinity manually for games and programs like OBS for optimal performance).
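    (For anyone curious, a minimal sketch of scripting that affinity split instead of setting it by hand in Task Manager, using the psutil Python package; the game process name and the 12/4 core split are hypothetical examples.)

        # Pin a game and an encoder (e.g. OBS) to separate core groups with psutil.
        import psutil

        GAME_CORES = list(range(0, 12))      # first 12 logical CPUs for the game (example split)
        ENCODER_CORES = list(range(12, 16))  # remaining logical CPUs for the encoder

        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            try:
                if name == "game.exe":       # hypothetical game executable name
                    proc.cpu_affinity(GAME_CORES)
                elif name == "obs64.exe":    # OBS Studio's 64-bit process on Windows
                    proc.cpu_affinity(ENCODER_CORES)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # skip processes we aren't allowed to touch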
     
    ajc9988 likes this.
  33. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Have you ever tried Process Lasso?

    Sent from my SM-G900P using Tapatalk
     
  34. AlexusR

    AlexusR Guest

    Reputations:
    0
    No, I only tried streaming using dedicated hardware encoders. I can see how this program would be useful on a multi-core CPU since it can apply persistent affinity to individual apps and games, so this should work great for 16-core CPU users.
     
    ajc9988 likes this.
  35. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    For the stated use cases, two 4 Core or 8 Core platforms will be a lot more productive and future proof than any single 16 Core platform you can buy today, even if each setup comes with 64GB RAM for every 8 cores used.

    Having 16 Cores today is the equivalent of racing stripes and flame decals on '90s cars...

    I've been hearing for almost three years now how AMD will change the PC landscape with their high core offers.

    Yeah, still waiting.

    There is a reason that Intel didn't lose 80% (net income) as AMD did, but that side of the business is conveniently ignored by the AMD blind allegiance here. That reason? Intel is still delivering more performance, period.

    The fact that Intel can do this with their oh so old process node(s) is even more noteworthy. But I'm sure I'll be told how 'out of touch' they are again.

    It may look like I'm bashing AMD and/or putting Intel on a pedestal. But people, come to your senses; neither you nor I can influence the numbers that matter one iota. When Intel is surpassed, I'll be the first to admit it.

    But continually coming up with imaginary uses for 16 cores when there's still not much use for them (especially in this topic/context), is getting a little tiring.
     
  36. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Says the person using an aging Adobe program optimized for single-threaded performance.

    We can revisit this in a month to two months, then again in about five months or six months if comet lake drops around then.

    Part of the issue is software designers not parallelizing workloads. As we all mentioned, games are only starting to regularly get scaling beyond six cores. Adobe actually went backwards on scaling in some programs, whereas their competitors actually do scale, but are less used (compare Premiere and Resolve for video editing).

    The use of 12 and 16 cores or above is understood for professionals. Consumers, until now, have not had such a luxury. Because of that, and how consumer software is designed, you do have a point on consumers looking for ways to use all that extra power, mainly because software companies haven't designed their products for the commercial space, instead focusing on lower core counts. That is a temporary issue, fixed through changes that are coming.

    Hell, with the optimizations in a game like Civ VI: Gathering Storm, the AI processing for late-game play on my 1950X now demolishes ANY 8-core chip out there. Once the programming is in place, your comment will not age well.

    Sent from my SM-G900P using Tapatalk
     
    bennyg likes this.
  37. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Please stop trying to pigeonhole me into a single workload. I have always stated my workloads are varied, but that they were 'most' like PS. Not only do you not know my actual workloads (I would be a fool (yeah; competitors) to divulge them fully and publicly), but most here have an instant bias: anything or anyone that seems against AMD or pro-Intel is to be ridiculed, instead of engaged in conversation. Not only are most of my workloads not based on Adobe products currently (and for quite a while now), but they are also made up of custom and proprietary code too. ;)

    Let's try this again: I'm pro-productivity, period. I support the hardware that actually increases my productivity at the end of the day, not how much I'm liked in a little corner web forum, or how good the 'scores' look on mere synthetic tests. Yeah, I would love to revisit this in a month, half year or even another three years from now, but I don't see the movement needed and required to move beyond 8C/16T in any meaningful way in this time-frame, just like I predicted in another thread so many years ago too.

    Professionals who actually need more than 8 Cores were always served well enough, and right now, they have great options of choice. For that, we can thank AMD. But if productivity is their goal and they have a normal, varied workload, just like I do and most of the people I know do too, then mere additional cores are not the answer, even today. This is known by most of the professionals in my circle, intimately.

    At the most? One, two or even three (high and very high) core count platforms are commonly used much more efficiently in an organization vs. having every workstation be capable of the highest demand process, yet come in second, third or last in their most used processes/workloads. It just makes sense because that is still the reality of how software currently, and for the foreseeable future, works.

    Stating that software designers are slow in adopting and utilizing these additional (sometimes available) cores misses the point. I don't go to my suppliers and whine about how I wish things to be and then go about buying and configuring products based on those imaginary wishes. I tell them to provide me with their ultimate platform example and then I test it in my environment. Either it flies or it dives. Next.

    While it is nice for consumers to have access to multicore platforms that resemble the best of a few years ago at cheap prices, that has always been the case. Are they now getting mostly on par with those older platforms and in some ways surpassing them? Great! But that doesn't make a multicore platform the 'go-to platform' either, for most.

    My comments will age well because I don't say these changes will never come. I'm saying they are not here yet. And until they do, these computers are frequently toys for most because there are other, more suitable and finely tuned options out there and for much less too.

    Will a $1K processor 'demolish' a $500 8 core chip in a single example as you've given? Yeah and yawn... (and, I'm simply taking your word for it) of course, it will. When I say the same thing about Intel's offerings for my workloads, why am I wrong then?

    I hope that the very slow momentum for (very high) multicore support and especially for parallelizing software workloads in the last three years or more will accelerate at a much more exponential pace going forward. As long as it is not at the expense of the performance we can get now, or even with three-year-old hardware, in less parallelized workloads.

    When a current desktop runs my workloads slower than what I had as a 'mobile' workstation years ago, that is not something I will pay for.

    And, I have always agreed that, once the programming is in place, we'll all be singing from a different songbook.

    But then, so will Intel. ;)


     
  38. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    You always hide behind this BS ambiguous workload that you never specify because it would allow people to dig into it.

    Now, with that running critique out of the way, let's get to where you are absolutely correct: getting the machine right for the job. That is why a while back I recommended doing a cheap AMD capture machine and an Intel 6-core for streaming rather than getting an 8-core. At the time, there were basically no games getting much scaling from the two extra cores. This is just an example, as you've seen me have vastly different recommendations on server and other workloads.

    Finally, we are starting to have increased parallelization on programs allowing them to scale further. That is only going to grow as it is more common in mainstream and HEDT chips as well as server. Chiplet is coming to everything. And because, at least until graphene or similar is viable, we are hitting frequency limits on current technologies, as well as costs of miniaturization increasing, we really only have one choice: moar cores! That is just where we are at on computer tech.

    Now, if doing purchases for business or as an enthusiast, upgrades happen quicker - once it is necessary or makes fiduciary sense for businesses on TCO and ROI, or once there is a jump in performance for enthusiasts. But the overall trend is that ordinary people are buying systems and then holding onto them for longer periods of time. In part this is due to the global economy retracting, in part it is the increased cost of systems while wages are stagnant, creating a situation where it is harder to allocate funds, and in part it is due to the proliferation and higher costs of phones and mobile devices (tablets, laptops) causing people to have to plan which device is upgraded when, etc.

    Now, with that last point, that is where we disagree and agree. When it makes sense for us to upgrade, we do it. I rely more on CPU multi threading, which is why I still have a 980 Ti paired with a 1950X in my workstation (the extra 60% to grab the Intel 16 core wasn't within budget at the time, not saying it is a bad product). Your workloads seem lighter on multi threading by comparison, but benefit from high frequency single thread work. Nothing wrong with that. And for systems in various work environments, you might want to have a 9900K sitting by a 2990WX machine loaded with 8 graphics cards.

    But, for average consumers, 5 yrs + is happening more often. As such, and the stated info above, higher core count will age better.

    Now, we both have our caveats on stated goals. And, depending on where you fit in on purchasing, different goals make sense. But it is not always better to buy the best for today while ignoring where things are going, especially for non-business consumers.

    Sent from my SM-G900P using Tapatalk
     
    rlk likes this.
  39. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    There's no question that parallel programming, particularly for things that aren't embarrassingly parallel, is more difficult than sequential programming. All manner of programming models (a lot of which I've been involved with, starting with Thinking Machines) -- from the pure data parallel, to data flow, message passing, explicit threading, what have you -- have been tried, but they tend to be more difficult than trusty old single threaded for a lot of problems, and no one parallel programming model works well for everything. But it's also true that clock rates and CPI aren't improving as fast as in yesteryear, while core (and thread) counts continue to increase, and we can thank physics for that. Faster clock rates demand faster switching, which means greater power consumption (superlinear) to drive faster transitions. And then we hit the simple speed of light limit, and the fact that matter is composed of atoms, which are not negligible in size compared with circuit features. Those limits are much less of a problem in parallel systems, particularly when sychronization can be minimized.

    For that matter, even single threads are not purely sequential at the instruction level, and haven't been for ages. Pipelined instructions are themselves parallel (control parallel, not data parallel), and it's important to issue instructions in a way that doesn't break those pipelines. If you don't use assembly, you're relying on the compiler to do that work for you, but it's still there.

    Bottom line, getting better performance will intrinsically require greater use of parallelism, be it at the data or the task level.
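    (As a toy illustration of the "embarrassingly parallel" case described above: each item is independent, so a process pool scales with core count until I/O or synchronization gets in the way. The workload function here is just a stand-in.)

        # Embarrassingly parallel example: independent items, one worker per CPU.
        from multiprocessing import Pool

        def cpu_bound_work(n: int) -> int:
            # stand-in for an independent, CPU-heavy task
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            inputs = [200_000] * 64
            with Pool() as pool:             # defaults to one worker per CPU
                results = pool.map(cpu_bound_work, inputs)
            print(len(results), "items processed")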
     
    Kyle, hmscott and ajc9988 like this.
  40. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    I don't particularly care what your exact workload is. I do find it interesting that other professionals in your circle (presumably your competitors) understand your workload but you can't tell anybody else. Whatever.

    It's not "still the reality of how software currently and for the foreseeable future" can't take advantage of lots of cores. That may be the case for your field, whatever it may be. Or at least for the custom and proprietary code that may not have been updated. But for my field -- software development (working on Kubernetes/OpenShift by day, other various FOSS packages off hours) -- having lots of cores means that builds run that much more quickly, and testing (when written in a way that's not inherently sequential) also completes more quickly.
     
    hmscott and ajc9988 like this.
  41. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    ajc9988 and rlk,

    My workloads are not ambiguous, but neither are they 'standard' to anything you or my competitors may like to think or imagine. To understand my workload is simple; I process lots of high-resolution images and transform them into something my clients want. Now, do you know my workloads? No, I didn't think so. But I'll repeat once again: slow, high-multicore platforms are the BS empty promises around here.

    I never stated that your workloads may or may not benefit from a high multicore platform, but I am stating that the majority of users won't benefit (and haven't for the last three years or so either). Pushing AMD or Intel high multicore platforms as future-proofing at the expense of the present day, real-world performance is kinda sad to me. Those kids don't know any better.

    Finding software that takes advantage of all those cores efficiently and better than a lower core count platform is also a little disingenuous. I would even say that is one more reason why PC sales have stagnated. If 'upgrading' gives you less performance at double the core count, why not wait for the 112 core platform to waste my $$$$$ on? :rolleyes:

    Do I not want actual productivity workloads to be more parallel? Of course, I do. That is the future. Still doesn't bring it here today.

    Let's talk about concrete facts, shall we? I've been saying a version of the above for almost a third of a decade now. What has happened in that time with regards to parallelism in workloads/software that we didn't have before then? This is the only question that needs to be answered. Everything I've been saying rests on that.

    Given that slow, glacial progress over the last two or three years (if we give time for AMD's products to be in the hands of the devs...), I really hope that the next equal time period shows exponential results.

    But coming back to what you buy today? You always buy the most powerful system you can for your current workloads. Predicting the tech future is a good way to go out of business.

    Here's my concrete example: when I joined notebookreview almost 10 years ago looking for info and real-world tests to decide whether SSD's were in my immediate future, I was told to just shut up and buy them. A few years later, I found their use case; OP'ing by almost 50% at the time. The proof that I offered then still wasn't enough to stop the ridicule from the naysayers. This is no different, but now, my 'bs ambiguous workload' is what is attacked, instead. o_O

    Instead of trying to build yourselves up by tearing me down, try answering the questions above and below that I pose to the whole forum too.

    What has transpired over the last few years that has made buying a high core count platform a requirement? And without wishing and speculating on the future, what real reason (if there is any) makes buying one today 'future-proof' to 2024?

    Because from where I stand, in 2024 I won't have a single tech item that I'm currently using today.
     
    Kyle likes this.
  42. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,808
    Trophy Points:
    331
    I'm going to just step in real quick and remind everyone to be civil to each other.

    Having said that, in my opinion, the blanket statement of buy the fastest you can afford doesn't make sense to me. There are so many factors you have to weigh when purchasing a system that you can't make a simple statement like that. You have to look at budget, desired longevity, current/desired usage, cost of downtime, etc...

    When I was doing custom builds, I would take a ton of time trying to figure out what the customer actually needed. Sure I could have just thrown the fastest processor and what have you in there, but that would have been a disservice to the customer. There is no right answer for a build for everyone, that's why there are so many different parts out there.
     
  43. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    Interesting. A priori, this workload sounds like it would be highly parallel, and assuming that the transform stage is independent of any other image (which assumption may well be false), that part should be embarrassingly parallel. "Embarrassingly parallel" is not an epithet; it's a term of art from the high performance technical computing community that means that different items to be processed are independent of each other, and so adding more parallelism up to the size of the problem results in arbitrary speedup.

    The key to optimizing any workflow of this nature is to minimize the amount of work that needs to be serialized, and in particular, the amount of time required by a human. That means making the human interaction as fast as possible, even if it means more back end processing. It's worth analyzing one's workflow carefully. Your actual processing steps might be very different, but it's worth doing the kind of analysis I'm outlining below.

    I have a schematically similar workflow in my avocation (sports photography for my alma mater) that I've put a lot of work into optimizing. I typically take about 2000 frames and keep 300-400 of them, which I upload (you can see this at https://rlk.smugmug.com/Sports). The steps amount to:

    1) Shoot the game.

    2) Offload the photos onto my system.

    3) Import the photos into my image management system (KPhotoAlbum).

    4) Review the photos and select the ones I want to keep.

    5) Crop and rotate the selected photos.

    6) Apply a watermark.

    7) Upload the photos.

    Step (1) of course is on the game time. Step (2) is sequential; it's limited by the I/O throughput (but if I had a fast enough card, it might be worth investigating parallelizing that, to achieve a deeper I/O queue depth).

    Step (3) is partially parallelized, helped by some coding I did to partially parallelize checksum computation and thumbnail generation (so there's some data parallelism and some control flow parallelism there) in addition to using an I/O scout thread to pre-read the images into memory. With a fast SSD, it would be worth increasing the number of scouts to improve queue depth, but I don't have an NVMe drive to tune that. More threads might allow greater parallelism of thumbnail and checksum computation if I had an NVMe. Between this and some other improvements, I'm basically I/O limited on a SATA SSD and am completely I/O bound to a hard drive.

    Step (4) is, of course, sequential, although KPhotoAlbum preloads images so I don't have to wait to skip to the next image. This is also human-intensive; KPhotoAlbum lets me tag images with a single key and use the space bar to move to the next image (being able to tag-and-next-image in one key stroke might have benefit). This step is one of the two time-consuming steps, in this case because I have to review a lot of images.

    Step (5), the processing step, is partly on my time (decide on crop and rotation) and partly computation. There are two basic apps I can use for this, Darktable and RawTherapee (on Linux). I use RawTherapee because the crop workflow is faster; I can do it with one click-and-drag rather than having to position the mouse in the corner and do it in more steps. It's about 5 seconds faster per image because of that; with 300 images, that's not negligible! This is the other time consuming step, and I'd like to see what I can do to further optimize it.

    But actually applying the crop, rotate, and watermark (step 6) is something else. Neither Darktable nor RawTherapee efficiently parallelize image export. They can perform certain operations using multiple threads, but not multiple images simultaneously. So I wrote a script that extracts the crop and rotation from the sidecar files generated by RawTherapee and uses ImageMagick to apply the crop. This part is parallelized; my script processes multiple images simultaneously. That saves about 10 minutes processing a typical game. (A rough sketch of that kind of script follows step 7 below.)

    Step (7), of course, is network bound.
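    (Here is a minimal sketch of the kind of parallel export script mentioned in step 6, assuming ImageMagick's convert is on the PATH; the file names, crop geometries, and rotation angles are hypothetical placeholders, and extracting them from the RawTherapee sidecars is left out.)

        # Apply stored crop/rotation to many images in parallel via ImageMagick.
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        # geometry uses ImageMagick's WxH+X+Y crop syntax; angle is degrees of rotation
        CROPS = {
            "IMG_0001.jpg": ("3000x2000+250+400", 0.0),
            "IMG_0002.jpg": ("2400x1600+0+120", 1.5),
        }

        def export(item):
            src, (geometry, angle) = item
            dst = "out/" + src
            subprocess.run(
                ["convert", src, "-rotate", str(angle), "-crop", geometry, dst],
                check=True,
            )
            return dst

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:   # one worker per core by default
                for done in pool.map(export, CROPS.items()):
                    print("wrote", done)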
     
  44. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You have a pretty good workflow for a single 'shooter', single user workflow.

    I make workloads as parallel as possible by using multiple 'shooters', dozens of workstations and multiple staff. ;)

    Multiple workstations are much more productive than a single monster workstation in my experience - especially when a workstation goes down (and they will and they do). 'Chunks' of each shoot are processed on multiple NAS and even more workstations and the entire job process stops when/if the 'perfect' required/contracted images are processed and recognized early on.

    Machines simply can't replace humans when selecting 'keeper' images, except for things like focus, etc. If they are used like that, sooner or later the images just all kinda look the same (this was tried and abandoned already). I doubt that this will change in my lifetime, or at least for my clients' needs.

    When I was also shooting not that long ago, I would capture up to 1K images per 10 minutes, continuously for hours. And I seldom shot alone. When the shoots were (known) to be shorter, time-wise, 2K+ images per 5 minutes was easily reached, per photographer.

    The less an image is retouched, the more life it has (yeah; even RAW images). After effects are more packaging and getting some special images print-worthy at the sizes requested, but seldom significantly slow the above process anymore. The cameras are used to create the 'feel' contracted. The software is only a safety net. ;)

     
  45. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    Yep, that's certainly one way to do it :) It sounds like a highly tuned workflow for your needs, and that you're doing even less post than I am. I agree that parallelizing the photographers and having the fastest processor for a very simple workflow -- which likely does mean high clock rate and low core count -- makes perfect sense for what it sounds like you're doing.
     
  46. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,695
    Trophy Points:
    331
    Intel Process Technology Update: 10nm Server Products in 1H 2020, Accelerated 7nm in 2021

    10NM Ice Lake shipping in June for laptops?

    https://www.anandtech.com/show/1431...r-products-in-1h-2020-accelerated-7nm-in-2021

    https://www.reddit.com/r/intel/comments/bmaslc/intel_confirms_10nm_to_be_released_this_year_10nm/

    I'll be holding onto my 9900K until 2021 methinks. 10nm will be great and all, but 7nm should be a healthy jump for me. Looks like AMD will have a short lead over Intel, but it won't be nearly long enough to dramatically shift market share.
     
    Last edited: May 8, 2019
  47. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Could be an extra year, until 2022, that you might have to wait for consumer 7nm according to that article. I'm imagining that I'll be going with a 10nm 8core+ product before then, don't think my 6700K will hold out in usefulness to 2022 (but we'll see).
     
    Papusan, joluke, Talon and 1 other person like this.
  48. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    They only mention 10nm server chips, which fits in with the Xeon E entry level, low core count Xeons previously disclosed.

    We already know Comet Lake is 14nm. Unless its successor is 10nm - on which there is little to no guidance, and which would be a year after Comet Lake (read: late 2020) - the next node change would be late 2021, which may be the 7nm chips, which could slide to 2022.

    So, the only question is if Intel will skip directly to 7nm for mainstream desktop. Nothing they said discredits my prior analysis.

    Sent from my SM-G900P using Tapatalk
     
    hmscott likes this.
  49. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,695
    Trophy Points:
    331
    Intel's latest road map shows 10nm this year/next month, 10nm+ in 2020, with 10nm++ and 7nm in 2021. Exciting times ahead for all. Somewhere in that mix we will see desktop chips obviously.
     
  50. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    More Intel Marketing BS and future unfulfilled promises for 10nm, and now 7nm production best wishes is all I see.

    As far as I have seen so far, the only Ice Lake 10nm CPU's are ULV Quad Core CPU's, nothing like a 6c/12t or 8c/16t model. Who wants a 4c CPU in this era of higher core count consumer CPU's?

    These are supposedly higher yield versions of the same 10nm process used for last year's low production 10nm ULV CPU's that had disabled iGPU's - maybe this year the iGPU's will work? I doubt that 10nm process will match current 14nm IPC or performance at the same clocks, but maybe it will?

    Does anyone see a desktop 10nm part on the charts? I didn't see any such listing. It wasn't obvious to me, and I don't think Intel is sure enough to even suggest a wish date for delivery for 10nm desktop or H level laptop CPU's.

    Based on the 10nm/7nm overlap with no 10nm desktop parts showing, and no 7nm desktop part showing, I still don't know what to think as far as Intel finally delivering any kind of useful 10nm / 7nm desktop / H laptop CPU's.

    To me it looks like a nicely filled out chart with 10nm / 7nm BS sprinkled in between the real 14nm production runs, in the same way as the last 3-4 years of missed deliveries for 10nm production promises.

    The only difference is that now Intel has added 7nm to their wish list.
     
    Last edited: May 9, 2019