The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Intel Core i9-9900k 8c/16t, i7-9700K 8c/8t, i7-9600k 6c/6t 2nd Gen Coffee Lake CPU's + Z390

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by hmscott, Nov 27, 2017.

  1. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    I'd say that's actually quite a lot of difference. Remember: we are comparing the highest performing liquid metal paste to the second highest performing regular paste, and the latter even manages to beat the former, albeit by a small margin.

    If you want a proper comparison (apples to apples), we should look at "GC Extreme between die/IHS and IHS/heatsink" compared to "GC Extreme between die/heatsink".
     
  2. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    I think you misinterpreted the information that lctalley0109 supplied. He was comparing the following two configurations:
    1) Silicon Die->CLU metal paste ->IHS->'regular non metal paste'->water block
    2) Silicon Die->Conductonaut Metal Paste->water block.

    The second configuration beats the first option by a small margin, so you got it the wrong way round when you said the regular paste was beating the liquid metal paste. In addition, the second configuration had the combined advantage of direct water block contact combined with using only liquid metal paste (no regular paste involved), yet it only won by a small margin. Given the small temperature margins we're talking about, the experiment has to be very tightly controlled to determine the extent of the real temperature performance differences of the two configurations.

    You're right though that apples to apples should involve apples, and we've got a few oranges in this experiment! :-D
     
    lctalley0109 and jaybee83 like this.
  3. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    Aaaaah, got it! Here I thought the second config with direct die was with GC Extreme :)
     
    lctalley0109 and Robbo99999 like this.
  4. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yeah, I figured that is what you had inferred.
     
  5. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,694
    Trophy Points:
    331
    jclausius, Robbo99999 and ajc9988 like this.
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Wow! Unlike the Ryan Shrout hire, which was made to design purposefully misleading benchmarks showing Intel in the best light (even when some of those benches amounted to relatively little real-world difference between the two CPU companies), Kyle has blasted every tech company over the years, including recently. In fact, many tech journalists attacked him last year over some of his rants about Nvidia.

    But I cannot speak ill of his work: even when I disagreed with certain elements of it, his work was good, and he had a responsiveness to the community not seen at many publications these days, save a few. And considering the nature of his new job, enthusiast outreach, I really think this is a great opportunity for him to give straight feedback internally and act as a mediator between the community and Intel. It could also help crack the usual problem of lawyer-vetted communication, where Intel employees always have to carefully craft statements and double-check whether they are allowed to say something (although that will ALWAYS be in place to varying degrees).

    If Intel wants advice on moving forward, to be honest, they need to address pricing. Ever since Ryzen came out, AMD is back in the game, even if not the top performer. Looking at DIY sales in Europe from Mindfactory, AMD is outpacing Intel in unit volume by a significant amount, even if revenue after the holidays has now leveled out to be more comparable between the two companies. In other words, price per performance is suffering on Intel's side. The primary factor is that they are stuck on 14nm for an extended period, which is not entirely their fault. There is some blame, though, for not listening to a key engineer who told them in 2016 to backport the Sunny Cove/Ice Lake architecture to 14nm; that would have been enough to release it on 14nm last year or this year with the improved IPC, justifying the higher pricing. Arguably, the stagnation of smaller frequency gains and Skylake-level IPC has really taken its toll on the market.

    This is then compounded by AMD starting the core wars, bringing massive core counts and multithreading to the masses at a very affordable price point. It's taken a while, but in the first months of this year we have seen AMD CPU optimization in software and games that gives a significant increase in performance compared to what we saw over the past two years. That optimization, including the general multithreading optimizations for the industry at large (which also benefit Intel), is closing the gap enough that people will consider the AMD platform. This is compounded by Intel trying to help motherboard manufacturers by limiting socket compatibility, while AMD allows an upgrade path on the same socket for a longer period of time.

    Back on pricing, Anandtech said the following:
    "We also have no indication of price, which if one recent European retailer is to be believed, could mean an increase on the top end processor by almost double over Skylake, with a listing showing a retail price for an 8280L (?) of £15025.88 pre-tax, which equates to $19613, almost double the $10033 for the 8180."
    https://www.anandtech.com/show/1399...cessor-specifications-exposed-in-si-documents

    If this near doubling of cost is true, even with rebates to corporations, etc., Intel is ever increasing prices to maintain margins while production issues and constraints limit supply, which makes the market ripe if AMD can deliver on the price and performance of Zen 2. And that is made worse by rumors on 10nm yields, even if Intel is finally achieving yields high enough to put out a marketable product.

    Intel's tech is good. I may not like their business practices or strategies, but the tech is good. I know where I'd like to see the tech head in the future, but they have amazingly talented engineers with a better grasp on the technical limitations than I have. That is why I'm better at analyzing what the leaked changes mean than telling them what they should be doing.

    There is a larger problem: competitive overclocking is slowing down. In part, this is due to Intel's market share erosion, and to overclockers not being able to submit Win 10 benches for Zen CPUs because of the RTC drift found on those chips. We even have people, like me, enthusiasts who went to AMD for more cores and PCIe lanes with Zen (my 1950X), primarily on a cost-basis analysis. I would have had a 7900X or 7920X otherwise, but I couldn't have gotten as high a binning on those chips with my budget during my build, and while their single-threaded performance would have been much higher, neither can really match my 16-core in multithreaded work (which admittedly winds up around a heavily overclocked 14-core Intel chip, excluding the binned 9990XE, and gets beat by Intel's 16-core).

    Also, Intel working behind the scenes to get some benchmarks (not all) to favor Intel, by loading them with tasks that run better on Intel than AMD, is problematic and has caused people to care less about some reviews, instead favoring real-world performance analyses. A recent example is Geekbench 3 versus Geekbench 4. The multi-threaded portion of Geekbench 4 was changed to allegedly better represent common user tasks; what it actually did was heavily favor tasks that run better on Intel than AMD. I believe this, along with a couple of other reasons, is why Geekbench 4 is primarily used for single-thread performance on HWBot while Geekbench 3 is primarily used for multicore performance. Intel really was much worse at this long ago, and after the incident last fall before the 9-series release, I don't think they will try it again in a very misleading way, but Intel does have a marred history as a company in this regard. I've already admitted that overall they make the faster and better CPU; this is a comment on their business practices, not the technology they put out.

    It's obvious why you would want to do it: to show your product in the best light. But there is a point where it skews perception and becomes unrealistic. Take, for example, when Zen launched. Intel immediately floated using 720p low settings to show a wider gap between Intel and AMD than any user would ever experience. Ryan Shrout of PCPer ran with it, but most tech journalists rejected it, as no one plays at 720p anymore. Instead, most tech journalists agreed that for showing a CPU bottleneck, 1080p medium settings is about right, considering the state of GPU performance. They then use a 2080 Ti, or whatever the highest performing GPU is at the time, to remove the GPU as a bottleneck and show how the CPU limits performance, and we have all seen that at 1440p and 4K the gap narrows as the bottleneck switches back to the GPU. But that is my point: they were trying to embellish the degree of the win when they already had the win. That is a problem.

    By toning down the aggressiveness of that competitive behavior in a very competitive field, and by highlighting where they excel, but also mentioning where there is less difference to a competitor, it can help to alleviate consumer confusion and buyer remorse. There is also the behavior of what they did with Adobe, but I don't want to get too far into their behavior relative to software vendors.

    And, yes, I have critiques of AMD as well. When I am not dealing with fanboys or trolls trying to slam AMD, where I have to defend what they do right, I have given some scathing critiques of AMD in the AMD thread (which is part of the reason I browbeat fanboys posting in there: many times they are playing "team mentality", which prevents discussing in more depth where AMD's ups and downs, specifically the downs, are). That is a problem of the enthusiast community separate from any company: the team mentality that comes with brand loyalty, and the needless arguments between camps instead of honest discussion and critique of each technology on its own merits.

    There is a deeper psychology to this in relation to purchases, as no one wants to feel they got ripped off or misjudged the value of their purchases and equipment. So people, regardless of the company from which they bought their hardware, will bend over backwards and stretch logic to justify their course of action, many times leaving logic as a casualty in the war to justify their actions. No one is above that, and everyone has been guilty of it at some point, to varying degrees (everyone's hands are dirty, in other words). That is why, as a tech community, we need to remind ourselves of this in our own interactions. (Also, many times I am an a-hole on different forums, but please distinguish between when I am doing it because of my personal character misgivings versus the idea I just presented. I think you may find I'm just gruff and uncouth more than anything.)
     
    jclausius likes this.
  7. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    jclausius and ajc9988 like this.
  8. Falkentyne

    Falkentyne Notebook Prophet

    Reputations:
    8,396
    Messages:
    5,992
    Likes Received:
    8,633
    Trophy Points:
    681
    CPU Package Power cannot be used accurately if you are not using auto voltages; it can only be used with auto (or normal) voltages WITHOUT an offset and with the DC loadline matching the VRM loadline (1.3 mOhms = LLC Low, 1.6 mOhms = LLC Normal/Standard/Auto). CPU Package Power is VID * Amps.

    Power (POUT) is much more accurate.
    Power (POUT) is VR VOUT * Amps.

    There are some cases where Power (POUT) doesn't register the right value (i saw it happen in Apex Legends).
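
    A minimal sketch of the two calculations above, with made-up numbers (illustrative only; the helper names are mine and there is no real sensor API involved):

        # Hedged sketch comparing the two package-power formulas from the post.
        def package_power_reported(vid_volts: float, amps: float) -> float:
            """'CPU Package Power' as reported: VID * Amps (skewed whenever
            VID no longer tracks the real die voltage)."""
            return vid_volts * amps

        def package_power_pout(vr_vout_volts: float, amps: float) -> float:
            """'Power (POUT)': VR VOUT * Amps, read back from the VRM."""
            return vr_vout_volts * amps

        # Hypothetical full-load sample: VID sits high while VR VOUT droops.
        amps = 100.0
        vid, vr_vout = 1.25, 1.10
        print(package_power_reported(vid, amps))   # 125.0 W, overestimate
        print(package_power_pout(vr_vout, amps))   # 110.0 W, closer to reality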

    The only issue I see is that you have a 9°C core temp difference with the Conductonaut between IHS and block, but a smaller difference with the Gelid. LM requires that both surfaces be literally perfectly flat.

    Also, cores 0/2/4 running hotter than the others (especially cores 2 and 4, labeling them cores 0-7) is because the CPU die is convex. A very small, careful sanding of the slug should help with that, without having to go as far as der8auer. But I would not suggest anyone sand anything unless they get "runaway core temps" weeks after application.
     
    lctalley0109 likes this.
  9. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    So, to check that I'm understanding: the greater the vdroop, the more the "package power consumption" reading will overestimate actual CPU power consumption?
     
  10. Falkentyne

    Falkentyne Notebook Prophet

    Reputations:
    8,396
    Messages:
    5,992
    Likes Received:
    8,633
    Trophy Points:
    681
    It depends on how much the AC loadline boosts the VID (each CPU has a 'preset' default VID based on core + cache ratio; AC loadline then boosts it up to keep the default 1.6 mOhms of VRM loadline vdroop from making overclocking unstable at auto voltages, though your voltages wind up quite high at idle), and on how much VRM loadline calibration you use. It's really sort of a mess.

    Here I used pure auto voltages (at 4.7 GHz core, 4.4 GHz cache) and made sure IA AC and IA DC loadline were both set to 1.6 mOhms. VR VOUT is the VCC_Sense on-die voltage.
    VRM loadline at standard/auto/normal is by default supposed to be 1.6 mOhms of vdroop, and DC loadline droops the VID (after AC loadline has boosted it) by the same 1.6 mOhms, except the VID droop is ignored by the VRM completely. If you are using auto voltages (no offsets) and LLC is set to standard, you can actually SEE the vcore signal that goes to the VRM by setting AC loadline to 1.6 mOhms and DC loadline to 0.01 mOhms.

    That will make the VID *sky high*; then you can easily calculate the droop based on amps (1.6 mOhms * current), subtract that from the VID, and it SHOULD match VR VOUT. "Should" (I didn't actually test this math, btw). This would only work with pure auto voltages and NO loadline calibration. (Don't try AC loadline = 0.01 with pure auto voltages; you'll instacrash from too low a voltage.)
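
    As a sanity check, here is that droop arithmetic as a small Python sketch (the numbers are hypothetical; the 1.6 mOhm figure is the standard/auto VRM loadline from above, and as the post says, the math itself is untested):

        # Droop math from the post: VR VOUT should be the boosted VID minus
        # the loadline droop R * I (only valid at pure auto voltages, no LLC).
        LOADLINE_MOHM = 1.6  # standard/auto/normal VRM loadline

        def expected_vr_vout(vid_volts: float, amps: float,
                             loadline_mohm: float = LOADLINE_MOHM) -> float:
            droop_volts = (loadline_mohm / 1000.0) * amps  # mOhm -> Ohm
            return vid_volts - droop_volts

        # Hypothetical sky-high VID of 1.45 V at 150 A of load current:
        print(expected_vr_vout(1.45, 150.0))  # 1.45 - 0.24 = 1.21 V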

    http://forum.notebookreview.com/thr...lounge-phoenix-5.826848/page-31#post-10876493
     
  11. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    I know you come from an MSI notebook background where VCore is not reported, only VID, yet on desktops VCore is reported. From my own testing on my desktop, which has both VID and VCore readings, CPU Package Power actually tracks VCore and not VID. I know this because VID doesn't change on my PC, yet VCore increases when I raise the voltage, and the CPU Package Power reported in HWiNFO also increases with that increased VCore, as expected. I think you're talking about notebooks that don't have the VCore variable; lctalley0109 is on a desktop.
     
  12. Falkentyne

    Falkentyne Notebook Prophet

    Reputations:
    8,396
    Messages:
    5,992
    Likes Received:
    8,633
    Trophy Points:
    681
    No I'm talking about my desktop with a Gigabyte Z390 Aorus Master.
    CPU Package Power is an MSR, I believe, which ThrottleStop can read, and it is influenced by the IMON slope/offset as well. It is a function of VID (Unclewebb even said this in the ThrottleStop section).
    Some motherboards however have a CPU package power that is reported directly by the VRM and labeled the same way. On my Gigabyte board, this is called Power (POUT).

    I can show you that "CPU Package Power" is VID * Amps on mine if you still want me to.
     
  13. Falkentyne

    Falkentyne Notebook Prophet

    Reputations:
    8,396
    Messages:
    5,992
    Likes Received:
    8,633
    Trophy Points:
    681
    See, here's your proof.
    In this example I used IA AC loadline = 1, DC loadline = 320, i.e. 0.01 mOhms / 3.2 mOhms.
    This causes the VID to drop DRASTICALLY at full load. The VID was 1.126v at full idle, but doing something as simple as resizing the HWiNFO64 window dropped it down to 1.055v (lol).

    Notice Current iOUT (Amps), multiply it by the VID, and you get CPU package power, see?

    But look at what the REAL package power is.
    Multiply VR VOUT * Amps and that's the real package power. Also, the MLCC caps vcore is a lot closer to VR VOUT than the VID is.

    (screenshot: ac1dc320.jpg)
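
    For readers without the screenshot, the same comparison with stand-in numbers (the actual VID, iOUT and VR VOUT readings are in the image above; these values are hypothetical):

        # With DC loadline at 3.2 mOhm the VID droops far below the real
        # voltage, so VID * Amps now *under*states the true package power.
        amps = 120.0      # Current iOUT (hypothetical)
        vid = 1.00        # heavily drooped VID (hypothetical)
        vr_vout = 1.15    # actual on-die voltage (hypothetical)

        print(vid * amps)      # 120.0 W -> reported "CPU Package Power"
        print(vr_vout * amps)  # 138.0 W -> real package power (POUT)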
     
  14. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    I just know that my MSI motherboard in my sig has "VCore" and "CPU Package Power" listed in HWiNFO, and CPU Package Power is influenced by VCore and not VID, from my own testing. Maybe different motherboards have different ways of reporting & calculating CPU Package Power, but it's off topic to the conversation so I won't pursue it any further.
     
    Falkentyne likes this.
  15. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,618
    Trophy Points:
    931
    Intel's Core i9-9900KF May Overclock Better Than 9900K Tomshardware.com | Mar 23, 2019

    Analyzing all of this info leads to a few theories. Could the 9900KF's have a refined, higher-quality silicon? Intel did similarly with its Engineering Sample 9900Ks versus retail chips. Retail 9900K CPUs are clocking much better on average than their ES counterparts. I have purchased three retail 9900K CPUs, and unless I'm the luckiest man in the world (I actually am but for other reasons), they were all 6.8 GHz+ chips on LN2. I've tried plenty of ES CPUs that maxed at 6.6 - 6.7 GHz. (see above)
    Theory two: The 9900KF iGPU has no power pins from socket, which might have some effect beyond the benefit of just disabling the iGPU on a 9900K. Crazier things have happened!

    Could we see a refresh of the 9900K series with an updated stepping, higher quality silicon, and better oc’ing like the 9900KF? Perhaps…...

    Folks, these are theories. I don’t work for Intel. I’m not a shareholder and I don’t have a dog in this fight beyond clawing for every ounce of performance and clocks I can get. If you already own a nice 9900K, should you go out and buy a 9900KF? Probably not unless you are into overclocking and are displeased with your K-model. If you are in the market to upgrade and are into overclocking, then I would definitely suggest trying the 9900KF if you can find one for sale.


     
    pressing, jclausius, Ashtrix and 4 others like this.
  16. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    nice article!

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
  17. t456

    t456 1977-09-05, 12:56:00 UTC

    Reputations:
    1,959
    Messages:
    2,588
    Likes Received:
    2,048
    Trophy Points:
    181
    Could use some single-threaded power for a project, and a 9900K seemed like a good step up from my 4930MX. It has to go into a laptop, so every little bit helps:


    Read somewhere that liquid metal could help remove the indium solder. That makes sense, considering galinstan is an indium alloy. Had quite a bit left, so I soaked the die for an hour or so, and it turns out that works very nicely indeed; it softens the solder up a bit and makes removal much easier.
     
    Falkentyne, Ashtrix, bennyg and 3 others like this.
  18. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,618
    Trophy Points:
    931
    Yeah, probably not a big difference vs. Rockit Quicksilver. But I expect your box of galinstan is a lot cheaper :D
     
    Falkentyne and hmscott like this.
  19. t456

    t456 1977-09-05, 12:56:00 UTC

    Reputations:
    1,959
    Messages:
    2,588
    Likes Received:
    2,048
    Trophy Points:
    181
    Ah? So there's 'official' stuff for that kind of thing ...

    And yes; looked it up, and that Quicksilver thingy is about 40x more expensive than my industry-marketed galinstan. There are plenty of suppliers on the market today, and buying in bulk really makes it an easy decision; I use it for anything from modern i7s to old Turions, Penryns and Atoms.
     
  20. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yeah, good article, I read it after I saw your link. One thing that adds weight to the validity of his testing is this, quoted from the article:
    "Surprisingly, according to the mere five samples I received, the -9900KF appears to overclock better with extreme cooling than the 200 Core i9-9900K’s I’ve binned."

    So he really did have quite a large sample size of regular 9900Ks from which he drew these conclusions; that's a pretty solid indicator that these KF processors overclock better than the regular 9900K processors.
     
    tilleroftheearth likes this.
  21. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,694
    Trophy Points:
    331
    I pointed out that retail 9900Ks overclock far better than the ES samples nearly ALL tech reviewers received around launch, and a couple of people on here claimed there was zero difference. Funny how this is coming up again from an independent tech reviewer. Retail 9900Ks in general seem to be better binned, more refined, and to use less voltage for any given clock. The silicon lottery is still very much the biggest determinant, but the ES samples most reviewers received seemed to be far worse than what most retail owners got.
     
    tilleroftheearth and Papusan like this.
  22. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Another possibility is that Intel have refined their solder process. Some 9900Ks seem to have pretty bad temps for their given maximum, and I'm wondering if it's not all down to the silicon lottery: a few of the post-delid pics show the remnants of a blob of solder stuck to the IHS that would have been overhanging the edge of the die.

     
    Raiderman, Ashtrix, Arrrrbol and 3 others like this.
  23. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yeah, kind of a silly supposition with a slim basis for validity.

    Binning 200 CPUs and then suggesting that a sample of 5 has a higher percentage of top OC results isn't a statistically valid comparison.
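
    To put a rough number on that (with invented bin counts, since the article gives no rates), a quick Fisher's exact test in Python shows how little 5 samples can establish:

        # Hypothetical bin counts: even if 3 of 5 KF samples look "top bin"
        # vs 60 of 200 9900Ks, the difference is not statistically significant.
        from scipy.stats import fisher_exact

        table = [[3, 2],      # KF: top-bin vs not, out of 5 samples
                 [60, 140]]   # K : top-bin vs not, out of 200 samples
        odds_ratio, p_value = fisher_exact(table, alternative="greater")
        print(f"p = {p_value:.2f}")  # roughly 0.17, far from significance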

    In fact, I've mostly been lucky when getting personal CPU's, they all clock high and undervolt well, except for 1.

    My sample size is small so I wouldn't draw any conclusions from it other than to share how I lived with that 1 off CPU and enjoyed OC'ing and undervolting the others.

    Then, postulating that this supposed advantage is due to something specific; that reasoning also goes nowhere.

    If the KF CPUs are indeed failed production dies - bad iGPUs - then Intel's yields are bad enough that recovering this small percentage of CPUs to sell is worth the trouble, and that is the significant point.

    The need to recover those failed-iGPU dies says more about Intel pushing the boundaries of huge monolithic dies at 14nm, and about 14nm running out of ways to give more, at least with the same architecture.

    I don't see Intel's incoming 14nm 10-core CPUs being any more likely to succeed; unless Intel drops something, the die area will increase again and yield will drop further.

    Intel could drop Hyper-Threading altogether in the upcoming 10-core generation. That's about the only space saving I can see freeing enough room for more cores without increasing die size.

    Leaving out the iGPU would help too, but then defects in that die area would ruin more dies outright; there would be no recoverable dies with a bad iGPU, just failures.

    Intel is going to have to find another way to entice buyers than increasing core count, power draw, and thermal problems if they are going to stay stuck on 14nm.
     
    Last edited: Mar 24, 2019
    Raiderman likes this.
  24. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    Valid point, except that solder quality doesn't really play much of a role at LN2 temp levels... there it's just important that you HAVE solder.
     
  25. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,618
    Trophy Points:
    931
    I think overclocker Splave is looking for higher overclock possibilities than the average Joe. But of course more samples would be better.
    I'm sure Intel knows AMD's comparative chips lack an iGPU. Why not push those chips out to the hungry high-clock-speed market? One more sale means one less for AMD :)
     
  26. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yeah, more samples would make the comparison meaningful to others, rather than a unique observation from his limited test pool.

    AMD has APUs - with on-board GPUs - but as CPU performance goes up, AMD assumes the builder is going to use a discrete GPU and leaves the onboard GPU out of the CPU.

    It's a thoughtful design decision that Intel would do well to imitate.
     
    Raiderman and jaybee83 like this.
  27. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Aha, mystery solved perhaps, new stepping

    https://www.anandtech.com/show/14128/intel-readies-new-stepping-of-9th-gen-core-processors



     
    jaybee83 likes this.
  28. scarletfever

    scarletfever Notebook Evangelist

    Reputations:
    52
    Messages:
    348
    Likes Received:
    43
    Trophy Points:
    41
    So I’ve read the last ~5 pages of this thread, but my question remains: if I have a laptop with an 8750 still in the return window, does it make sense to hold off for 9th gen at this point?
     
  29. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I assume you have another notebook/desktop that you can use while you wait...

    Now, you're betting that the current 8750 you have will have negligible usability/performance benefits or deficiencies vs. a 9th gen platform (when $$$$ are also considered). The additional benefit of keeping what you have is that you're actually using it from now until a suitable 9th gen platform becomes available.

    In your position as I've assumed above; I would wait for that 9th gen platform and decide then. Nothing will stop you from re-purchasing what you have now. It may even be cheaper then. But you'll 'know' what the additional benefits are for the new platform. Particularly if you put one to the test (within your return window, then).

    If this is your only system? Not much choice really. But I have to ask; do you feel lucky? :)

    I have the luxury of using multiple platforms/devices within a single day. To rush a big purchase just before the new gen/platforms land makes no sense to me.

    What is your specific situation? Can you also afford to wait, even if it means buying the same platform again in a few weeks/months?


     
    Robbo99999 likes this.
  30. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    You should have saved yourself the trouble and read the title first. You are in the wrong thread, and I don't know what the correct thread is for that question.

    This is a desktop CPU thread, you have a laptop CPU, go fish. :)
     
    Last edited: Mar 29, 2019
    lctalley0109 likes this.
  31. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    The $500 Memory Stick: ZADAK 32GB Double Capacity Overclocking
    Gamers Nexus
    Published on Apr 18, 2019
    These 2x 32GB 3200MHz RAM sticks cost about $1000 total, but they have very few use cases. Today, we're testing them to see if they're ever worth it. Article: https://www.gamersnexus.net/guides/34...
    ZADAK and G.Skill are the only two memory module manufacturers who presently make "double-capacity" DIMMs, following the ASUS DC DIMM standard designed last year. Samsung makes the actual memory, and overclocking support is overall reasonable. The challenge is that this double-capacity memory treats each stick as a set of two, limiting motherboard selection to only those that opt for 1DPC (one DIMM per channel) slot arrangements. Examples would be the ASUS Apex, ASUS Gene, and ASUS Z390-I Strix Gaming motherboards.

    G.Skill Unveils 32 GB Trident Z RGB DC DDR4: Double Height, Double Capacity Memory
    by Anton Shilov on October 11, 2018 11:00 AM EST
    https://www.anandtech.com/show/13458/gskill-unveils-32-gb-trident-z-double-size-ddr4-dimms
    Comments
     
    lctalley0109, Papusan and jclausius like this.
  32. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    i9 9900k UHD 630 vs Ryzen 3 2200G VEGA 8 Test in 7 Games
    Testing Games
    Published on Apr 23, 2019
    Intel Core i9 9900k vs AMD Ryzen 3 2200g in 7 Games
    Project Cars 2
    Metro Exodus - 01:12
    Assassin's Creed Odyssey - 02:31
    Battlefield 5 - 03:45
    Grand Theft Auto V - 05:55
    Shadow of the Tomb Raider - 07:42
    The Witcher 3 - 09:06

    System:
    Windows 10 Pro
    AMD Ryzen 3 2200G 3.5GHz
    Gigabyte GA-AB350N
    Intel i9 9900k 3.6GHz
    Asus ROG Strix Z390-F Gaming
    16GB RAM 3200MHz
     
    joluke likes this.
  33. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    The 16-core is not excessive if you can use the cores. Say games are optimized for 8 cores; then the other 8 can encode. NVENC still only reaches a certain level of quality and doesn't match CPU encoding at the medium preset. Intel's QuickSync is the same story.

    So, a couple points:

    1) Ryzen 3000/Zen 2 CPUs will have UMA, not NUMA, but will still need a scheduler aware of the core distribution across two dies for the 12- and 16-core chips. The memory architecture will be unified regardless of the chip in the lineup. What you are asking for is an Intel-style dual-ring-bus 16-core. AMD likely would not build a 16-core single-die chip, because current yields for manufacturing such a chip would raise cost. In fact, although a 64-core chiplet CPU built from 16-core chiplets on an active interposer did perform better than the 8-core-chiplet version, an active interposer produced on 32nm or 22nm costs around the same as a monolithic die, albeit with really good latency.

    2) The 12-core is coming. The 16-core is the one they fear will cannibalize Threadripper stock, at least until the new chips drop. That is likely why the 1950X is now on sale at Newegg for around $520, the same as the 9900K: they can clear inventory, which they will also need to do with the 2950X (which is only $850). Even though it doesn't have the quad-channel memory or the 64 PCIe lanes that TR has, it would perform just fine in things like Adobe Premiere, etc. The 1900X you can pick up for around $300, which makes it a great buy if building a firewall/NAS that can handle a huge amount of work (medium-sized-business type hardware).

    Now, that is a good point on the Intel socket, but unfortunately that is a matter of having a BIOS guru add support. Then again, below a certain level of board, evidently you will not be able to use the new AM4 CPUs. So....
     
    bennyg likes this.
  34. t456

    t456 1977-09-05, 12:56:00 UTC

    Reputations:
    1,959
    Messages:
    2,588
    Likes Received:
    2,048
    Trophy Points:
    181
    This topic is meant for the 2nd-generation Coffee Lake CPUs. Moved the 10nm and 7nm debate to a new thread:
    Intel's upcoming 10nm and beyond

    If anyone has a better suggestion for the title then please use the 'report' function. A few posts were lost in the process, unfortunately. My sincere apologies for that; I was dual-tasking, which is not something you want to be doing at the end of the day.
     
    Dannemand, ajc9988, hmscott and 4 others like this.
  35. Talon

    Talon Notebook Virtuoso

    Reputations:
    1,482
    Messages:
    3,519
    Likes Received:
    4,694
    Trophy Points:
    331
  36. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Time for an upgrade from the 2600K if:
    • you game above 60fps.
    • you do something computationally intensive with your CPU.
    • you need more storage performance from things like NVMe SSDs: mainly video & photo editing.
    Stay with the 2600K if:
    • you game at 60fps.
    • you just do normal consumer stuff like internet browsing and office applications.
    Sandy Bridge was truly iconic as a CPU though. My Dad's got a Sandy Bridge dual-core hyper-threaded Dell Inspiron 17 laptop (bought in 2011) which I modified with an SSD back in 2012 (it wasn't a factory option), and with 8GB RAM that's still perfectly fine for internet & office on the latest Windows 10.
     
    Papusan likes this.
  37. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    Indeed! My gf's laptop is also a 2011 model, a Dell Vostro 3350 to be exact. Upgraded it from a dual-core to a Sandy Bridge quad-core 2760QM, an 850 EVO SSD with 500 GB, 8GB 2133MHz RAM and an Intel 7260 WiFi card.

    Next upgrade cycle coming up soon: a new high-capacity battery, Intel AX200 WiFi (with an M.2-to-mPCIe adapter), a Samsung 850 Pro 1TB, 16GB 2133MHz RAM and maaaaaybe a 2960XM CPU, but not sure yet on the last item hahaha. Also plan to install Win10 Enterprise LTSC 2019 for her; that should be easier with the longer upgrade cycles.

    Should be running strong for quite a while longer :)

    Don't you guys just love upgradeable laptops? ;)

    PS: oh, almost forgot! When her mobo gave out I also upgraded her GPU, from the Intel iGPU to a dedicated AMD GPU on the replacement mobo :)

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    Last edited: May 11, 2019
    bennyg and Robbo99999 like this.
  38. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Cool initial upgrades for sure, but I'm not sure the latest planned upgrades for her laptop are going to be that meaningful. Is it worth doing for the extra cost (and little extra performance), seeing as it's already an old laptop?
     
  39. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    Well, she would like it to last another 2 years at least. The 850 Pro I already have, so it's just a swap for her current 850 EVO 500 GB (gonna use that as my secondary drive next to the 970 Pro). The WiFi adapter doesn't cost much, and RAM is at a long-time low currently, around 100€ for 16GB of 2133MHz Kingston HyperX sticks. She can offset that by selling off her current 2x4GB sticks. So that only leaves the new battery :)

    All in all, definitely cheaper and much more fun for me than just buying a new laptop haha ;)

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    Last edited: May 12, 2019
    Robbo99999 likes this.
  40. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Is her laptop heatsink rated for a quad core? If not, you could have to clock the quad core down considerably, which would make the 2960XM a waste in that machine, especially for the cost. I have a 2760QM lying around the house in a box somewhere from when I put a 2960XM in my old P170HM. The cooling in that machine barely allows the 2960XM to stretch its legs, so to speak (excluding use of AC cooling). So please keep that in mind before making the purchase.
     
    Robbo99999 likes this.
  41. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    Yep, exactly why I'm not completely convinced of the 2960XM upgrade yet. The 2760QM runs just fine in her machine, although temps reach the mid-to-high 80s under heavy workloads.

    And no, her machine never officially received any support for quad cores ;) That's the fun part about it haha

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    ajc9988 likes this.
  42. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Check out this video about disabling HT. Jay used the 8700K for testing but comes to the conclusion that HT isn't that important for gaming, and suggests saving $ by getting the 9xxxKF CPUs, i.e. those without HT, when building new PCs.

    Now it makes sense why Intel came out with a whole line of HT-less CPUs: they are more secure without HT on the current architecture. Might as well save the silicon real estate, power, thermals (and $?) and go HT-less from the start.

    Is Hyper-Threading Even Necessary? ZombieLoad Impact Testing (8700k)
    JayzTwoCents
    Published on May 20, 2019
    Well Intel has once again found itself at the center of another CPU Flaw/Exploit... this time the only way to completely mitigate the threat on a local level is to turn off Hyper-Threading on its CPUs... so what does that mean for performance loss?? Let's find out!
    http://forum.notebookreview.com/thr...ke-z370-and-z390.809268/page-42#post-10913286
     
    Last edited: May 24, 2019
    ajc9988 and joluke like this.
  43. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,618
    Trophy Points:
    931
    jaybee83, hmscott, Robbo99999 and 2 others like this.
  44. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Intel doesn't seem interested in shutting its doors just yet, as the common wisdom seems to indicate. :D :D :D

    That i9-9900KS is still on that 'ancient' 14nm++ node too. :D :D :D

    I thought they lost the race since everyone else is at 7nm now. :rolleyes:

     
  45. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Talk about short-sighted!

    Meanwhile, the 9900KS is like what the 8086K was: a binned variant that reaches a couple hundred MHz higher. They finally binned enough chips that they could release it and charge a premium. Pricing and availability (both volume and time frame for release) are unknown.

    Moreover, while you smugly make the comment, you miss that the competition is likely to release their product before this hits the market.

    So I wouldn't take the position you are taking yet. But it is a new product and something to look at. My question now is how this will fit in with the upcoming Comet Lake.
     
    hmscott likes this.
  46. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Nah, not short-sighted. See the quote I included in my post?

    Doesn't matter what the 'KS is. Seems like it will likely be the new hotness when it's released. And at 14nm++ too! ;)

     
  47. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,706
    Messages:
    29,840
    Likes Received:
    59,618
    Trophy Points:
    931
    The 8086K had the same 6-core boost as the 8700K. With the 9900KS you'll get an extra 300MHz across all cores. Or, better said, a real 5.0GHz for each and every one out there (the average Joe) :)
     
    Robbo99999 and tilleroftheearth like this.
  48. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Believe what you want. When a 6-core at 4GHz is beating the 2700X, it says the 8-core variants at way better pricing are going to take more of the market, especially since the KS will likely carry a premium over the K/KF chips.

    And tell me, what is the TDP? That's right, you can't (no one can yet). Almost all of these chips could already reach 5GHz all-core with enough voltage, so that argument does not fly. Intel likely took the best-binned chips, then used up more of the headroom with stock clocks, and you get this.

    That isn't to say that the binning and speed achieved is not impressive. It is to say there is more at play.
     
    jaybee83 likes this.
  49. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, there is more at play here. More than you care to admit to. ;)

    TDP, $$$, and everything else you're trying to throw at Intel here to diminish this announcement is not important when the goal is the performance, period.

    I usually translate 'performance' into an all-encompassing 'productivity' increase when all aspects of the platform as a whole are included.

    A slightly higher, one-time cost. A few $$$ more a year in power costs or other non-important aspects does not diminish the productivity gained.

    Especially for someone like me who won't overclock at all. I'll simply test and use the platform as-is, and actually buy it if/when it proves better than the Intel platforms I currently have, and better enough to justify a $$$$$ investment in it too. :)

     
  50. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631