I'd say that's actually quite a lot of difference. Remember: we are comparing the highest performing liquid metal paste to the second highest performing regular paste, and the latter even manages to beat the former, albeit by a small margin.
If you want a proper comparison (apples to apples), we should look at "GC Extreme between die/IHS and IHS/heatsink" compared to "GC Extreme between die/heatsink".
-
Robbo99999 Notebook Prophet
1) Silicon Die->CLU metal paste ->IHS->'regular non metal paste'->water block
2) Silicon Die->Conductonaut Metal Paste->water block.
The second configuration beats the first by a small margin, so you got it the wrong way round when you said the regular paste was beating the liquid metal paste. In addition, the second configuration had the combined advantage of direct water block contact and using only liquid metal paste (no regular paste involved), yet it still only won by a small margin. Given the small temperature margins we're talking about, the experiment has to be very tightly controlled to determine the real temperature performance difference between the two configurations.
You're right though that apples to apples should involve apples, and we've got a few oranges in this experiment! :-D
Aaaaah, got it! Here I thought the second config with direct die was with GC Extreme.
Robbo99999 Notebook Prophet
-
https://www.hardocp.com/article/2019/03/19/goodbye_hardocp_hello_intel/
HardOCP Editor-in-Chief Kyle Bennett heads to Intel as Director of Enthusiast Engagement.
https://twitter.com/KyleBennett/status/1108020954096193536
But I cannot speak ill of his work. Even when I disagreed with certain elements of it, his work was good, and he had a responsiveness to the community not seen at many publications these days, save a few. Considering the nature of his new job, enthusiast outreach, I really think this is a great opportunity for him to give straight feedback internally and act as a mediator between the community and Intel. It could also help crack the usual problem of lawyer-driven wariness about comments, where Intel employees always have to carefully craft their words and think twice about whether they are allowed to say something (although that will ALWAYS be in place to varying degrees).
If Intel wants advice on moving forward, to be honest, they need to address pricing. Ever since Ryzen came out, AMD is back in the game, even if not the top performer. Looking at DIY sales in Europe from Mindfactory, AMD is outpacing Intel in unit volume by a significant amount, even if revenue after the holidays has now leveled off to be more comparable between the two companies. In other words, price per performance is suffering on Intel's side. The primary factor is that they have been stuck on 14nm for an extended period, which is not entirely their fault. There is some blame, though, for not listening to a key engineer who told them in 2016 to backport the Sunny Cove/Ice Lake architecture to 14nm; that would have been enough to release it on 14nm last year or this year with the improved IPC, justifying the higher pricing. Arguably, the stagnation of shrinking frequency gains and Skylake-level IPC has taken its toll on the market.
This is then compounded by AMD starting the core wars, bringing massive core counts and multithreading to the masses at a very affordable price point. It's taken a while, but in the first months of this year we have seen AMD CPU optimization in software and games deliver a significant increase in performance compared to the past two years. That optimization, including the general multithreading optimizations for the industry at large (which also benefit Intel), is closing the gap enough that people will consider the AMD platform. This is further compounded by Intel helping motherboard manufacturers by limiting socket compatibility, while AMD allows an upgrade path on the same socket for a longer period of time.
Back on pricing, Anandtech said the following:
"We also have no indication of price, which if one recent European retailer is to be believed, could mean an increase on the top end processor by almost double over Skylake, with a listing showing a retail price for an 8280L (?) of £15025.88 pre-tax, which equates to $19613, almost double the $10033 for the 8180."
https://www.anandtech.com/show/1399...cessor-specifications-exposed-in-si-documents
If this near doubling of cost is true, even with rebates to corporations, etc., Intel is steadily raising prices to maintain margins as production issues and constraints limit supply, which makes the market ripe for AMD if it can deliver on the price and performance of Zen 2. And that is made worse by rumors about 10nm yields, even if Intel is finally achieving yields high enough to put out a marketable product.
Intel's tech is good. I may not like their business practices or strategies, but the tech is good. I know where I'd like to see the tech head in the future, but they have amazingly talented engineers with a better grasp on the technical limitations than I have. That is why I'm better at analyzing what the leaked changes mean than at telling them what they should be doing.
There is a larger problem: competitive overclocking is slowing down. In part this is due to Intel's market share erosion, and to overclockers being unable to submit Windows 10 benches for Zen CPUs because of the RTC drift found on those chips. We even have people like me, enthusiasts who went to AMD for more cores and PCIe lanes with Zen (my 1950X), primarily on a cost-basis analysis. I would have had a 7900X or 7920X otherwise, but I couldn't have gotten as high a binning on those chips within my build budget. Their single threaded performance would have been much higher, but neither can really match my 16-core in multithreaded work, which admittedly winds up around the level of a heavily overclocked 14-core Intel chip (excluding the binned 9990XE) and gets beaten by Intel's 16-core. Also, Intel working behind the scenes to get some benchmarks (not all) to favor Intel, by loading tasks that run better on Intel than AMD, is problematic and has caused people to care less about some reviews, instead favoring real-world performance analyses. A recent example is Geekbench 3 versus Geekbench 4. The multithreaded portion of Geekbench 4 was changed to allegedly better represent common user tasks; what it actually did was heavily favor tasks that run better on Intel than AMD. I believe this, along with a couple of other reasons, is why Geekbench 4 is primarily used for single thread performance on HWBot while Geekbench 3 is primarily used for multicore performance. Intel was much worse at this long ago, and after the incident last fall before the 9 series release, I don't think they will try this again in a very misleading way, but Intel does have a marred history as a company in this regard. I've already admitted that overall they make the faster and better CPU. This is a comment on their business practices, not the technology they put out.
It's obvious why you would want to do it: to show your product in the best light. But there is a point where it skews perception and becomes unrealistic. Take, for example, when Zen launched. Intel immediately floated the idea of using 720p low settings to show a wider gap between Intel and AMD than any user would ever experience. Ryan Shrout of PCPer ran with it, but most tech journalists rejected it because no one plays at 720p anymore. Instead, most tech journalists agreed that for showing a CPU bottleneck, 1080p medium settings is about right, considering the state of GPU performance. They then use a 2080 Ti, or whatever the highest performing GPU is at the time, to remove the GPU as a bottleneck and show how the CPU limits performance, and we have all seen that at 1440p and 4K the gap narrows as the bottleneck switches back to the GPU. That is my point: they were trying to embellish the degree of the win when they already had the win. That is a problem.
By toning down the aggressiveness of that competitive behavior in a very competitive field, and by highlighting where they excel while also mentioning where there is less difference from a competitor, they could help alleviate consumer confusion and buyer's remorse. There is also what they did with Adobe, but I don't want to get too far into their behavior toward software vendors.
And yes, I have critiques of AMD as well. When I am not dealing with fanboys or trolls trying to slam AMD, where I have to defend what they do right, I have given some scathing critiques of AMD in the AMD thread. That is part of the reason I browbeat fanboys posting there: many times they are playing "team mentality," which can prevent me from discussing AMD's ups and downs, specifically the downs, in more depth. That is a problem of the enthusiast community separate from any company: the team mentality that comes with brand loyalty, and the needless arguments between camps instead of honest discussion and critique of each technology on its own merits.
There is a deeper psychological side to this in relation to purchases: no one wants to feel they got ripped off or misjudged the value of their purchases and equipment. So people, regardless of the company they bought their hardware from, will bend over backwards and stretch logic to justify their course of action, many times leaving logic as a casualty in the war to justify their actions. No one is above that, and everyone has been guilty of it at some point to varying degrees (everyone's hands are dirty, in other words). That is why, as a tech community, we need to remind ourselves of this in our own interactions. (Also, many times I am an a-hole on different forums, but please distinguish between when I am doing it because of my personal character misgivings versus the idea I just presented. I think you may find I'm just gruff and uncouth more than anything.)
Robbo99999 Notebook Prophet
-
Falkentyne Notebook Prophet
Power (POUT) is much more accurate.
Power (POUT) is VR VOUT * Amps.
There are some cases where Power (POUT) doesn't register the right value (I saw it happen in Apex Legends).
Also, cores 0/2/4 running hotter than the others (especially cores 2 and 4, labeling them cores 0-7) is because the CPU die is convex. A very small, careful sanding of the slug should help with that, without having to go as far as der8auer does. But I would not suggest anyone sand anything unless they get an issue with "runaway core temps" weeks after application.
-
Falkentyne Notebook Prophet
Here I used pure auto voltages (at 4.7 GHz core, 4.4 GHz cache) and made sure IA AC and IA DC loadline were both set to 1.6 mOhms. VR VOUT is the VCC_Sense on-die voltage.
VRM loadline at standard/auto/normal is by default supposed to be 1.6 mOhms of vdroop, and DC loadline droops the VID (after AC loadline has boosted it) by the same 1.6 mOhms, except the VID droop is ignored by the VRM completely. If you are using auto voltages (no offsets) and LLC is set to standard, you can actually SEE the vcore signal that goes to the VRM by setting AC loadline to 1.6 mOhms and DC loadline to 0.01 mOhms.
That will make the VID *sky high*; then you can easily calculate the droop from the current (1.6 mOhms * amps), subtract that from the VID, and it SHOULD match VR VOUT. "Should" (I didn't actually test this math, btw). This only works with pure auto voltages and NO loadline calibration. (Don't try AC loadline=0.01 with pure auto voltages; you'll instacrash from too low a voltage.)
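The (self-admittedly untested) droop math above can be sketched as a quick calculation. This is only an illustration of the relationship being described; the voltage and current numbers are made-up examples, not measurements:

```python
# Sketch of the AC/DC loadline relationship described above.
# All numbers are illustrative examples, not measured values.

VRM_LOADLINE_MOHM = 1.6  # VRM loadline (LLC=standard) droops vcore by R * I

def vrm_output(vid_boosted_v: float, current_a: float,
               vrm_loadline_mohm: float = VRM_LOADLINE_MOHM) -> float:
    """Predict VR VOUT from the (AC-boosted) VID and the load current.

    With DC loadline set near zero, the reported VID keeps the full AC
    boost, so subtracting the VRM's own droop (R * I, with R given in
    milliohms) should land close to the measured VR VOUT.
    """
    droop_v = (vrm_loadline_mohm / 1000.0) * current_a
    return vid_boosted_v - droop_v

# Example: a boosted VID of 1.30 V at 100 A with 1.6 mOhm of droop:
# droop = 0.0016 V/A * 100 A = 0.16 V, predicted VR VOUT = 1.14 V.
print(vrm_output(1.30, 100.0))
```

As the post says, this only holds with pure auto voltages and no loadline calibration, since LLC changes the effective droop resistance.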
http://forum.notebookreview.com/thr...lounge-phoenix-5.826848/page-31#post-10876493 -
Robbo99999 Notebook Prophet
-
Falkentyne Notebook Prophet
CPU Package Power is an MSR reading, I believe, which ThrottleStop can read, and it is influenced by IMON slope/offset as well. It is a function of VID (Unclewebb even said this in the ThrottleStop section).
Some motherboards however have a CPU package power that is reported directly by the VRM and labeled the same way. On my Gigabyte board, this is called Power (POUT).
I can show you that "CPU Package Power" is VID * Amps on mine if you still want me to. -
Falkentyne Notebook Prophet
In this example, I used IA AC loadline=1, DC loadline=320. 0.01 mOhms / 3.2 mOhms.
This causes the VID to drop drastically at full load. The VID was 1.126v at full idle, but doing something as simple as resizing the HWiNFO64 window dropped it down to 1.055v (lol).
Notice Current IOUT (Amps): multiply it by the VID and you get CPU Package Power, see?
But look at what the REAL package power is.
Multiply VR VOUT * Amps and that's the real package power. Also, the MLCC caps' vcore is a lot closer to VR VOUT than the VID is.
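The difference between the two readings can be shown in a few lines. Only the 1.055v loaded VID echoes the post; the VR VOUT and current figures are hypothetical stand-ins, since the post doesn't quote them:

```python
# The MSR-based "CPU Package Power" multiplies VID by current, while the
# VRM-reported Power (POUT) multiplies the actual on-die voltage
# (VR VOUT) by current. Under load the VID sits above the real,
# droop-affected voltage, so the MSR number overstates package power.

def package_power_w(voltage_v: float, current_a: float) -> float:
    """P = V * I, in watts."""
    return voltage_v * current_a

vid_v = 1.055      # loaded VID from the post
vr_vout_v = 0.980  # hypothetical VR VOUT under the same load
iout_a = 120.0     # hypothetical Current IOUT

reported = package_power_w(vid_v, iout_a)     # "CPU Package Power"
real = package_power_w(vr_vout_v, iout_a)     # Power (POUT)
print(f"reported: {reported:.1f} W, real: {real:.1f} W")
```

With these example numbers the VID-based figure comes out several watts higher than the VOUT-based one, which is the gap being pointed out.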
-
Robbo99999 Notebook Prophet
Intel's Core i9-9900KF May Overclock Better Than 9900K Tomshardware.com | 23, 2019
Analyzing all of this info leads to a few theories. Could the 9900KF's have a refined, higher-quality silicon? Intel did similarly with its Engineering Sample 9900Ks versus retail chips. Retail 9900K CPUs are clocking much better on average than their ES counterparts. I have purchased three retail 9900K CPUs, and unless I'm the luckiest man in the world (I actually am but for other reasons), they were all 6.8 GHz+ chips on LN2. I've tried plenty of ES CPUs that maxed at 6.6 - 6.7 GHz. (see above)
Theory two: The 9900KF iGPU has no power pins from socket, which might have some effect beyond the benefit of just disabling the iGPU on a 9900K. Crazier things have happened!
Could we see a refresh of the 9900K series with an updated stepping, higher quality silicon, and better oc’ing like the 9900KF? Perhaps…...
Folks, these are theories. I don’t work for Intel. I’m not a shareholder and I don’t have a dog in this fight beyond clawing for every ounce of performance and clocks I can get. If you already own a nice 9900K, should you go out and buy a 9900KF? Probably not unless you are into overclocking and are displeased with your K-model. If you are in the market to upgrade and are into overclocking, then I would definitely suggest trying the 9900KF if you can find one for sale.
-
Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk -
Could use some single-threaded power for some project and a 9900k seemed like a good step up from my 4930mx. Has to go into a laptop, so every little bit helps:
Read somewhere that liquid metal could help remove the indium. That makes sense, considering Galinstan is an indium alloy. Had quite a bit left, so I soaked the die for an hour or so, and it turns out that works very nicely indeed; it softens the solder up a bit and makes removal much easier.
-
Ah? So there's 'official' stuff for that kind of thing ...
And yes; I looked it up, and that quicksilver thingy is about 40x more expensive than my industry-marketed Galinstan. There are plenty of suppliers on the market today, and bulk purchase makes it an easy consideration; I use it for anything from modern i7s to old Turions, Penryns and Atoms.
Robbo99999 Notebook Prophet
"Surprisingly, according to the mere five samples I received, the -9900KF appears to overclock better with extreme cooling than the 200 Core i9-9900K’s I’ve binned."
So he really did have quite a large sample size from which he drew these conclusions; that's a pretty solid indicator that these KF processors overclock better than the regular 9900K processors.
-
Another possibility is that Intel has refined its solder process. Some 9900Ks seem to have pretty bad temps for their given maximum, and I'm wondering if it's not all down to the silicon lottery: a few of the post-delid pics show the remnants of a blob of solder stuck to the IHS that would have been overhanging the edge of the die.
e.g. -
Sampling 200 CPUs and then trying to suggest that a sample of 5 has a higher percentage of top OC results isn't a statistically valid comparison.
In fact, I've mostly been lucky when buying personal CPUs; they all clock high and undervolt well, except for one.
My sample size is small, so I wouldn't draw any conclusions from it other than to share how I lived with that one-off CPU and enjoyed OC'ing and undervolting the others.
Then there's postulating that this supposed advantage is due to something specific; that reasoning also goes nowhere.
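The statistical point can be illustrated with a quick simulation: if the KF and K chips actually came from the same bin distribution, the best of 5 samples would beat the best of 200 only rarely, so a 5-sample "win" is weak evidence either way. This is a hypothetical sketch; the normal distribution and its parameters are arbitrary stand-ins for the real binning spread:

```python
import random

def p_small_sample_wins(m: int = 5, n: int = 200,
                        trials: int = 20_000, seed: int = 1) -> float:
    """Estimate P(max of m draws > max of n draws) when both sets come
    from the same distribution. Analytically this is m / (m + n), since
    the overall maximum is equally likely to be any single draw."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        best_kf = max(rng.gauss(0.0, 1.0) for _ in range(m))  # 5 "KF" chips
        best_k = max(rng.gauss(0.0, 1.0) for _ in range(n))   # 200 "K" chips
        if best_kf > best_k:
            wins += 1
    return wins / trials

# Should land near 5 / 205, i.e. roughly 2.4%
print(p_small_sample_wins())
```

The distribution chosen doesn't matter for the headline number: for any continuous distribution, the small sample holds the overall maximum with probability m / (m + n).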
If the KF CPUs are indeed failed production dies (bad iGPUs), and Intel is having such bad yields, then recovering this small percentage of CPUs to sell is itself the point of significance.
The need to recover those failed-iGPU dies says more about Intel pushing the boundaries of huge monolithic dies at 14nm, and about 14nm running out of ways to deliver more, at least on the same architecture.
I don't see Intel's incoming 14nm 10-core CPUs being any more likely to succeed; unless Intel drops something, the die area will increase again and yield will drop further.
Intel could drop Hyper-Threading altogether in the upcoming 10-core generation. That's about the only space saving I could see freeing enough room for more cores without increasing die size.
Leaving out the iGPU would help too, but that would mean failed die sections would, as a percentage, ruin more dies: no recoverable dies with a bad iGPU, just all failures.
Intel is going to have to find another way to entice buyers than increasing core count, power draw, and thermal problems if they are going to stay stuck on 14nm.
Last edited: Mar 24, 2019
-
-
AMD has APUs (on-board GPUs), but as CPU performance goes up, AMD assumes the builder will use a discrete GPU and leaves the onboard GPU out of the CPU.
It's a thoughtful design decision that Intel would do well to imitate.
Aha, mystery solved perhaps: a new stepping.
So I've read the last ~5 pages of this thread, but my question remains: if I have a laptop with an 8750 still in the return window, does it make sense to hold off for 9th gen at this point?
-
tilleroftheearth Wisdom listens quietly...
I assume you have another notebook/desktop that you can use while you wait...
Now, you're betting that the current 8750 you have will have negligible usability/performance differences vs. a 9th gen platform (when $$$$ are also considered). The additional benefit of keeping what you have is that you're actually using it from now until a suitable 9th gen platform becomes available.
In your position, as I've assumed above, I would wait for that 9th gen platform and decide then. Nothing stops you from re-purchasing what you have now. It may even be cheaper then. But you'll 'know' what the additional benefits of the new platform are, particularly if you put one to the test (within your return window, then).
If this is your only system? Not much choice really. But I have to ask; do you feel lucky?
I have the luxury of using multiple platforms/devices within a single day. To rush a big purchase just before the new gen/platforms land makes no sense to me.
What is your specific situation? Can you also afford to wait, even if it means buying the same platform again in a few weeks/months?
Robbo99999 likes this. -
This is a desktop CPU thread, you have a laptop CPU, go fish.
Last edited: Mar 29, 2019
The $500 Memory Stick: ZADAK 32GB Double Capacity Overclocking
Gamers Nexus
Published on Apr 18, 2019
These 2x 32GB 3200MHz RAM sticks cost about $1000 total, but they have very few use cases. Today, we're testing them to see if they're ever worth it. Article: https://www.gamersnexus.net/guides/34...
ZADAK and G.Skill are the only two memory module manufacturers who presently make "double-capacity" DIMMs, following the ASUS DC DIMM standard designed last year. Samsung makes the actual memory, and overclocking support is overall reasonable. The challenge is that this double-capacity memory treats each stick as a set of two, limiting motherboard selection to only those with 1DPC (one DIMM per channel) slot arrangements. Examples would be the ASUS Apex, ASUS Gene, and ASUS Z390-I Strix Gaming motherboards.
G.Skill Unveils 32 GB Trident Z RGB DC DDR4: Double Height, Double Capacity Memory
by Anton Shilov on October 11, 2018 11:00 AM EST
https://www.anandtech.com/show/13458/gskill-unveils-32-gb-trident-z-double-size-ddr4-dimms
i9 9900k UHD 630 vs Ryzen 3 2200G VEGA 8 Test in 7 Games
Testing Games
Published on Apr 23, 2019
Intel Core i9 9900k vs AMD Ryzen 3 2200g in 7 Games
Project Cars 2
Metro Exodus - 01:12
Assassin's Creed Odyssey - 02:31
Battlefield 5 - 03:45
Grand Theft Auto V - 05:55
Shadow of the Tomb Raider - 07:42
The Witcher 3 - 09:06
System:
Windows 10 Pro
AMD Ryzen 3 2200G 3.5 GHz
Gigabyte GA-AB350N
Intel i9-9900K 3.6 GHz
Asus ROG Strix Z390-F Gaming
16 GB RAM 3200 MHz
1) Ryzen 3000/Zen 2 CPUs will have UMA, not NUMA, but the 12 and 16 core chips will still need a scheduler aware of core distribution across two dies. The memory architecture will be unified regardless of where the chip sits in the lineup. What you are asking for is the Intel-style dual-ring bus 16-core. AMD likely would not build a 16-core single-die chip, because current manufacturing yields would raise the cost of such a chip. In fact, although the variant using an active interposer to get a 64-core chiplet CPU did perform better than the 8-core chiplet approach, an active interposer produced on 32nm or 22nm is around the same cost as a monolithic die, but with really good latency.
2) The 12 core is coming. The 16 core is the one they fear cannibalizing Threadripper stock, at least until the new chips drop. That is likely why the 1950X is now on sale at Newegg for around $520, the same as the 9900K; that way they can clear inventory, which they will also need to do with the 2950X (now only $850). Even though a 16-core AM4 chip wouldn't have the quad-channel memory or the 64 PCIe lanes that TR has, it would perform just fine at things like Adobe Premiere, etc. The 1900X you can pick up for around $300, which makes it a great buy for building a firewall/NAS that can handle a huge amount of work (medium-sized business type hardware).
Now, that is a good point on the Intel socket, but unfortunately that is a matter of having a BIOS guru add support. Then again, below a certain level of board, evidently you will not be able to use the new AM4 CPUs. So....
This topic is meant for the 2nd generation Coffee Lake CPUs. I moved the 10nm and 7nm debate to a new thread:
Intel's upcoming 10nm and beyond
If anyone has a better suggestion for the title, then please use the 'report' function. A few posts were lost in the process, unfortunately. My sincere apologies; I was dual-tasking, which is not something you want to be doing at the end of the day.
Robbo99999 Notebook Prophet
Worth it if:
- you game above 60fps,
- you do something computationally intensive with your CPU, or
- you need more storage performance from things like NVMe SSDs (video & photo editing mainly).
Not worth it if:
- you game at 60fps, or
- you just do normal consumer stuff like internet browsing and office applications.
next upgrade cycle coming up soon: new high-capacity battery, Intel AX200 wifi (with an M.2 to mPCIe adapter), Samsung 850 Pro 1TB, 16 GB 2133 MHz RAM, and maaaaaybe a 2960XM CPU, but not sure yet on the last item hahaha. also plan to install Win10 Enterprise LTSC 2019 for her, should be easier with the longer upgrade cycles.
should be running strong for quite a while longer
don't u guys just love upgradeable laptops?
PS: oh almost forgot! when her mobo gave out i also upgraded her gpu from the intel igpu to a dedicated amd gpu on the mobo
Last edited: May 11, 2019
Robbo99999 Notebook Prophet
-
all in all, definitely cheaper and much more fun for me than just buying a new laptop haha
Last edited: May 12, 2019
-
and no, her machine never officially received any support for quad cores; that's the fun part about it haha
Check out this video about disabling HT. Jay used the 8700K for testing but comes to the conclusion that HT isn't that important for gaming, and suggests saving money by getting the 9xxxKF CPUs (without HT) when building new PCs.
Now it makes sense why Intel came out with a whole line of HT-less CPUs; they are more secure without HT on the current architecture. Might as well save the silicon real estate, power, thermals (and $?) and go HT-less from the start.
Is Hyper-Threading Even Necessary? ZombieLoad Impact Testing (8700k)
JayzTwoCents
Published on May 20, 2019
Well Intel has once again found itself at the center of another CPU Flaw/Exploit... this time the only way to completely mitigate the threat on a local level is to turn off Hyper-Threading on its CPUs... so what does that mean for performance loss?? Let's find out!
http://forum.notebookreview.com/thr...ke-z370-and-z390.809268/page-42#post-10913286Last edited: May 24, 2019 -
Say welcome to i9-9900KS (higher 4.0 GHz base frequency and an all-core 5.0 GHz boost frequency).
Intel Announces 5.0 GHz Core i9-9900KS, Unveils 10nm Ice Lake
tilleroftheearth Wisdom listens quietly...
That i9-9900KS is still on that 'ancient' 14nm++ node too.
I thought they lost the race since everyone else is at 7nm now.
-
Meanwhile, the 9900KS is like what the 8086K was: a binned variant that reaches a couple hundred MHz higher. They finally binned enough chips that they could release it and charge a premium. Pricing and availability (both volume and time frame for release) are unknown.
Moreover, while you smugly make the comment, you miss that the competition is likely to release their product before this hits the market.
So I wouldn't take the position you are taking just yet. But it is a new product and something to look at. Now, how this will fit in with the upcoming Comet Lake is my question.
tilleroftheearth Wisdom listens quietly...
Nah, not short-sighted. See the quote I included in my post?
-
-
That isn't to say that the binning and the speed achieved are not impressive. It is to say there is more at play.
tilleroftheearth Wisdom listens quietly...
Yeah, there is more at play here. More than you care to admit to.
TDP, $$$, and everything else you're trying to throw at Intel here to diminish this announcement is not important when the goal is performance, period.
I usually translate 'performance' into an all-encompassing 'productivity' increase when all aspects of the platform as a whole are included.
A slightly higher, one-time cost. A few $$$ more a year in power costs or other non-important aspects does not diminish the productivity gained.
Especially for someone like me who won't overclock at all. I'll simply test and use the platform as-is, and actually buy it if/when it proves better than the Intel platforms I currently have. And if it's better enough, I'll make a $$$$$ investment in it too.
-
tilleroftheearth Wisdom listens quietly...
See:
https://www.tomshardware.com/news/msi-gt76-titan-features-price,39439.html
Intel Core i9-9900k 8c/16t, i7-9700K 8c/8t, i7-9600k 6c/6t 2nd Gen Coffee Lake CPU's + Z390
Discussion in 'Hardware Components and Aftermarket Upgrades' started by hmscott, Nov 27, 2017.