The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    *Official* NBR Desktop Overclocker's Lounge [laptop owners welcome, too]

    Discussion in 'Desktop Hardware' started by Mr. Fox, Nov 5, 2017.

  1. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,081
    Likes Received:
    3,281
    Trophy Points:
    281

    Add more here Papu.. Intel's new "Dynamic Memory Clock"

    Intel's "Alder Lake" Desktop Processor supports DDR4+DDR5, (only few) PCIe Gen 5 and Dynamic Memory Clock
    That screams to me more than Gearing system, esp that Gear 2 drama and new Command Rate for G1, G2 on RKL vs the older CML high performance IMC. The new inferior Gear system was supposed to exist in all Intel future lineup, now they are having new memory turbo system.

    I bet they had to compromise a lot to accommodate the small, phone-efficient cores. Take a hypothetical DDR4 case: how are those SKL-class cores able to run DDR4 3733MHz+ in Gear 1 (an IMC also shared by the Golden Cove P cores, which should probably have a better IMC?) at such low core frequency and voltage? These small cores have a different uarch than basic SKL itself, and the best IMC so far was CML's 10900K. On top of that, they share the ring bus with the Golden Cove cores, so Intel will have a new memory clocking turbo that will probably downclock :oops: when the E cores are enabled in the CPU? DDR5 is an entirely new aspect; hard to even speculate anything.

    Who is going to enjoy this guinea pig platform?
     
    Last edited: Aug 20, 2021
    Clamibot and Papusan like this.
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,755
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    That is not exactly true. If you look at their original two 10nm designs, meaning Cannon Lake and Ice Lake, there is a frequency regression. So there are many generations before Alder Lake which had a frequency regression without IPC gains or with minimal IPC progress. Do not cherry-pick data from a single release, as you will get the wrong picture!

    Further, AMD has increased frequency drastically going from Zen to Zen 3. Where did AMD sacrifice frequency for IPC? AMD has a completely different architecture on a completely different process. When they went from the weak days of Bulldozer with high frequency and low IPC to Zen, they finally became a competitor with Intel, yet you say that is wrong. Let's examine Intel's NetBurst then and them going for top frequency right as AMD releases the Athlon 64. You want to discuss that?

    The point is that frequency is no longer king and arguably never was. IPS is king, which is IPC*frequency. So, if you have enough IPC, it makes up for less frequency. If you have enough frequency, it makes up for lacking IPC. Let's take an example:

    Intel: 5 GHz * 1.00 IPC = 5 IPS

    AMD: 4 GHz * 1.25 IPC = 5 IPS

    My point is, if you are going to try to do this crap of frequency is king and ignore everything else, you are going to miss the point. ALWAYS go by benchmarks (which Intel was saying ignore benchmarks, which also makes no sense) for your specific workload.
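
    A minimal sketch of that arithmetic (the IPC and frequency numbers are made up for illustration, not measurements):

    ```python
    # Toy model: instructions per second = IPC * frequency.
    def ips(freq_ghz: float, ipc: float) -> float:
        """Effective throughput, in billions of instructions per second."""
        return freq_ghz * ipc

    chip_a = ips(freq_ghz=5.0, ipc=1.00)   # higher clock, lower IPC
    chip_b = ips(freq_ghz=4.0, ipc=1.25)   # lower clock, higher IPC

    print(chip_a, chip_b)  # 5.0 5.0 -- identical throughput despite the clock gap
    ```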

    But, you pointing to FX misses the point. AMD demolished those chips moving to Zen with lower frequency and arguably made their way to king of the hill at the moment from doing the exact opposite of what you said. So you ragging on AMD is giving them bad advice. It is a bit absurd.

    I came here to praise Intel's going wider on these new designs and talk about the intricacies of their new architecture in a positive way, but get dragged into ragging on them to help show a point on why frequency is not the be all and end all. I wanted to talk about where their innovation is actually occurring, but then responding to things like this sucks that right out of me.

    It is really the fear from Apple that drove them, with fear of Nvidia in the backseat.

    Also, 10nm still isn't that efficient. Even with SuperFin finally in place and enough kinks worked out that frequency is now roughly what was seen on 14nm, this is at least the third iteration of 10nm if you count Cannon Lake and Ice Lake, and more if you count the multiple delays, the brokenness, and the redesigning that went into the process. They were headed for a cliff.

    The heat point is likely also why they dumped AVX-512 from the consumer chips. And they haven't found an adequate way to deal with the thermal density of the smaller node.

    Ian or Andrei made a great point at Anandtech. If the small cores are 8% faster than Skylake (which really is a testament to how far ahead Intel was, for how many iterations they did and how long it stayed relevant), then why not make a server chip with 64 small cores on it? AMD doesn't have AVX-512 right now, so why not basically do the equivalent of a Skylake+++ server processor with all those little cores? With all the cores being the same type, you most likely do not need the advanced hardware scheduler (part of what I wanted to talk about, because it was cooler than I thought; it reminded me I had heard about work on a hardware scheduler a couple of years back, not just the I/O die but an actual dedicated hardware scheduler, to bring scheduling closer to the metal and rely less on M$ and their non-innovating ways). It is an interesting thought, considering Intel is still doing two monolithic dies on the same chip in order to reach into the 50-some-core area. Those small yet efficient cores doing Skylake +8% performance, per Anandtech's estimate, could help cram more cores into the same space and run cooler than the 14nm Cooper Lake, etc. It may even give Ice Lake servers a run for their money.
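
    Back-of-envelope on that idea (a rough sketch; the +8% per-core figure is Anandtech's estimate, and the 64-core E-core part is just the thought experiment above, not an announced product):

    ```python
    # Hypothetical all-E-core server part vs a 28-core 14nm Cooper Lake,
    # measured in "Skylake-core equivalents". Numbers are illustrative only.
    SKYLAKE_EQUIV = 1.08               # E core ~= Skylake +8% (Anandtech estimate)

    e_core_part = 64 * SKYLAKE_EQUIV   # imagined 64 small-core chip
    cooper_lake = 28 * 1.00            # 28 Skylake-class cores

    print(f"{e_core_part:.1f} vs {cooper_lake:.1f} Skylake-core equivalents")
    # ~69.1 vs 28.0 -- why an all-small-core server part is a tempting thought
    ```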

    Also, check out below on how the scheduler works! lol

    OK, so a quick note on how the cores are filled:

    1) threads schedule onto the P cores (big) first
    2) then spill onto the small E cores
    3) then onto the SMT/HT logical cores

    So, in fact, there isn't any more advanced use of the efficiency cores than that, if this reported order from Anandtech is correct.
    upload_2021-8-20_8-39-58.png
    https://www.anandtech.com/show/16881/a-deep-dive-into-intels-alder-lake-microarchitectures/2
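
    A toy sketch of that fill order (my paraphrase of the reported behavior, not Intel's actual Thread Director logic; the 8P+8E core counts match the rumored desktop configuration):

    ```python
    # Reported priority: physical P cores, then E cores, then SMT siblings.
    P_CORES = [f"P{i}" for i in range(8)]        # 8 big cores
    E_CORES = [f"E{i}" for i in range(8)]        # 8 small cores
    P_SMT   = [f"P{i}-ht" for i in range(8)]     # hyperthread siblings

    def schedule(n_threads: int) -> list[str]:
        """Assign n runnable threads following the reported fill order."""
        slots = P_CORES + E_CORES + P_SMT
        return slots[:n_threads]

    print(schedule(10))  # first 8 land on P cores, the next 2 spill to E cores
    ```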

    See, with first-gen Zen Threadripper (a great example of why Intel had to create the Thread Director, because of Microsoft), there was multi-step latency, especially on memory. You had two dies with direct access to the memory controller on the die, plus their partner dies with the memory controller turned off. You also had matched pairs. So you had different layers of latency depending on where the data sat on a memory call: if it was not on a paired die, you had to jump to the mirrored die of the other set and then out to memory, or to the other paired die and then out to memory:

    1) direct to memory
    2) jumps to paired die, then out to memory
    3) jumps to mirrored set, then out to memory
    4) jumps to mirrored set, then jumps to paired die, then out to memory
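
    Sketched as a lookup (the tier descriptions mirror the list above; the two-distance view is an illustrative simplification of what the Windows scheduler could model):

    ```python
    # First-gen Threadripper: four distinct memory-call paths.
    LATENCY_TIERS = {
        1: "direct to memory",
        2: "jump to paired die, then out to memory",
        3: "jump to mirrored set, then out to memory",
        4: "jump to mirrored set, then to paired die, then out to memory",
    }

    # The scheduler effectively knew only two NUMA distances, so tiers 2-4
    # all looked alike to it -- the mismatch described in the next paragraph.
    for tier, path in LATENCY_TIERS.items():
        seen_as = "local" if tier == 1 else "remote"
        print(f"tier {tier}: {path} (scheduler sees: {seen_as})")
    ```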

    With those four different latency tiers, you would sometimes wind up with awful placement, because Microsoft's scheduler sucks: they refused to integrate a good latency-aware scheduler that could use more than two NUMA distances. (The current scheduler was designed during Intel's Core 2 Duo era, when they had the two fake cores glued together. Yes, the origins of the current scheduler are that old, with some updates since, which is why I said the Win 11 scheduler is the first real update in a long while.)

    This also isn't even discussing pulls from the caches of other dies.

    But since Microsoft would not fix it for AMD, they went around Microsoft by designing the I/O die, so that the memory call latency would be equal from the I/O die at all times. This helps scheduling by preventing data from being stale and unneeded by the time it gets to the cores calling for it. This is also before discussing Microsoft's scheduler causing thread thrashing on Zen.

    Further, to help with latency consistency, they actually added latency: any time you needed a cache hit in another CCX, you went back to the I/O die, then to the other CCX. They did this even when the CCX was on the same CCD (same die) as the other core complex. Why? Because it standardized latency, since Microsoft's scheduler could not deal with different amounts of latency and how that affects what should be served to the cores, and when. That leaves only two latencies: cores on the same CCX, and traveling to any core on any other CCX.

    They then broke down the wall, so now in Zen 3 all 8 cores have full access across the same CCD; instead of the Zen 2 extra delay going from one 4-core CCX to the other CCX on the same CCD, it all just talks with each other. Huge advantage. It also cuts down on jumps off the CCX, reducing latency. Standardizing the latency made the CPU work better with the scheduler, which thereby allowed AMD to make large leaps with their core design without Microsoft getting in the way.

    Now, M1 chips are ARM chips, though not ARM's own big.LITTLE design. ARM cores are just more efficient, generally. That allows them to suck down less power while doing what they do. Combine that with a cutting-edge process node and a great design team integrated top to bottom and you get a good product. It cannot fully do what an x86 chip can yet, but it is getting there.

    What Intel is scared about is the M1X and the M2. One is rumored to have 16 cores, similar to AMD, and Intel's 10nm isn't there yet on power consumption, even though this is their third iteration or more, so you need the low-power cores to get the rest of the way at the moment. But there is also a rumor that for the Apple workstations, they are working on a 40-core beast to take on AMD's chips in HEDT, somewhere Intel is currently absent. As such, Intel is really fighting to keep the core count up, because some people just look at core count, not overall performance. It is also trying to compete on low-power states, but if the scheduler works as described, you are starting with high-power cores and only moving to low-power ones after those are filled. It is perplexing. But I'm sure Intel tried it the other way around, and finishing the workload faster on the big cores allows for more idle time than having the little cores cranking away and overflowing to the big ones. Also, there is more to learn about their process, so...

    Hope that gives more info.

    Nope. A simple look at the diagrams shows that the IMC is only in one spot on the chip, so there is no separate speed per core type. Just as AMD's I/O die has the memory controllers on it, Intel has one spot for all memory controllers, which means it feeds them all the same. Intel is reaching hard on the efficiency side, which means even being able to drop the memory speed to reduce power consumption in idle states. That is what is going on here. But it does beg the question of data collision across varying memory speed states, and of latency for exiting the idle state (think of the time cost of exiting idle C-states on Haswell and before; even though Skylake improved the C-state exit delay, many overclockers turn the feature off anyway).

    When you are so desperate to approach ARM power efficiency that you literally idle the ram. lol.

    Edit:

    With that said, it is interesting to have dynamic clocks possible moving forward. I just want more information on it.
     
    Last edited: Aug 20, 2021
    jc_denton and Papusan like this.
  3. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,081
    Likes Received:
    3,281
    Trophy Points:
    281
    Apple's market share is not big enough to create havoc for Intel/AMD: around 10% for macOS per NetMarketShare, and Apple's own revenue share from the Mac is under 10% as well, massively lower than Services. With Apple putting millions, probably compounding to billions, into TSMC for the A-series iPhone processors, on top of the VRM problems, heating issues, etc. on Macs, Apple would definitely move away from Intel, which probably meant a lot of billions paid to Intel for their specific eDRAM-equipped processors and other BGA trash. On top of that, Intel's node disaster basically goes against Apple's thin-and-light drama.

    But Apple taking the crown from AMD's HEDT is a tall order. They not only have to beat the clocks and IPC of the Threadripper but also the SMT aspect of the x86 processors and the memory controller, which is quad-channel to octa-channel high-speed DDR4. Also, the M1 is not a ground-shattering processor either: it loses to AMD's Renoir CPUs in performance, though it slams AMD and Intel flat-out on efficiency. The M2 and MX or whatever are all speculation. If Apple can make an ARM processor that beats ARM-based Ampere's Altra 80C (which currently beats Ice Lake), why not simply build their own market share in datacenters? Idk, maybe Apple thinks they do not want an enterprise revenue stream, or maybe they think it's too entrenched and not worth it, or something else.

    Here's NotebookCheck's M1 vs 4700U and 4900H; the latter H BGA processor is Zen 2 on the inferior node (TSMC N7) vs the M1 (TSMC N5). Not just the node: the uarch is inferior and the memory is slower vs AM4 desktop parts. As we know, clock speed is very important for x86 because it scales heavily with power, and per the spec sheets the U processor is severely starved on all fronts, as is the H. But look at the performance. Also, the M1 in the Mac is a 20-24W TDP part (per AT). Extrapolating that to a hypothetical MX Mac Pro HEDT processor: it would have to give up that efficiency, add many more cores to the design, and maintain the clock speed, all to beat a Zen 2 Threadripper? Too much imho. Talking about pure performance here. Also note DRAM is another aspect: TR runs quad-channel, and TR Pro is octa-channel.

    I personally feel people put out too much hype and regard Apple as some saintly revolutionary in the CPU industry. In reality, look at real CPU workloads, like Qualcomm vs the A-series, comparing IPC / clocks / memory and all: in application speed tests and benchmarks, like the layman YouTube speed tests of iPhones vs OnePluses / Samsungs etc., they do not show that exceptional boost, and they even throttle, quite a contrast to the hype talk. Plus Apple has the bonus of controlling their user base itself: nothing they do gets any blowback, not even the latest on-device scanning. So they happily downgraded macOS into a mobile, iOS-type OS now, and killed 32-bit entirely.

    About Alder Lake: I'm not saying it has 2 IMCs. Intel already confirmed ADL has a single IMC for all memory types, from LPDDR4/5 to DDR4/5. I'm suggesting that the IMC is downgrading the memory clock speed for the smaller cores, which is what that "Dynamic Memory Clock" means, per them. I don't believe that IMC will overclock Gear 1 DDR4 from 3733MHz to 4000MHz on this platform; in fact, lowering the frequency with load is what they are suggesting.

    About the efficiency: yeah, they are mentioning the ring bus will cut power.
    As per Anandtech -

    "The Alder Lake processor retains the dual-bandwidth ring we saw implemented in Tiger Lake, enabling 1000 GB/s of bandwidth. We learned from asking Intel in our Q&A that this ring is fully enabled regardless of whether the P-cores or E-cores are being used – Intel can disable one of the two rings when less bandwidth is needed, which would save power, however based on previous testing this single ring could end up drawing substantial power compared to the E-cores in low power operation. (This may be true in the mobile processors as well, which would have knock on effects for mobile battery life.)"

     
    Last edited: Aug 20, 2021
    Papusan likes this.
  4. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,755
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Apple was a large portion of sales for Intel, about 10% of the PC market. Do you think 10% is something to sneeze at? Also, Apple DOES have servers, although most people do not look at their server and workstation parts.

    Sure, the M1 is not the best silicon out there, but there is something to remember: this was their first attempt. The M1X is the refinement, plus going to 16 cores, which Apple wants to put in their desktop (or the all-in-one monitor lineup, I forgot the name of it) offerings. Considering how well it does against mobile parts, 16 cores and being able to stay on the platform you want is a significant upgrade versus just staying with Intel.

    The 40 cores I mentioned with the M2 are an overhaul and meant for their workstations. Sure, that isn't a server offering yet (current Apple server and workstation products still use Intel chips), but that should not be far behind. And that means Intel is going to lose further business from Apple once those ARM chips are ready. It will be interesting when Apple clears the 40-core hurdle and tries to go to higher core counts like Ampere's Altra 80-core. But we must remember that Ampere has been doing this for years and has a lead on Apple, and even they were knee-capped by the state of software development for ARM servers; Ampere is a good company. For Apple, the M1 was their first time out. No company does its best work on the first product; usually you wait a couple of iterations and it gets much better. Take Skylake, for example: the frequency was pretty low, even though you could overclock it, and Kaby added over 200MHz. From there, they iterated on how much it overclocks and on opportunistic boost. The point is that from the first product on an architecture to the second, you can usually find more ways to optimize, and that scares Intel: other clients may decide to move to customized hardware.

    But, as I also mentioned, you have Nvidia developing ARM chips now, and possibly buying ARM. If they do, they could create a product that exceeds Apple, is even more competitive with X86 from both AMD and Intel, and then starts eating their lunch. With AMD already stealing market share on their very lucrative server chips, decimating the high margin HEDT market Intel used to enjoy, Intel selling off different divisions in recent years, etc., Intel is trying to just get their process correct to get back on track.

    I'm impressed at how wide they have gone with Alder Lake as a microarchitecture. It is beautiful. But I've never had much bad to say about the talent there; it's just that the process development hit a hard brick wall.

    I really don't think the smaller cores are why they are doing this. They are trying to make a processor that, at idle, sips the lowest amount of power. Being able to clock down the memory, like being able to turn off part of the ring bus (and data fabric is notorious for sucking down large amounts of power), is one more way Intel is creating a power-efficient chip. Think of when you overclock memory: you often raise VCCSA and VCCIO in order to stabilize a higher memory overclock. By allowing the memory to clock down, the VCCSA and VCCIO voltages can drop too, which lowers total package power consumption. This is about power savings, not about the low-powered cores.
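
    A rough sketch of why downclocking helps, using the usual dynamic-power rule of thumb (power scales with frequency times voltage squared; every number below is invented for illustration, not a real VCCSA/VCCIO figure):

    ```python
    # Dynamic power rule of thumb: P ~ C * f * V^2 (capacitance folded into a constant).
    def rel_power(freq_mhz: float, volts: float) -> float:
        return freq_mhz * volts ** 2

    loaded = rel_power(freq_mhz=3733, volts=1.35)  # memory subsystem at full tilt
    idled  = rel_power(freq_mhz=1600, volts=1.05)  # downclocked, SA/IO voltages relaxed

    print(f"idle draw ~{idled / loaded:.0%} of loaded draw")  # ~26%
    ```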
     
    Clamibot likes this.
  5. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
    So tomorrow's Intel chips' K moniker means the Gracemont cores, aka the small nice pretty power-efficient phone cores, turn Intel's new processors into a blasting performance monster, LOOL

    Yep, the "K" designation will have a fantastic value going forward.

    Alleged Intel Core i5 Alder Lake-S CPU Leaks Without Gracemont Efficiency Cores hothardware.com

    However, an interesting new Alder Lake-S desktop processor just appeared in UserBenchmark, and it shows 6 cores and 12 threads. That would indicate this particular SKU is lacking the Gracemont efficiency cores. According to WCCFTech, this would either be a Core i5-12400 or Core i5-12500 processor without the "K" designation.
     
  6. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    https://www.tomshardware.com/news/l...poses-600-series-chipsets-for-alder-lake-cpus

    The hints about Intel's return to HEDT are all around us. I keep seeing them everywhere, little nuggets of info. This is another nugget.

    This doesn't mean it is going to be great or a poor showing. Just that Intel is going to come back.
     
    jc_denton, ajc9988, Ashtrix and 4 others like this.
  7. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,081
    Likes Received:
    3,281
    Trophy Points:
    281
    Tom's Hardware is saying "Alder Lake's heterogeneous design can fit in." I doubt X699 is going to have Atom cores; if they really do, then probably 8 big cores and 40 small cores, lmao, to battle the Zen 3 Threadripper 64C. But from what was published, Sapphire Rapids is not using a big.LITTLE system; it has tiles on a single package, up to 56C, unlike the Alder Lake silicon.

    Also note ADL has PCIe 5.0 only on the first x16 slot, so only a GPU can take advantage of it, or x8 and x4/x4 for storage. Maybe the OEMs will add a PCIe 5.0 SSD option, ASUS DIMM.2 for instance. Other than that, it's all 4.0 and 3.0. I wonder if this new Sapphire Rapids-based X699 will be all 5.0 or 4.0 only; the enterprise Xeon details are still not disclosed. One thing is clear: PCIe 5.0 is going to drive motherboard costs to a peak, along with DDR5.

    Note below, the Memory Controller and design is similar to that of Threadripper / EPYC.

    [​IMG]
    [​IMG]
     
  8. Clamibot

    Clamibot Notebook Deity

    Reputations:
    645
    Messages:
    1,132
    Likes Received:
    1,566
    Trophy Points:
    181
    Agreed. Intel has historically had the advantage in total single-core performance. With the IPC increases on Alder Lake combined with the frequency retention from Comet Lake, they should still have better single-core performance than AMD when Alder Lake releases.

    It'd be awesome to see Intel and AMD continue to duke it out on single core performance in a bid to outdo each other. AMD won this round for single core performance. Intel should win the next. I'd like to see them keep swapping back and forth because we benefit from that as the consumer.
     
  9. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    Last time, I was posting my freshly set up and configured build with the new 3090 KP installed. I said, "I loved my build, if only for a moment."

    Now I’m tearing it all down! So I guess the moment is over.

    It's easy to love the components we select, but difficult to love any particular configuration we may run them in.

    Out with the old and in with the new. I'm just building the new system on the table without a case. I have some large external 350x350 radiators on the way, and I still haven't settled on a test bench yet, unless a perfect case somehow miraculously releases overnight. Probably just gonna order a PrimoChill test bench SX. If I go with no bench at all, I can order that fancy fancy Optimus Kingpin block haha. But I hear that thing weighs like 7.5 pounds, so it would probably need to be supported by something. Sounds like a real PCIe snapper. Or the reason why they put metal supports on motherboards.

    I’m sick of high temps, so I’m done with the sissy radiators haha.





    [​IMG]
    [​IMG]
     
    jc_denton, Mr. Fox, Ashtrix and 8 others like this.
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,755
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    @Ashtrix - I do owe you an apology, I saw another slide that also states the background tasks get offloaded to the small cores. This is in addition to half of the cores shutting off and the second data fabric strip shutting off for energy savings.

    But as for supported memory speed, they support LPDDR4 speeds and LPDDR5 speeds that are faster than DIMMs and SO-DIMMs.
    upload_2021-8-23_7-48-42.png
    That means it couldn't be the small cores slowing memory performance. In fact, this suggests the desktop format is hampering performance to a degree; except on desktop you can go over spec, whereas the LP RAM is soldered, so you're stuck for life. I digress.

    But I did owe an apology on how it works. Sorry about that.
    Let's put this in perspective: Sapphire Rapids, as a server platform, does not release in volume until Q2 2022. Going by historic trends, Intel would then unveil the new HEDT platform at Computex and release those chips anywhere from late June to the end of August, depending on the year and generation.

    So, what does that mean? Intel will be competing with a Zen 3 design. Considering they will support DDR5, they will have more memory bandwidth to serve the cores, and if the IPC reports are correct, they could take the single-core performance lead, which is beneficial for workstations with per-core licensing. That will bring them back to competitive, although AMD's Zen 3 will win in multithreaded applications where per-core licensing isn't a consideration. That is also a year out.
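
    To make the per-core licensing point concrete (fees and scores invented for illustration): cost scales with core count while throughput scales with cores times per-core speed, so the core count cancels out and cost per unit of work comes down to the per-core fee divided by per-core performance.

    ```python
    # Hypothetical per-core-licensed workload: the chip with the faster
    # single core wins on cost per unit of work, regardless of core count.
    LICENSE_PER_CORE = 100.0  # invented annual fee

    def cost_per_unit_work(per_core_perf: float) -> float:
        return LICENSE_PER_CORE / per_core_perf

    print(cost_per_unit_work(1.10))  # ~90.9 -- 10% faster single core
    print(cost_per_unit_work(1.00))  # 100.0 -- baseline
    ```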

    I'm happy for the return, but the real question is how long before AMD is able to release a Zen 4 TR. For Zen 1 and 2, AMD was able to release the TR platform 6-9 months after mainstream. For Zen 3, however, due to no competition, and with strained TSMC production unable to meet demand while AMD tried to fulfill server orders, we have been waiting well over a year. I'm sure if Intel had something, they would have thrown out some token chips just to technically have a line released.

    So, even though Intel may return, I wouldn't say they will rule the roost, and I question the longevity of wherever their wins lie. But it means competition, and AMD not sitting on their butts refusing to release something for HEDT due to no competition.

    For PCIe 5.0, see my next comment.

    I want to give some background on PCIe 4.0 and 5.0, why Intel is even debuting it on mainstream, and why it doesn't matter for graphics cards that cannot use it, but should have been primarily for storage at the moment.

    So, when AMD returned from their dark journey near bankruptcy, they brought with them PCIe 4. Intel, when it started, did not have PCIe 4, and AMD's solution got adopted as the de facto way to do PCIe 4. Many companies didn't trust AMD's longevity in their return to the market, and did not like that their partner would not be paying to help develop products for the standard (meaning AMD would not be contributing R&D funds to other companies, much like Intel does for all sorts of things, from laptop design to networking cards and everything in between).

    Many companies, because of this, and because they didn't want to develop for AMD's protocol, decided to skip the short-lived PCIe 4.0 and wait for PCIe 5.0, where Intel's version was going to be adopted along with the CXL protocol for cache- and memory-heterogeneous architectures and the Gen-Z protocol for connecting nodes over PCIe. This means that, starting with PCIe 5.0, a board will be able to pull from other components' memory in new ways, and that can also be done across nodes of systems, which is going to increase server power by leaps and bounds!

    Now, of course, PCIe 5 will also be short lived as in a couple years, they will have PCIe 6.0. But the new heterogeneous protocols will be wrapped up into that one. Consumers won't see that for a long while, and to be honest, they do not need it at all.

    So, back on point. Intel never created a development platform where partners could test their PCIe 4.0 products. That means nearly all PCIe 4.0 products were designed on AMD platforms. That helps explain why AMD had fewer issues than usual getting products to work on their platform, but it is also why we mainly saw PCIe 4.0 stick to storage and graphics cards, and even graphics cards are not seeing major gains from the extra bandwidth.

    That brings me to PCIe 5. If PCIe 4.0 doesn't help graphics cards much at all at this point, why is Intel making the primary slot PCIe 5.0? Why not the storage? Why not many things?

    Most likely, Intel is offering this for the general testing and bringup of products for servers which will have PCIe 5.0 down the road in Q2 2022. They have development systems they lend to partners, sure. But this will allow any company wanting to develop for PCIe 5.0 to grab a relatively cheap system and start testing. Most of those systems do not need fast GPUs to test, unless your product needs access to the cache and memory layers of the GPU for CXL development, at which point you may already qualify to get their servers under a partner program to bring that up for servers releasing next spring.

    Even with that, we likely won't see the burgeoning products on PCIe 5.0 for a while after that.

    Now, AMD agreed to take a back seat, but with Zen 4, it will also support PCIe 5.0 on servers (don't remember if the rumors say it will be on mainstream or not). But, even with that, their chips are rumored to have AVX-512 and be going up in core count to 96. So, they still will have other things going for them.

    Meanwhile, so far the best areas for exploiting the extra bandwidth have been storage (getting at your critique) and networking, allowing for multi-hundred gigabit hardware. Imagine 500Gb network cards. But, I don't see graphics even saturating 4.0 for awhile, at least not until the number of chiplets on a die for multi-die GPUs shoot up (talking a couple generations, not AMD's first attempt with RDNA3 or Nvidia with Hopper).
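
    The bandwidth arithmetic backs that up (standard PCIe per-lane rates; the 500Gb NIC is the hypothetical above):

    ```python
    # Per-lane signaling in GT/s; PCIe 3.0+ uses 128b/130b encoding (~1.5% overhead).
    GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}
    ENCODING = 128 / 130

    def gbytes_per_s(gen: str, lanes: int) -> float:
        return GT_PER_LANE[gen] * ENCODING * lanes / 8  # bits -> bytes

    for gen in GT_PER_LANE:
        print(f"PCIe {gen} x16 ~= {gbytes_per_s(gen, 16):.1f} GB/s")
    # 5.0 x16 ~= 63.0 GB/s, i.e. ~504 Gb/s -- enough to feed a 500Gb NIC
    ```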

    But that is my take.

    Edit: I cannot find his other video discussing the lack of development of PCIe 4.0, but this is a good discussion on CXL.



    I actually am looking forward to that. Even though they will lose in MT workloads, Intel should take the performance crown in ST, so long as they are not overstating their IPC, which they have done in the past. But the graph shown demonstrates a regression in some workloads, whereas others get a significant boost; we are not told which programs, though. Also, as the platform ages and they further refine the scheduler, I would not be surprised if Alder Lake ages well, much as AMD's chips often do, gaining performance after code is better optimized for them. Since this is the largest change to the scheduler since Core 2 Duo, I truly believe this is going to age well, whereas when they were putting out Skylake over and over and over again, there was nothing to wait for because the platform had been fully developed for years. That is why it is my belief that whatever platform we see at launch from Intel, it will improve with time.






    Edit: TechTechPotato coverage of Hot Chips
     
    Last edited: Aug 23, 2021
    jc_denton, Mr. Fox, Ashtrix and 2 others like this.
  11. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    Ok, I finally have this 3090 Kingpin cooled down with a good remount. I found out my D5 pump was in bad shape; it was just chaos inside, under and around the floating impeller. Debris everywhere, and caked-on gunk. I cleaned it up, remounted my 3090 KP HC block, then installed my new Alphacool BFR external radiator. (I made that name up.)
    Kept my same gamer overclocking profile applied, no classified tool at all!

    I broke 15,700 on the first run in Port Royal!

    I may just have 16K Port Royal in the bag!

    My ambient temperature is 24.2C and my water temp inside the reservoir is 26.1C under load.

    ^ Amazing results! I’m wondering if the 2nd Alphacool 1080x45 will even do anything?



    [​IMG]
    [​IMG]
    [​IMG]
     
    Last edited: Aug 24, 2021
  12. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    Congrats bro. Awesome score. To answer your question, a second radiator won't do anything outside of restricting the flow. You're at the point of diminishing returns.

    There is always a chiller....
     
  13. pathfindercod

    pathfindercod Notebook Virtuoso

    Reputations:
    1,940
    Messages:
    2,343
    Likes Received:
    2,345
    Trophy Points:
    181
    Very nice.... As Rage mentioned, your water will never be cooler than your ambient temp without a chiller. Adding another rad will not make a significant difference at this point.

     
  14. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    Yeah, I finally have a good cooling setup, something I have always wanted. I wonder why more people aren't running these big external radiators? Performance-PCs sells them for $115.00, and Alphacool launched the enclosure and feet earlier this month too.


    I think I’m alright on cooling though.


    PS: only using the 520 watt BIOS. I am power limited at the 15,700 PR score.

    Now the XOC 1KW BIOS is finally useful. Tonight I will do some more benching.
     
  15. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    This score was on the 520 watt BIOS too. I'm excited to run the 1KW BIOS tonight and finally break 16K!
     
  16. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    This memory is amazing, guys. This is a G.Skill Royal-Z DDR4-4000 CL15 4x8GB set that I bought on Newegg about 10 months ago. I could never really test this memory due to X299 being pretty IMC limited. I was gonna sell the set and purchase a 2x16GB kit for the new setup (10900K and Z490 Dark KP edition) that I purchased from @Mr. Fox

    Anyways, I just kept my memory and I am running half of it. So I am testing memory overclocking first, then after I find the maximum memory OC, I will move on to the CPU overclocking.

    These timings are not all that great yet, but I am gonna tweak these next.

    CL17-17-17-36-700-2T @ 4933MHz.

    I will get some benches up soon! Trying to get this platform optimized. It's gonna take a few days lol
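
    For anyone comparing kits, the standard true-latency arithmetic (CL cycles divided by the real clock, which is half the MT/s data rate) shows the looser-timed overclock above is still a net latency win over the kit's rated spec:

    ```python
    # First-word latency in nanoseconds for DDR memory.
    def cas_ns(cl: int, mts: float) -> float:
        return cl * 2000 / mts

    print(f"{cas_ns(15, 4000):.2f} ns")  # rated DDR4-4000 CL15 -> 7.50 ns
    print(f"{cas_ns(17, 4933):.2f} ns")  # the 4933 CL17 OC     -> 6.89 ns
    ```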


    Intel Core i9 10900K @ 4901.18 MHz - CPU-Z VALIDATOR (x86.fr)
     
    jc_denton, Ashtrix, Papusan and 5 others like this.
  17. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
    [​IMG]
    Nvidia RTX 3090 SUPER with 10,752 cores or full GA102 chip might be in the works neowin.com · 2 hours ago

    According to a new report, Nvidia might be working on a new flagship graphics card dubbed the RTX 3090 SUPER. The 3090 SUPER is rumored to feature the full GA102 GPU with 10,752 CUDA cores. Nvidia uses the cut-down "GA102-300-A1" version of the GA102 in the RTX 3090 which is currently the flagship graphics card in its Ampere gaming cards lineup.

    https://videocardz.com/newz/nvidia-...red-to-feature-10752-cuda-cores-400w-of-power
     
    ajc9988, Ashtrix, jc_denton and 2 others like this.
  18. Kana Chan

    Kana Chan Notebook Evangelist

    Reputations:
    50
    Messages:
    356
    Likes Received:
    214
    Trophy Points:
    56
    Is it possible to get a card like this a year earlier if a person has the ability to unsolder/solder GPUs? They'd need a spare A6000 to acquire the GPU die and a 3090 for the GDDR6X. If it's possible to swap a desktop 980 onto a 980M board, it should be possible to put an A6000 onto a 3090 PCB, right?
     
  19. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
    I think @Khenglish is the right person to answer that.
     
    Ashtrix, jc_denton, Clamibot and 3 others like this.
  20. Tenoroon

    Tenoroon Notebook Deity

    Reputations:
    144
    Messages:
    747
    Likes Received:
    575
    Trophy Points:
    106
    This makes no sense, performance is probably going to be like the 3080ti compared to the 3090, like less than 10% IIRC.

    What are they going to do now, stick 48GB of blazing hot, power-inefficient G6X VRAM on a GA102 card without giving it the semi-professional drivers that the Titans have? There's a reason the A6000 doesn't use G6X ;)

    I'm assuming this "rumor" is just BS, I know Nvidia can be retarded sometimes, but there's no way that they are this stupid. And besides, there's no way in hell anyone will be able to adequately cool it without needing a mega cooler. Most 3090's are triple slot cards, and that's already too big.
     
    ajc9988, Ashtrix, Clamibot and 2 others like this.
  21. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
    Maybe a stop-gap before the 4000 series cards are out. But greed is always the driving factor.
     
    Ashtrix, Clamibot and ole!!! like this.
  22. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,983
    Trophy Points:
    431
    Years ago, that probably should have been the 580 or maybe even the 570.

    Paying SUPER/Titan-class prices for an old flagship isn't my thing. Gonna drag this out as long as I can before I upgrade. Good thing is the game I play gets 100+ fps on a 1070.
     
    ajc9988, jc_denton and Papusan like this.
  23. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,109
    Likes Received:
    3,946
    Trophy Points:
    331
    ajc9988, Ashtrix, jc_denton and 2 others like this.
  24. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,348
    Likes Received:
    4,332
    Trophy Points:
    431
    If I could have a 4-slot card, I would have it.
     
    Papusan and Tenoroon like this.
  25. Tenoroon

    Tenoroon Notebook Deity

    Reputations:
    144
    Messages:
    747
    Likes Received:
    575
    Trophy Points:
    106
    I wouldn't mind too much either, but I know most people would dislike having a 4-slot card; that's why so few of them exist :(.
     
  26. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    The way I look at it, if you can't buy it then it might as well not be real. Who cares about a paper launch? Offering a product that most customers can't buy because it is never available doesn't make sense, even though there will be people standing in line volunteering to be financially raped for the privilege of having a massively overpriced product that will be obsolete in just a few months. It's also tantamount to screwing everyone that bought a 3090 with the expectation they were buying the best... Major disrespectful slap in the face if it actually surfaces and a fantastic way of destroying buyer confidence as it relates to future GPU purchasing.

    Thanks. I was eager to get a closer look at it. Looks good.

    I am not particularly impressed with the Crosshair VIII. A single BIOS, no auxiliary power for PCIe, and less than dual 8-pin CPU power have all of the makings of a half-assed gamerboy mobo. It might be better than most of the other X570 options, but it is still not a very impressive product, and it does not qualify as something awesome in my opinion. Even if it is better than most, it is still borderline disappointing. And the firmware leaves a lot to be desired, which surprises me since that is generally an Asus strong point.

    But, to keep proper perspective, the Dark mobo will probably cost twice as much. So, it damned sure needs to be a WHOLE LOT more impressive. I suspect that it will be, assuming the firmware is equally excellent. If Vince is endorsing it, then my optimism is high. He would be nuts to have his name and reputation associated with it otherwise.

    The single NVMe slot on the X570 Dark is pretty disappointing, though. Giving up the WiFi would be a better solution than giving up an NVMe slot.
     
    Last edited: Aug 25, 2021
    Ashtrix, ole!!!, jc_denton and 3 others like this.
  27. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,348
    Likes Received:
    4,332
    Trophy Points:
    431
    I'm actually quite smitten with that Sapphire 570 x2/duo/dual card.

    But like the above, I can't even window-shop the card, let alone buy it :/
     
    Papusan and Mr. Fox like this.
  28. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    Anyone try the 10900K direct die without the die frame screws in?

    Check it out! Such a socket doesn't need them.

    I was having a little bit of a tough time getting the core-to-core temp deviation tight enough. Maybe 3-4 remount attempts later, rotating the waterblock, etc., I couldn't get it quite right. It turns out mounting these 3 screws is tedious. They don't screw in smoothly, and it's just not precise at all.

    These were the before temps (with die frame screws installed):

    Core 1,2,3,4 all ran 77-83C
    Core 5,6,7,8,9,10 ran 64-66C

    Literally one side of the CPU was running drastically warmer.




    So I just removed all of the (3) screws around the die frame. The CPU block alone holds this thing down perfectly.

    This is the first mount with Kryonaut extreme thermal paste. I was surprised to see the results being so good!

    Core 1,2,3,4 all run 68-71C
    Core 5,6,7,8,9,10 all run 64-68C

    Just figured I’d share. I’ve played with direct die on X299 a lot lol. This was much easier to deal with.


    Anyways, these results are quite good considering it's only on thermal paste. This is an all-core OC of 5.1GHz to 5.3GHz.





    [​IMG]
    [​IMG]
    [​IMG]
    [​IMG]
     
    Last edited: Aug 25, 2021
  29. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,109
    Likes Received:
    3,946
    Trophy Points:
    331

    That was about my reaction when yet another rumor popped up. A 3090 Super makes zero sense at this point. An official Titan? That makes sense. A Titan on TSMC 7nm makes even better sense. I think we went through this same song and dance with the 2080 Ti, when there were rumors of something more powerful that never came to fruition, and they squeezed the 2080 Super in between the 2080 and the 2080 Ti.

    In regards to GPUs, I no longer even get super excited. I was actually shocked to win the Newegg lottery for the Gigabyte Aorus 3070 Master a few weeks ago, then promptly went back to losing, so that puts me at 1-98, I think, at this point. I ended up selling the EVGA 3060 I picked up for the wife back in April and gave her the Aorus 3070 (a total bust at 4K for me), so now she's running buttery smooth at 2560x1600 Ultra 10 in WoW everywhere, versus a few chunky spots that would pop up from time to time on the 3060 that she would tell me about... daily. I just keep on truckin' with my KPE 3090 from January, but I didn't touch WoW at all last week and only once this week.

    Sorry to hear the Crosshair VIII is kind of a bust. You had your reservations right from the start. I think you even questioned the silicon quality of your 5950X too. Hopefully you get it sorted or just wait for the Dark (there's always Alder Lake on the horizon...). AMD chips are available in abundance and on sale at Best Buy: $50 off the 5900X and 5950X.

    You feel about Asus a bit at the moment like I feel about MSI.

    Take that disappointment and add in a barely updated, poorer BIOS, absolutely zero reliable per-core control in the BIOS (even though it was touted as such), and still-flaky USB ports, and you have the MSI X570 Tomahawk. Hopefully EVGA is the redemption sought.

    I even swapped out my MSI Z590-A Pro and swapped in the Gigabyte Z590 Aorus Pro AX I also "won" from Newegg along with the 3070. I also swapped out my EVGA 360mm and put in an Arctic 420mm I had sitting on the table for weeks. That's how bad I want MSI out of my sight...
     
    Ashtrix, Clamibot, jc_denton and 3 others like this.
  30. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    That is actually a great idea, with an important word of caution. This is perfectly fine with the motherboard mounted in a horizontal (flat) orientation. If you have it installed in a normal case (vertical), it would be super important never to remove the waterblock while it is in an upright position. You would run a huge risk of bent CPU socket pins if it separates and the CPU slides out of the socket. You could also end up with the edge of the CPU PCB chipped or cracked. The screws are only needed to avoid accidental damage such as that.

    Other than that, if you are very deliberate in never taking it apart except with the motherboard in a horizontal position, then the screws are totally unnecessary. The die frame's primary value is to avoid fracturing the CPU due to uneven mounting. The die frame eliminates the possibility of that happening whether screws are installed or not.
     
    Last edited: Aug 26, 2021
    Ashtrix, tps3443, Clamibot and 4 others like this.
  31. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    Yeah, the entire PC enthusiast landscape is riddled with disappointment at every turn right now. Intel, AMD, NVIDIA... it doesn't really matter what your platform preference is, there is just a lot of stupid and goofy crap happening with CPUs and GPUs. When you factor in the issue of "respected" brands selling half-baked rubbish to enthusiasts, the situation becomes more grim. On the side you have NZXT cases catching on fire and Gigabyte PSUs exploding, and both companies making excuses rather than jumping on fixing their mistakes. Windows just keeps getting crappier and dumber with every version release.

    How can anyone think anything other than OMG, we are completely overrun by dishonest and incompetent retards selling us broken trash? Poor availability and absurd pricing are bad enough, but it adds insult to injury when there seems to be a total disregard for functionality, near-zero quality control or pride in product quality or workmanship, and a complete disregard for customer experience on the part of nearly every major contributor to PC tech, all of it ruining everything for PC enthusiasts. Overall, it is a really horrible, sorry state we find ourselves in right now.

    I think the only thing more screwed up right now is the utterly devastating political leadership failures that are swiftly destroying so many things that are far more important to all of us than PC tech in the grand scheme of things.

    While I am feeling a bit disappointed with the silicon quality of my overpriced 5950X, it might not be a horrible sample. It is hard to gauge at this point because I don't know what I don't know and some of my frustration could be my lack of familiarity with the platform. It does offer impressive performance, but it feels very limited and seems like an overall buggy architecture. That was one of my greatest fears about taking a chance on Ryzen, but it might be unfair of me to render that verdict when I still have a lot to learn about how to make it work best. There could be a combination of user error and crappy firmware affecting my perception at this point, and being sick with COVID and getting back up to speed at work, I haven't given it the time and effort it deserves yet.

    What is really weird (and totally unlike Intel overclocking) is the fact that I get better results from the CPU leaving most of the CPU-related BIOS settings at default and using ASUS TurboV Core to change the multipliers and voltage in Windows. That causes me to think that crappy firmware might be affecting my experience. If I try doing things in the BIOS it might work great for 2 or 3 days, then settings that used to work fine are suddenly and unexpectedly no longer bootable. No reason I can identify. The only thing that changed was the day of the week. I have to clear the CMOS and start over, and even then the exact settings that used to work without a hitch are mysteriously unbootable. Where I am at right now is I can bench the crap out of it at 4.7GHz on all cores with 1.375V, but for some weird reason moving to 4.8GHz using any amount of voltage it just shuts off under load. It's not thermal from what I can tell, because it happens even with the CPU under chilled water. Even something as simple as a CPU-Z benchmark, it turns off after like 1 or 2 seconds at 4.8GHz. I can run CPU-Z stress or AIDA64 Stability test literally for hours at 4.7GHz. So far I can't figure out what setting needs to be changed to stop that from happening at 4.8GHz. I think it is a setting I need to find and change because it is too difficult to believe a 100MHz difference takes it from fantastic to non-functional. I hope that is merely a condition of personal ignorance of the platform that needs to be resolved and not a trashy firmware or defective hardware situation. Unless/until I can figure out a solution using the Crosshair VIII, I am apprehensive about spending a ton of money on an X570 Dark only to find the situation is the same. That would really suck.
     
    Last edited: Aug 26, 2021
  32. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    OK, I started poking around at more sensors in HWiNFO64 and I think the issue is crappy VRM cooling. My CPU is only hitting around 60-65°C at 47x on all cores, but the VRM temp is getting to like 85°C. I can only imagine it goes higher than that with the CPU getting more voltage and moving to 48x. It might be the VRM temp causing it to turn off. I had similar issues with the 7980XE on the X299 Dark, and running the VRM fans full blast resolved it, but these are passively cooled. I will have to investigate that more when I have time. If that is the case, it is going to be difficult to position a fan to help enough. The heat sinks on the VRM are getting so hot they're uncomfortable to the touch. With the system idling, the VRM temp showing in the BIOS is 72°C. That just seems way too high under a minimal/no-load condition.
     
    Last edited: Aug 26, 2021
    Ashtrix, Rage Set, Clamibot and 3 others like this.
  33. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
    Yet another article/rumor, but from the known leaker kopite7kimi

    NVIDIA GeForce RTX 3090 SUPER to be the first GeForce with 1TB/s memory bandwidth? videocardz.com | Today

    Just a day after Greymon55 broke the news about the upcoming RTX 3090 SUPER SKU, another respected leaker kopite7kimi shares more details on the supposed flagship SKU.

    It seems that GeForce RTX 3090 SUPER power consumption is growing by the day. According to Greymon55, the TDP of RTX 3090 SUPER is expected at higher than 400W. Kopite appears to agree with this rumor, but he further adds that this SKU’s TGP is equal to or higher than 450W, a 100W more than RTX 3090. Do note this might simply be a typo and he actually means under 450W.

    -----------------------------------------------------------------------------

    Maybe the new single core King.

    Intel Core i9-12900K is 12% faster than Ryzen 9 5950X in leaked single-core Geekbench 5 benchmark videocardz.com | Today
     
    Last edited: Aug 26, 2021
  34. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,081
    Likes Received:
    3,281
    Trophy Points:
    281
    The RTX 3090 Super sounds like the biggest Ngreedia milking scheme since the Turing GPUs. With 1TB/s of bandwidth that card is going to miners no matter what, and more power on the 8N node is going to get toasty, even more than the 3090 chips. Damn, they are simply riding the demand wave.

    I wonder what pricing it would have at this point. The 3090 FE has a $1500 tag and this new one will have the full die (256 more CUDA cores, to be precise) and no more NVLink; I guess it will be $1800. AIB cards probably won't exist, as Nvidia will milk it for themselves; if AIB cards come, then $2600+ for sure, as all the 3090 AIBs are north of $2200+. A bonus is the GA103 rumors for a 3080 Super refresh. Now Nvidia will discontinue the best top-class card in Ampere, the $700 3080 FE, a superb value and performance card, and mint cash next year with this new refresh; $850 will be the base price now, with the same skimpy VRAM. I really wish all this were a poor troll attempt rather than legit leaks about Ngreedia's plans. Oh, and as a bonus, all cards across the market get the LHR castration treatment.

    That GB5 result is interesting. I found out some important things with respect to this benchmark and Windows 11 (the 12900K was benched specifically on Win11 vs AMD's 5950X on Win10; that's a lot of improvement on Intel's side, to be honest, since it doesn't have AVX-512 anymore compared to RKL scores, but Win11 is still skewing things).


    Just watch GB on Win10 vs Win11 with a 10875H CPU:
    W10: ST 1148 / MT 6308
    Win11: ST 1232 / MT 7687. Magic of Windows 11? lol

    And in reality, with Hardware Unboxed we saw none of this, since they do not use GB.
     
    Last edited: Aug 26, 2021
    Clamibot, Mr. Fox and Papusan like this.
  35. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    Based on the 6 weeks I have forced myself to use the abomination called Windows 11, probably the one truly positive thing I can say about it is that performance is better than Windows 10 with respect to memory and CPU. It doesn't make the loss of functionality, diminished efficiency and butt-ugly UI any less unpleasant, but it is good to finally see the CPU performance needle move in the right direction for the first time since Windows 8.X was released. Every version of Windows from 8.X forward has been inferior to Windows 7 (by a lot) with regard to CPU performance.

    I don't know that I will ever view it as a nice product. It's pretty damned disgusting overall. There is so much that is screwed up and ugly, but it's good to have at least one redeeming attribute that can be identified. The inefficiency and loss of functionality is harder to forgive than the aesthetic mess. I guess we just can't have nice things from the Redmond Retards any more. Those clowns haven't really done anything great since Windows 7. That is a lot of years to have failure upon failure.
     
    Last edited: Aug 26, 2021
    ajc9988, Ashtrix and Papusan like this.
  36. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
Yet another great idea to keep up the prices on the "real" gamer cards. Just flush a huge load of Ampere blower cards into the China market. Same silicon, but not for the gamers.

    RTX 3090/3080 Blower Cards Are Coming Back, in a Limited Fashion tomshardware.com 2 days ago


    Blowers will be China exclusive

    It's easy to see why system integrators were adding blower-style RTX 3090 and RTX 3080 cards into their builds, as the price to performance was too great to pass up. That's also the reason why Nvidia cut the cord, since it didn't want Quadro sales to get eaten up by consumer grade RTX GPUs.

But apparently Nvidia has changed its mind, and is allowing blower-style coolers to be built by Galax... but only for the Chinese market. As far as we're aware, Galax's blower-style cards are only listed on its Chinese website, and nowhere else. Sorry.

    The two blower cards are the RTX 3090 Classic and the RTX 3080 Classic. Both cards are re-releases of the original Classic cards from 2020 featuring an all-black plastic shroud and black metal backplate paired with a copper heatsink. It's a simple and stealthy design, but one that works very well in prosumer applications where functionality is desired over aesthetics.


AMD also wants a piece of the cake. But not from the gamers.

    XFX card with AMD Navi 21 GPU for cryptomining spotted in Vietnam videocardz.com

And to make it complete, the wafer makers also want their piece of the cake.

    TSMC Price Hikes Confirmed by Crypto Chip Giant tomshardware.com 27 minutes ago


    Bitmain shares TSMC's secrets

Yep. Keeping prices up as long as possible is a must. The phrase "milking scheme" really does fit here.

    [​IMG]
    TSMC Raises Chip Prices by Up To 20 Percent as Chip Shortages Continue

    by btarunr Today, 16:07

    The main supplier of advanced logic chips to the likes of Apple, Qualcomm, and AMD, among hundreds of other customers; TSMC, is reportedly planning to raise its prices by up to 20 percent, according to a report in The Wall Street Journal. The WSJ report talks about a roughly 10 percent increase in prices of logic chips built on the company's latest nodes (possibly N7 or newer); while prices of chips on older processes could rise by around 20 percent. This would have a direct impact on prices of not just PCs, but also smartphones and much of the ICT industry. The report, however, doesn't mention whether specific clients such as Apple and AMD would be affected by the new prices, as their large purchase volumes afford them bargaining power for their contracts. It will, however, wreak havoc with smaller clients that order based on demand, as well as companies planning future products.

    Edit.
Do any of you have this KB?

    Wild deal: EVGA's luxurious $130 gaming keyboard is on sale for $50
    This should blow any other keyboard you can find for $50 out of the water. It even has hotswappable switches!
    [​IMG]


     
    Last edited: Aug 26, 2021
  37. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
Can anyone recommend a coolant additive that cleans the water loop and blocks? I have tiny bits of black plasticizer in my 3090 KP waterblock. You can just barely see them, but it's kinda annoying me.

The culprit is that EKWB Commercial matte black tubing. It always does this for some reason, no matter how much pre-flushing you do to the radiators, blocks, etc. That black EKWB tubing will still break down inside, and these tiny black particles appear after a few weeks or a month.

    I’m trying to avoid taking the block apart if possible. But I guess I could do it, if there isn’t an alternative.

And I'd rather not damage the copper or acrylic with random chemical additives.

I'm honestly hoping to hear some first-hand experience with a loop additive that helps remove things like this.

    I really appreciate the help everyone!!


    [​IMG]
    [​IMG]
     
    Papusan, Rage Set and Mr. Fox like this.
  38. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
Try flushing the block in reverse with water from the kitchen sink tap and see if it'll dislodge the debris and wash it out. The full pressure from the faucet may do the trick; that has worked for me before.

    As far as I know there is no additive that is going to physically remove debris, but if it's not stuck real bad the water pressure might be enough to dislodge it and flush it out of there.

I can tell you from taking apart the HC block on the 2080 Ti FTW3 that EVGA made it intentionally difficult, and I would be shocked if they didn't do the same thing on the newer parts, so disassembly should be a last resort. They obviously do not want customers taking it apart for some reason, and if you break something in the process it will probably be denied under warranty. As far as I know they don't sell any maintenance or service parts. That's why I was hoping to have the OptimusPC option available before the HC.
     
    Last edited: Aug 26, 2021
    Ashtrix, Clamibot, Rage Set and 2 others like this.
  39. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    Thank you for the link. That is a truly amazing price. I have been wanting one for a long time, but didn't want to pay full retail. I ordered one and was tempted to order two. Being that it is an EVGA product, I expect it to be excellent. I hope I do not regret not buying a second one while the price is that low.

My Logitech G512 Carbon is a really great mechanical keyboard. I like it a lot, but it relies on the lighting settings being stored in NVRAM and has no internal memory, so it frequently "forgets" my all-white lighting preference. FN plus six strikes on F5 restores it, but it happens often enough that it is becoming extremely annoying on the flaky X570 system that I am burning a lot of calories trying to tune to the point that I am truly satisfied with it. (It performs well, but I am going to be pretty disappointed if it ends up maxing out at 47x on all cores. Even if the benchmark results are great, 47x on all cores just ain't gonna cut it for me. That's an unimpressive 2012-era overclock ratio, and I'm not going to be happy with anything less than 50x on all cores/threads.)
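
For perspective, the gap in plain numbers (a minimal sketch assuming the stock 100 MHz BCLK; the helper function is mine, just for illustration):

```python
# All-core clock from multiplier x BCLK (assuming stock 100 MHz BCLK).
BCLK_MHZ = 100

def all_core_ghz(multiplier: int, bclk_mhz: int = BCLK_MHZ) -> float:
    return multiplier * bclk_mhz / 1000

print(all_core_ghz(47))  # 4.7 GHz -- the "2012 era" ratio
print(all_core_ghz(50))  # 5.0 GHz -- the target
```

Three bins doesn't sound like much, but it is roughly 6% more clock, and I want all of it.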

I will move the G512 to my work desktop, since I am not constantly tinkering with settings on that machine. I have it set for 50x and 4200 CL17 in the BIOS and never change it, so hopefully that source of frustration with the G512 will no longer be an issue.
     
    Last edited: Aug 27, 2021
  40. tps3443

    tps3443 Notebook Virtuoso

    Reputations:
    746
    Messages:
    2,421
    Likes Received:
    3,115
    Trophy Points:
    281
    I'm so close guys!

    Anyways, this is what I have so far.



    [​IMG]
     
    Ashtrix, Papusan, electrosoft and 3 others like this.
  41. Ashtrix

    Ashtrix ψυχή υπεροχή

    Reputations:
    2,376
    Messages:
    2,081
    Likes Received:
    3,281
    Trophy Points:
    281

I ordered this keyboard. Full disclosure - I have zero knowledge in the mech-KB arena; there is too much to learn about pricing and a lot of other aspects, so I had avoided buying any mechanical keyboards. However, I always wanted a reliable KB with solid features, yet I have made do with a crappy $14 Microshaft membrane board for the last 3 years.

This one instantly gave me a crash course, lol...

Standard keycaps, so they are easily replaceable. They are ABS plastic, so probably low-quality basic keycaps vs PBT; not a biggie. Plus it's the full-size KB variant, which is what I need, not the TKL that is uber popular. I like the EVGA font too; Corsair also has a superb font on their keyboards. (Going to get a wire keycap remover, as the ring type apparently scratches the keycaps.)
Kailh Speed Silver linear switches for gaming performance. I have zero idea about switch types, but I like the concept; it's perfect for me since it's not loud, and it's simply press-and-boom speed, which is what I want. Bonus: they give you some free Bronze switches.
Hot-swap board. I didn't even think an option like that existed, wow. Not even their Z20 has this feature.
Metal top frame and chassis for a premium feel and durability.
Magnetic wrist rest. It's plastic, yeah, but I like that, as I don't have to deal with flaking or foam-compression issues; it has good strength and will do the job very well.
Per-key RGB, which can be customized easily.
Media keys with a scroll wheel, which is what I always wanted. Corsair's KBs have that insane scroll wheel made of metal, which so many keyboards lack, especially those custom KBs; some of them miss out on media keys, some of them have no scroll wheel, etc.
4000Hz polling rate (see the quick math after this list). Apparently their Z20 optical switches are even faster, and Corsair's K70 Rapidfire is best in terms of low-latency performance even without a 4000Hz mode, but just look at the price, lol. Also a single USB cable (I'm fine without pass-through; it cleans up the desk), Fn key options which are legit useful, and tons of cool specs and options.
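
On that polling figure: the report rate is just the inverse of the interval between updates, so 4000Hz is one report every 0.25 ms versus 1 ms at the usual 1000Hz. A quick sketch of the arithmetic (my numbers, not EVGA's marketing):

```python
# Polling interval implied by a given report rate.
def poll_interval_ms(rate_hz: int) -> float:
    return 1000 / rate_hz

for rate in (125, 1000, 4000):
    print(f"{rate:>4} Hz -> {poll_interval_ms(rate):.2f} ms between reports")
# Output:
#  125 Hz -> 8.00 ms between reports
# 1000 Hz -> 1.00 ms between reports
# 4000 Hz -> 0.25 ms between reports
```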

It doesn't just blow the other KBs on the market out of the water, it vaporizes them, especially the $150 fancy wireless mice and $200 KB peripherals, which were never an option for me. This KB is literally the best deal I have seen in years; the price is insane - $52 shipped!! Folks are saying keyboards are new ground for EVGA, but the quality and feature set are great. I hope my first mech-KB experience will be out of this world.

    Thanks a lot bro @Papusan


Some videos on this KB: the lady in the second video says the logo is too pronounced; someone in the comments suggested taking a black permanent marker and blacking it out for that sleek look, haha.


     
    Last edited: Aug 27, 2021
    Mr. Fox and Papusan like this.
  42. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,109
    Likes Received:
    3,946
    Trophy Points:
    331
    Ashtrix, Mr. Fox and Papusan like this.
  43. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
Great score. That is about 100 points higher than I have ever been able to get on Port Royal. There must be some kind of configuration option I am missing or something I am doing wrong, because for a long time I have been getting top 10 and top 20 scores in just about every benchmark on all of the systems I bench, in everything EXCEPT FOR Port Royal, and I don't know why. It is really frustrating, so I don't burn many calories on Port Royal.
     
    Last edited: Aug 27, 2021
    Rage Set and Papusan like this.
  44. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
    That is also a great price. It's good having the feedback as well, so thank you. Your wife confiscating it... enough said, LOL.

I don't use programmable macro keys like those on the left side, so I generally avoid buying keyboards that have them for that reason alone. Can you disable that sensor so that it does not turn off the lighting when you are not using the keyboard? I know some people love that kind of feature, and it is cool they offer it for those that do. I am just a weirdo when it comes to dynamic behavior (hate it no matter what it involves) and it would drive me nuts having it change in any way from where I had it set. I want everything static (CPU clocks, c-states, p-states and voltage, lighting, everything) and never want anything to change from where I set it manually.

I wish more products were offered with the option of ONLY white LED lighting rather than RGB. It would be nice to save a few bucks on the cost and never have to set anything, to have it be permanently white LED with no form of animation, exactly the way I want it. But it seems that RGB and animated lighting effects are becoming the only option that remains in today's world.

As I spend more time on this, I am remembering some of the struggles I initially had with the 7960X and 7980XE. It was a long time ago now. While it sucks that I can't delid or run bare die, I remember how horrible my thermals were with both of those HEDT CPUs when trying to use normal thermal paste. It was hard to tell, because the temps would go nuts faster than the polling of the sensors, and it would shut down faster than the sensors could reflect an overheating condition. Now that I have spent more time analyzing this, it is very similar to those Intel HEDT struggles, so I'm probably going to have to go liquid metal. I just wish I could delid this and run it bare die. But if past experience holds true, I'll see at least a 10-15°C drop moving to liquid metal. I was hoping to keep the IHS looking brand new, but I'd rather have the overclocking headroom.

I am also seriously considering accepting the dust problems of living in the AZ blast furnace and going back to the Praxis WetBench. The Corsair 5000D is a super nice case, but it's just not a good fit for my use scenario. It's fine on my work system: set it and forget it. But the open-bench approach is just way better, and the horizontal mobo with vertical GPU is also better. In addition to that, the placement of the overclocking buttons and switches on the bottom edge of the Crosshair VIII mobo is inexcusable stupidity on ASUS's part. They are not accessible where they placed them, so they may as well not exist if you use a conventional case. That they did not place them where they should be (top and right edge) reflects an unexplainable and unforgivable lack of understanding of what they are selling... I am more convinced than I ever have been that they're just a cluster of idiots working at ASUS.
     
    Last edited: Aug 27, 2021
    Ashtrix and Papusan like this.
  45. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,109
    Likes Received:
    3,946
    Trophy Points:
    331
LOL, yeah, I basically said, "Hey, I'm trying out this new EVGA keyboard and mouse (X17)," and she said, "Let me see how it feels." She typed for a few minutes, gave me that look, and... well... I then proceeded to log back into EVGA with my old keyboard and order another.

    We really like the Z20 keyboards. I don't use the macro keys much either, but I really do enjoy the overall typing experience and feel.

    You can turn off the TOF (as they call it) in the UnleashRGB (!) software or set it to perform certain functions:

    upload_2021-8-27_13-50-34.png
     
    Ashtrix, Mr. Fox and Papusan like this.
  46. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,655
    Trophy Points:
    931
I suppose I will find out soon enough for myself, but if you know... can you do anything in terms of LED configuration without their software installed? Or use the software to program the settings to the keyboard's onboard memory and then disable or uninstall the software? Before it got worn out and thrown away, that is how I managed the Corsair K95 keyboard that served me well for years. Once I had the LED lighting programmed into the onboard memory profiles, I could remove the software I no longer had any need for. The cheap HyperX membrane keyboard I have doesn't even offer any Windows software; it is 100% controlled internally by the keyboard. It is a pretty mediocre product, but I love that about it.
     
    Ashtrix and Papusan like this.
  47. Khenglish

    Khenglish Notebook Deity

    Reputations:
    799
    Messages:
    1,127
    Likes Received:
    979
    Trophy Points:
    131
The core would be electrically compatible, but the on-die controller would shut it down when it sees a mismatch between the core ID and the vBIOS ID, and if the vBIOS ID were changed, the controller would reject the vBIOS because the signature would no longer be valid. This is how it has all worked since Pascal.
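
In rough pseudo-Python, the catch-22 looks like this (a conceptual sketch only; every name and the toy signature check are stand-ins, since the real on-die logic and signing scheme are proprietary):

```python
import hashlib
import hmac

# Hypothetical stand-in for a verification key baked into the die.
FUSED_KEY = b"stand-in key, not the real scheme"

def signature_valid(image: bytes, signature: bytes) -> bool:
    # Toy check: real hardware verifies a vendor signature,
    # not a shared-secret HMAC like this.
    expected = hmac.new(FUSED_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def controller_accepts(die_id: bytes, vbios_id: bytes,
                       image: bytes, signature: bytes) -> bool:
    # Edit the vBIOS (e.g. change its ID) and the signature breaks.
    if not signature_valid(image, signature):
        return False
    # Keep the signature intact and a wrong-die ID still mismatches,
    # so the controller shuts the card down either way.
    return vbios_id == die_id
```

Either path fails: modify the ID and the signature check rejects the image; leave it alone and the ID comparison rejects the die.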
     
  48. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
Hi, thanks - both of you. And thanks for the videos, bro @Ashtrix. Please post your feedback when you get it.

I need to find other ways to land such a deal, and it has to be delivered to Norway. Oh my God, I hate Norwegian rules, shop rules and the Norwegian government's greed... https://www.toll.no/en/online-shopping/

And thanks, bro @electrosoft, for the feedback on the Z20 KB. Sometimes I feel very small and can't do much of what I want... such as buying or selling what I want. The greed is everywhere... Look at this beauty. Even a limit of 40 US dollars would be too much; the limit is set at $0.00 :(

The Norwegian government has decided to abolish the NOK 350 limit for duty-free allowance. This will take place in two stages: 1 January 2020 and 1 April 2020. From 1 January 2020 the NOK 350 limit was removed for all foodstuffs, drinks and excise duty goods or goods that are subject to import restrictions. This means that you have to pay VAT and any customs and excise duty on all imports of such goods.

Even Russia has better deals for its residents.
     
    Last edited: Aug 27, 2021
  49. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,109
    Likes Received:
    3,946
    Trophy Points:
    331
That's exactly what I do/did. I won't use items that require you to keep their bloatware installed (yes, even EVGA's) once they're set up and configured the way I want. You can set and forget the EVGA items and uninstall the software. I turned off TOF and set my color scheme months ago. There is a new firmware waiting, so you will want to install the software at least once. I only reinstall briefly if there is a meaningful update, and even the firmware updates didn't reset or erase my settings. I like that every EVGA device I've used so far retains its settings, so I can remove the software afterward.

One reason I despise Corsair is that, so far, one of their AIOs and their memory RGB kit won't work without their software on each cold boot, so I wrote a script to run it, let it enable everything, then exit out. There is no reason for that software to stay resident and be intrusive.
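
For anyone curious, the idea is just this (a minimal sketch under my own assumptions; the iCUE install path and the settle time are guesses, and my actual script differs in the details):

```python
import subprocess
import time

# Launch the Corsair software long enough to initialize the AIO and
# RGB kit after a cold boot, then kill it. Path and delay are assumed.
ICUE = r"C:\Program Files\Corsair\CORSAIR iCUE 4 Software\iCUE.exe"

proc = subprocess.Popen([ICUE])
time.sleep(30)       # give it time to apply pump/lighting settings
proc.terminate()     # then get it back out of memory
```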
     
    Mr. Fox, Papusan and Ashtrix like this.
  50. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,649
    Trophy Points:
    931
Some fun: I found one of the old pics of a water-cooled setup I had with, I think, a Northwood (maybe around 2000-2005). This was before the HW went into my phase-change cooler setup :)
    [​IMG]
     
← Previous pageNext page →