Just read that NVIDIA is bringing back overclocking on the GTX cards. Thanks for the update, Santander!!
-
-
Meaker@Sager Company Representative
Brute force from repeated run attempts? Higher mem clocks too?
-
Meaker@Sager Company Representative
Ah, misread it as 1475MHz. There is your answer: lower mem clocks give the core more power room.
-
Yes, lowered memory clock, lowered voltage and ran it on AC cooling. Bumping the memory back to 1400 with over 1400 on core absolutely doesn't fly with my GPUs. Giving them more voltage just makes the screen go black sooner. I can only run 1400+ on memory if I back off on the core clock. I have not tried lowering the power target below 100% yet. That might work.
I am focusing more on my CPU overclock at the moment. Even with Liquid Ultra it seems to run way too hot compared to my Alienware 18 or M18xR2, even running on AC cooling. I am going to check the contact between the CPU and heat sink with pressure paper to see what's up with that. -
Whenever CPU OC gets talked about, I'm always in. What's your clock and temp under full load with full fan, without AC, and with Liquid Ultra? I am tempted to get one of these and solder a mini PCIe connector to get that mSATA working lol
-
OK, interesting. So, when I took this bad boy apart to find out what the deal is, I saw that none of my brush strokes on the die or the copper plate are disturbed to speak of. Normally after cinching things together with Liquid Ultra everything smooths out and you cannot see those brush marks any more. So, I clean everything up and grab my pressure-sensitive paper and find I cannot get any impression from the contact. None! So, I remove the thermal pads and still nothing. So, contact is either minimal or none.
I slapped some IC Diamond on there, a huge glob (about 2 or 3 peas' worth) right in the middle of the die, buttoned it up, and now the temps are "fair" (as good as I can expect for it not being Liquid Ultra). The first time I pasted this monster with Liquid Ultra I used a whole tube and applied a huge puddle of the stuff in the middle of the die, and temps were good. The last time, I applied a thin layer to both surfaces as I normally do with Liquid Ultra, and it appears the two halves perhaps never came together.
I am going to take it apart again and see how much of the IC Diamond spread out. I put enough that it should have made a gigantic mess if there is good contact. Will report back shortly. -
Mr. Fox, you can use a speed square or other triangle to check the concavity of the copper plate. My first one was pretty concave and not perfectly flat, thus Liquid Ultra did not work on it very well. The newer sink that I purchased was dead nuts flat... FYI
-
I may have to try another heat sink. Since this is still under warranty, I may see if Eurocom will send one to me. When I say no imprint, I mean literally none at all. Not light around the edges or light in the center, as one would expect if the copper plate were not flat. It was as though the heat sink was not making direct contact with the CPU. So, with your new heat sink, Liquid Ultra works amazingly, as it normally does?
After tearing it down again (since my last post), the giant glob of IC Diamond had spread out nicely, but there was not the mess I expected to find from a ton of extra paste, which leads me to believe there is an inordinately large gap that needs to be bridged. Man, that sucker was stuck super hard this time, LOL. For a minute I was wondering if I was going to get it off without the plastic strap handles snapping off.
So, this time I tried using a big drop of Liquid Ultra instead of brushing it on. Will see how that goes. Has anyone tried their Liquid Copper yet?
-
Mr. Fox, yes, Liquid Ultra works great on my machine. I would also be leery of your last couple of tries. I have had several runs where I did not get the "seat" of the CPU heat sink correct, with similar issues to yours; I find I have to wiggle the entire unit a little in order for it to fall into place perfectly. You may have a bad heat sink, but I would check your assembly method several times first. I have done this with a fair amount of experience and still not had it seat correctly, so that leads me to think you might be having the same issue. Even the first heat sink, the one I said was concave, still would not seat perfectly every time.
-
My machine came with IC Diamond, which was very good; I just decided to try some different ones when I did the first hydro-mod. I tried Antec 7, Arctic Silver 7, JunPus D9000, and Liquid Ultra. Liquid Ultra does keep the idle temps the lowest, but the JunPus D9000 also did an excellent job; maybe Liquid Ultra beats it by a delta of about 2°C.
-
Meaker@Sager Company Representative
My contact pattern is above. Almost the opposite of yours, as most stays on the core. -
Your memory is fine. You should never need to increase that memory clock, to be honest, at least not as far as games go. I know Unigine Valley loves memory clock OCs, but I am unsure about any other benchmark.
160GB/s is perfectly fine for most games I've ever seen. 192GB/s IS enough for anything I've ever tossed at my GPUs.
SLI doubles bandwidth, so you have 320GB/s at stock 5000MHz clocks once SLI is turned on (yes, even if you force a single GPU; it still copies memory to the slave in that mode). Also, you have Maxwell's memory optimizations, which showed up in the CUDA benchmark during that 970 vRAM issue. 780Ms at 5000MHz were distinctly slower than 980Ms at 5000MHz, so I'm certain you don't need to worry about memory so much, and you can focus on your core as far as your OC runs go. Who knows? Maybe it will get you over 1500.
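For reference, here's the quick math behind those bandwidth figures (a rough sketch in Python; the 256-bit bus width is the 980M's published spec, not something stated in this thread):
[CODE]
# Peak GDDR5 bandwidth: bytes per transfer x effective transfer rate.
# Assumes a 256-bit bus (980M spec); "5000MHz" is the effective data rate.

def gddr5_bandwidth_gbs(effective_mhz, bus_bits=256):
    bytes_per_transfer = bus_bits / 8            # 256-bit bus -> 32 bytes
    return bytes_per_transfer * effective_mhz * 1e6 / 1e9

print(gddr5_bandwidth_gbs(5000))   # 160.0 GB/s at stock
print(gddr5_bandwidth_gbs(6000))   # 192.0 GB/s overclocked
[/CODE]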
-
Meaker@Sager Company Representative
SLI does not double bandwidth; there are overheads (along with both cards accessing the same data more than once), and write bandwidth remains the same.
-
@Meaker - Interesting that the thermal paste contact patterns are the opposite. I wonder if it was because my paste was on for less than a couple of hours, just for testing? Paste always seems to stick more aggressively to the copper than to the CPU, probably due to the porosity of copper compared to whatever the die and/or IHS are made of. Also interesting is how difficult it was to get the heat sink off with the paste being so new... crazy how much force it took to lift the heat sink off with paste that had not cured yet.
After applying the tube of Liquid Ultra as a large glob in the center of the die, my temps are notably better than they were with IC Diamond, even though I did not apply it in the recommended manner (spreading it with a brush). That does suggest there is a low contact pressure issue with this heat sink. Will contact Eurocom tomorrow to see if they will send another heat sink. If not, I may buy one to see if that helps.
@D2 Ultima - most of what I do with my systems is focused on benchmark number-chasing, and games are merely an afterthought. But you are correct that memory (both vRAM and system RAM) speeds are generally not critical. Even for benchmarks, memory overclocks add minimal performance compared to core clocks. Every little bit helps when chasing numbers, but "little bit" is quite literal where memory speeds are concerned. Other than making YouTube videos to show off with, I almost never overclock my GPUs when playing games. It also bears mentioning that I game with the laptop in my lap on the internal display, at no more than 1080p or a custom 1440p resolution created with NVIDIA Control Panel. It has been about 8 years since I gamed sitting at a desk in front of a monitor. -
Have been tinkering with the Runtime Turbo settings and they actually work pretty nicely with XTU. Using them for the first time is a 3-step process: enable it in the BIOS first, save and exit; apply with XTU and reboot; then reapply. After that, they can be changed on the fly (without a reboot).
Attached are 4.3, 4.4, 4.5 and 4.6GHz profiles for anyone who is curious.
-
-
Interesting. So what DOES happen to memory bandwidth? Could you explain in greater detail? I was always fairly certain that SLI was indeed a boost to memory bandwidth, if not exactly 100%.
-
Mr. Fox, if there is warranty, why not use it? Your heat sink definitely has something wrong with it. You've already, in a sense, wasted two tubes of Liquid Ultra, so it might be worth trying to get a new heat sink from Eurocom.
-
Meaker@Sager Company Representative
Each card gets its full bandwidth to render its frame, so over two frames, any data that one card accesses for its frame that the other does not have to access is effectively increased bandwidth. It's a good chunk extra (and certainly still worth it), but not double. This is part of the reason SLI will not double your performance.
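To make that concrete, here is a toy model of the effect (the shared-data fraction is a made-up number purely for illustration, not a measurement):
[CODE]
# Toy AFR model: each card reads its frame's data at full speed, but data
# both frames need gets fetched once per card, so effective read bandwidth
# over two frames lands somewhere between 1x and 2x.

per_card_bw = 160.0     # GB/s per card (stock 980M figure from above)
shared_fraction = 0.4   # assumed fraction of data both GPUs must fetch

effective_bw = per_card_bw * (2 - shared_fraction)
print(f"{effective_bw:.0f} GB/s effective vs {2 * per_card_bw:.0f} GB/s naive doubling")
# -> 256 GB/s effective vs 320 GB/s naive doubling, with these assumptions
[/CODE]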
-
I see. And this load reduction doesn't show up in memory controller load in sensor programs, does it? It appears to be only beneficial to games, no?
-
Nice scores. Your 4960X appears to be the driving force. Quite a bit better than I am able to squeeze out of this 4930K. It seems to want too much voltage at 4.6GHz and doesn't want to run 4.7GHz at all. If the BIOS had more configuration options, it might do a little better. Maybe around Christmas I will grab a 4960X off of eBay if I can find one for a decent price. Before you and @johnksss went with the Extreme CPU, everyone was saying there was no advantage and the 4930K was a better overclocker. I had a hunch all along they were totally full of crap, and it's clear now that they were, LOL. The Extreme CPU has always been the best option for overclockers.
-
@Santander, very nice score there. You guys are making me nervous, getting pretty close to my 4.9GHz 1465 XTU mark! Hope you all break that barrier; then I can play follow the leader.
Also, just to mention, I had the best luck with PLL Override and Runtime Turbo enabled for my best runs.
Here you can see the light between the square and the old heat sink copper plate. Definitely concave! lol -
Almost makes a guy wonder if it wouldn't be better to buy a 3970X instead of a 4960X. I know my 3920XM is far more reliable and stable at overclocking than the 4930MX, and runs MUCH cooler on top of being more stable. Everything published absolutely says otherwise, but the same can be said about the 3920XM versus the 4930MX.
-
Well, other than Xeons, the 3970X is the only 150W CPU out there; even the newest ones are only 130W, I believe. A cool 150W six-core seems more powerful than a cool 130W six-core CPU. But as the architecture changes, power decreases, because heat becomes the greatest limiting factor. If you notice, on my 1465 XTU score no thermal throttling occurred; that means the actual power draw to the CPU is not maxed out, IMHO, and it is simply limited by the quality and temperature of the VRMs and MOSFETs, I believe. Sheer horsepower, as Meaker has said, relates directly to power/stability. Basically we have the same L3 cache and the same core count; the power that can be supplied to the cores is the only difference (other than some extra lanes, I believe).
-
Try setting your core current to around 293A; everything else looks good except your PLL Override and the Enable Runtime Turbo override. The key is to find a bootable core current, then increase turbo time and mV to keep that boot getting hit with turbo voltage. Even though it is underpowered, it is effectively like feathering the throttle on a motorbike or a snowmobile to keep a poor fuel mixture from killing the engine. Effectively, I used the turbo with an under-volted power package to adjust my air-fuel mixture by "feel".
@Mr. Fox I have no idea how you can boot with a core current of 350A. The highest I have ever been able to boot is 309A. This leads me to believe that your system is superior, that you may be able to supply the greatest amount of power/stability, and that you are capable of beating 4.9GHz. -
Well, the 48xx and 49xx chips are Ivy Bridge-E, so going to a 3970X would be going to Sandy Bridge-E... and even you would rather have Ivy Bridge XD
-
Nothing I have tried allows this 4930K to go above 4.6GHz. Maybe I haven't tried the right thing yet, but I am running out of ideas on what I can change that I haven't already tried. With the multis set to 47, starting from my stable settings with the multis at 46, going higher on voltage causes the clocks to go down, and going lower on voltage causes a 0x124 (low voltage) BSOD. Lowering core current below 320A also causes the clocks to go down. I'm thinking 4.5GHz is the sweet spot and 4.6GHz is the max stable OC for this particular 4930K. If I recall correctly, @johnksss said it was good for 4.5GHz stable in his Clevo, which is where I bought it. So I probably just need to be happy that it is better than most 4930K samples (4.5GHz stable, 4.6GHz too hot), and enjoy it for what it is worth until I can find the spare cash to upgrade to a 4960X, hoping it's a nicely binned one when I do.
-
Well, that is what I am saying: lower your core current, increase your turbo mV, and decrease your turbo wattage in increments of 2. Often, decreasing the time setting will enable you to clock higher and more stably... but that's just my experience. Possibly you need more core current in order to run both 980Ms in SLI? I don't understand how your K-series at 130W would allow more current than an X-series that is 150W.
good luck -
Btw, if you guys are still looking at squeezing performance out of your Maxwell cards, this might be useful.
On my desktop 970s there's very roughly a 10:1 benefit ratio between boosting core vs memory. By this I mean every 100MHz increase in memory frequency is roughly equivalent to +10MHz on the core.
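In other words (the 10:1 figure is the rough observation above, not a universal constant):
[CODE]
# Rule-of-thumb conversion: +100MHz memory ~= +10MHz core on these cards.

core_equiv_per_mem_mhz = 1 / 10

for mem_oc in (100, 300, 500):
    print(f"+{mem_oc}MHz memory ~= +{mem_oc * core_equiv_per_mem_mhz:.0f}MHz core")
# +100MHz memory ~= +10MHz core, and so on
[/CODE]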
Unless you're gaming at high resolutions on an external LCD, I'd prioritize core over memory. -
There has always been a minimal performance benefit to GPU memory overclocking. This is largely true of system memory as well. That said, sometimes just a few points is the difference that changes your benchmark ranking. If all you're doing is playing games, both are a waste of time, and where system RAM upgrades are concerned, for gaming it is a waste of money, too.
-
I meant to convey that if you had to choose between squeezing out more core speed at the cost of lowering memory a notch, or the other way around, squeezing the core might be more fruitful.
-
Sandy Bridge at 32nm is easier to OC than Ivy Bridge at 22nm, and as the process gets smaller it'll be harder and harder to find golden chips. I'm still waiting for mine from a guy I know; it's sitting in his desktop. He's got 5.5GHz at lower than 70°C with a custom mini AC inside his water-cooled desktop lol. But with the P570WM I'm getting, the mobo may be junk and render the CPU useless.
-
The only problem with Extreme Edition CPUs in this system is cooling, as they have a much higher default voltage, and while we can overvolt every chip, we can't undervolt any.
What you would ideally want to look for is a cherry-picked Extreme chip that already stays exceptionally cool on default voltage. -
Yup, that's right, and speed generally trumps efficiency with Intel CPUs. With AMD, not so much... they are lame no matter what speed they are running. All the banter about clock-for-clock efficiency of the newer Intel chips is mostly marketing hype to sweep the trash under the rug. An "efficient" chip that can only run 4.3 or 4.5GHz is going to get spanked by one that runs 4.8 or 5.0GHz stable; Haswell is a great example of this. Putting the memory controller on the chip was also a boneheaded move. Efficiency is a "nice to have," but it is never going to make up for brute force, and when it gets in the way of overclocking it is no longer a "nice to have" feature.
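A quick sketch of the arithmetic behind that claim (the IPC uplift figure is an assumption for illustration only, not a benchmark result):
[CODE]
# Rough performance = IPC x clock. A modest clock-for-clock gain loses to
# a bigger clock deficit.

def rel_perf(ipc, ghz):
    return ipc * ghz

older_high_clock = rel_perf(ipc=1.00, ghz=4.9)   # older chip, better OC
newer_efficient  = rel_perf(ipc=1.07, ghz=4.4)   # assumed ~7% IPC uplift

print(older_high_clock, newer_efficient)  # 4.9 vs ~4.71 -- the clocks win
[/CODE]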
I don't know that I would say the BIOS is "junk" on this machine, but it is definitely missing some important overclocking features, including CPU input voltage, settings for static versus adaptive voltage, C-state controls (beyond the Mickey Mouse options it has for this), cache ratio and voltage, base clock, and CPU thermal configuration options. I am disappointed that all of those BIOS basics are nowhere to be found. I know Prema unlocked everything there was available to unlock, but those are apparently really missing and not hidden. -
I looked back a few pages and all the XTU screenshots are really threadbare indeed. For example, this is what mine looks like (not even using a Prema BIOS):
-
Meaker@Sager Company Representative
That's because Haswell has some of the voltage circuitry built into the chip, so there are extra controls.
-
Yes, this may be true, but his Alienware BIOS looks mostly like this (with the greyed-out controls I have unlocked), and that's using Ivy Bridge, so I don't get why it's an issue here.
-
XTU makes many CPU control settings accessible even when they are hidden in the BIOS menus. The fact that they are missing in XTU indicates there is nothing available for Prema to "unlock" for us. There can be other important settings, not directly related to the CPU and overclocking, that an "unlocked" BIOS is useful for. The Alienware 18 has all of these same settings available in XTU (identical, in fact) with no "unlocked" BIOS. The reason Alienware 18 owners have been complaining about not having an "unlocked" BIOS is the other extremely important stuff Alienware has blocked us from accessing, such as VGA configuration (PEG, SG, Auto, IGFX controls), CPU thermal configuration, SATA port configuration, ACPI controls, cTDP configuration, etc. The lack of access has made using a 970M or 980M in the Alienware 18 extremely difficult, and vBIOS flashing impossible without a UEFI version of NVFLASH.
For the P570WM, static versus adaptive CPU voltage control, CPU input voltage, VDROOP, CPU cache ratio and voltage, and CPU thermal configuration options would add a great deal of value and make overclocking easier and more stable.
-----
Alienware 18 - stock BIOS XTU access... it appears it actually has a few features that the Clevo has greyed out.
-
Meaker@Sager Company Representative
vMem clocks become important when going beyond 1080p and exceeding 60fps. -
Does SSAA/downsampling stress memory as much as actual higher resolutions do? If so, at stock mem clocks I've not seen a game pass 60% memory controller load (while in SLI, at least), even BF4 at the ultra preset with 200% resolution scale at a 1080p base res.
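For what it's worth, 200% resolution scale at 1080p should be rendering a 4K-sized buffer, assuming the scale applies per axis as it does in BF4:
[CODE]
# 200% per-axis resolution scale on a 1080p base = a 3840x2160 internal
# render target, i.e. four times the pixels of native 1080p.

base_w, base_h, scale = 1920, 1080, 2.0
render_w, render_h = int(base_w * scale), int(base_h * scale)
print(render_w, render_h, (render_w * render_h) / (base_w * base_h))
# -> 3840 2160 4.0
[/CODE]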
-
Meaker@Sager Company Representative
Memory controller load is not important; what matters is the moment-to-moment delaying of processing while waiting on data. The average over an entire second (1.27 billion cycles) does not give you much information.
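A toy illustration of why the one-second average is misleading (made-up numbers, not sensor data):
[CODE]
# Two workloads with the SAME average memory-controller load over a
# second, but very different stall behavior.

steady = [0.60] * 10              # 60% load in every slice, never saturated
bursty = [1.00] * 6 + [0.0] * 4   # pegged 60% of the time, idle the rest

for name, trace in (("steady", steady), ("bursty", bursty)):
    avg = sum(trace) / len(trace)
    saturated = sum(1 for s in trace if s >= 1.0) / len(trace)
    print(f"{name}: avg load {avg:.0%}, time bandwidth-bound {saturated:.0%}")
# Both report 60% average, but only the bursty one ever stalls the GPU.
[/CODE]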
-
OK, this is more interesting. That big (entire syringe) drop of Liquid Ultra that I showed in the photo the other day hardly spread at all. See photos below. What is interesting is that the temps with that were better than my temps now with IC Diamond, by about 3-5°C under sustained full load. That further confirms why the brush marks in the paste from my last Liquid Ultra application were never disturbed and why I could not lift a contact impression with the pressure paper test. I think it is clear from all of this that there is definitely too wide a gap between the copper plate and the CPU die/IHS. Looking at what happened here, the gap appears to be substantial. To the best of my knowledge there should be NO AIR GAP between them. I wonder if the Xeon CPU heat sink is different? This machine came from the factory with a Xeon processor.
As a side note... the fact that the temps with Liquid Ultra in this sad situation were better than using enough IC Diamond to fill the gap is a pretty amazing testimony for Liquid Ultra, too.
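Some back-of-envelope conduction math on why a gap matters so much (all the numbers here are assumptions for illustration: ~100W through the interface, ~3cm² of contact, a 0.05mm gap):
[CODE]
# Temperature rise across an interface layer: dT = Q * t / (k * A).
# Assumed values -- not measurements from this machine.

Q = 100.0        # watts through the interface
A = 3e-4         # contact area in m^2 (~3 cm^2)
t = 0.05e-3      # gap thickness in m (0.05mm)

for name, k in (("air", 0.026), ("typical paste", 8.0), ("liquid metal", 38.0)):
    dT = Q * t / (k * A)
    print(f"{name:13s} k={k:5.1f} W/mK -> dT across gap ~{dT:6.1f} C")
# Unfilled air in the gap is catastrophic (hundreds of degrees of
# theoretical resistance), which is why contact is everything.
[/CODE]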
Edit: this dog-gone XenForo software really stinks. I am so sick of having images not display. If you run into that when attempting to view this post, here are the links to them...
http://forum.notebookreview.com/attachments/003-jpg.122001/
http://forum.notebookreview.com/attachments/0001-jpg.122082/
http://forum.notebookreview.com/attachments/0002-jpg.122083/
http://forum.notebookreview.com/attachments/test-jpg.122084/ -
So essentially there's no real way to tell how much memory bandwidth is "enough" for a game? How does one find out what "enough" exactly is, or when to increase it, etc., if sensors don't display any useful information?
-
I LOVE how your third core was all like "yeah man, screw 4.3GHz for this screenshot."
-
LOL, just the timing of the screenshot... There is no way to completely disable C-states with this BIOS, so had I waited a bit longer, all of them would have been at 1.2GHz at idle.
-
Meaker@Sager Company Representative
You have enough bandwidth when performance stops increasing from higher memory clocks (that, or your memory is starting to reach its limit).
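In practice that test is just a sweep: raise the memory clock in steps, rerun the same benchmark, and stop when the score flattens. A sketch (run_benchmark_at_mem_clock is a hypothetical stand-in for "set the clock in Afterburner, run your bench, record the score"; there is no real API being called here):
[CODE]
# Sweep memory clocks; report the point where scores stop improving by >1%.

def find_enough_bandwidth(clocks_mhz, run_benchmark_at_mem_clock, threshold=0.01):
    prev = None
    for clk in sorted(clocks_mhz):
        score = run_benchmark_at_mem_clock(clk)
        if prev is not None and (score - prev) / prev < threshold:
            return clk   # gains flattened: bandwidth is "enough" here
        prev = score
    return None  # still scaling at the top clock (or back off if artifacting)

# e.g. find_enough_bandwidth([5000, 5200, 5400, 5600, 5800], my_bench_run)
[/CODE]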
-
So then what's the big deal about huge memory bandwidth on cards like the Titan Black, etc.? We've already determined that architecture trumps memory bandwidth at higher resolutions, with the R9 290X smacking Titan Blacks senseless at 4K, and the 980 smacking both senseless at 4K despite having the least memory bandwidth (even accounting for Maxwell's memory optimizations). We've been stagnant on memory bandwidth in high-end cards for ages; the GTX 285, GTX 580, and GTX 680 were all in the same general ballpark. Only GK110 and above has really pushed the bandwidth envelope.
I'd have liked to see a clear bottleneck point for the most demanding games, like where you can clearly see a vRAM bottleneck on 2GB cards in games that need a whole lot more than 2GB, even though the GPU power remains the same (such as the 680 2GB vs. the 680 4GB in games like modded Skyrim, etc.).
Otherwise, everything we know and say about memory bandwidth in gaming is pointless, and all cards might as well come with the same bandwidth and just increased vRAM counts as necessary. -
Meaker@Sager Company Representative
4K speeds can depend on other factors, not just raw memory bandwidth; cache sizes are important, and having an efficient linking mechanism in CrossFire/SLI setups is massively important. The point-to-point CrossFire engine going over the PCI Express bus helps a LOT with 4K scaling.
-
That might make sense, but I was strictly speaking single-GPU in this case. Cache sizes are the only viable thing you've listed there. I really am trying to understand more about the whole ordeal, mainly so I can learn, and secondarily so I can go fix my vRAM guide's information.
-
I read somewhere that because of GDDR5's error detection and retry mechanism, if you push the speeds too far your performance can actually decrease due to that mechanism kicking in?
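That would be GDDR5's CRC-based error detection with retransmission. A toy model of the effect (the error-rate curve is invented for illustration; real behavior varies card to card):
[CODE]
# GDDR5 detects transmission errors via CRC and retries, so effective
# throughput = set clock * (1 - retry fraction). Past some point the
# retries grow faster than the clock, and the effective curve bends down.

def retry_fraction(mhz, knee=5800.0):
    # assume negligible retries below the knee, rising quickly above it
    return 0.0 if mhz <= knee else min(0.5, ((mhz - knee) / 400.0) ** 2 * 0.2)

for mhz in (5400, 5800, 6000, 6200, 6400):
    eff = mhz * (1 - retry_fraction(mhz))
    print(f"{mhz}MHz set -> ~{eff:.0f}MHz effective")
# The printed curve peaks and then falls: past the knee, clocking higher
# starts costing real speed.
[/CODE]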