So my repaste job seemed to help my GPUs. Ran the FFXIV Stormblood benchmark at 4K uncapped and the GPUs peaked at 78C. Better than bouncing off the thermal limit at 84C.
Sent from my iPhone using Tapatalk
-
-
As Fox said. In fact, your stock vBIOS throttled so much that it was never even holding stock clocks, much less the overclock. NVIDIA likes to give users the impression that they can overclock, when in reality the card doesn't hold it for more than a few seconds before dropping.
Now with the Prema mod it'll hold its clocks no matter what, unless it gets too hot.
So your stock benches will already be higher than your previous overclocked ones, even though the mod is running stock clocks.
So now you have to find your GPU's 'real' overclock values. Maxwell also had this quirk where you have to drop the vRAM clock back the higher you go on the core.
So on that gen, always find the core limit first, and only then add the vRAM OC.
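The tuning order Prema describes can be sketched as a toy search loop. This is purely illustrative Python: `is_stable` stands in for a real benchmark pass (e.g. a Heaven run), offsets would really be applied through a tool like Afterburner or Inspector, and the step sizes and caps are made-up values, not recommendations.

```python
# Illustrative sketch of the Maxwell tuning order described above:
# find the maximum stable core offset FIRST, and only then add the
# vRAM offset on top of it. is_stable(core_mhz, mem_mhz) stands in
# for an actual benchmark run. All numbers here are hypothetical.

def find_overclock(is_stable, core_step=25, mem_step=100,
                   core_cap=400, mem_cap=1000):
    # Phase 1: push the core alone until the next step fails.
    core = 0
    while core + core_step <= core_cap and is_stable(core + core_step, 0):
        core += core_step

    # Phase 2: with the core offset fixed, add vRAM clock.
    # On Maxwell the memory headroom shrinks as the core offset grows,
    # which is exactly why this order matters.
    mem = 0
    while mem + mem_step <= mem_cap and is_stable(core, mem + mem_step):
        mem += mem_step

    return core, mem
```

With a fake stability function that fails above +150 core or +300 memory, this walks up to (150, 300) and stops.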
ENJOY!
Nice!
What kind of voltage does 5.3GHz require? -
Spartan@HIDevolution Company Representative
Do you have a Prema VBIOS for my GTX 1080 on my MSI GT73VR Titan Pro laptop (G-SYNC)? -
Your system's own low-level firmware throttling would get worse when you run a vBIOS with a higher TDP, making it counterproductive.
On the Clevos I always start by removing that, otherwise an unlocked (v)BIOS makes no difference... Sadly it sometimes takes months to do, and as time-consuming as those things are, I can't dive into it right now.
Last edited: Nov 19, 2017 -
Too much, LOL. Or at least it seems that way. There are probably other settings I need to raise that I don't know about yet.
That validation required about 1.480V. I was able to run the 3DMark 11 Physics Test and got almost 20K points (62 FPS), but even at 1.500V it ends in a BSOD in Cinebench or wPrime. I'll have to go through the same tedious process of hunting for settings that I did to nail 5.2GHz on the 7700K. What's weird is that 5.2GHz is pretty easy with this 8700K, but the extra 100MHz is the threshold where it gets much harder.
TBoneSan, Prema and Stress Tech like this. -
Falkentyne Notebook Prophet
You would have to hardware mod the 1080 yourself with the SPI programmer, 1.8v adapter and the Pascal editor. The system can NOT throttle the video card at all; it's unknown what would happen if you exceed 330W total power draw, if it functions like exceeding 230W on the GTX 1070 model (without changing the powerID to 11 or 91 for 330W), it would most likely throttle the CPU to 45W, and if 330W is still exceeded, 25W, but this has only been tested at the 230W (GTX 1070) power cap. No one has tested the 1080 on the GT73VR as far as I know. The 25W CPU reduced TDP can be "circumvented" by using an invalid power ID value in EC RAM register E3 (e.g. anything that is NOT 10-1F, or 90-9F); this will enforce 45W TDP at all times (the MSI 16L13 seems to not even have a recognized power ID built in, thus it is limited to 91-95W (6700K/7700K TDP)).
However, it is still unknown how much power the MSI MXM slot + power connector can deliver safely (the MXM slot itself is good up to 195W, e.g. with a modded 1070 @ 195W). Modding a 1070 is simple because 1070 users can just change the power ID to 330W (EC RAM register E3: 11 (Skylake) or 91 (Kaby Lake) = 330W), then buy the 330W Delta and you're good to go.
The problem with the 1080 is there is no power ID higher than 330W, except the SLI configuration, but the 1070 SLI on the GT73VR still uses the 230W power ID (10 or 90), but this gets "doubled" somehow to 460W. I have been completely unsuccessful at finding how to double it in EC RAM; it may require the physical presence of the second card. There is no "1080 SLI" configuration on this mainboard (that's on the GT83VR only, which has a different mainboard, bios and EC firmware), but I have been completely unable to activate the 460W "SLI" power limit on the single 1070 at all, only the 330W one which you already use by default.
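A rough restatement of the E3 power-ID behavior described above, as an illustrative lookup. The classifications come straight from this post's findings; the function is in no way a real EC-programming API, just a summary.

```python
# Summary of GT73VR EC RAM register E3 power IDs as described above.
# NOT an EC programming tool - just this post's findings restated.

def classify_power_id(e3: int) -> str:
    """Classify an E3 power-ID byte per the observed behavior."""
    if e3 in (0x10, 0x90):                       # Skylake / Kaby Lake
        return ("230W cap (GTX 1070 default; 1070 SLI somehow "
                "doubles this to 460W)")
    if e3 in (0x11, 0x91):
        return "330W cap (modded 1070, or 1080 default)"
    if 0x10 <= e3 <= 0x1F or 0x90 <= e3 <= 0x9F:
        return "recognized ID, exact cap untested here"
    # Anything outside 10-1F / 90-9F is treated as invalid, which as
    # a side effect enforces the 45W CPU TDP at all times (skipping
    # the further drop to 25W).
    return "invalid ID: CPU held at 45W TDP at all times"
```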
You're perfectly free to try hardware modding your 1080 to 250W TDP and then seeing if your CPU gets PL2 throttled when you exceed 330W of total system power, but don't expect a positive outcome. I tried to exceed 330W with the 195W 1070; the most I was able to draw was 350W from the wall (315W to the system), by running AVX prime + Valley at the same time.
One thing to keep in mind is that there are 256 possible values of EC RAM register E3 (00 through FF). I haven't tested every single one, so it is *possible* that MSI has a custom value that allows unlimited system power draw, but I haven't found it. I tested 00-0F, 20-2F and A0-BF with no success, so it may not exist. Then again, it may. When I get bored I'll try testing some more values. -
Hmm... The interesting thing is that my score in Heaven with the stock vBIOS is the same as with the flashed vBIOS. With both at stock clocks the scores matched, so I don't think stock was throttling. How come? Second thing: my original overclock on the stock vBIOS was 1173 core and 5800 effective memory. My new overclock on the modded vBIOS so far is 1438 core and 5000 effective memory. The two gave me the same score in Unigine Heaven. How come? The second overclock should give much better performance. Also, as a side note, is it safe to run the 970M at 1.2V 24/7 if temps are always under 80?
-
Good to see, for once, a 'real notebook' brand being advertised in a TV show:
Compare a GPU bound benchmark like TimeSpy or FireStrike, then you'll see the difference between throttle and holding clocks.
Yes, Maxwell can be safely run into the low 90s.
Last edited: Nov 20, 2017
D2 Ultima, TBoneSan, cj_miranda23 and 3 others like this. -
yrekabakery Notebook Virtuoso
Rami Malek! Been a fan of his since The Pacific!
Sorry for O/T.
Papusan, Stress Tech and Prema like this. -
Meaker@Sager Company Representative
If the clocks are the same, performance is the same. Test it out in many places.
Stress Tech likes this. -
-
Alright. Is there any way to bypass the higher-core-means-lower-memory thing on Maxwell? I have seen people with high OCs on both. If not, should I just prioritize the core clock?
Sent from my Moto G (5) Plus using Tapatalk -
yrekabakery Notebook Virtuoso
IME there isn't. You just have to find the optimal balance between core and memory clocks that is stable and gives you the highest scores. -
Just to keep it OT here:
Anyone started Futureman (on Hulu)?
I don't think I've had such a big grin on my face since the original BTTF trilogy back in the day. Love the retro style.
yrekabakery likes this. -
Dang it, don't think I can dodge that Hulu trial anymore.
The previews for it looked super cool though.
Prema likes this. -
Hey another question. Do you by any chance know why Maxwell is like that? Is it for all Maxwell cards or just mobile? Also, I haven't seen this mentioned anywhere else. How come?
Sent from my Moto G (5) Plus using Tapatalk -
We did a lot of analysis back when Maxwell was new, and many of us posted these findings and advised users to find the core limit first and only then the vRAM limit. It may be that with MXM/BGA power designs, vRAM instability increases with higher power draw by the die.
Last edited: Nov 22, 2017
Ashtrix, Stress Tech and temp00876 like this.
-
I didn't know Dell got its own specially designed unlocked Core i7 BGA versions from Intel for use in their notebook Jokebooks. Damn, nice... a 7820HK 'Extreme Edition' with Intel® Turbo Boost Max Technology 3.0. If only they could get rid of the TRIPOD. Yeah, Dell is very good at advertising, but the execution is as usual... more like a scam.
And I didn't know Core i5 processors came with Intel® Turbo Boost Max Technology 3.0. I really need to step up and read up more on newer tech.
Maybe someone still remembers... Dell advertised the new AW18 "Limited Edition" with an i7-4790K? This just keeps getting better and better. Where will it end?
Last edited: Nov 22, 2017
Ashtrix, temp00876 and Stress Tech like this. -
Is Fire Strike harder than Heaven? I can run the core at 1438 in Heaven no problem, but I can't even hold 1138 in the Fire Strike demo. It's starting to look like I lost the silicon lottery. Also, does it matter if I have Inspector open while stress testing?
Sent from my Moto G (5) Plus using Tapatalk
Last edited: Nov 22, 2017 -
Meaker@Sager Company Representative
-
Yeah, 4.4GHz, most probably on a single core... That is a joke (a bad one), not an OC...
Papusan likes this.
-
Good times. I still remember my 970M pushed to +300MHz on the core (for me, 85C was the max, because anything around 90 resulted in problems). For Pascal 10XX magic we'll need to wait longer, I suppose.
PS: Sorry for the double post.
Prema likes this. -
Meaker@Sager Company Representative
Pascal has a lot less headroom left in it than Maxwell had, to be fair.
-
So currently I can run 1488/1525 at 1.2V in Heaven, games, whatever, with no instability at all. However, in Fire Strike, even 1138/1250 at 1.2V crashes within 30 seconds. Normally I would pass it off as just losing the silicon lottery, but everything else tells me otherwise. Does Fire Strike just hate me?
Sent from my Moto G (5) Plus using Tapatalk -
yrekabakery Notebook Virtuoso
What notebook is this, the P650SE?
Have you measured draw at the wall using a Kill-a-watt meter, or checked GPU power in HWiNFO64, when the 970M is overclocked?
How many watts is your AC adapter? -
Well I can't quite get 35K GPU score in 3dm11
https://www.3dmark.com/3dm11/12506127
Fire Strike unfortunately does not run well on Win7, and I can only get 25180:
https://www.3dmark.com/3dm/23484034
In 3DM11 I just need a better 1080. The CPU is slightly limiting in Test 1, but only slightly. With a max temp of 53C and added caps I can only do 2100 core. The caps did end up adding a boost bin. With setups equivalent to johnksss's and Mr.Fox's, I'd only have been able to do 2075, and they got 2113. -
yrekabakery Notebook Virtuoso
Just curious if you or any of the other overclocking experts here have some input on this:
Does setting power management mode in the Nvidia driver to 'prefer maximum performance' improve overclock stability? -
I've found it reduces crashes on clock transitions. If I have it set to Optimal, I'll often get crashes when the card switches from 2D to 3D clocks. Once the clocks are set, though, there is zero difference.
Ashtrix, Papusan and yrekabakery like this.
-
yrekabakery Notebook Virtuoso
Yes this is exactly what I have also noticed. -
Happy Thanksgiving @ Everyone!
Ashtrix, cj_miranda23, ssj92 and 11 others like this. -
Meaker@Sager Company Representative
Yes to @ Everyone and you too Prema. -
It is a P650SE-A. I don't have access to a Kill-a-watt meter, but I'll try to check the HWiNFO data. I'm using the 180W adapter. If I'm understanding correctly, you believe I'm running into power issues. Why would that hit Fire Strike only? Is it because Fire Strike loads the CPU and GPU intensively at the same time? I've undervolted the CPU (5700HQ) by -80mV, if that's useful.
Happy Thanksgiving @ Everyone
Sent from my Moto G (5) Plus using Tapatalk -
Fire Strike only? You could of course test with 3DMark 11 and see if the same thing happens.
Happy Thanksgiving to all of you
Ashtrix, clayton006, D2 Ultima and 3 others like this. -
yrekabakery Notebook Virtuoso
Yes, because Fire Strike loads both the CPU and GPU. The Prema mod removed your GPU throttle, and you've overclocked it pretty well. An unthrottled Maxwell GPU can draw a lot of power when overclocked.
A Kill-a-watt meter is a useful tool to have. It told me that the default 180W PSU which came with my P650SG was insufficient, because I was seeing 220W pulled from the wall in 3DMark 11 Graphics Test 1 on stock settings once all the firmware throttle had been removed. Assuming 87% PSU efficiency, that's about 190W of actual system usage. With the non-stock settings in my sig, I'm pulling 245W from the wall in 3DMark 11 GT1 after upgrading to a 230W PSU.
You have a Broadwell CPU and 970M, which use less power than my Haswell and 980M. But it would still be good to rule out any PSU issues.
To further test, you could run 3DMark 11 like @Papusan mentioned, which draws more power in GT1 than at any point in Fire Strike.
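The wall-draw arithmetic above generalizes to a quick sanity check. Note the ~87% efficiency is the assumption from this post, not a measured spec for any particular adapter, and the helper names are purely illustrative:

```python
# Quick PSU sanity check based on the Kill-a-watt math above.
# Efficiency (~87% here) is an assumed figure, not a measured spec.

def system_watts(wall_watts: float, efficiency: float = 0.87) -> float:
    """Estimate actual system draw from a wall-meter reading."""
    return wall_watts * efficiency

def psu_overloaded(wall_watts: float, psu_rating: float,
                   efficiency: float = 0.87) -> bool:
    """True if estimated system draw exceeds the adapter's DC rating."""
    return system_watts(wall_watts, efficiency) > psu_rating
```

For the numbers in this post: 220W at the wall works out to about 191W of system draw, which overloads a 180W adapter, while 245W at the wall is about 213W, still within a 230W unit.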
One day, @Everyone is gonna log in to NBR and be like "where did all these random happy Thanksgivings come from?"
Papusan likes this. -
The MSI vBIOS sucks, clocks are all over the place.
Stock: https://www.3dmark.com/fs/14224321
Minor OC (GPU +150/200 & CPU @ 3.9Ghz): https://www.3dmark.com/fs/14234972 -
Meaker@Sager Company Representative
Seems like targeting lower clocks/voltages could help.
-
Wait a minute. If the PSU can't handle the power draw, wouldn't the laptop draw power from the battery + PSU? Or does the battery auto-switch off when the laptop is plugged in? I ask because I'd prefer not to spend $15 on a meter I'd only use once.
Sent from my Moto G (5) Plus using Tapatalk -
yrekabakery Notebook Virtuoso
Any number of things could happen. It could throttle, discharge battery, crash, shutdown/reboot, auto disconnect the PSU, etc.
Like I asked before, what does your GPU power in HWiNFO64 look like? Do you get crashes in 3DMark 11 in addition to Fire Strike? Did you check with your CPU undervolt removed? -
OK, so even with the battery, an underpowered PSU would still be a problem. I've run with and without the CPU undervolt and it still crashes. I haven't checked with 3DMark 11 yet.
Edit: It's definitely a PSU issue. When Fire Strike crashes, my GPU alone is pulling 180W. ThrottleStop won't run after the crash, but I'm sure the CPU is pulling at least 40W as well. Looks like I'll need at least a 240W PSU. Any recommendations? Can GPUs artifact if they don't have enough power?
Sent from my Moto G (5) Plus using Tapatalk -
Falkentyne Notebook Prophet
What should happen is that the AC adapter trips and shuts off from overcurrent (the AC light will turn off), and the laptop runs on battery power until you unplug and replug the AC into the wall. That's assuming you exceed the limits of the AC adapter without exceeding the limits of the laptop itself, if I understand the original question. -
OK. What would the limits of the battery be? I'm asking to troubleshoot an overclock crash that seems to be caused by the PSU. The adapter is rated at 180W and the GPU itself sometimes draws 180W. Would that be enough to cause crashing?
Sent from my Moto G (5) Plus using Tapatalk -
Falkentyne Notebook Prophet
Then you will be tripping either the AC adapter or the laptop's power delivery system. Do you know what video card/TDP that mainboard can actually handle? Some ODMs use the same mainboard for multiple video card TDPs (MSI and some Clevos), so you wouldn't be tripping the system if you had a lower-TDP video card and TDP-modded it up to the higher spec.
What is the stock TDP of that 980M?
Wait a minute here.
You're pushing 180W through a 980M, which has a 125W stock TDP, and complaining that the laptop can't handle it?
How much power can the OP's laptop deliver to a 980M? Did he even list what laptop he has?
Last edited: Nov 25, 2017 -
I have no idea. My laptop, the P650SE-A, only had around 1,000 built, so I can't find any info.
-
Falkentyne Notebook Prophet
Pinging @Prema
If you're pulling 180W through a 980M, and that exact same laptop configuration does not offer a "desktop" 980, then you're going far beyond the power delivery system of the hardware.
The 980M has a TDP of 125W. The "desktop" mobile 980 has a TDP of 125-150W. If your laptop was NOT designed to handle a non-m 980 card, then you're exceeding the power delivery system completely. Just FYI, while the Clevo 870DM can handle 260W directly through the MXM port ( @Khenglish tested this directly), very few boards are capable of this. The MXM 3 port is electrically specified to handle a maximum of 195W of burst power. The MSI GT73VR and GT75 can handle 195W sustained to the port. I think you have both AC adapter AND PSU problems. You can try a 230W PSU, but I can't promise results. -
Oh shoot, I'm actually running a 970M, which has a TDP of 100W. Is it time to dial back my insane overclocks? Why is the power consumption so high? Also, how come other overclockers can get such big OCs? Is my power delivery system screwed?
Sent from my Moto G (5) Plus using TapatalkLast edited: Nov 25, 2017 -
yrekabakery Notebook Virtuoso
180W? That's... actually pretty impressive for a 970M, considering it has 4 power FETs on the P650SE board while the 980M board has 6, and assuming you can cool it. I don't think I've ever seen my 980M, at the settings in my sig, use more than about 150W according to HWiNFO64, and it hits 80C on max fans after repasting with Conductonaut and repadding.
You can get the P6xxRS/P6xxHS 230W PSU. That's what I'm using on my P650SG. It's just plug-and-play.
I found this. That's only on stock vBIOS though. For BGA systems like his, TDP (chip) is listed instead of TGP (package). Eurocom lists 60W for their BGA 970M, for example.
He has a 970M, that's why I thought it was pretty high, considering my 980M maxes out just over 150W at 80C at 1351/5412 and 1.15V.
Keep in mind that you're talking about MXM though, so it might not necessarily apply to BGA.
Last edited: Nov 25, 2017 -
Yeah, my posts actually go back a while, to 14413, when I first asked about overclocking. I'm not complaining, just new to the laptop OC community, where there are hardly any resources. Thanks for the help, btw.
Sent from my Moto G (5) Plus using TapatalkLast edited: Nov 26, 2017 -
Thanks a lot. I think I'm close to unravelling all this overclocking stuff. Where are the FETs, and what kind/thickness of thermal pads do you recommend?
Also while I'm at it, where else should I put thermal pads?
Would copper shims + thermal paste be better than thermal pads?
Is my power delivery screwed, or is it fine as long as the FETs and VRMs and whatnot stay cool?
I'm getting a U2 Plus and modding it pretty soon, as well as modding the bottom plate by opening the whole thing up. So the power delivery should be fine temperature-wise.
What's the copper tape on the back plate for?
Sorry for all the questions, and thanks for the help so far.
Sent from my Moto G (5) Plus using Tapatalk -
The FETs on all MXM 3.0b cards are up at the opposite end from the slot: small rectangular chips between the big square blocks (inductors) and the edge of the PCB.
If you're going to be ballz2wall like that, you want to double-check and test everything extremely carefully. 140W on 980Ms @ 1375/1.06V gave me the heebie-jeebies even though core temps barely broke 75C, because I simply never knew what the VRM temps were, and as power draw goes up, their inefficiency (heat produced) increases sharply. Black-screen crashes are your indicator that the VRMs are shutting themselves off, but running them near their max destroys their longevity; it's something you want at as cool an operating temp as possible.
With the 980M VRMs spaced so far apart, I found that multiple smaller lengths just covering the FETs were no worse than one big long strip, and it uses about a third of the pad material too.
I was just starting on modding my heatsinks with copper shim and thinner pads but shortly after I started looking into it I ruined the socket and while waiting on a fix (which has turned into a bloody saga) I started playing with 1070s-ing the older P370EM.
The copper/foil tape thing, I dunno. Maybe it's insulation to try to keep case surface temps down, but usually a few degrees C can be gained from improved airflow if it's blocking intake vents, like the tape over the P870DM/DM3 slave GPU fan -
Thanks. My GPU is soldered (BGA), not MXM, but the other info is useful nonetheless. How would I check the temps of my VRMs?
Sent from my Moto G (5) Plus using Tapatalk
Clevo Overclocker's Lounge
Discussion in 'Sager/Clevo Reviews & Owners' Lounges' started by Spartan@HIDevolution, Mar 4, 2016.