Stock watts are at 206W for timespy....
-
That sounds exactly like what it should be, and it's pretty much identical to the MSI 1080 with a modded vBIOS at stock clocks.
-
Yep. And max is running just under 270W so far, but now it's voltage limited instead of power limited....
Run test 2 of Time Spy, and about midway through is when the voltage drops and stays there until the rest of that run finishes (1.062 to 1.050). -
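For anyone eyeballing their own logs for this, here is a minimal sketch of telling a voltage limit from a power limit in recorded samples. Everything here is assumed for illustration: the 300W cap, the 1.062 V ceiling, the thresholds, and the sample values are made up, and this is not how any monitoring tool actually decides.

```python
# Hypothetical (power W, core voltage V) samples, e.g. one per second
# during a Time Spy GT2 run. Cap and ceiling values are illustrative only.
def classify_limiter(samples, power_cap=300.0, vmax=1.062):
    """Guess which limit the GPU is bumping into from averaged telemetry."""
    avg_p = sum(p for p, _ in samples) / len(samples)
    avg_v = sum(v for _, v in samples) / len(samples)
    if avg_p >= power_cap * 0.98:
        return "power-limited"
    if avg_v < vmax - 0.005:  # voltage sagged below its ceiling
        return "voltage-limited"
    return "unconstrained"

# Mirrors the run above: just under 270W, voltage sagging 1.062 -> 1.050
run = [(268.0, 1.062), (266.5, 1.050), (267.2, 1.050)]
print(classify_limiter(run))  # voltage-limited
```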
-
Yeap.
But it's pretty amazing for a 150W GPU. I'm guessing it's only gimped because they thought the cooling solution can't keep up?
-
It's not gimped or a TGP muscle play.
I am obliged to make sure that end users don't destroy their hardware by using the firmware.
To ensure that, we have to measure temps and power draw under worst-case test conditions and, if required, put safety measures in place.
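That validation step could be sketched like this. The 300W board limit and 92C temp limit below are invented placeholders for illustration, not actual Clevo or Prema numbers:

```python
# Illustrative safety check in the spirit described above: given worst-case
# measurements, decide whether the firmware needs a cap. All limit values
# here are made-up examples, not real vendor specs.
def needs_cap(measured_w, measured_c, board_limit_w=300, temp_limit_c=92):
    """True if worst-case draw or temps exceed what the board/cooling can take."""
    return measured_w > board_limit_w or measured_c > temp_limit_c

print(needs_cap(268, 84))   # False: within limits, no extra cap required
print(needs_cap(310, 84))   # True: power exceeds the board limit
```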
The above picture was just that:
A test to find out what the GPU is ready to pull on stock clocks and high temps.
It is in no way representing what @Mr. Fox or the system is capable of.
It's not all fun and play for brother Fox. If it were up to him, it would be full-board AC cooling and OC all the way...
Last edited: May 24, 2017 -
-
-
NICE! I have been trying to find the right addresses for that; no luck so far.
Needs your touch.
Also thanks for that graph! -
Yes, the last thing we want is machines dying from firmware mods that allow the user to over-tax their hardware and ruin it.
A potential fire hazard, if nothing else. MXM is technically supposed to be 75-100W max, even though it can handle more. The 980M, the 980N, and the 150W MSI 1080 are already pushing that spec way beyond its original intent.
Another reason keeping it on AC is a good idea beyond performance. The core is not the only part that can get hot from being severely stressed out.
Better to throttle than to burn up the laptop, or burn the house down having fun. -
I'll give that a go when I put all the screws back in. Right now it only has two.
-
I sort of, kind of agree with that, but not entirely. If it's gimped too much to do everything I want it to do, then I would prefer to sell it as a potent and fully functional machine and move on to something more robust that can handle the abuse that I want to dish out. I'm not in a position to absorb a massive financial loss. Downgrading from SLI to single GPU because I killed one of my GPUs would suck, and so would having to fork out $1200 for a replacement GPU. This is something I would do everything within my power to avoid. So, I am always going to approach this with some caution and stop pushing things if I see any signs of trouble. If I am not satisfied with the final result within reasonable limits of insanity, then I would prefer to just sell the machine optimized to its fullest and use the money to do something different, like build a desktop.
My expectations of the little MSI bruiser are less than my expectations of the DM3 based on how they are built. The Tornado F5 is amazing, but it is small... and with that smallness comes some functional limitations that the DM3 does not (or should not) have. -
Very true. For me, though, all I'm using this machine for is benching. Literally, I have not used it to play games for more than 10 hours collectively, at most. At the end of the day I swear by my desktop, and I'm pretty much done with DTRs after the DM3.
-
Yup, I'm totally synced up with you on that. I am leaning toward the idea that the DM3 is going to be the last monsterbook for me. The high-end laptop gig is too expensive, and the cat-and-mouse nonsense with firmware castration is truly maddening stupidity. Taking competent hardware and cutting its nuts off with cancerous firmware and a crippled EC and what-not deserves a bullet to the head for the architects of the asinine. And the middle-ground gamer-boy BGA trash from Alienware, MSI, ASUS, Razer and the other filth-peddlers is totally disgusting to me. It literally makes me want to puke. That's why I love Brother @Papusan's avatar so much. It's 100% accurate for me, LOL. So, I'll probably go back to building desktops and buying dirt-cheap disposable Acer/Dell/HP turdbooks from Walmart and $50 Kindle Fire tablets from Amazon, because there's no way in hell I am going to spend more than about $500 for a BGA piece of crap. Even that is pushing it for BGA rubbish. I think $250-$300 is the sweet spot for that trash... even the gamer-boy rubbish is worth no more than that as far as I am concerned. I'd rather stoop to using a console for gaming than pay $1000 or more for a castrated BGA gaming turdbook.
-
Actually, the fact that these notebooks can be tweaked so much is the sole reason I stay interested in them.
I would be bored to death with something like a desktop that works right out of the box. It's like buying a Lamborghini: when you're not driving it, all that's left to do is keep polishing it.
-
That is very true. And you are the only reason they are worth having today; it's a mutually amicable arrangement. I absolutely would not own one if I had to use it with stock firmware. Without your talent they would all be junkbooks, BGA or not. It's a shame they have gotten to the point of being malicious, though. Would it be fair to say it gets increasingly difficult to fix their broken junk with each generation, or is that just a false impression on our part? And, my God, the premium they charge for these video cards is just beyond retarded now. I thought 980M prices were stupid (and they were), but starting with the 980N MXM cards they went total retard on us with the pricing.
We are extremely grateful for your talent and your friendship. It is priceless to us, Brother @Prema. -
Hey man, you, Fox and John are the only actual reasons I even picked up the DM3. The DM1 is working great and serving me well alongside my desktops.
Of course, the main driving force was being able to try and catch up to John.
-
I think there are two main, interrelated reasons for the 'big guys' to pull this off:
One is of course profit. The technology we are being sold today has already been succeeded by a couple of generations by the time it's released, and they attempt to milk a product as long as they can > delay instead of innovating at the fastest pace possible. So, in order to do that, 'artificial' limitations are put into place, which ensure that the 'next best thing' isn't obsolete today.
The other is fear. Under the umbrella of 'security' and its supposed remedy, the idea of 'being in control', loads and loads of artificial signatures, blocks and locks are put into place. The good old 'create a problem and then sell them the solution' mentality is being applied by one part of the industry for integration into the other. -
OK. Here is my first run with @Prema's Clevo vBIOS test mod, with CPU and GPU both stock. Both GPUs pulling 200W, total system power 597W, and GPU clock speeds do not seem to fluctuate erratically (as expected). This is with fans blowing on the U3 (not AC, obviously, from the temps). Look at that GPU load % flat-liner. Ain't it pretty?
-
Where is all that extra power being drawn from? I'm sure the GPUs aren't constantly at 200W, and even then we are looking at another 200W left before the system touches 600W.
-
-
What is it that draws 198W without a special load? What was the CPU package power in this run? And could you test with a single GPU?
-
I don't know. If I unplug the power to the DM3 the UPS shows zero output, so I know for a fact nothing other than the laptop is drawing power.
-
Useless run is now legit - http://www.3dmark.com/fs/12700314
Thanks to @Johnksss@iBUYPOWER! -
No problem.
-
-
I forgot to answer your question. CPU package power is generally maxing out around 109W for 5.2GHz unless I go jacking around with some of the VR settings, then it can go up to around 120W. I am running BIOS defaults for those VR and IA limits.
So, I guess my overclocked RAM and WiFi and SSDs and screen are eating up the extra 100W. -
Ionising_Radiation Δv = ve*ln(m0/m1)
Wait, you have overclocked Wi-Fi?
How? -
Ionising_Radiation Δv = ve*ln(m0/m1)
Sheesh, English. Now I realise he was talking about overclocked RAM, and then his Wi-Fi and SSDs and display...
-
Did this while it was apart for vBIOS re-programming last night. @Mobius 1 @iunlock
I found more of the stock pads were too thick than was mentioned before. The pads along the edge of the GPU next to the power connector were especially too thick.
What I used makes solid contact. I had a hunch that the 4.0mm pads were ludicrous. They were.
@D2 Ultima @Ted@HIDevolution @Donald@HIDevolution @thattechgirl_viv @Meaker@Sager @Prostar Computer @Eurocom Support
Yup, I'm sure. I stuck them on the GPUs to start, got the vapor chamber squeaky clean with alcohol, screwed the vapor chamber down without any thermal paste then removed it again. Most of the pads came off stuck to the vapor chamber. Kind of hard to argue with that since they were on the GPU to begin with.
Edit: There is enough sloppiness in how Clevo builds heat sinks that it could vary from one system to the next. Best to check each machine. That's how I discovered most of them were thicker than they needed to be. -
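That check-each-machine logic can be sketched roughly like this. The stock pad sizes are the ones discussed in this thread, but the example gap value and the compression margin are assumptions for illustration:

```python
# Rule of thumb (hypothetical margin): the pad should be slightly thicker
# than the measured gap so it compresses, but not so thick that it props
# the heat sink off the die.
def pick_pad(gap_mm, stock=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0), max_squeeze=0.25):
    """Return the thinnest stock pad that fills the gap under light compression."""
    for t in stock:
        if gap_mm <= t <= gap_mm * (1 + max_squeeze):
            return t
    return None  # no single stock size fits; re-measure rather than stacking pads

print(pick_pad(2.8))  # 3.0: a 4.0mm pad over this gap would be way too thick
```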
What 3.0mm pads did you use? I am having a hard time finding anything (quality-wise) over 2.0mm.
-
Are you sure, though? Bloodhawk and Prema posted conflicting information on which pad should be which thickness, and I'm now confused.
Prema has the 2nd-revision vapor chamber with the one-pipe bridge, and bloodhawk has the early 1st-revision model with shiny copper, not the dull-copper 1st revision that everyone else seems to have. -
eBay from China... http://www.ebay.com/itm/100-100mm-3...Compound-Thermal-Conductive-Pad-/172237986386
The other pads are the 11.0W/mK Fujipoly Extreme pads I got off Amazon. -
There's a seller from China that sells thermal pads of varying thickness from 0.5 to 5mm in 0.5mm increments. I don't remember exactly, but you can get his complete set for under $25.
I will link you later, so remind me. -
That's how I originally got an assortment of spare pads without breaking the bank. They are decent pads, too. -
FML, I need to spend 2-3 hours to get everything perfect again.
I use 6W/mK Arctic on the memory; they're squishier than Fujipoly, so they work better for the tolerances Clevo has. -
Part of what I posted was sarcasm. I started to say overclocked WiFi and overclocked SSDs to throw people off, but I decided that might create too much turmoil and a barrage of messages asking how to do that, LOL.
-
Was there any noticeable difference in temps? I pretty much replaced all the ones you pointed out except the 3.0mm; I left the stock 4.0mm pads on there. Going to order some 3.0mm and change those to see what results I get.
-
Haven't had enough time to test, and I am using a modded vBIOS now, so a before-and-after comparison won't be accurate. I seriously doubt it would be measurable in core temps (the only sensor we have) if the core was making good contact before. Mine has never had any GPU thermal issues, even using the pads that were thicker than ideal. Pretty amazing it works at all, since the vapor chamber surface that contacts the GPU core on both GPUs is about as smooth and flat as a sack full of golf balls. To call it rough would be an understatement.
-
SO TRUE.
I'm currently using the layout Prema posted the other day, which came from HIDE, and that seems to be working best for me so far. I'll post pictures later tonight.
These are the pads im using -
4mm - Phobya 5w/mK (modmymods)
3mm - FujiPoly 6w/mK (FrozenCPU)
2.5mm - FujiPoly 6w/mK (FrozenCPU)
1.5mm - FujiPoly 11w/mK (Amazon)
1mm - FujiPoly 11w/mK (Amazon)
0.5mm - FujiPoly 11w/mK (Amazon)
The 1.5/1/0.5mm pads can be replaced by Arctic ones, which are much cheaper and easier to acquire and will work just as well because of the heat sink. -
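For comparing those pads on paper, conduction through a pad is roughly R = t / (k * A). A small sketch (the 15x15mm chip area is a guess, purely for illustration) of why a thinner 6 W/mK Arctic pad can hang with a thicker 11 W/mK Fujipoly one:

```python
# Thermal resistance of a pad: R = thickness / (conductivity * area).
# The 225 mm^2 area is an assumed VRAM-chip footprint, not a measured value.
def pad_resistance(thickness_mm, k_w_mk, area_mm2=225.0):
    """Thermal resistance in K/W for a pad of given thickness and conductivity."""
    return (thickness_mm / 1000.0) / (k_w_mk * area_mm2 * 1e-6)

r_fuji_15 = pad_resistance(1.5, 11.0)   # ~0.61 K/W (1.5mm Fujipoly 11 W/mK)
r_arctic_10 = pad_resistance(1.0, 6.0)  # ~0.74 K/W (1.0mm Arctic 6 W/mK)
print(round(r_fuji_15, 2), round(r_arctic_10, 2))
```

The takeaway: thickness matters about as much as the conductivity rating, which is why matching the gap comes first.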
Yes, 3DMark 11 is my preferred benchmark above all others and that is one reason. The other reason is you get credit in the overall score for having a strong CPU. With Fire Strike it has minimal effect on the overall score even with a sucky CPU. (You get an accurate Physics score, but it doesn't carry as much weight in the overall score. I think we can thank the BGA turdbook peddlers for that. It seems Fire Strike is geared for marketing benefits above all else.) Even Sky Diver is a better test than Fire Strike.
I will have it apart to program another modded vBIOS tonight, and we will find out. The one that was on it last night was not stable. I actually expect power draw to be less with the better vBIOS; I was idling at almost 200W, which isn't ideal. Should be able to pull about 300W from each GPU once we get the voltage limit issue corrected. Voltage limits are holding us back now that the power limit issues are eliminated. -
Can you post that pic again? I think my slave GPU's contact is not so good on one of the corners, and I need to readjust the thermal pad.
My pad on the CPU VRM is too thin; need to change that to 3mm now. -
Exactly. Desktops are too easy. I chose Clevo over MSI because I had the motherboard schematic; I knew it would come in handy.
Unfortunately, laptop modding has been transitioning from fixing 100C+ temps and doing PLL and VID mods for 33% overclocks to just getting the hardware to run right out of the box. It's been trending in a more frustrating direction.
Although to be fair laptop hardware is far closer to desktop hardware than it used to be. It used to be a new API would come out, and you wouldn't have a laptop GPU that supported it for 2 years. -
-
600W for the graphics, 120W for the processor, and up to 100W for the rest. Can anyone sum up the final result?
Or maybe I'm just too slow.
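The tally, as a sketch: the components sum to 820W, and the UPS reads higher than that because it sees wall draw through the PSU. The 90% efficiency figure below is an assumption for illustration, not a measured value for this machine's bricks.

```python
# Component watts divided by PSU efficiency gives approximate wall draw.
# Efficiency of 0.90 is an assumed figure, not a spec for these adapters.
def wall_draw(gpu_w, cpu_w, rest_w, psu_efficiency=0.90):
    return (gpu_w + cpu_w + rest_w) / psu_efficiency

total = wall_draw(600, 120, 100)
print(round(total))  # 911: ~911W at the wall for 820W of components
```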
-
Yeah, I followed that guide to the letter.
Some of the pads are thinner (3mm -> 2.5mm) than what @Mr. Fox suggested?
For the 2.5mm I just used Arctic 6W/mK pads and sandwiched them with a little Kryonaut in between. -
Never sandwich, especially with thermal pads and paste. Even when sandwiching thermal pads, you are losing at least half of the transfer.
Yeah, they seem to be thinner; even my other heat sink uses slightly thicker pads than the 2.5mm. But with a properly fitting heat sink, this one works perfectly for me.
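The 'never sandwich' point falls out of series thermal resistance: layer resistances add, and every extra boundary adds contact resistance on top. A sketch with illustrative numbers (the pad area and the 0.05 K/W interface penalty are assumptions, not measurements):

```python
# Stacked layers conduct worse than one pad of the same total thickness,
# because resistances add in series and each extra boundary costs more.
def series_resistance(layers, area_mm2=225.0, interface_k_per_w=0.05):
    """layers: list of (thickness_mm, conductivity_W_mK) stacked together."""
    r = sum((t / 1000.0) / (k * area_mm2 * 1e-6) for t, k in layers)
    # each extra layer boundary adds an assumed contact-resistance penalty
    return r + interface_k_per_w * (len(layers) - 1)

single = series_resistance([(2.5, 6.0)])              # one 2.5mm pad
stack = series_resistance([(1.5, 6.0), (1.0, 6.0)])   # 1.5mm + 1mm sandwich
print(single < stack)  # True: the stack conducts worse at the same thickness
```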
Clevo Overclocker's Lounge
Discussion in 'Sager/Clevo Reviews & Owners' Lounges' started by Spartan@HIDevolution, Mar 4, 2016.