Thank you for this. I will make sure to look into this and post results. I really want to push this laptop to its true potential. I know this CPU can easily reach 4.3 GHz and stay stable, so once I am done I will let you know.
-
-
I also have a more serious problem: I can't use Wi-Fi. I've had this problem before, but I've always managed to fix it: I turn on the laptop, I can't see the Wi-Fi icon (there is nothing next to Airplane mode), then I proceed to turn it off and on a few times, and it suddenly appears. When it doesn't, I reinstall my LAN and WLAN drivers in order to fix it. This time I reinstalled them over 10 times... and still nothing! I've checked, and both the Killer E2400 Gigabit Ethernet Controller and the Killer Wireless-n/a/ac 1535 Wireless Network Adapter appear under Network adapters in Device Manager. I also unticked "allow the computer to turn off this device to save power" for both of them. I don't know what to do. My Ethernet works perfectly fine. My Wi-Fi adapter is enabled, but there are no packets, and IPv4 and IPv6 are both not connected. The status of the wireless adapter says "this device is working properly", but I just can't use Wi-Fi! I have tried different combinations (FN+F9, FN+F10) just to make sure that I haven't turned it off... nothing. The Windows troubleshooter does not help at all.
Last time this happened, I went to Settings and did a network reset. I don't remember if it helped, but I do remember that I eventually reinstalled Windows, and it got fixed. Nevertheless, I don't want to do that now.
I updated my BIOS (and I think that's why the "display off button" got even more bugged), reinstalled Dragon Center, reinstalled my LAN and WLAN drivers multiple times, made sure Radio Switch is working... I don't know what else to do. Any ideas? I don't want to reinstall my Windows (or recover the laptop as my last image is very old).
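One way to narrow down where it's failing, given that Device Manager says the device is fine: `netsh wlan show interfaces` reports the radio's actual state ("disconnected" vs. no interface listed at all, which would point back at the driver). A small sketch of parsing that output - the sample text below is illustrative, not captured from this machine:

```python
def wlan_state(netsh_output: str) -> str:
    """Extract the 'State' field from `netsh wlan show interfaces` output."""
    for line in netsh_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "state":
            return value.strip()
    return "unknown"  # no wireless interface section found at all

# Illustrative sample of what the command prints (exact fields vary by driver):
sample = """
    Name                   : Wi-Fi
    Description            : Killer Wireless-n/a/ac 1535 Wireless Network Adapter
    State                  : disconnected
"""
print(wlan_state(sample))  # disconnected
```

If the command reports no interface at all while Device Manager shows the adapter as healthy, that points at the radio/rfkill layer rather than the driver package itself.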
Edit: Did a network reset - nothing happened at all. Weird.
Last edited: Dec 24, 2017 -
-
Spartan@HIDevolution Company Representative
I've had that happen to me on my GT73VR Titan Pro when I switched to the NVIDIA GPU, but after the latest BIOS update it never happened again.
What do you mean the button won't even turn the display off? Please elaborate -
-
-
Spartan@HIDevolution Company Representative
then install the latest 388.71 driver again -
-
Spartan@HIDevolution Company Representative
But wait, I think the GT83VR also doesn't have Optimus, since you have the option to switch GPUs completely like the GT73 - so how did you ever have that option? -
-
Spartan@HIDevolution Company Representative
-
-
Falkentyne Notebook Prophet
Another trick (at least for your Gt73VR):
EC RAM register F1 controls the MUX switch.
One of two values is for dGPU and the other for iGPU.
Too hard and annoying to figure out, but I think 02 = dGPU and 05 or 06 is iGPU (after rebooting). -
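The mapping described above can be sketched as a simple lookup. To be clear, these values are the poster's tentative guesses, not a verified MSI specification, and actually writing EC RAM (e.g. with RW-Everything) can brick a machine - this only models the decode:

```python
# Tentative EC register F1 -> MUX mode mapping from the post above
# (02 = dGPU; 05 or 06 = iGPU, taking effect after a reboot). Unverified.
EC_F1_MUX = {0x02: "dGPU", 0x05: "iGPU", 0x06: "iGPU"}

def mux_mode(reg_value: int) -> str:
    """Decode an EC F1 register value into the GPU MUX mode it selects."""
    return EC_F1_MUX.get(reg_value, "unknown")

print(mux_mode(0x02))  # dGPU
```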
Spartan@HIDevolution Company Representative
-
-
-
Does anyone have experience with Killer network bonding?
I have a project in mind.
Sent from my SM-G930W8 using Tapatalk -
-
Kevin@GenTechPC Company Representative
You can also try using the clean up tool to remove the package and driver completely, then do a fresh install of the driver package.
-
This is probably the most frustrating day of my life, notebook-wise. I was using the beast perfectly fine and went AFK for about an hour. Came back to a dark screen. "Oh, I don't remember setting it to turn off the display after non-usage?" Shook the mouse... nothing. Manually turned it off and back on - no logo, can't boot into the BIOS. F3 & F11 don't work. Called MSI and they told me it could possibly have been an update error that caused it. Then they told me to ship it in for further troubleshooting. Has this happened to any other user on here? Just seeing if there's anything else I can possibly do before I ship it out for a 10-30 day turnaround. Thanks all!
-
Falkentyne Notebook Prophet
Hold the power button down for 60 seconds, physically, then release.
Then wait for another 60 seconds for the power LED to cycle.
Then wait again to see if the logo comes after the LED light cycle.
Other options:
1) if using 4 sticks of RAM, remove one stick from the easy-to-access slot, and wait 60 seconds for the power LED cycle.
2) remove the slave video card and then power cycle check.
3) if all else fails, and all else should NOT fail, remove the slave and master video cards, replace the slave into the master slot (this will REQUIRE a repaste), keep the "old" master out, reassemble and try to boot. -
Ehh, hello everyone, need help!
I was looking to buy this laptop
https://www.cyberpowerpc.com/system/Fangbook-V-Extreme-VR-SLI-1080K-Gaming-Laptop
Just want to confirm: IS IT AN MSI-AUTHORIZED REBRAND OF THE GT83VR?
If yes, then in case I need repairs, can I get it repaired by an MSI service centre? (I will pay, of course) -
GT83VR 6RF / 6RE. Has anybody successfully updated their Thunderbolt 3 Firmware (through official or unofficial channels)? I am wondering - shouldn't the controller be able to support two displays attached to it?
-
Are the GeForce drivers past 385** really unstable for anyone else? Every time I've tried to install one, it quickly crashes Windows and makes it unable to boot if I change any of the monitor settings.
Last edited: Jan 3, 2018 -
Spartan@HIDevolution Company Representative
-
That's the version I just installed, and I was unable to boot until I uninstalled it in safe mode. I'm using external monitors and NVIDIA Surround; just attempting to open the Surround config window crashes the computer.
-
I use the drivers directly from nvidia usually.
Kind regards, -
Aaaand it's dying again. I tried the exact download linked, and once again opening the Surround configuration window crashes the computer.
Is there a significant chance that it's something screwed up with the Windows install? One monitor is USB-C, one is HDMI, and one is DP.
Could it be a BIOS issue?
And is there a way to force bringing up the startup repair menu? It's getting incredibly demoralizing waiting for it to come up on its own every 10-15 min.
Windows seems to keep auto-installing a May driver before I can install a downloaded driver - could that be compromising everything?
Last edited: Jan 3, 2018 -
Spartan@HIDevolution Company Representative
2) the moment you log back into Windows, put your laptop in airplane mode to disable wireless and prevent Windows from installing its own driver
3) Also use O&O ShutUp 10, it has an option to prevent Windows Update from updating your drivers: O&O ShutUp10 - Do not use Windows 10 without it! -
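If you'd rather not rely on a third-party tool, Windows also has a documented policy that stops Windows Update from bundling driver updates. A .reg sketch (merge as administrator, then reboot; note that how consistently Home editions honor this policy has varied across Windows 10 builds):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"ExcludeWUDriversInQualityUpdate"=dword:00000001
```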
Tried that stuff and no dice. I'm gonna give a full windows reinstall a try this weekend.
-
LIQUID METAL REPASTE (PHOTOS + DESCRIPTION) - MSI GT83 VR (1070 sli + i7-6820HK)
What you need:
- Grizzly Conductonaut Liquid Metal
- Arctic Thermal Pads (1 MM + 0.5 MM) - the MSI GT83 VR - 6RF uses both 1mm and 0.5mm thermal pads, I will detail this later on
- Super 33 Electrical Tape (high temp resistant)
- Isopropyl alcohol (to wipe clean all copper heatsinks/cpu/gpu's surfaces)
- Nail top coat (also called nail strengthener/hardener)
1) Back cover removed:
2) Applying 2-3 layers of nail hardener on the PCB around HIGH RISK areas (do this on all 3 core components: GPU1, CPU, GPU2 - let it dry for about 30-45 minutes between layers; this stuff dries in about 5 minutes, but I was taking no chances with it).
3) For this particular laptop model (MSI GT83 VR - 6RF) the 1st GPU and the CPU benefit from sharing 2 fans and 2 x copper pipe systems. These 2 are also interconnected (copper on copper) and need to be wiped clean + liquid metal repaste for maximum efficiency:
4) "Organised" chaos! (messy workplace)
5) Electrical tape applied around the main core components (GPUs + CPU) to further protect high-risk areas from any liquid metal dripping:
6) Liquid metal just applied (got just a bit messy on the CPU):
I really hope someone else can benefit from this post, as this was a bit of a struggle for me. I am quite techy and have been servicing my own gaming laptops for a very long time, but I'd never done a liquid metal repaste - I kind of went 110% on the safety side by applying nail hardener + electrical tape (some people don't even bother to use 'em, some use just the tape) - I just don't want to end up with a bricked device.
Conclusion and small mistakes to be taken into account:
Is it worth it? Absolutely. Temps dropped by 15°C: a heavy gaming load will not get the CPU hotter than 74-76°C, and the GPUs (SLI enabled) stay at a max of 72-74°C (SLI-enabled games tested at max graphics detail for long periods, 3 hrs+).
Make sure the copper surfaces are VERY well cleaned - initially with a towel/paper towel to remove the old paste, then wipe thoroughly with isopropyl alcohol; otherwise the liquid metal will not adhere to the copper (eventually it will, but it will be a struggle).
Also make sure to replace the thermal pads. I've got notes on my (currently dead) phone about which thermal pads are 1mm and which are 0.5mm - will edit this post and provide details when the phone is charged.
Regarding how much liquid metal you need for one surface: LESS IS MORE! Seriously, a quarter of a tic-tac mint is still probably a bit too much for one surface (CPU + copper sink).
If you accidentally drop liquid metal on the motherboard (CPU/GPUs, other PCBs):
DO NOT try to WIPE it - you will spread it all over the place. Instead, use a vacuum or a hoover with a pointy/small/micro vacuum tool to absorb it (kit here; I happened to have one around and this happened to me - a small drop went on the motherboard, no direct contact with SMD components, but I was able to hoover it up and it left no trace).
Good luck.
Last edited: Jan 5, 2018 -
-
Falkentyne Notebook Prophet
@thebigbadchef nice pictures and detail.
I'm still curious which pads are 1mm and which are 0.5mm.
If you're talking about the *CPU* area.....?
Are the "R22" chokes (grey) 0.5mm and the black small square VRMs (right behind them) 1mm?
Also, is it me or does that system only have three phases for the CPU? (3+1 if you include the smaller single one next to it).
(can only see 3 chokes with 3 VRM's they cover...not even phase doublers...)
Someone correct me if I'm wrong...
Also, while you're answering the thermal pad and VRM issue:
You don't need nail polish AND super 33+ tape together. That's excessive and can raise temps an extra C or two, although it is better to be safe than sorry.
All you need are either 3 coats of nail polish, OR no nail polish at all, and the super 33+ or Kapton tape. The tape by itself will prevent any LM from touching the SMD resistors, but it has to be done carefully, which is a bigger job than just 3 coats of nail polish.
Then what we like to do now for extra insurance, is to apply a 'cutout' foam dam that is highly compressible (down to 0.1mm), as a barrier to prevent LM from escaping onto the PCB. Electrical tape will not prevent runoff, neither will nail polish. Yes, using the proper amount will, under normal conditions, but if the laptop is ever bumped, carried in a backpack and subject to shock, you never know when a tiny conductive ball of doom will work itself out and start going where it pleases. And if just one ball gets on the PCB, well, if it bridges something it's going to play havoc with your components. That's where the foam dam comes in. And the foam dam is NOT to be used in place of nail polish, ever! You go foam dam+nail polish or foam dam+tape (Super 33+ or kapton). But tape+polish together are overkill
A few days ago, I threw LM on my R9 290X because I was bored. No tape at all. Just 3 coats of transparent nail polish over the SMDs, and that LM wasn't going anywhere (the depressed housing on the factory blower sink also acts as a dam to prevent any runoff onto the PCB).
Last edited: Jan 5, 2018 -
-
Falkentyne Notebook Prophet
I was editing the last post, so I added more information -
but does your laptop look like this?
That looks like a 3+1 VRM setup (3 big phases+chokes and a smaller one).
Just wondering why it's only 3+1.
Maybe more power delivery to the video cards, I guess?
GT73 and GT75 have 5+1.
Anyway, keep me posted. Very curious about the thermal pad height, because that would also apply to the GT75VR and GT73VR too! -
Falkentyne Notebook Prophet
@thebigbadchef hello
Did you get any more information about the pad thickness for us?
-
- so thanks for that.
Regarding thermal pads please see below photo.
I got this laptop about 4 months ago (bought it new) and when I did the metal repaste I was a bit surprised to see that the manufacturer actually used 2 different sizes for the thermal pads. The difference is noticeable as the 0.5 mm ones were also of a different colour (and ofc noticeably thinner).
As described below - the green areas were covered by 0.5mm pads whilst all other components around cpu/gpus' had the 1mm kind.
You would think that the grey R22 thingies take the 0.5mm pads and the small black ones take the 1mm pads, but the heatsink is different over these two - it's stepped, and the heatsink sits thicker on top of the black square VRMs.
Hope this makes sense.
Last edited: Jan 6, 2018 -
I am about to push this beast to its limits and see what kind of overclocking results I get. I am aiming for a stable 4.3 GHz, and hopefully 4.4 GHz for benchmarking, and hoping temps won't go higher than 84C under full load. Fingers crossed! Will post results.
Last edited: Jan 6, 2018 -
Falkentyne Notebook Prophet
Thank you.
The GPU ones make sense, but what about the ones above the CPU? (the three R22 chokes and the 3 VRM squares above them?).
Are those also 0.5mm pads on the blacks and 1mm on the greys above the CPU also? -
Hope this helps.
Then again, I would be very careful, as you might have different heatsinks (chassis) - for me it was quite easy: they had 2 colours and you can see them being different (you can actually tell them apart). I was guided by the colours but also by common sense - the 0.5 mm ones were quite thin. If you are in doubt, get yourself a vernier caliper - I did not use one, but I'm sure it would be accurate (ofc don't apply too much pressure when measuring).
-
Well, overclocking results are back... I don't know if I should be shocked or not, but I decided to see where it starts to become unstable WITHOUT increasing the voltage limit (I am doing this via the BIOS; for some reason I don't trust XTU) - the damn Dragon Center gave me such a headache that I had to uninstall it.
Anyway, back to the results. I managed to get a stable 4.3 GHz without even touching the voltage limit, which seemed unreal... (I thought I would have to increase it by at least 100 mV). Stable in the sense that I managed to go through the games I play on a regular basis with no crash. It would however reboot after the MSI logo if I tried to go for 4.4 GHz (x44 multiplier) - I will play with this tomorrow; perhaps increase the voltage by a little bit and see if it stops rebooting at 4.4 GHz? What are your recommendations @Falkentyne, if I may ask for your advice?
Temps remained the same (surprisingly) - still low/mid 70s, with the max temp being reached by the CPU: 78C after a 90-minute gaming session.
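For anyone following along, the multiplier arithmetic here is just core clock = BCLK × multiplier, with BCLK assumed at its stock 100 MHz (the default on these boards unless it has been changed):

```python
BCLK_MHZ = 100  # stock base clock, assumed unchanged

def core_clock_ghz(multiplier: int, bclk_mhz: float = BCLK_MHZ) -> float:
    """Core clock in GHz from the CPU multiplier and base clock."""
    return multiplier * bclk_mhz / 1000

print(core_clock_ghz(43))  # 4.3  (the stable setting above)
print(core_clock_ghz(44))  # 4.4  (the x44 attempt that rebooted)
```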
-
Falkentyne Notebook Prophet
The reason you don't need to increase the voltage is that the voltage is being increased for you by a hidden setting (which you should have access to) called IA AC DC Loadline, in the CPU VR Settings (Core IA domain).
The reference value for Kaby Lake/Skylake and 8700K chips is 2.10 mOhms, which the Auto setting uses - so a manual value of 210 (= 2.10 mOhms) is equivalent to Auto.
This value causes the VID to RISE based on CPU load, however this mainboard does not report the current VID accurately when using this setting; at full load, the VID reported will be much lower than the actual VID currently being used. The actual vcore (voltage) is based on this starting VID. This is before vdroop is even applied. If there were an actual vcore sensor (only some clevos have a vcore sensor), you would see that the current vcore would be higher than the VID actually shown, even WITH vdroop applied.
Basically, use Throttlestop 8.50 (which I recommend), and have your computer sitting at FULL IDLE with absolutely nothing running.
And watch the VID.
Do you notice the VID fluctuating by as much as ±100mV up and down? Yet when the VID fluctuates like that, the power draw doesn't change at all. It seems to be just a buggy implementation in the firmware, with the VID bouncing wildly under no load. But note carefully the highest VID that you wind up seeing. For example, if the VID ranges from 1.11v to 1.23v at FULL idle with no load on the CPU at all, that means the VID will be 1.11v at idle and 1.23v at full load. But most likely, when you do run something at full load - like non-AVX prime95 (please don't try using AVX unless you're a masochist), with FMA3 disabled (trying to run small FFT with FMA3 enabled will most likely just make the system VRMs power off and reboot) - then, in this VID example (1.11v to 1.23v fluctuating), you will wind up seeing 1.12v with non-AVX prime95 and 1.14v with AVX or FMA3. Yet the temps will be sky high, because this VID is actually inaccurate: it's more like a 1.23v VID (= 1.23v STARTING vcore before vdroop; true vcore comes after vdroop, but you need a scope and the read points - meaning some sort of mainboard datasheet - to know what the actual vcore is at load).
That is just an example.
If your core VID is 1.262v at FULL IDLE, again use something like Throttlestop and watch the VID range carefully at full idle. You should see it bouncing around all over the place. With the default auto setting, the highest VID you can observe will be the VID in use at full load (regardless of what you actually SEE it reported as during full load). But again that is not the live voltage. Because there is vdroop.
The reason for this "VID boosting" shenanigan of 2.10 mOhms is because of the Fully Integrated Voltage Regulator. Basically it boosts the VID at load to try to 'compensate' for vdroop. Usually vdroop is countered by a different, although similar setting, called "Loadline Calibration" (LLC), but these laptops don't have this setting at all. Desktop boards do, so for desktop boards, it's recommended that you set IA AC DC loadline to 1 (or 0.01), and then use Loadline Calibration to handle the vdroop.
For MANUAL (override) voltages, the same thing would basically happen: VID at full load would be grossly underreported, with the default "Auto" setting.
Example: 1.25v (1250mv) static voltage, with AC DC loadline=Auto (or 210 = 2.10 mOhms), would probably show up at full idle as 1.265V-1.37v VID (bouncing around like wild), then at full load, would probably read as 1.29V, when actually it's 1.37v ......
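The mOhm values translate into volts by Ohm's law: the firmware's boost is roughly load current × AC loadline resistance. A quick sketch - the ~57 A load current is an assumed figure for illustration only, but it shows how 1.25v static plus the 2.10 mOhm default lands near the 1.37v in the example above:

```python
def boosted_vid(base_vid_v: float, load_current_a: float, ac_loadline_mohm: float) -> float:
    """VID after the firmware's AC-loadline boost: V = VID + I * R."""
    return base_vid_v + load_current_a * ac_loadline_mohm / 1000

# Mirrors the post's example: 1.25 V static with the 2.10 mOhm default.
# ~57 A full-load current is an assumption, not a measured value.
print(round(boosted_vid(1.25, 57, 2.10), 3))  # ~1.37
```

This is also why lowering AC loadline (option 2/3 above) trades against raising the static voltage: both end up at a similar effective load voltage.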
Note: if your CPU is downclocking when idle to 1.6 GHz or 800 MHz or something, you need to disable C-states if you want to see the "overclock" VID when idle. For some strange reason, when on AC power, C-states seem to be disabled when using STATIC voltages but not when using adaptive voltages; yet on battery, C-states are enabled when using static... buggy MSI cancer BIOSes...
But again that's why I recommend using Throttlestop 8.50 if you want to disable cstates and EIST, whatever
(BTW make sure you set your control panel power plan to "maximum performance" for Throttlestop to work). Then you can even enable "Speed shift" In throttlestop. That way you can stop all downclocking when idle! Set SST to 0 in throttlestop main window then your CPU will run at full speed when idle (you will first need to enable that in the power limits button in Throttlestop)
Of course, if you are unstable with the auto setting (or the 210 value placed manually for 2.10 mOhms), then that's where you can increase the offset.
To know your CPU's absolute true DEFAULT VID, you need to set IA AC DC loadline to 1 with ADAPTIVE VOLTAGE (the lowest value; 0.01 is only for Asus boards). Then at full idle, the true VID will be shown, without VID boosting, although you will see about a 30mv fluctuation. But expect this to completely crash on you if you are using ADAPTIVE VOLTAGE, so then you will NEED to use positive offsets. I suspect, at 3900 mhz (3.9 ghz), you may be able to use AC DC loadline=1, WITH adaptive voltage with no offsets, then you will see your default VID for 3.9 ghz (the default VID will always change based on CPU frequency).
When using static manual override voltages, AC DC loadline=1 combined with a static voltage (e.g. 1275mv) will show 1.275v VID at full idle instead of VID boosting, then at full load, should remain about 1.275v. However there will be a LOT Of vdroop (remember, no vcore sensor, right?), so most likely you will BSOD here. To counter, you would either need to:
1) raise the manual voltage up more
2) raise the AC DC loadline up slightly (don't go above 25, IMO).
3) balance slight VID boost with a slight vcore increase---example: override voltage 1285mv + AC DC loadline=10 (0.1 mOhms), instead of 1275mv + AC DC loadline=25.
How you go about that is up to you.
Last edited: Jan 6, 2018 -
I am perfectly fine with a stable 4.3 GHz (I will try to run a few stress tests and see if it's truly stable).
Thanks once more. -
Prime95 stress test has now successfully run through the current CPU OC config, and no crashes/restarts occurred.
Regarding temps, the CPU reached a maximum of 82C (I suspect this is thanks to the liquid metal repaste - otherwise this temp would be in the high 90s...). Ofc no thermal throttling was noticed, due to the relatively low temp.
Amazed by the capabilities of this device... @Falkentyne - will fully investigate/research/document your post regarding cpu voltage and perhaps I will try to push it further and see if I won the silicon lottery or not
So far I am more than satisfied with the stable 4.3 GHz without touching any other settings (although porting/unlocking the BIOS was quite risky, I followed every step of @sirgeorge's post). Thanks for pointing me to it. -
Falkentyne Notebook Prophet
Did you have AVX or FMA3 enabled or disabled in prime 95? Good work.
-
-
Falkentyne Notebook Prophet
If it did not show "FMA3" in the prime 95 iteration, then it's disabled. Core 2 type FFT or Type 0 FFT= AVX (and FMA3) disabled.
AVX=AVX enabled. AVX2=AVX2 enabled (I could never get avx2 to activate anyway), FMA3=FMA3 enabled.
AFAIK, FMA3 "seems" to be some sort of extension of AVX, or maybe this only applies to prime95, because disabling AVX in "Local.txt" also prevents FMA3 from being used even if CPUSupportsFMA3 is set to 1, if CPUSupportsAVX is set to 0. -
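For reference, those switches live in prime95's local.txt (in the prime95 folder; edit it before launching). A sketch that forces the non-AVX code path - option names as given above; exact casing may vary between prime95 versions:

```
CpuSupportsAVX=0
CpuSupportsAVX2=0
CpuSupportsFMA3=0
```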
I've spent quite a few hours over the past 2 days trying to figure this out, doing research and learning more before altering my device's configuration... now, I do have a 9-5 job, and this thing has been keeping me up at night, as I want to see how far I can push this CPU.
I cannot seem to find out what my idle/load voltage is.
I have however raised the CPU core voltage by 100 mV and went for 4.4 GHz - and this has worked fine. I was then able to run a few tests (first a few games, then prime95) and the system was stable (of course it could deserve more fine-tuning, but I have just tried it and it works). HOWEVER - the temps are kind of high; pretty sure this is because of the overvolting. Now bear in mind I did an LM repaste for best temps - at 4.4 GHz, running Prime95 gave me a max temp of 92C, which seems very damn high; at this point thermal throttling usually kicks in... can't even think what it would be without the liquid metal repaste - it would prolly reboot itself.
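That temperature jump is roughly what dynamic power scaling predicts, since CPU power goes approximately as frequency × voltage². Assuming a ~1.25 V baseline (a guess - the posts don't state the stock voltage), the 4.3 → 4.4 GHz step with +100 mV adds close to 20% more heat:

```python
def power_ratio(f_old_ghz: float, v_old_v: float, f_new_ghz: float, v_new_v: float) -> float:
    """Dynamic CPU power scales roughly with frequency * voltage^2."""
    return (f_new_ghz / f_old_ghz) * (v_new_v / v_old_v) ** 2

# 4.3 GHz at an assumed 1.25 V baseline -> 4.4 GHz at 1.35 V (+100 mV)
print(round(power_ratio(4.3, 1.25, 4.4, 1.35), 2))  # ~1.19
```

So most of the extra heat comes from the voltage bump, not the extra 100 MHz - which is why sticking with 4.3 GHz at stock voltage keeps temps so much lower.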
I have a feeling I should stick to my stable 4.3 GHz and keep the default voltage. I am more than happy with it - but then again, I am amazed by this CPU's OC capabilities...
Thinking I should stick to my stable build/config. Any thoughts? @Falkentyne -
Falkentyne Notebook Prophet
Don't run Prime at 4.4 GHz. You will be approaching the absolute limits of the 6820HK - I don't know anyone who managed to exceed 4.4 GHz on a 6820HK. Can you please tell me what VID was shown in Throttlestop 8.50 when you were running Prime? Because if the VID was 1.3v or higher, AND you had AVX/FMA3 disabled (Prime showed "Type 0 FFT" or "Core 2 type FFT", rather than AVX or FMA3 FFT), then 92C is pretty good for a VID of 1.3.
***The Official MSI GT83VR Titan SLI Owner's Lounge (NVIDIA GTX-1080's)***
Discussion in 'MSI Reviews & Owners' Lounges' started by -=$tR|k3r=-, Aug 13, 2016.