@vulcan78 , thanks. I need to take a break and think a little. There is something wrong with my CPU/PSU anyway. The Physics and Combined scores mostly reflect CPU impact. I tried disabling all CPU OC entirely and got this.
Attached Files:
-
Your combined went u
Mr. Fox and others have used Kill A Watt meters, and they showed that 680M SLI was exceeding 330W by a good bit; and they were all doing dual PSU back in the 780M SLI days. I imagine that 970M SLI is in the same boat with a similar TDP.
One of the first things I noticed about upgrading to single 980M is that my PSU was no longer piping hot when I would go to pack it up after a day of gaming in a library. It used to get HOT. Now it gets mildly warm.
PSU life expectancy is directly tied to load. Run your PSU at 110% and expect to replace it every so often. And good luck finding them for less than $100.
http://www.ebay.com/itm/100-Origina...229021?hash=item4b0ec3531d:g:dioAAOSwo4pYYFJJ
If you go dual PSU, there goes your mobility; you're absolutely NOT going to lug that contraption around.
http://forum.notebookreview.com/thr...er-supply-mod-with-wattage-meter-660w.773419/
Just another argument in favor of single 980M in the M18x R2.
And 8GB of VRAM on tap is always nice, the console ports are chugging that stuff down like crazy! Rise of the Tomb Raider, Resident Evil 7, and Titanfall 2 among many other titles are using all 8GB!
You could probably sell both of the 970M's for enough to pick up a single 980M.
Last edited: Feb 7, 2017
-
http://www.guru3d.com/articles-pages/geforce-gtx-970-sli-review,4.html
Attached Files:
-
Isn't it amazing how efficient these GPUs are? I have half the performance of my 980 Ti (10.5k GPU score), but the card only consumes a third of the power of my 980 Ti (110W vs 325W).
Well if it's not wattage starvation, it's probably just SLI then.
-
@vulcan78 , HWinfo64 shows 164W in total for 970M SLI under the highest load. I still don't think there is any wattage starvation. It would be interesting to see your Fire Strike result with a stock vBIOS, non-OC'ed, BTW. But all these GPU-to-GPU comparisons are a little offtopic. I'm going to run some experiments with my RAM and Thaiphoon Burner. Since @Mr. Fox and even @woodzstack did the trick, why not try the same...
Attached Files:
-
woodzstack - Alezka Computers, Official Clevo reseller.
Yeah, the 970M's are great! More powerful than a 680M, cheaper, about half the power consumption, 1.5x the VRAM... and DX12... pretty decent card considering what was boss for the 2 years before it.
-
Oh, here's a run at 4.3GHz, and the voltage isn't much higher, 1.291v. Picked up 300 points in Physics. Out of curiosity I tried setting PhysX to CPU, but it didn't make a difference. I don't know why it does on your system.
http://www.3dmark.com/fs/11643547
-
woodzstack - Alezka Computers, Official Clevo reseller.
So many people keep asking me about this GT72 heatsink... okay, 3-4 people, but I don't understand the curiosity. Could you tell me what you want to know and why? Maybe I can update us all with the details you're wondering about when I get it. -
Well, my very first thread here wasn't so popular... Neither was my English 5 years ago.
-
After all these tests & settings I got a very strange glitch yesterday - anti-throttling. Under browser (!) load, both my GPUs went to max core & memory clocks and didn't drop them even after changing settings in the Nvidia control panel, closing the browser, or going to/from sleep mode. I had to reinstall the driver and the problem was gone. I don't know what it was but, I suppose, it started right after setting "prefer maximum performance" in the Nvidia control panel and some Fire Strike tests with HWinfo64 monitoring. -
I have read that Vengeance was very popular in Europe in 2014-2015, won many awards etc. But my Vengeance sticks don't even have their XMP profiles filled in! And they should be there, because it's gamer-class memory. Corsair just took the original 10T preset (10-10-10-32) and overclocked it a little (10-10-10-27)...
Last edited: Feb 8, 2017
-
-
Tested on a single 7970m (not OC) because I've already burned 2 of them (at stock).
Here it is:
1600mhz on the left, 2133mhz on the right column
http://www.3dmark.com/compare/fs/11648336/fs/11648423
This time it goes from 0,2% to 0,7%
So, even less...
-
It seems that AMD cursed them.
Let me ask you, how do you do your 1600@2133cl11? Do you OC it via BIOS or use Thaiphoon Burner? -
The Thaiphoon Burner page says it's only compatible with Windows 7/8, but I'm using W10 just fine.
Attached Files:
-
The thing is, higher RAM frequency will threaten your CPU stability, necessitating more core voltage, more RAM voltage, or just losing 100MHz on the CPU and keeping all voltages nominal. None of this is worth the time, mental energy or added CPU instability, not for 1% or less performance gain.
Nevertheless I'm still interested to see if you do find some kind of meaningful performance gain. -
I believe, if your 1866 RAM is a 2x8 Gb kit, you can set it up at 2133 with 1.5V, with maybe a 65% success rate. But if they are two separate 1x8 Gb sticks, the chances are poor. I'm extrapolating from my results with 1600...
Last edited: Feb 9, 2017 -
-
I have contacted Corsair support regarding some issues with their CMSX8GX3M1A1600C1 memory and they replied with what we in Russian would call a brush-off. LOL. Next time I'll buy Kingston.
-
First test results with a speedy 1600 RAM. About +15% gain in AIDA64 reported memory speeds compared to the stock 1600 profile.
And another test with the same 1866J speed profile but 1.35V RAM voltage. Less V, the same speeds.
Update: my RAM totally rejects any 2133 speed profile. I have tested 2133K/11-11-11/1.50/1.55/1.60V, 2133L/12-12-12/1.50V and 2133M/13-13-13/1.55V so far. Anyway, 1866J/10-10-10-32/1.35V works just fine.
I have also tried to flash @Tulius 's 1866 RAM dump to my memory and test it. Unfortunately, both sticks refused to boot at 1866... It seems they have a different IC manufacturer and thus don't support that dump's timings.
So, for the price of a Thaiphoon Burner license (11.6 USD in my case) I got (so far) 2x8 Gb DDR3L 1866J and 2x8 Gb DDR3L 1600J RAM instead of 4x8 Gb DDR3 1600J RAM. I recommend only Corsair Vengeance DDR3 RAM with Micron's D9PCP chips. They're worth every penny. As for the other 2 Vengeance sticks... I have not identified their chip manufacturer yet, so I don't know their real capabilities, because they are not friendly (to me).
Attached Files:
Last edited: Feb 10, 2017 -
-
Fire Strike results for 1600 stock 1.35V, 1866J 1.5V and 1866J 1.35V.
Attached Files:
-
Back to the 970M's real TDP. Under the heaviest gaming load (Blade and Soul MMO, 1037.8 MHz / 1252.8 MHz / 1.025 V / no throttling) HWInfo64 measured 67.288 W for the main GPU and 22.924 W for the secondary. The game doesn't support SLI, so only the main GPU is normally active. The test ran for half an hour.
-
I have tested CPU OC parameters and found an optimal configuration for my current system. I started from this message because of the unusual CPU behaviour. Currently I'm using only XTU to OC my CPU; I find it handier than BIOS OC. With the 2x8 Gb 1866J 1.35V sticks installed and CPU Core Current Limit = 78.00 A I get 16.578-16.583 V battery voltage (it was 16.70-16.72 V with 4x8 Gb 1600J 1.5V RAM and CPU Core Current Limit = 112 A).
Attached Files:
-
800 points increase in physics solely going from 1600 to 1866 MHz on the RAM? Is this normal? Damn I better see if I can get 2133 MHz stable lol. -
I'm just testing and finding the best numbers so far... But it seems this increase doesn't come from the 800 => 933 MHz memory clock alone. Further auto-OC of 933 MHz reaches 936-964 MHz, and keep in mind that I kept the initial 10-10-10 timings, which improved latency a little. Also, all this is for DDR3L memory... Basically, yes, this monkey business could give you some juice if: a) your RAM has no write-protection; b) it has decent initial timings; c) it has Micron ICs (memory chips) or another trusted manufacturer.
-
New achievements: 16Gb DDR3L 1866J 10-10-10-32-2T 1.35V => 16Gb DDR3L 1866J 10-10-10-32-1T 1.35V and 16Gb DDR3L 1600J 10-10-10-27-2T 1.35V => 16Gb DDR3L 1600J 10-10-10-27-1T 1.35V.
Unfortunately, I can't force them to work as 32Gb 1600J 10-10-10-27-1T so far. With the full 32Gb of RAM the CR (Command Rate timing) is always 2T; I don't know why.
Update: the 1T configuration is unstable compared to 2T and causes 0x0000003b BSODs while running the XTU benchmark... Back to 2T.
Also, I have checked the influence of 1T on the Fire Strike result - it does nothing (see attached). But 1T speeds up program launches and improves RAM latency. A little... And it sometimes causes BSODs. But not in Fire Strike benchmarking.
Attached Files:
Last edited: Feb 13, 2017 -
-
Well, after making additional tests I decided to keep the 1T CR, because it gives some boost and I'm done with the BSODs. New TDP settings with the i7-3940XM - a new score in HWBOT XTU )
Attached Files:
-
New 3940XM OC (4.4 GHz @ 1.271V) with the new TDP set - a higher score with the "PhysX => CPU" setting in the Nvidia control panel. Both 970M's are at stock, as usual.
Attached Files:
Last edited: Feb 14, 2017 -
-
Interesting thing regarding my Samsung SSD 840 EVO currently in use. Will test it soon.
Update: indeed, Samsung SSDs do drop their read/write speeds when old firmware is used or the SSD's temperature is high.
Last edited: Feb 16, 2017 -
Meanwhile, here is my latest best result in Fire Strike (CPU OC'ed, GPUs stock, 1866J 1.35V RAM, Performance mode in the Nvidia control panel). What I'm actually doing is finding the best settings for gaming & everyday use.
Attached Files:
Last edited: Feb 16, 2017 -
-
The new Nvidia driver (378.66) has arrived. And it's time for another driver mod. If you don't know how to do it, here are the basics:
1. Download the latest driver from Nvidia, unzip.
2. Open nvdmi.inf (Display.Driver folder), search for 05AB and replace all these strings with 0550, save.
13D8 = 970M, 13D7 = 980M, 0550.1028 = our M18X R2 from Dell.
3. SLI toggling without reboot. Search for the string RmDynamicSLIAllowed,%REG_DWORD%,0 and replace it with RmDynamicSLIAllowed,%REG_DWORD%,1 everywhere in nvdmi.inf, save.
4. Delete all the folders except Display.Driver, Display.NView, Display.Optimus (if you don't use it - delete it too), HDAudio, MSVCRT, NVI2 and PhysX.
5. Reboot in safe mode, launch DDU, perform Nvidia drivers uninstall and reboot.
6. Install your modded driver.
Attached Files:
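Steps 2-3 are just text substitutions, so they can be scripted. Here's a minimal sketch; the sample fragment is built from the .inf lines quoted in this thread, and to apply it for real you'd run the same transform over Display.Driver\nvdmi.inf from the unzipped driver:

```python
# Sketch of steps 2-3 of the driver mod as a plain text transform.
# 05AB is the subsystem ID the stock .inf targets; 0550.1028 is the
# M18X R2 from Dell, per the post above.

def patch_inf(text: str) -> str:
    # Step 2: retarget every 05AB device string to 0550.
    text = text.replace("05AB", "0550")
    # Step 3: allow SLI changes without a reboot.
    text = text.replace("RmDynamicSLIAllowed,%REG_DWORD%,0",
                        "RmDynamicSLIAllowed,%REG_DWORD%,1")
    return text

# Fragment in the same format the thread quotes for these cards.
sample = ("%NVIDIA_DEV.13D7.05AB.1028% = Section339, "
          "PCI\\VEN_10DE&DEV_13D7&SUBSYS_05AB1028\n"
          "RmDynamicSLIAllowed,%REG_DWORD%,0")
print(patch_inf(sample))
```

After patching, DDU in safe mode and the reinstall still follow as in steps 5-6.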
Last edited: Feb 16, 2017 -
-
I have tested 378.66 driver. Single 970M vs 970M SLI in Fire Strike 1.1 (SystemInfo v4.48.599):
total score +58%,
graphics score +87%,
physics score -0,3% (i.e. ~0, no change),
combined score +32%.
All tests were performed with the 3940XM at 43x-41x-41x-41x-80W(PL1)-80W(PL2)-28Sec(PL1 time)-112A-39,063V, 1866J RAM and "Let the 3D application decide" in the Nvidia control panel.
Also, for CPU-does-PhysX vs GPU(Auto)-does-PhysX:
total score +0,233%,
graphics score +0,244%,
physics score -0,164%,
combined score +0,409%.
The second result shows that the current CPU OC settings make the CPU about as powerful as the secondary GPU. For PhysX purposes, of course.
Attached Files:
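For anyone wanting to reproduce the math, the percentages above are just (new - old) / old. A quick sketch; the scores below are made-up placeholders chosen only to mirror the quoted SLI-vs-single deltas, not my actual results:

```python
# Percentage change between two Fire Strike runs.
def pct_change(new: float, old: float) -> float:
    return (new - old) / old * 100.0

# Placeholder scores: single 970M vs 970M SLI (illustrative numbers only).
single = {"total": 10000, "graphics": 11000, "physics": 12000, "combined": 5000}
sli    = {"total": 15800, "graphics": 20570, "physics": 11964, "combined": 6600}

for key in single:
    print(f"{key}: {pct_change(sli[key], single[key]):+.1f}%")
# total +58.0%, graphics +87.0%, physics -0.3%, combined +32.0%
```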
Last edited: Feb 16, 2017 -
-
Oh and memory speed is very relevant to some games apparently, I've been quite active in this thread over on Fallout 4's Steam Forum, check it out:
https://steamcommunity.com/app/377160/discussions/0/135509024339143828/?ctp=2
https://steamcommunity.com/linkfilt...cle/1171-ddr4-4000-mhz-performance/page3.html -
%NVIDIA_DEV.13D7.05AB.1028% = Section339, PCI\VEN_10DE&DEV_13D7&SUBSYS_05AB1028
replace with
%NVIDIA_DEV.13D7.0550.1028% = Section339, PCI\VEN_10DE&DEV_13D7&SUBSYS_05501028
There will be 3 such strings (Section339, Section341 and Section340).
Also, you will need to replace this
NVIDIA_DEV.13D7.05AB.1028 = "NVIDIA GeForce GTX 980M"
with this
NVIDIA_DEV.13D7.0550.1028 = "NVIDIA GeForce GTX 980M".
The other part of the instruction (SLI) is valid for 980M without changes. -
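Since there are exactly 3 Section strings plus the display-name line, a quick sanity check after editing is to count leftover 05AB occurrences. A rough sketch; the fragment below just mimics the .inf lines quoted above:

```python
import re

# Mimics the four 980M lines described above (3 Section entries + name line).
inf_text = (
    "%NVIDIA_DEV.13D7.05AB.1028% = Section339, PCI\\VEN_10DE&DEV_13D7&SUBSYS_05AB1028\n"
    "%NVIDIA_DEV.13D7.05AB.1028% = Section340, PCI\\VEN_10DE&DEV_13D7&SUBSYS_05AB1028\n"
    "%NVIDIA_DEV.13D7.05AB.1028% = Section341, PCI\\VEN_10DE&DEV_13D7&SUBSYS_05AB1028\n"
    'NVIDIA_DEV.13D7.05AB.1028 = "NVIDIA GeForce GTX 980M"\n'
)

print(len(re.findall(r"SUBSYS_05AB1028", inf_text)))  # 3 such strings, as noted
patched = inf_text.replace("05AB", "0550")
print(len(re.findall(r"05AB", patched)))              # 0 once fully patched
```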
I have Dell SupportAssist 1.3.6855.61 installed. If I launch Checkup => Scan a Specific Device => Memory => Memory Test => Run Test, I always get some errors with 16, 24 and 32Gb of RAM installed. These errors differ for the various memory modes (single or dual) and they also differ by the memory slots used (A, B, C or D). The inner 2 slots are marked C and D (or 4 and 2). The outer slots are A and B (3 and 1). If you read the official Dell manual for the M18X R2 you'll see that for a dual-core CPU you can use slots 3 and 1 only. This is interesting, because they don't say any two slots are usable; they state the exact slots. So, I did some tests to find out the difference between all four memory slots in the M18X R2.
All tests were performed with default memory clocks (800 MHz), timings (10-10-10-27-2T) and 1.35V. First, all four of my slots work. The same goes for all four RAM sticks. If I insert one 8Gb stick into each slot in turn and run SupportAssist's memory test, I see no errors at all. With two RAM sticks inserted the situation is different - there are errors, and the number of errors depends on:
1. RAM stick's natural stability (at default and OC timings/clocks) - ability to work without BSODs under some testing or gaming load,
2. memory slot used (1, 2, 3 or 4),
3. memory chips temperature upon tests (initially cooler sticks show less errors),
4. crosslinking configuration between all four RAM sticks installed (two sticks from the same set being installed into 3 and 1 slots (that's dual mode) or into 1 and 2 slots (single mode)).
Conclusion:
1. The inner slots (4 and 2) are weaker and unstable, perhaps due to some overheating issue. I recommend installing only stable RAM sticks in them (stable means they work without BSODs at default and OC settings).
2. If you, like me, have two speedy RAM sticks (2x8Gb) and two unspeedy ones (2x8Gb) and want to maximize overall system performance, you should use the speedy sticks in slots 3 and 1 only and forget about slots 4 and 2. But if you want to maximize your system memory and use all 32Gb, you have 2 options with fewer memory errors:
a) the two unspeedy sticks go to slots 1 and 2, the two speedy sticks go to slots 3 and 4. This crosslinking configuration (natively single and mixed) gives you the minimal error count (only 3 errors), faster read/write/copy memory speeds and lower latency;
b) the two unspeedy sticks go to slots 1 and 3, the two speedy ones to slots 2 and 4. This configuration is natively dual, has only 4 errors, higher latency and lower read/write/copy memory speeds.
The other two possible configurations show 5 errors each.
3. The M18X R2 (at least my motherboard) has some issues with the RAM memory controller - only one 8Gb stick per slot is an error-free configuration, while sets with 16, 24 and 32Gb total memory work with some memory errors, whose number multiplies with OC and further chip heating due to the absence of active RAM cooling.
If someone can confirm all this with other RAM (not Corsair Vengeance) and another M18X R2, that would be great. Thanks in advance.
Last edited: Feb 18, 2017 -
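To keep the 32Gb options above straight, here is the same data as a tiny lookup. The error counts are the ones from my SupportAssist runs; the layout labels are just shorthand:

```python
# Error counts per 4-stick layout, from the SupportAssist tests above.
configs = {
    "unspeedy in 1+2, speedy in 3+4 (mixed single)": 3,
    "unspeedy in 1+3, speedy in 2+4 (native dual)": 4,
    "other layout A": 5,
    "other layout B": 5,
}

# Pick the layout with the fewest memory-test errors.
best = min(configs, key=configs.get)
print(best, "->", configs[best], "errors")
```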
Also, motivated by the discovery that memory speed is hugely influential on performance in Fallout 4, I managed to bump my desktop PC's memory speed from 2133 MHz to 2400 MHz!
-
-
These GPU read/write memory speeds look like RAM speeds (single-channel mode)...
Here you can read some interesting info about GPU bus width and bandwidth. Especially, when it comes to SLI / CF configs, it's important to know that:
SLI and CrossFireX systems for the most part "add" the memory bandwidth for vRAM access. 2-way SLI of 192GB/s cards? 384GB/s. 3-way SLI of 192GB/s cards? A cool 576GB/s. Tossed three 980Ti cards in SLI? Enjoy a sexy ~1TB/s memory access bandwidth. Now, this doesn't affect memory "fill" time (that is still limited to each card's bandwidth, your RAM and your data storage, and likely your paging file too), and the multiGPU overhead will not allow you to see a true doubling of bandwidth, but the benefits definitely exist.
So, for a single 980M you have a 256-bit bus width and 160,4 GB/s bandwidth; 980M SLI has 2 * 160,4 GB/s bandwidth. The same goes for 970M / 970M SLI: a single 970M has 192-bit and 120,3 GB/s, while 970M SLI has 240,6 GB/s. So, another point for the 970M SLI config vs. a single 980M is higher initial memory bandwidth at stock.
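The quoted figures check out with the usual GDDR5 formula: bandwidth = (bus width / 8) * effective data rate. A quick sketch; the ~5012 MHz effective memory clock is the commonly listed stock value for these cards, assumed here rather than taken from the post:

```python
# GDDR5 bandwidth: bus width in bytes times effective data rate.
def bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    return bus_width_bits / 8 * effective_mhz / 1000  # GB/s

gtx_970m = bandwidth_gbs(192, 5012)  # single 970M, 192-bit bus
gtx_980m = bandwidth_gbs(256, 5012)  # single 980M, 256-bit bus

print(round(gtx_970m, 1))      # 120.3 GB/s, matches the post
print(round(2 * gtx_970m, 1))  # 240.6 GB/s for 970M SLI
print(round(gtx_980m, 1))      # 160.4 GB/s, matches the post
```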
And this is useful too. However, I can't confirm that the Google Chrome SLI issue no longer happens as of the WHQL 340.52 drivers; personally I have seen it with 378.49.
Attached Files:
Last edited: Feb 19, 2017 -
-
@Mr. Fox 's old M18X R2 (i7-3920XM OC'ed and GTX 780M SLI OC'ed, 32Gb 2133 RAM) vs my M18X R2 (i7-3940XM OC'ed and GTX 970M SLI stock, 16Gb 1866J RAM): P17205 vs P16004 (graphics score 20864 vs 19749) or P17205 vs 16097 (graphics 20864 vs 19915) for CPU-does-PhysX preset.
Attached Files:
Last edited: Feb 19, 2017 -
-
Updated Fire Strike results with 378.66 driver. CPU is doing PhysX better now.
Attached Files:
-
-
I wonder, has anyone ever tried to install a 1x16 Gb RAM stick in the M18X R2?
-
Anyhow, dude, in the process of trying to root out a low Combined Firestrike score I remembered how you set Physics to CPU and saw an improvement so I tried that with my desktop and saw a 1k point improvement from 7700 to 8700!
http://www.3dmark.com/compare/fs/11807761/fs/11809878 -
I'm wondering about 16Gb RAM sticks because such sticks are being produced. You could install 2x16Gb into 2 slots. The question is whether the M18X R2 will handle it...
Meanwhile I'm still searching for better RAM timings for my sticks. My goal is to reach 30000 MB/s memory speed in AIDA64; the current average is 28500. Also, I have tightened almost all timings for the 32Gb-1600J mode.
Attached Files:
-
After reading this, this and this you may want to upgrade your Intel ME driver and FW, but I don't recommend doing it. I tried to upgrade the driver from v8.1.0.1263 to the latest 11.0.6.1194 (11.0.5.1189) and that caused many 0x00000124 BSODs in XTU benchmarking with OC'ed RAM, plus several CPU thermal throttling events. Also, the cooling fan spin-up policy changed - the GPU1 fan now started spinning only at 69 C (it was 61-64 C with v8.1.0.1263). So I rolled back to v8.1.0.1263 (July 2012). It seems this version works properly with Ivy Bridge. Anyway, for those of you who may want to upgrade everything and test, here is the useful info:
1. Upgrading the Intel ME firmware may dramatically improve (or degrade) your mobo's OC ability. It can be difficult to restore the old FW version in case of emergency.
2. Upgrading only the driver isn't as dangerous - you can always install the previous version. If you can find it, of course. I spent some time finding my old driver because Intel doesn't list it on their site.
3. The M18X R2 has a 7-series Intel chipset, and the proper latest versions of the driver and FW are 11.0.6.1194 and the v8 1.5MB FW 8.1.65.1586. My current ME FW is 8.0.4.1441 (SKU: PAV).
4. If you have Windows 8/10 it may be necessary to upgrade your ME FW.
Intel ME Firmware Versions 8.x history for 1.5MB for Consumer Systems (Hxx, Mxx, Xxx, Zxx), For Intel Chipset 7-Series systems which come with Intel ME firmware v8.x:
8.1.65.1586
- First ME FW 8.1 version to be released to support ME Software V11.0 for the Windows 10 launch.
8.1.52.1496
- (Fix) PKI configuration failure when a DNS suffix was used as the end of the FQDN.
8.1.51.1471
- [Important] Added support for Windows 8.1.
8.1.40.1416
- (New) Added support for CAM (Continuous Aware Mode) wireless redirection sessions.
- (New) Updated Intel WiDi HDCP to version 2.1.
- (New) Link protection: host connection and reconnection time reduced from 5 seconds to 3 seconds.
- (Fix) Security vulnerabilities.
8.1.30.1350
- (Fix) Enhanced LMSService security features.
8.1.20.1336
- (New) Added support for Microsoft Windows XP Professional x64 Edition.
- (Fix) Unable to establish a TLS session using TLS 1.1.
- (Fix) A WS-MAN command sent with a special parameter would trigger a firmware exception.
- (Fix) ME update errors in some system configurations. (Removed the 'Recovery from ME update errors' section text.)
8.1.2.1318
- (New) Added a 'Recovery from ME update errors' section.
- (New) Added support for Microsoft Windows 7 and Windows XP.
8.1.10.1286
- Initial support for Windows 8.
Last edited: Mar 2, 2017 -
I don't remember which ME version I have in use; that's the 2nd thing I will check after the rebuild.
I'm going to check the status of the CLU on the CPU and also repaste the chipset (but with the thermal compound I have, the MX-4). I'm also thinking of putting a thermal pad on the CPU mosfets (0.5mm Arctic, 6.0 W/mK).
Alienware M18X R2 upgrade (lazy and actual guide)
Discussion in 'Alienware 18 and M18x' started by kaza, Dec 14, 2016.