EDIT: I partially take my word back on this. The Sound Center is actually integrated into the Alienware Command Center, under the Fusion tab. The sound is almost the same as on the R4.
I don't see or hear a difference. I am currently using an unsigned Dolby Home Theater v4 driver, which is better since the stock leveling is pretty terrible, especially in "soft" games like Overwatch.
But I don't like the fact that the Alienware Sound Center isn't installed anymore, and you cannot install it by installing the R4's driver either. So Dell kind of messed up the sound driver choice again!
Sucks for an Alienware machine too
-
-
Not sure if it's any different for the R5, however my R4 has just auto-installed the blocked Windows 1803 update from April... No issues so far.
-
Isn't that because XTU runs a service which starts at startup and does the business? If you disable the service (or set it to manual), no settings will take effect on the next reboot. XTU also self-starts, so even if you do set the service to manual, the XTU app will start it right after logon. Setting it to "disabled" puts you in control if for some reason you do not want XTU messing with your system. The way I know is because I run as "user" and XTU needs the admin password to start, so I can see it asking for it after I log on. Not always, mind you; sometimes it does not bother to auto-start.
Sorry no. First, I set BIOS limits to 110W/110W or to the only other setting of 45W/90W. These are the ONLY two choices you have at the BIOS. After logon you can run XTU or TS and mess with PL1/PL2 ***BUT*** if you have booted with 45W/90W, you will not be able to set or exceed the 45W no matter what, and even if TS or XTU is happily reporting any larger value. And if you have booted with 110W/110W you will not be able to exceed 110W no matter what XTU is reporting.
Now during my tests I start with 110W/110W in the BIOS, then use TS or XTU to set PL1/PL2 to anything, eg 80W/90W, and then observe that the actual running TDP is say, 50W or say, 65W. We know we have hit a power throttle because XTU / TS / HWiNFO are all telling us so. And when we look at the actual TDP, it is, as I said, something completely different from what the PL1/PL2 settings are shown as. And then we summarise: (a) we have a power throttle, (b) it seems to be, say, 65W because the running TDP is a straight line at 65W, (c) I have set the limits to 80W; therefore my limits have been overridden by some other device. Reading the Intel manuals, it appears you can only make suggestions as to the limits, and the governor can and will override you. It is a rather long read and I might allocate time to educate myself so I could potentially write my own "XTU".
I have taken lots of screenshots because I thought "no one is going to believe this".
It is difficult, though, to explain the pictures better than I already have verbally.
In very short summary :
(1) you start a stress test designed to cook the CPU on all cores and you hit all the thermal throttles in the first few seconds; after 3-4 seconds this NEW power throttle kicks in, adjusts power as it sees fit, and keeps the CPU precisely below 94C, and of course all thermal throttles completely disappear for the remainder of the test.
(2) to get some thermal throttles back, you scale down the threads of the stress test to occupy only a few cores and not all six. There, it seems the system gets confused, and individual cores heat up exceeding 100C, and thermal throttles now also kick in, in addition to the ever-present power throttles. It is like whack-a-mole as the stress test threads jump from core to core and the system cannot suppress them. -
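For anyone wanting to cross-check what the governor is actually programming (rather than what XTU reports), the package power-limit register can be read directly on a Linux boot of the same machine. This is only a minimal sketch: MSR 0x610 and the 1/8 W default power unit are standard Intel definitions, but treat the exact field layout as an assumption to verify against the Intel SDM.

```python
import struct

MSR_PKG_POWER_LIMIT = 0x610  # Intel package RAPL power-limit MSR (holds PL1/PL2)

def read_msr(msr: int, cpu: int = 0) -> int:
    """Read a 64-bit MSR via the Linux msr driver (requires root and `modprobe msr`)."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(msr)
        return struct.unpack("<Q", f.read(8))[0]

def pl1_watts(value: int, power_unit_w: float = 0.125) -> float:
    """PL1 is assumed to sit in bits 14:0, in units of 1/8 W (the common default)."""
    return (value & 0x7FFF) * power_unit_w

# Example with a made-up register value: low bits 0x370 = 880, and 880 / 8 = 110 W
print(pl1_watts(0x00DD8370))
```

Comparing this value before and after the throttle kicks in would show whether the limit itself is being rewritten or merely ignored.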
Here it is:
HWinfo shows PL1=PL2=110W.
Top left chart CPU temps just below 94C.
Bottom 4 charts, 2 and 2 side by side, are the throttles. We see a thermal throttle right at the beginning of the stress test and then it goes away and is replaced by all the other power throttles. Effective/running TDP is clearly 66W.
On this screenshot we see PL1=80W, PL2=90W, yet we are power throttling at 63W. Temps just below 94C.
-
Aristotelhs2060 Notebook Virtuoso
Does anyone know if the 4K screen on the R5 supports HDR or not? The HDR option is disabled in Windows 10 and I cannot enable it. Is something corrupted, or does it just have no HDR? Sounds strange...
Last edited: Jul 9, 2018 -
no HDR
@doofus99 you said it's a "new" power throttle... can you prove this? Was it different before?
For me it sounds quite normal: once you hit the thermal limit, the CPU starts to limit its power/clock to reduce heat and stay below its thermal limit...
This is exactly what I expect my CPU to do in case of overheating.
Maybe it's just me... but I don't see your problem... do you want to cook your CPU with temps above 94 degrees? (no offense, just a question)
Last edited: Jul 9, 2018 -
The 17 has never had HDR support (with built in displays). You can connect it to an HDR monitor just fine.
-
Vistar Shook Notebook Deity
@doofus99, have you used RWEverything to see if the EC firmware is indeed taking control of the power limits?
Once the program is loaded, select Memory and Address=00000000FED15900
and see what is reported at offset A0 in the first column (it should be a 32-bit hex DWORD). Mine, for example, reports 00181E0; the low bits, 0x1E0, are 480 in decimal, and 480 / 8 = 60, so 60W.
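If it helps, the hex-to-watts conversion can be sketched like this. Two assumptions (not things RWEverything tells you): the limit occupies the low 15 bits of the DWORD, and the unit is the common 1/8 W RAPL default.

```python
def decode_power_limit(dword: int, power_unit_w: float = 0.125) -> float:
    """Turn a raw power-limit field into watts.

    Assumes the limit value sits in the low 15 bits and the power
    unit is 1/8 W (the usual MSR_RAPL_POWER_UNIT default).
    """
    return (dword & 0x7FFF) * power_unit_w

# The reading from the post: low bits 0x1E0 = 480, and 480 / 8 = 60 W
print(decode_power_limit(0x181E0))  # -> 60.0
```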
-
Vistar Shook Notebook Deity
Thermal throttling and power throttling are different. That is what he is pointing out, that his machine will power throttle even though he has set higher PL limits, instead of just thermal throttling, as before. -
Aristotelhs2060 Notebook Virtuoso
I am pretty sure my previous R5 with the UHD panel had HDR... I could enable it in Far Cry 5 -
Some settings can be retained in the BIOS, which is why it is recommended to restore BIOS defaults after uninstalling XTU.
I didn't know what options you had in the BIOS, so that info was relevant, especially the fact that you can't change those limits with XTU or TS.
The strange part is that your limits do seem to change (or at least that's how PL1 is reported).
I can see in the HWiNFO sensors that your PL1 was as low as 27W and the average 106W.
XTU also shows different limits at "boot" (which correspond to what you set in the BIOS) and active (80W ???), but neither of them is accurate, since you are limited at a lower wattage.
Something seems to be actively interfering with the limits that you've set in the BIOS.
I'm still not sure if this is a bug or if it's something that dell intentionally introduced (like the dynamic throttling).
Since you have different readings for the limits, I wonder how are they displayed in TS TPL window?
It would be great if you could try Vistar Shook's suggestion (to see what is reported at that offset in RWEverything), since the only way to figure this out is by eliminating the variables. -
XxAcidSnowxX Notebook Consultant
So I replaced the paste with Kryonaut, kept the same thermal pads, undervolt -0.135V... any suggestions? Cores 0, 2, 4 seem to be trouble... I attempted the repaste twice, same results...
Attached Files:
-
-
When you open it up next time, take and post pictures of the underside of the heatsink + the CPU/GPU die (paste and pad imprint). I also want to see what the brand-new small CPU vapor chamber looks like.
-
XxAcidSnowxX Notebook Consultant
That's the thing, it seemed like they were a perfect fit, not too squished and not untouched... Plus they seem really soft anyway, really squishable... I'll get some 1mm and 0.5mm pads and try again, I guess... But in your opinion, will those 3 cores equal out with the rest if I replace the pads? -
Yes, this is exactly it.
OK, thanks for the pointer, I had a look at this memory location.
Running without throttles it is 8370, therefore 0x370 = 880 / 8 = 110W, which is my default today.
Then we run a stress test and we see 81E0, therefore 0x1E0 = 480 / 8 = 60W and looking at the actual TDP it is hovering around the 55W-60W !! Perfect.
Then we let it rest and the fans wind down, and we set PL1=49W and it changes to 8188, 0x188 = 392 / 8 = 49W - perfect.
Then I set it down to 28W, and I see 80E0, 0xE0 = 224 / 8 = 28W - perfect.
Now I am setting it to 80W, and I read 8280, 0x280 = 640 / 8 = 80W.
Start Prime95 again it drops to 81E0 = 0x1E0 = 60W. TDP ~= 59W.
60W seems to be my limit today, without GPU stress, room temp 29C, plenty of draft with two fans in the room. Now I shall also stress the GPU to create more heat, and I am expecting to see the 60W dropping a bit. I now have an effective TDP = 45W while this flag reads 81E0 = 0x1E0 = 60W, which is not the actual TDP; I am being throttled at 45W, so we need to discover more flags. I can also see the multiplier, I think, at byte offset C2! Anyway, TDP has now throttled to 42W, which should be 0x150 (well, 8150); I cannot see it anywhere.
So it seems there is a power throttle set by something else, but we still need more info. -
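All the readings above (8370, 81E0, 8188, 80E0, 8280) follow one pattern: bit 15 set, low bits = watts x 8. A small sketch to decode them and to predict the missing 42W value; treating bit 15 as an "enable" flag is an assumption inferred from these readings, not something confirmed by documentation.

```python
POWER_UNIT_W = 0.125  # 1/8 W, matching the /8 divisions in the post

def decode(word: int) -> float:
    """Low 15 bits times 1/8 W, e.g. 0x81E0 -> 60.0 W."""
    return (word & 0x7FFF) * POWER_UNIT_W

def encode(watts: float) -> int:
    """Inverse, with the apparent enable bit (bit 15) set: 42 W -> 0x8150."""
    return 0x8000 | int(watts / POWER_UNIT_W)

for reading in (0x8370, 0x81E0, 0x8188, 0x80E0, 0x8280):
    print(hex(reading), decode(reading))  # 110, 60, 49, 28 and 80 W
print(hex(encode(42)))  # the value to grep for when throttled at 42 W
```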
ah ok. sorry, i misunderstood you.
is this on a fresh win10 installation?
what fan profile is selected in your Command Center? With the "silent" fan profile, the TDP is limited to 45/60w... even with 110w in the BIOS and ThrottleStop. -
Yes fresh Win 10 Pro, minimum bloatware, I believe I have no Dell apps. I have not installed the dreaded command centre - fans in BIOS are set to "performance".
Room temp 27C, TDP: 60W, CPU temp 88C.
With GPU on at full max and after about 1 hour's gaming with P95 in the background I get about 40W out of the CPU at 90C.
captn.ko, I would be very interested if you could run a similar test to see (a) do you get throttled at 94C like me? (b) what power can your heatsink extract out of your CPU, and what are the room temps? -
i don't want to cook my cpu @94+ degrees. Sorry, i will not test this.
This was Battlefield 1. Peak temps in the high 80s... and even this was a bit too warm for me. AVG 78 degrees would be ok, but i don't want to see a peak over 90.
Peak 87.5w and AVG 74w. Ambient temp 24C -
Could you both run Aida64 as in the guide, with the same clocks (stock clocks) and voltage?
-
you know that i ran it a few weeks ago?
here we go brother @Papusan
i quote myself (was posted in the repaste thread)
1st test
- stress CPU, stress FPU, stress GPU
CPU: 4,3ghz (dont know if it was with undervolt or without) CPU with 87- 89 degree avg and 92-97 Peak temps. Peak load 110w, avg 105w with dips down to 3.5ghz (which is still no throttling)
GPU: 1850mhz with undervolt, 63 degree max and 62 avg
2nd test
- stress CPU, stress FPU, stress GPU
CPU: 4,0ghz CPU with 77-81 degree avg and 81-83 Peak temps. Peak load 77w, avg 74w
GPU: 1850mhz with undervolt, 62 degree max and 61 avg
3rd test
- stress CPU, stress GPU
CPU: 4,6ghz CPU with 79-82 degree avg and 83-87 Peak temps. Peak load 75w, avg 72w
GPU: 1850mhz with undervolt, 62 degree max and 61 avg
Result: the AW17 R5 heatsink (or at least the CPU part of it) is not able to handle the i9 8950HK @110w under sustained load while the GTX1080 is at full load, but it's ok for short 110w load peaks.
BUT:
1. The i9 8950HK is a 45w/65w CPU, for which the heatsink is well suited with LM. I would say the heatsink is able to handle up to 85w (ca. 4.1ghz, depends on voltage) with the GTX1080 at full load with LM.
2. stress CPU + stress FPU + stress GPU is a very theoretical load scenario. Even Battlefield 1 @ 4.7ghz, which is one of the most CPU-demanding games, was not able to pull more than 90w (avg 75w)
You can see it in the 3rd test. With more realistic load (still 100%) 4.6ghz is possible without melting the cpu
So.... I (my own opinion) can live with that. -
Stock clocks is enough for him. Then he could test equal and see how it goes. But you should post exactly what voltage/power settings/power plan you used. In short, everything. Easier for him to compare. And the BIOS version.
Last edited: Jul 10, 2018
-
i will check it when i'm back home from work
but i think it's not comparable, because with LM i can cool much more heat than with stock paste... he will throttle @60w down to 40w while my cpu can run 80w without throttling.
-
Then you both could lock the CPU to 4.0 or 3.8GHz for the 6 cores (change only the 6 cores). Everything is possible
-
Thanks guys. From the screenshot I can guess something like 79W @ 85C and 74W @ 83C, with 24C in the room. For comparison, mine is 40W @ 90C with 27C in the room. Both had the GPU at full power at the same time. All I care about is how many watts we can pull out of the CPU, which is the job of the heatsink, fans, thermal interfaces and chassis; I do not care what frequency or what undervolting. By the way, captn.ko, I have noticed that while playing WoW I can increase the CPU frequency and heat the CPU, but there are no extra FPS to be gained, as I am VSync-ed at 60 FPS. So I switch my CPU down to 30x or 35x and it runs cooler. The same with Far Cry 5: I noticed that I was so GPU-bound that increasing the CPU speed offered me no extra FPS.
-
That's normal. If your GPU is limiting (95%+ load), then you won't gain more FPS with a higher clock. But a higher clock helps against frame drops (when GPU load is dropping)... often seen in Battlefield (bf4, bf1, bf5, doesn't matter).
But by reducing the clocks you fight the effect, not the cause... 3ghz vs 4.3ghz... you're talking about reducing the CPU clock down to about 70% of what it should have.
Last edited: Jul 10, 2018 -
XxAcidSnowxX Notebook Consultant
Attached Files:
-
-
Aristotelhs2060 Notebook Virtuoso
Finally! I finished re-pasting using LM and Gelid pads. All I can say is that if you want to use something, use LM. This thing is amazing! My maximum temps hardly reached 83C while playing Far Cry 5 at 3840x2160! I played for about 30 minutes. Maximum GPU temps at 63C (sorry, did not include a screenshot of that by mistake). And high temp spikes are now only history.
If Far Cry 5 temps are so good, I am sure every benchmark is going to be too. I only ran the OCCT test (large files) for about 15 minutes and maximum temps reached 73C! I could not even hear the fans working!
Here are my Far Cry 5 temps (30 minutes at 3840x2160)! The average temps (max 64C) are during gaming, as I started HWiNFO after I started playing Far Cry 5!
https://imgur.com/a/v2MeVAD
Last edited: Jul 10, 2018 -
mhm...
this is a part of your screenshot... average CPU load 67% but average CPU clock only 24x (~2400MHz)... looks like something is going wrong...
can you run Cinebench R15?
-
Thanks and +rep. As expected. Half baked. http://forum.notebookreview.com/thr...r5-owners-lounge.815492/page-57#post-10732418
-
ok guys here we go @Papusan @doofus99
3.5ghz
4ghz
as i said before: 80-90w is the maximum the heatsink can handle while the gpu is at full load
Last edited: Jul 10, 2018 -
If you could balance the heatsink a bit better, your mileage would be better at 4ghz for all cores, I think.
-
maybe, but i am happy with the temps i actually have. @4GHZ i have 90w of constant load with this test... not even Battlefield 1 @4.7ghz can create this load... and if i open the notebook again, temps could be even worse. As long as i haven't seen a better result, i will not touch it again.
atm i'm running a few rounds of Battlefield 4 with the same settings, so that we can see the difference between theoretical and practical load ^^ will post them later -
BS delete pls
-
Finally, that's the benchmark I wanted to see for 17R5.
Here is the thermal limit of my 17R4 for comparison. United HS is BS..
Prime95 v26.6 - (Small FFTs) + Heaven benchmark (Ultra, Tessellation Normal, Anti-aliasing 8x, 2560x1440, High priority process)".
I can play games @46 at 80Cs, but usually I'm not in need to push it that far.
Last edited: Jul 15, 2018 -
just ask
and yes... it seems the R5 heatsink is a bit better.
R5 = 90w with 85 degrees AVG over all cores
R4 = 80w with 92 degrees AVG over all cores
So we can say they improved the heatsink a bit
Don't forget the 6-core die is bigger. This will automatically lower the CPU temp with the same heatsink,
both for average and max temp.
I will call it a minor improvement. And this with a vapor chamber. -
but also 2 cores more
let's call it a minor improvement. That's ok -
-
My laptop finally arrived, repaste and repad will be the first step
-
please post results
found a guide to run CB15 in a loop. Made a 25-loop run for 40, 41, 42, 43
@Papusan
Last edited: Jul 11, 2018 -
Aristotelhs2060 Notebook Virtuoso
Now that I've sorted the temps and CPU/heatsink contact, I am looking for a good heatsink for the Samsung NVMe drives. Has anyone tried different ones? Which one is the best, or at least really good, for the Samsung 9xx NVMe drives? I have one which is good for normal use (temps in the 60s are lower while doing normal things, but under gaming they can go over 80).
So, any well-tested NVMe heatsinks for Samsung 9xx NVMe drives? -
Have you? This was a question from me 2 pages ago. Maybe you can have a look. The picture is a part of your result screenshot.
-
Aristotelhs2060 Notebook Virtuoso
Yes, I have. Maximum temps hardly at 83C (maximum average 65) with Far Cry 5 gaming for 30 minutes at 3840x2160 and without undervolt. If you want me to test anything special, please tell me what. The difference is obvious to me. For me, Far Cry 5 was always the most demanding thing to try, stressing both CPU and GPU (previously I had temp spikes of 96C, and the core package could go even higher). And I am not even talking about the GPU temps reaching a high of 63C under Far Cry 5.
I will post another screenshot when I have time. I am trying to find a way to have more HWiNFO sensor results in one screenshot. HWiNFO with two sensor pages covers the whole screen... Any way to change that?
Last edited: Jul 11, 2018 -
but the average! only 2400MHz... You said you started HWiNFO right after starting FC5... so there should be a much higher clock... a min of 800MHz is way too low too
that's why i asked for Cinebench
-
It could be that the power settings or SST are not set to prefer maximum performance.
This would also explain the fast down-clocking when the usage is low, and those 800MHz. -
sure, but then the temps of 83 max and 64 AVG would be very high for just 2400MHz average...
doesn't matter how you look at it... something seems not right.
The temps would be ok for 4.3ghz stock, and 2.4ghz would be ok for energy-saver mode... but it does not fit together ^^ -
Compared to your AW's heat-dissipating potential, yeah, those temps are high, but 64C avg for a 30w avg power draw (also 83C for 67w max) might be considered ok or "normal" if it isn't LM'd and the pads were not changed (and the HS was not balanced).
It depends on the power draw; if those clocks are low for that power draw, then the voltage is not optimized.
*OFFICIAL* Alienware 17 R5 Owner's Lounge
Discussion in '2015+ Alienware 13 / 15 / 17' started by alexnvidia, Apr 11, 2018.