Yeah, I knew that, but with drivers above 310.90 the core is locked at 925 MHz, and my video memory fails at 2300.
-
Thanks guys! I finally got the balls to OC my GPUs. I was really hesitant at first because of how hot it is in here, but yeah, it had to happen at some point.
I started off with 980/2300 and temps are already getting hotter, and I even forgot to turn Turbo Boost back on, hence the lower physics score. I really envy your temps, your cards run way cooler. I'm going to increase the clocks next time; in the meantime I'll be testing this clock in actual gaming.
-
How do you turn turbo boost on?
-
-
-
Yeah, it's the CPU temps that I'm worried about. With Turbo Boost it reached almost 90 once when I was playing Far Cry 3. It could be badly applied stock paste. I'm not up to reapplying it myself, so I guess the only way to get around this is to turn Turbo Boost off. I'm guessing OC'ing the GPUs more will have little effect, if any, on CPU temps.
Would the temps be cooler if I had opted for an i5? Or a 3632QM? -
From what I have seen, overclocking the GPU increases the temperature of the CPU somewhat as well. I would not recommend disabling Turbo Boost because then you're not getting the performance you paid for. Far Cry 3 is a relatively demanding game CPU-wise and benefits from Turbo Boost. I would recommend using a good cooling stand if needed to keep the CPU below 95 C. Anything past that is a bit too close to thermal shutdown (105 C) for my comfort.
Given the same cooling system, an i5 will run cooler, and an i7-3632QM probably will as well, but then you're sacrificing performance. Even though the i7-3632QM is still a quad core and not much slower in terms of MHz, its thermal behavior might be different since it's a lower-TDP chip, and that might decrease performance as well. -
Here's my highest score, at the highest stability I could get with 3DMark. Haven't tested it with Furmark.
System: Lenovo Y400, i7-3630QM, 1 x GT 650M
Settings: 720p, 1120 MHz GPU core, 2500 MHz GPU memory, CPU in high performance mode
3151 baby! -
What is your ASIC quality by the way? -
It was fully stable in all the games I've been playing for a week (like BioShock at max settings, 720p) and in Furmark at 1105 MHz core, 2555 MHz memory. With that I did get one little green artifact during the game's ending, which I attributed to the memory, so I brought that down to 2500. I'll do some Furmark testing when I get home.
What's ASIC quality? -
Mine shows 88.7%. Is that typical?
-
Highest stable overclock is 1125 core and 2750 memory with the 310.90 drivers. Stable in Crysis 1 and Crysis 2, 3DMark 11, and Furmark (2800 memory gave me artifacts).
When running in SLI, I have both cards overclocked to 1100 core and 2700 memory.
No issues with benchmarks (3DMark) or games (Crysis 1 and Crysis 2).
I also have an i5 Y500, so all temperatures stay in check. -
-
ASIC is basically a number assigned at the fab indicating the quality of the section of wafer that the GPU was cut from. The center of the wafer is always best, and quality goes down as you move toward the edges. Binning saves the fab a lot of money because they can simply take the lesser wafers and sell them as lower models with reduced performance instead of throwing them out. For example, the best cuts of GK104 go into the GTX 690 and GTX 680 while the lesser ones go into the 670 and 660 Ti.
In our case, I'm thinking they put the highest-ASIC GK107 chips in the GTX 650 and GTX 660M, and then our GT 650M and the GT 640M get the lower ones. -
The core speeds are the same though.
Octiceps, I read a thread on here where most people were showing 100% ASIC for their GT 650M cards. -
Anyway, yours seems to be a pretty good number considering this is a laptop. It's already 10% higher than mine. My Ultrabay GPU's ASIC quality is in the 60s lol. -
-
Recently got my Y400 with SLI GT 650Ms. I tried to use Nvidia Inspector and the bat files provided in this thread, but no luck; it seems to be locked to my default clocks. Running 3DMark11 in SLI got me 3770, which sounds a bit low...
-
See here: Lenovo Y400 / Y500 - unlocked BIOS / wlan whitelist mod -
-
-
As a reference to post 87, these are my batch files with the new drivers.
OC:
@echo off
C:\nvidiaInspector\nvidiaInspector.exe -setBaseClockOffset:0,0,135 -setMemoryClockOffset:0,0,250 -setGpuClock:0,2,925 -setMemoryClock:0,2,2250 -forcepstate:0,0
(alternatively, use -forcepstate:0,5)
Default clock:
@echo off
C:\nvidiaInspector\nvidiaInspector.exe -setGpuClock:0,2,790 -setMemoryClock:0,2,2000 -setBaseClockOffset:0,0,0 -setMemoryClockOffset:0,0,0 -forcepstate:0,16
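If you're running SLI and want to script both cards, the same switches could presumably be repeated with GPU index 1 for the Ultrabay card. This is only a sketch based on the syntax above; I'm assuming the second card enumerates as index 1 and uses the same 790/2000 stock clocks, so verify the index and the clocks in Nvidia Inspector before running it.
OC (both cards):
@echo off
REM index 0 = built-in GPU, index 1 = Ultrabay GPU (assumed; check in Nvidia Inspector)
C:\nvidiaInspector\nvidiaInspector.exe -setBaseClockOffset:0,0,135 -setMemoryClockOffset:0,0,250 -setGpuClock:0,2,925 -setMemoryClock:0,2,2250 -forcepstate:0,0
C:\nvidiaInspector\nvidiaInspector.exe -setBaseClockOffset:1,0,135 -setMemoryClockOffset:1,0,250 -setGpuClock:1,2,925 -setMemoryClock:1,2,2250 -forcepstate:1,0
The default-clock file would get a matching second line with index 1 in the same way. -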
Never mind, I got it. I had to re-browse the thread, but to clarify, it was because I hadn't made an Nvidia Inspector FOLDER for the batch files to run the command from. It works now.
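For anyone else who hits this: the batch files above call C:\nvidiaInspector\nvidiaInspector.exe by its absolute path, so the executable has to actually sit in a folder at that exact location (or you edit the path in the .bat to wherever you unzipped Nvidia Inspector). Something like the layout below is what I'd assume, with the .bat names just being examples:
C:\nvidiaInspector\
    nvidiaInspector.exe
    oc.bat
    defaultclocks.bat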
-
Y500 650M SLI
Turbo Boost Off, both cards at 1100/2500
3DMark11 score of P4502. I can bench it at higher clocks but it isn't 100% stable. I have a suspicion that it's my UltraBay card holding me back. Forcing Turbo Boost on gives minimal gains and raises temperatures significantly.
I'm using the latest drivers and a modified vbios. -
-
If I stress test with the OCCT PSU test (which simultaneously runs a Linpack calculation and an SLI-enabled Furmark-type test) I'll get ~95 C on the processor (this seems to be the actual peak temperature @ 2.4 GHz; there is no throttling), 70 C on the built-in card, and 72 C on the Ultrabay GPU. Running 3DMark11 gives a max of 82 C on the CPU and the same 70/72 on the GPUs.
-
-
I avoid tests like OCCT because my CPU passes 100 C in less than a minute. There's a reason Lenovo programmed the CPU to throttle by default under a simultaneous GPU and CPU workload: the CPU runs way too hot.
-
I guess I should have clarified throttling to mean throttling below its constant (non-Turbo) clock, which it will do above a certain temperature. I like OCCT as a stress test because it provides some headroom, since thermal performance will degrade over time due to dust buildup, etc.
Turbo Boost certainly makes a difference in physics score, but very little difference in the overall 3DMark score or real-world FPS. I have tested with Throttlestop enabled and the temperature tradeoff is not worth it in my opinion. Real-life gaming performance seems to be GPU-limited, even in the overclocked SLI configuration. -
-
If you don't mind testing, what temps do you get with your OC and Throttlestop enabled during a 3DMark 11 run?
-
-
The reason I ask is that I get unacceptably high temperatures with Throttlestop enabled (>100 C max) and the cards OC'd, but I think that might occur as a result of the CPU test, which, like Prime95 or OCCT, isn't a realistic usage scenario. I might look into that tonight.
-
Yeah, Prime95 and Linpack are unrealistic CPU usage scenarios, and I can easily push my CPU to thermal shutdown with those tests. -
Unacceptably high with either 3DMark or Prime95 (hottest core maxed out at 103). I was hypothesizing that the 3DMark 11 CPU test is responsible, since it is unrealistic for the same reason as Prime95/OCCT. I only have the basic edition, but I might just quit before the CPU test to see what my temps are on the graphics tests only. The temperature with Throttlestop is acceptable when I'm just gaming, but the tiny performance increase isn't worth the significantly higher temperatures.
-
-
I'm not in front of my computer right now, so I may be misremembering the max temperatures. Otherwise, what may be occurring is a combination of two factors: your test was run with a cooler (which would get progressively more effective as temperatures rise above ambient, meaning you could see very little effect below 90 C and a noticeable effect above it, keeping your temps at around 90), and your run was at stock clocks without forcing p-states, which means you had more thermal overhead since your cards weren't working at peak clocks/volts during the CPU test.
I might run it at stock clocks/performance states with Throttlestop on and see what happens. -
It might've been caused by your GPU overclock, especially a high OC on the main GPU, since that shares the same fan and copper heatpipe as the CPU. You should test with everything at stock. I haven't OC'ed my GPU yet, which might be why I haven't experienced that. -
It's the higher GPU clocks.
With stock clocks/performance states and Throttlestop enabled I get a score of P3620 and a physics score of 7563. Max CPU temp is 96 (which I expect to be higher than yours since my current airflow environment around the laptop isn't the greatest), and not in the 100s like when both Throttlestop and the GPU OC are enabled.
In any case, I have no intention of using Throttlestop considering I get >4500 with the GPU OC while CPU temps stay below 80. Any real-life usage that is CPU-limited will likely have the processor turbo-ing anyway, since the GPU won't be near full load.
Interestingly enough, once OC'd past a certain point, the main GPU stays cooler than the Ultrabay one, despite having to share its thermal dissipation potential with the CPU. -
Anyway, I'd find some way to get that max CPU temp in 3DMark 11 down to around 90 if I were you. That should give you enough overhead to OC your GPU and not worry about thermal shutdown. I find that the maximum CPU temp in 3DMark is a pretty accurate indicator of the max in a game. -
I can't comment on Crysis 3, but the BioShock: Infinite benchmark gives identical results whether or not Throttlestop is turned on (Throttlestop off was faster by 0.1 fps), indicating that throttling is not occurring in that usage case. I did order an adjustable cooler anyway; I'll see what effect that has on temps in general. Considering it's not noticeably throttling while gaming, I question the wisdom of running Throttlestop, at least without simultaneously monitoring temperatures.
-
I hate to throw free performance away and as long as my CPU stays around 90 C when gaming I'm perfectly fine with letting it run as fast as it can. -
The usefulness of Throttlestop is obviously dependent on the specific program. I was able to get a 4 fps (56->60) increase in the World in Conflict benchmark, but I chose that game because it is particularly CPU-intensive. I'll start using Throttlestop if I can get temps down with a cooler, but with my current airflow setup it's not worth the 0-7% boost it provides at a cost of 15 C. -
That statement about GPU OC headroom would probably be true if the cards were closer to their max operating temperature, but they're so far away that I don't know if it matters at all. I think the only thing that can make this GPU go higher is more voltage, which, not to sound redundant, wouldn't be the best thing for an already too-hot CPU. -
I did a quick Google on this topic and the consensus seemed to be that there was no point to running differently clocked cards in SLI, since they would both operate at the lower frequency. Do you know if there is a point to OC'ing the two cards asynchronously?
I'm asking essentially out of pure interest; I like to leave a little stability headroom on my OCs anyway. -
-
-
Can you tell me how to push the OC on both cards if I am using SLI? Is there a line I need to add to the bat file? -