I'm just playing around with overclocking my GT 240M. I got lucky with the memory and shaders, but not with the GPU clock. Currently GPU: 550MHz (+0), Memory: 890MHz (+100), Shaders: 1375MHz (+165). I'm a little impressed by the shader overclock. By the way, my questions are:
1) Should I keep the shader clock in sync with the GPU clock? Can keeping it out of sync degrade performance?
2) What is the equation relating them? (Original: 550 -> 1210; for the GT 330M it's 575 -> 1275.)
Can't find any useful information on the net. Thanks!
-
I would imagine it can degrade performance if the shaders end up waiting on data from the rest of the GPU, so it could act like a bottleneck. But other than that it probably isn't an issue.
-
The shader clock isn't that hard to raise compared to the core or even the memory, depending on the quality of the chip.
I used to run my old 9800M GS from 530/1325/800 to over 600/1500/900 without many issues.
Not all GPUs are the same though. Luckily my current Radeon overclocks well, at least on the core, not so much on the memory.
And well, yeah, there is a ratio on the clocks for a reason, but I can't explain why. Most probably it is to keep a balance between what can be processed and what will be processed. -
Memory is probably the hardest to raise since it's pretty much the main piece of the card. Core/Shader are both relatively easy to raise.
On my 9600M GS, I had defaults of 450 core, 400 mem, and 1075 shader.
I overclocked it to 650 (+200) core, 510 mem, and 1600 (+525) shader without any problems and under 70C on stock cooling, which is kind of miraculous, because even in FurMark, once it gets to 72 the fans actually start doing something and it drops down to 67.
Anyway, when I try to raise the memory a bit more it gets automatically downclocked, and sometimes the driver crashes.
BTW I voted no as I never locked them and had no problems. -
You have to keep the core and shader in a 1:2.5 ratio...
Best I can suggest is not to OC the memory that much... 100MHz is a lot. Drop the memory OC to 50MHz and increase the core and shader speed instead. -
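For what it's worth, here is a trivial sketch of that sync rule in Python. Note the exact ratio is disputed in this thread: the stock GT 240M clocks quoted in the first post work out to about 2.2:1, not 2.5:1, so the ratio is left as a parameter rather than hard-coded:

```python
# Tiny helper for keeping the shader clock in sync with the core clock.
# The ratio is a parameter because this thread disagrees on the exact
# value: the stock GT 240M clocks (550 -> 1210) give 2.2, while the post
# above says 2.5.

def synced_shader_clock(core_mhz: float, ratio: float = 2.2) -> int:
    """Shader clock (MHz) that preserves the stock core:shader ratio."""
    return round(core_mhz * ratio)

# Stock GT 240M: 550 MHz core -> 1210 MHz shader
print(synced_shader_clock(550))   # 1210
# GT 330M at the same ~2.2 ratio: 575 -> 1265 (actual stock is 1275)
print(synced_shader_clock(575))   # 1265
```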
The shader clock is a subsection of the core clock. It's responsible for, of course, the shader portion of video rendering.
I do not think they have to be kept in a specific ratio, though it's generally recommended, since there's rarely any advantage in not doing so. -
The ratio GPUclk:shaderclk should be constant.
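Working back from the stock clocks quoted in the opening post, that constant comes out to roughly 2.2 (the GT 330M is just slightly off it, so treat it as approximate):

$$\frac{1210\ \text{MHz}}{550\ \text{MHz}} = 2.2, \qquad \frac{1275\ \text{MHz}}{575\ \text{MHz}} \approx 2.22$$

So, approximately, shader clock ≈ 2.2 × core clock for these parts.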
-
From my extensive 8 and 9 series overclocking experience on desktops, I can say that the GPU/shader clocks do not have to be clocked synchronously, and in most cases you're better off overclocking the GPU clock and leaving the shader clock lower. I tested this time and time again with a 512MB 8800GT, a 512MB 8800GTS, a 9800 GTX+ and a GTS 250, and the results in Crysis, Fallout 3 and Far Cry 2 showed the same trend: the best response came from raising the GPU and shader clocks together, but raising just the GPU clock offered much better performance than raising just the shader clock. The GPU clock will boost temps more than the shader clock, with the highest temp increase of course coming from synchronous GPU/shader overclocking.
I'd say overclock synchronously until you reach your limit, then try dropping the shader a tad and see if you can get the actual GPU clock higher, but beware: this more than anything will peak your temps. If tapping your shader down 50MHz or so gets you an extra 20-30MHz on the GPU clock, then that would be worth it, but if a 50MHz shader drop only gets you 5-10MHz extra on the GPU, forget it and just put the shader clock back. -
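That rule of thumb is easy to encode. Here is a toy sketch: the 50/20-30/5-10 MHz figures come from the post above, while the 0.4 threshold is just those numbers turned into a gain/drop ratio, an assumption for illustration rather than a measured value:

```python
# Toy helper encoding the rule of thumb above: trading a small shader-clock
# drop for a core-clock gain is only worth it if the core gain is big enough.
# Threshold is derived from the post's numbers (~20+ MHz core per 50 MHz
# shader dropped is worth it; 5-10 MHz is not), i.e. a ratio of roughly 0.4.

def trade_worth_it(shader_drop_mhz: int, core_gain_mhz: int) -> bool:
    """Return True if giving up shader_drop_mhz of shader clock for
    core_gain_mhz of core clock looks like a net win."""
    if shader_drop_mhz <= 0:
        return core_gain_mhz > 0      # a free core gain is always a win
    return core_gain_mhz / shader_drop_mhz >= 0.4

print(trade_worth_it(50, 25))   # True  -- 25 MHz core for 50 MHz shader
print(trade_worth_it(50, 8))    # False -- put the shader clock back
```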
LOL why would you have a poll about a question that has a definitive answer?
-
That's weird, how you say not to OC the memory. On the Dell/Alienware we can OC the memory to hell without any drawback; I can do a 300MHz OC on the memory on my GPU and nothing will happen. I just stopped pushing because the core was not following, so there was no further benefit.
-
A high OC on the GPU memory is not recommended because it is very easy to damage, since there are no temperature sensors to give you a reading during stress testing.
-
MSI Afterburner and GPU-Z let you monitor the memory of ATI cards.
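Those tools are point-and-click; if you'd rather log temps from a script on the NVIDIA side, the NVML bindings work. A minimal sketch, assuming an NVIDIA card and the pynvml package — note NVML only exposes the core sensor, not memory, which matches what's said below about missing vRAM sensors:

```python
# Minimal temperature-logging sketch using NVIDIA's NVML via the pynvml
# bindings. Assumption: an NVIDIA card with a driver recent enough to
# expose NVML. Only the core sensor (NVML_TEMPERATURE_GPU) is available;
# memory temps cannot be read this way.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(10):                        # log once a second, ~10 s
        temp = pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"core temp: {temp} C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```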
-
Maybe on desktop cards, but last time I checked, notebook GPUs had no thermistors by their memory modules, so it wouldn't be possible to check them...
-
I can check mem temps on my Mobility 5870.
-
I also can... the 5870M seems to be able to give memory temps... I think ATI 5 series GPUs can...
-
Good to see things advancing finally then.
-
The 4 series also seems to.
-
ATI users, could you post the difference between your vRAM temps and your core and ambient temps at stock?
It would be good to have some point of reference... I've always wondered what can possibly be done to get an idea of vRAM temps... -
Mem is about 3-4 degrees higher (Celsius). On my 5870 the core is maybe 74-75 and mem is about 79 for me. Ambient is 25C.
-
The only way to find vRAM temps otherwise is to take the laptop apart and measure the temps on the vRAM manually. The problem with that is the temps won't be what you are looking for: with the laptop apart, cooling would be much better than normal.
What do you want the temps for? Idle? FurMark? 3DMark Vantage?
Is the shader clock related to the GPU clock, and how?
Discussion in 'Gaming (Software and Graphics Cards)' started by Gremo, Oct 14, 2010.