The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, to make sure the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Is the shader clock related to the GPU clock, and how?

    Discussion in 'Gaming (Software and Graphics Cards)' started by Gremo, Oct 14, 2010.

  1. Gremo

    Gremo Notebook Geek

    Reputations:
    25
    Messages:
    91
    Likes Received:
    0
    Trophy Points:
    15
    I'm just playing around with overclocking my GT 240M. I got lucky with the memory and shaders, but not with the GPU clock :( . Currently: GPU 550 MHz (+0), memory 890 MHz (+100), shaders 1375 MHz (+165). I'm a little impressed by the shader overclock :eek: . Anyway, my questions are:

    1) Should I keep the shader clock in sync with the GPU clock? Can keeping it out of sync degrade performance?
    2) What is the equation for this? (Original: 550 -> 1210; for the GT 330M it is 575 -> 1275.)

    Can't find any useful information on the net. Thanks :)
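    For what it's worth, the stock numbers quoted above already suggest the relation: the shader clock ships at a fixed multiple of the core clock. A minimal worked check in Python (the dictionary name and layout are illustrative; the clock pairs are the ones from this post):

        # Ratio implied by the stock clocks quoted above (core MHz, shader MHz).
        stock = {
            "GT 240M": (550, 1210),
            "GT 330M": (575, 1275),
        }

        for gpu, (core, shader) in stock.items():
            print(f"{gpu}: {shader} / {core} = {shader / core:.3f}")

        # GT 240M: 1210 / 550 = 2.200
        # GT 330M: 1275 / 575 = 2.217
        # Both come out near 2.2, so the factory linkage on these parts
        # looks like shader clock ~ 2.2 x core clock.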
     
  2. Paralel

    Paralel Notebook Evangelist

    Reputations:
    57
    Messages:
    396
    Likes Received:
    0
    Trophy Points:
    30
    I would imagine it can degrade performance if the shaders end up waiting on data from the other GPU units, so it could act like a bottleneck. But other than that, it probably isn't an issue.
     
  3. ryzeki

    ryzeki Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    The shader clock isn't that hard to raise compared to the core, or even the memory, depending on its quality.

    I used to run my old 9800M GS from 530/1325/800 to over 600/1500/900 without many issues.

    Not all GPUs are the same, though. Luckily my current Radeon can overclock well, at least on the core; not so much on the memory.

    And well, yeah, there is a ratio between the clocks for a reason, but I can't explain to you why. Most probably it is to keep a balance between what can be processed and what will be processed.
     
  4. Bearclaw

    Bearclaw Steaming

    Reputations:
    463
    Messages:
    1,615
    Likes Received:
    6
    Trophy Points:
    56
    Memory is probably the hardest to raise, since it's pretty much the main piece of the card. Core and shader are both relatively easy to raise.

    On my 9600M GS, the defaults were 450 core, 400 memory and 1075 shader.

    I overclocked it to 650 (+200) core, 510 memory and 1600 (+525) shader without any problems and under 70C on stock cooling, which is kind of miraculous. Even in FurMark, once it hits 72C the fans actually start doing something and it drops back down to 67C.

    Anyway, when I try to raise the memory a bit more, I get automatically downclocked and sometimes the driver crashes.

    BTW, I voted no, as I never locked them and had no problems.
     
  5. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
    You have to keep the core and shader in a 1:2.5 ratio...

    The best I can suggest is not to OC the memory that much... 100 MHz is a lot. Drop the memory OC to 50 MHz and increase the core and shader speeds.
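    If you do hold a fixed ratio while raising the core, the bookkeeping is a single multiplication. A minimal sketch in Python, assuming the ratio to preserve is simply the stock shader:core ratio (the function name is made up for illustration):

        def linked_shader_clock(stock_core, stock_shader, new_core):
            """Shader clock that preserves the stock shader:core ratio."""
            return new_core * (stock_shader / stock_core)

        # GT 240M example from this thread: raising the core 550 -> 600 MHz
        # at the stock 2.2 ratio calls for a ~1320 MHz shader clock.
        print(round(linked_shader_clock(550, 1210, 600)))  # 1320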
     
  6. ViciousXUSMC

    ViciousXUSMC Master Viking NBR Reviewer

    Reputations:
    11,461
    Messages:
    16,824
    Likes Received:
    76
    Trophy Points:
    466
    The shader clock is a subsection of the core clock. It's responsible for rendering, of course... the shader portion of the video rendering.

    I don't think they have to be kept in a specific ratio, though it's generally recommended, because there's rarely any advantage in not doing so.
     
  7. Marecki_clf

    Marecki_clf Homo laptopicus

    Reputations:
    464
    Messages:
    1,507
    Likes Received:
    170
    Trophy Points:
    81
    The ratio GPUclk:shaderclk should be constant.
     
  8. WARDOZER9

    WARDOZER9 Notebook Consultant

    Reputations:
    35
    Messages:
    282
    Likes Received:
    8
    Trophy Points:
    31
    From my extensive 8- and 9-series overclocking experience on desktops, I can say that the GPU and shader clocks do not have to be clocked synchronously, and in most cases you're better off overclocking the GPU clock and leaving the shader clock lower. I tested this time and time again with a 512 MB 8800GT, a 512 MB 8800GTS, a 9800GTX+ and a GTS250, and the results in Crysis, Fallout 3 and Far Cry 2 showed the same trend: the best response came from raising the GPU and shader clocks together, but raising just the GPU clock offered much better performance than raising just the shader clock. The GPU clock will boost temps more than the shader clock, with the highest temperature increase of course coming from synchronous GPU/shader overclocking.

    I'd say overclock synchronously until you reach your limit, then try dropping the shader a tad and see if you can get the actual GPU clock higher; but beware, this more than anything will peak your temps. If taking your shader down 50 MHz or so gets you an extra 20-30 MHz on the GPU clock, then that's worth it, but if a 50 MHz shader drop only gets you 5-10 MHz extra on the GPU, forget it and just put the shader clock back.
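    That rule of thumb is easy to write down. A minimal sketch in Python, with the 50 MHz drop and the 20-30 MHz payoff taken from the post above as illustrative numbers (the function name is made up):

        def worth_trading_shader_for_core(shader_drop_mhz, core_gain_mhz):
            # Worth it when the shader drop buys back roughly 40% of itself
            # in core clock (e.g. -50 MHz shader for +20-30 MHz core).
            return core_gain_mhz >= 0.4 * shader_drop_mhz

        print(worth_trading_shader_for_core(50, 25))  # True  -> keep the trade
        print(worth_trading_shader_for_core(50, 8))   # False -> revert the shader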
     
  9. naticus

    naticus Notebook Deity

    Reputations:
    630
    Messages:
    1,767
    Likes Received:
    0
    Trophy Points:
    55
    LOL why would you have a poll about a question that has a definitive answer?
     
  10. Gremo

    Gremo Notebook Geek

    Reputations:
    25
    Messages:
    91
    Likes Received:
    0
    Trophy Points:
    15
    As you can see from the poll, there is no definitive answer yet... :rolleyes:
     
  11. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    That's weird how you say not to OC the memory. On the Dell/Alienware machines we can OC the memory to hell without any drawback; I can do a 300 MHz OC on the memory of my GPU and nothing will happen. I just stopped pushing because the core wasn't keeping up, so it gave no more benefit.
     
  12. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    A high OC on the GPU's memory is not recommended because it is very easy to damage it: there are no temperature sensors to give you a reading during stress testing.
     
  13. Bearclaw

    Bearclaw Steaming

    Reputations:
    463
    Messages:
    1,615
    Likes Received:
    6
    Trophy Points:
    56
    MSI Afterburner and GPU-Z let you monitor the memory of ATI cards.
     
  14. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Maybe on desktop cards, but last time I checked, notebook GPUs had no thermistors by their memory modules, so it wouldn't be possible to check them...
     
  15. Bearclaw

    Bearclaw Steaming

    Reputations:
    463
    Messages:
    1,615
    Likes Received:
    6
    Trophy Points:
    56
    I can check the memory on my Mobility 5870.
     
  16. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
    I can too... the 5870M seems to be able to give memory temps... I think the ATI 5-series GPUs can...
     
  17. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Good to see things advancing finally then.
     
  18. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    The 4 series also seems to.
     
  19. niffcreature

    niffcreature ex computer dyke

    Reputations:
    1,748
    Messages:
    4,094
    Likes Received:
    28
    Trophy Points:
    116
    ATI users, could you post the difference between your vRAM temps and your core and ambient temps at stock?
    It would be good to have some point of reference... I've always wondered what can be done to get an idea of vRAM temps...
     
  20. Bearclaw

    Bearclaw Steaming

    Reputations:
    463
    Messages:
    1,615
    Likes Received:
    6
    Trophy Points:
    56
    Memory is about 3-4 degrees Celsius higher. On my 5870 the core is maybe 74-75C and the memory is about 79C for me. Ambient is 25C.
     
  21. Ruckus

    Ruckus Notebook Deity

    Reputations:
    363
    Messages:
    832
    Likes Received:
    1
    Trophy Points:
    0
    These are the temps it can provide: the display output, memory controller and core temps.

    The only way to find vRAM temps is to take the laptop apart and measure the temps on the vRAM manually. The problem with that is the temps won't be what you're looking for: with the laptop apart, cooling would be much better than normal.

    What do you want the temps for? Idle? FurMark? 3DMark Vantage?