Since everyone else has this thread up in their section I figured why not here eh?
I've seen certain people manage upwards of 850MHz on the core.
And it doesn't matter what GPU you're using, just seeing what everyone can do here.
-
0o0o0oo0 I'm too scared to overclock my GPU.
However, when I do work up the courage, it'll be good to know what stable frequencies look like for the Qosmio's GTX 460M.
Good info -
Remember how the 5770 lost to the 4890 at the same clock speeds? It wasn't the bus width by itself that killed the 5770, it was the memory bandwidth that the narrower bus allowed. The fillrates were identical, but pixel and texture fillrates only matter once you have the bandwidth to feed them.
Note how much higher my bandwidth is than a stock 460M's. My Vantage score rose 1600 points from that alone; the core/shader increase only added another 200. So bandwidth matters a lot for these lesser cards. It's what separates the 4850 from the 4870. Literally. -
I'm using the specs someone else posted on this forum: 800/1480/1600.
I haven't noticed any significant temp increase (maybe +1C?), and it's stable. Your stats seem better though; I really don't know much about OC'ing. Can anyone explain how important pixel fillrate and bandwidth are? I'll try your specs.
EDIT: After changing to your settings, my Pixel Fillrate is still 6.0 GPixel/s, not 18. All the other specs match though...not sure what any of this means. -
-
The biggest benefit of an overclock here is the memory. Use Nvidia Inspector and push the memory as high as it can go. Two clean Vantage runs should prove it stable enough for games.
As for explaining the parts, I'll do my best.
Start with the top of GPUz: you'll see the GPU name, revision, die size, BIOS version, etc. Those are just the basics of the GPU in use. The transistor count, however, is a decent rough gauge of how powerful a GPU is, and we've got 1.17 billion transistors in this chip. Nice for a mobile part.
Next are the shader and ROP (Raster Operator) counts. ATI and Nvidia shaders can't be compared directly; a rough rule of thumb is that five ATI shaders do the work of one Nvidia shader. So when you compare the Mobility Radeon 5870 to the GTX 460M, it's 800 ATI shaders (roughly equivalent to 160 CUDA cores) against 192 CUDA cores, meaning at the same clock rate the GTX 460M would be vastly quicker.
The shaders do the work for the GPU, so the more the better here. The ROP count is also displayed, and these cards have 24, which is a sizable number for a mobile GPU. For example, the highly revered 9800GTX+ only had 16 ROPs, so that's an edge over that card you'll always have. The ROPs determine the pixel fillrate, and the math is simple: core clock speed x ROP count = pixel fillrate. At my 800MHz overclock that works out to 19.2 billion pixels a second.
The other part of that is the TMU (Texture Mapping Unit) count, which isn't displayed. The only way to find that number is to look up a similar card, the GTX 550 Ti for example, and see what fillrates it posts at the same clock speeds. I've since looked it up and found it has 32 TMUs. The texture fillrate uses the same calculation as the pixel fillrate, just with TMUs in place of ROPs.
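The fillrate math above can be sketched in a few lines of Python (a rough sketch; 800MHz is the overclock used in this thread, with the 24 ROPs and 32 TMUs noted above):

```python
# Fillrate sketch: both fillrates are just core clock times unit count.
def pixel_fillrate_gpixels(core_mhz, rops):
    # core clock (MHz) x ROP count, scaled to GPixel/s
    return core_mhz * rops / 1000

def texture_fillrate_gtexels(core_mhz, tmus):
    # core clock (MHz) x TMU count, scaled to GTexel/s
    return core_mhz * tmus / 1000

print(pixel_fillrate_gpixels(800, 24))    # 19.2 GPixel/s at the 800MHz OC
print(texture_fillrate_gtexels(800, 32))  # 25.6 GTexel/s
```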
After all that you'll see the memory amount, memory type, and memory bus-width, along with the total bandwidth you receive.
The memory type is most important here. DDR (Double Data Rate) came first, and DDR2 is just a revision of it. GDDR3 and DDR3 provide double the bandwidth of DDR2, and GDDR5 doubles it yet again. The memory bandwidth itself comes from the bus width: effective memory clock x bus width in bytes (bits divided by 8). Keep in mind that GPUz shows the "real" clock speed for the memory, while MSI Afterburner shows the same thing doubled. So 1500MHz in MSI Afterburner really means it's running at 750MHz. Keep this in mind.
Also keep in mind that since GDDR5 doubles the per-clock transfers of GDDR3, it's effectively QDR (Quad Data Rate). That means you take the base clock speed, 625MHz at stock, and multiply it by four; that effective rate run through the bus width gives your bandwidth. So 625MHz GDDR5 = 2500MHz DDR-equivalent.
GDDR5 is the best out yet. Coupled with a 192-bit bus, 625MHz works out to 60GB/s, and that's gigabytes a second, mind you. The video RAM (VRAM) on your GPU is ridiculously fast, several times faster than your system RAM even. This is the most important part of the GTX 460M: your core and shader can stay at stock and you'll still see a bigger performance boost from pushing the most out of your memory.
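That bandwidth math can be sketched in Python too (assuming the stock 625MHz / 192-bit GDDR5 figures from above):

```python
# Bandwidth sketch: effective transfer rate x bus width in bytes.
# GDDR5 moves four transfers per clock (quad data rate); GDDR3 moves two.
def bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock):
    effective_mhz = mem_clock_mhz * transfers_per_clock
    return effective_mhz * bus_width_bits / 8 / 1000  # MB/s -> GB/s

print(bandwidth_gb_s(625, 192, 4))  # 60.0 GB/s for a stock GTX 460M
```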
Someone might ask, "Where's the proof?" Well, I'll tell you. Remember the HD4850 and HD4870 that ATI put up against Nvidia's 9800GTX and GTX 260? Want to know the difference between the two? One had GDDR3 memory and the other had GDDR5. That's all. I kid you not. Crazy, no?
Or what about the 5770 and the 4890? The 5770 had the same pixel and texture fillrates as the 4890, and the same type of memory too. Guess what made the former slower? The bus width.
5770 - 800 shaders / 16 ROPs / 40 TMUs / 128-bit bus / 13.6 GPixel/s / 34 GTexel/s / 1200MHz GDDR5 @ 76.8GB/s
4890 - 800 shaders / 16 ROPs / 40 TMUs / 256-bit bus / 13.6 GPixel/s / 34 GTexel/s / 975MHz GDDR5 @ 124.8GB/s
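Both spec lines check out with the same quad-data-rate formula; a quick Python sanity check (numbers taken straight from the lines above):

```python
# Same GDDR5 (x4) formula, different bus widths: the 4890's 256-bit bus
# more than makes up for its lower memory clock.
def gddr5_bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000

print(gddr5_bandwidth_gb_s(1200, 128))  # 76.8 GB/s  (HD 5770)
print(gddr5_bandwidth_gb_s(975, 256))   # 124.8 GB/s (HD 4890)
```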
As a last idea, I suggest you downgrade your driver set to 285.62 and overclock then. I achieve way better results from that set.
Hope this clears things up! -
Like Alex I am afraid to O/C the graphics. I used to do it back in the day, but in reality the risks are far too great for the small performance increase they generate. I don't ever want to be looking for a replacement graphics card for my X500, EVER! I had a Dell XPS 1710 whose 7900GTX I O/C'ed, and I sold the machine to a friend. A few weeks later he called and said the screen was all wonky. Damn vid card died. I had to find him a replacement and it cost me dearly. Then the card in my spare 1710 died and I found a replacement for it. The guy claimed it had never been baked. I paid top $ for it, and a few weeks after installing it, it started doing weird things. Within a year that card was dead too. So for the small gains I got from O/Cing, I paid dearly. NOT WORTH IT, PERIOD.
-
Both versions give me fillrate errors on my UL30vt... so I'll only post the highest numbers -
@imglidinhere, thanks for the excellent explanation. I've been using your settings for a couple of days, and it definitely feels smoother. I can't say whether my FPS has gone up, but I'm definitely not seeing the stuttering I used to get sometimes.
@alexUW, SMOKE SKULL, like you guys I was also wary of OC'ing; I held off for almost a year. My advice is to get Nvidia Inspector and HWMonitor and watch the temps. If they don't change after OC'ing, there really shouldn't be any risk to the GPU. I'm glad I'm OC'ing now; the 460M is definitely more capable than Nvidia first let on (by all accounts, the current 560M is about the same as an OC'ed 460M). -
As a minor addition to this, I looked up the Vantage GPU score for a 5770 and compared it to mine.
Stock 5770s usually hit around 8700 on average.
My overclocked GTX 460M gets 8528.
Also, GPUz supposedly reads the fillrates correctly; 0.5.8 is out now and nothing has changed, so apparently my GPU really does have a low pixel fillrate. I'm still hoping that's wrong, since that figure is the absolute maximum for the GPU, and I'll be testing with Vantage at my native resolution to find out.
Highest GPU overclocks
Discussion in 'Toshiba' started by imglidinhere, Jan 16, 2012.