I recently read an interesting comment while browsing the interwebs: "CPU is frame rate and GPU is texture quality".
For a long while I've been interested in what-does-what when it comes to computers, which hardware bottlenecks which sorts of programs, and things of the like. I know the answers to these questions are out there somewhere, but forming the correct question and finding the search results that actually answer it is often easier said than done.
Having stated this, I'm wondering how valid the statement "CPU is frame rate, GPU is texture quality" really is, and what the bottleneck relationship between CPUs and GPUs looks like. For example, I've read that some games running at low resolutions run slower than at higher resolutions because the GPU is underutilized and the CPU is carrying most of the processing weight (of course, we've all experienced the opposite of this; "But can it run Crysis?!"). What is the relationship, then, between texture quality, frame rate, CPU processing, and GPU processing?
I have other novice questions that I haven't been able to easily find answers to, but I suppose I'll leave this as is for now. I know there's an answer to this somewhere, and if someone could share a link that would be great; if not, I would appreciate it if a person who enjoys explaining could enlighten me, and perhaps others, on this subject.
-
ViciousXUSMC Master Viking NBR Reviewer
The statement you just made is... totally and completely false on all counts.
As for the bottleneck relationship between CPU & GPU, I have always explained it like this.
Ever seen those flip books where you thumb through the pages and create an animation? Each page has a drawing slightly different than the last.
These pages and their animation are your frame rate, aka frames per second.
It takes both the GPU & CPU to make this happen.
It's the GPU's job to "draw" the images onto the pages, and the CPU has the job of flipping the pages.
The higher your graphics settings, the higher your resolution: anything that makes rendering the image harder causes a higher load on the GPU.
This includes how fast the GPU needs to draw the images. If your CPU is very fast and is flipping the pages over 100 times per second, the GPU load will go up higher than if it were only being asked for 60 pages per second.
A bottleneck is not always bad, and without vsync (limiting the frame rate to 60fps) you will almost always have one.
Some things that can take up CPU power are all the physics and calculations the game is running in the background: lighting physics, loading textures into memory, computer AI, etc.
If you're in a game with way too much CPU load due to those reasons, and you have a slow CPU, and you find you reach 100% CPU load while your GPU is very underloaded (like 50% or less), then you have a bad CPU bottleneck that can limit your system performance and create an unideal situation.
A reverse situation can happen where you're playing an old game, say Half-Life 2, that is very, very easy to render (aka draw) for the GPU, and the GPU is capable of drawing well over 300 pages (frames) per second before it's fully loaded. The CPU will continue to flip those pages as fast as possible until either it reaches 100% load or the GPU hits 100% load. So you could find yourself with 100% CPU load again and the GPU only at 70% load. This is NOT a bad bottleneck.
It simply means your system is more than capable of running that game, and since nothing is preventing the system from running at full speed, the CPU will hit 100% load. It's not a bad bottleneck because you're already well over 100fps. Turn on vsync and watch the frame rate cap at 60 and the system load go down.
This will save you power, heat, and wear and tear on the system.
The other kind of bottleneck is a GPU bottleneck. This happens when the GPU itself is limiting the relationship between GPU & CPU because the frames are too hard to draw. Graphically demanding games (or, unfortunately more common, poorly coded games) may take a tremendous amount of GPU power (load) to render only a few frames per second. Think games like Crysis when it was new. The GPU could hit 100% easily and leave the CPU with plenty of free power left over, because the CPU has no new pages to turn while it is waiting on the GPU to draw them, so it has nothing to get loaded on other than whatever other CPU tasks the game is presently sending.
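If it helps, here's the flip book as a toy calculation (a totally simplified sketch with made-up frame times, not how a real engine actually works):

```python
# Toy model of the flip-book analogy: the slower of the two "workers"
# sets the pace, and vsync caps the result. Numbers are illustrative.

def frame_rate(cpu_ms_per_frame, gpu_ms_per_frame, vsync_hz=None):
    """FPS is limited by whichever stage takes longer per frame."""
    bottleneck_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    fps = 1000.0 / bottleneck_ms
    if vsync_hz is not None:
        fps = min(fps, vsync_hz)   # vsync stops the page-flipping early
    return fps

# Old, easy-to-render game: the GPU draws pages faster than the CPU can
# flip them, so the CPU pegs at 100% first -- but you're still over 100fps.
print(frame_rate(cpu_ms_per_frame=3.0, gpu_ms_per_frame=2.0))   # ~333 fps
print(frame_rate(3.0, 2.0, vsync_hz=60))                        # capped at 60

# Demanding game (think Crysis): the GPU takes far longer per frame,
# so it pegs at 100% while the CPU waits around -> GPU bottleneck.
print(frame_rate(cpu_ms_per_frame=5.0, gpu_ms_per_frame=25.0))  # 40 fps
```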
This analogy of mine pretty accurately represents the relationship between CPU & GPU and how a bottleneck happens/works. However, it is not a word-for-word description of what a CPU & GPU actually do when gaming, so don't take it literally. -
To be fair to my slightly hurt ego, I didn't make the statement. I quoted a statement I read and inquired about its veracity.
Thanks man. The flip book analogy was perfect, though it does slightly agree with the rough equivalence of CPU = frame rate, GPU = texture quality. So I'm not sure how the statement I quoted was "totally and completely false on all counts". Maybe it was oversimplified, but not completely false.
Thanks for the answer.
[edit] content switched to new post -
ViciousXUSMC Master Viking NBR Reviewer
I meant that statement you just stated but that sounds redundant so I stated it differently lol.
-
As a follow-up:
I assume, then, that this same bottleneck relationship exists between VRAM and the GPU processor? If the GPU creates X images and the VRAM stores Y images, and X < Y, then there's no reason to increase the VRAM?
I'm curious about this because I'm interested, hypothetically, in whether it would be possible to allocate more shared video RAM (from system RAM) to increase graphics performance. Specifically, in my case, I have a video card with 1GB of VRAM plus 1.7GB of shared system RAM, for a total of roughly 2.7GB of VRAM. I'm curious whether increasing the possible shared system RAM so that the total is, say, 4GB (1GB from the video card and 3GB from system RAM) would increase graphics performance. I'm not sure if it's possible, and I don't think it would be easily done even if it were, but I'm wondering if the logic is sound; and if it is sound, whether it would end up being bottlenecked anyway by the GPU's speed, making anything over X redundant. And, since system RAM doesn't work exactly the same way as VRAM (at least the GDDR kinds), whether the system RAM's performance would be a graphics bottleneck in itself.
OBVIOUSLY it's clear I'm no computer science student, so I hope there are no logical impossibilities or too-incorrectly described functions written above. Again, if anyone has links, or if anyone would like to share their knowledge, I'd be super-duper thankful. -
ViciousXUSMC Master Viking NBR Reviewer
VRAM caches (holds) texture data that is required to draw the scene. Otherwise the card would have to pull that data from the HDD, which would be slower and cause a performance problem.
That is the most common use of it.
The larger the textures and the higher your resolution, the more need for VRAM.
Most laptop gaming can easily get away with 1GB, though some games may push past that.
Desktop gaming in Eyefinity (triple monitors) can easily push past 2GB.
If you do things to the game like tons of texture mods, you can push it past 4GB. If you end up needing more VRAM than is available, the card has to pagefile, and that can cause lag, slow-loading textures, etc.
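If you want rough math on why textures eat VRAM so fast, here's a back-of-envelope sketch (uncompressed sizes; real games compress textures, so actual numbers vary):

```python
# Uncompressed texture size: width * height * bytes per pixel (RGBA = 4).
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

print(texture_mb(1024, 1024))   # 4.0  -> a 1K texture is ~4 MB
print(texture_mb(4096, 4096))   # 64.0 -> a single 4K texture is ~64 MB
# Mipmaps add roughly another third on top. A texture-modded game with
# hundreds of 4K textures is exactly how you blow past 4GB.
```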
Shared system RAM never works as well as native GPU RAM, as it's not as fast nor on the GPU's bus, but it's still better than loading from the HDD.
But much like system RAM, more VRAM does not really increase performance; it just prevents performance issues. -
What ViciousXUSMC says is entirely correct. The bottom line is that it is an intricate balance between CPU and GPU that produces both the frame rate and the displayed image. More vRAM won't help with low-performance video cards because the bandwidth isn't enough to push that amount of data through the pipe at any reasonable frame rate. Your video RAM is used as a frame buffer as well as for storing textures. The higher the resolution, the more memory is consumed by the frame buffer(s), and adding more and/or higher-resolution textures to the scene requires processing time.
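To put rough numbers on the frame buffer part (illustrative arithmetic only; drivers allocate more than this in practice):

```python
# Frame buffer memory: width * height * bytes per pixel, times the
# number of buffers (double or triple buffering).
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    return width * height * bytes_per_pixel * buffers / 1024**2

print(framebuffer_mb(1920, 1200))   # ~17.6 MB double-buffered at 1920x1200
print(framebuffer_mb(5760, 1080))   # ~47.5 MB across three 1080p monitors
# The buffers themselves are small; it's textures that dominate VRAM use.
```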
More vRAM =/= more performance except in cases of very high resolution. For anything under 1920x1200, 1GB is more than sufficient. -
From what I understand, when you move, it is a change of picture in the game: the CPU generates a matrix, which you multiply with the previous one, and from that your next frame is generated; the GPU generates the next frame using this multiplier matrix. And usually generation of the new frame is the bottleneck operation; after all, turning right or left is already a preprocessed move (up to some scale), and calculation of the multiplier matrix is not too tough.
-
masterchef341 The guy from The Notebook
I see what you're getting at, and I think it's misguided. If you're going to try and come up with the math, there's no point looking at it any way other than the correct way, and it's simply much more complex than this.
-
It seems to me that most laptops (or even desktops, for that matter) are bottlenecked by the graphics adapters. In my case, in games, the processor is bottlenecked by the GPU, which in turn is bottlenecked by the VRAM (I noticed that overclocking the video memory yields far better results per MHz than overclocking the core/shaders, which was a bit surprising to me).
-
ViciousXUSMC Master Viking NBR Reviewer
Most, but not all, games hit a GPU bottleneck before a CPU bottleneck.
Not always the case, though.
But it's not the GPU being bottlenecked by the VRAM. -
vRAM speed can be a factor, just not quantity.
-
And I am NOT an expert in this; it is not related to my field of study. But I know this much: graphics is basically matrix multiplication (at the most basic level; with AA, post-processing, and whatever else, it has changed a lot). By the way, I mean the locations of objects, NOT the colors at the pixels.
-
masterchef341 The guy from The Notebook
I'm sure there's matrix multiplication going on. I thought you were referring to the content of the pixels themselves, as if it were some sort of transformation based on the previous state of the pixels. That made no sense. Since you've clarified, carry on!
Just so we're clear on what they do:
It's complicated, and dynamic (it changes as the technology evolves).
Originally, the CPU would feed the GPU a list of coordinates for the GPU to draw. Now, the GPU can manipulate vertices on its own, apply special effects (primarily programmable shaders), etc.
That's the hardware's capability. The interaction that ends up happening is dependent on the implementation.
The big issue I have with what you seemed to be saying was that you were somehow matrix multiplying using the previous frame as one of the inputs to that operation in order to generate the subsequent frame. This is not right. -
If you want to rotate an object in 3D space then you'll have matrix multiplication going on.
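For example, here's roughly what that looks like (just a numpy sketch of the math, not actual engine or driver code):

```python
import numpy as np

# Rotating an object's vertices with one matrix multiply -- the kind of
# transform a GPU applies to vertex coordinates every frame.

def rotation_z(theta):
    """4x4 homogeneous matrix for a rotation about the Z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# A unit square's corners as homogeneous coordinates (x, y, z, 1).
vertices = np.array([[0, 0, 0, 1],
                     [1, 0, 0, 1],
                     [1, 1, 0, 1],
                     [0, 1, 0, 1]], dtype=float)

# Rotate 90 degrees: each new vertex is M @ v.
rotated = (rotation_z(np.pi / 2) @ vertices.T).T
print(np.round(rotated, 3))   # (1,0,0) -> (0,1,0), and so on
```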
As for what bottlenecks a GPU internally, it really depends on how a game is using it and the specs of the hardware underneath. Of course, if you had a 580 with DDR3 vRAM, then overclocking the RAM would lead to a much higher performance increase; however, if you loaded an ancient GPU with GDDR5, you'd be better off OC'ing the core, since it simply doesn't have the muscle to use the speed GDDR5 offers.
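Quick math on why that is (hypothetical card specs, purely illustrative):

```python
# Peak VRAM bandwidth: effective memory clock * bus width in bytes.
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# The same hypothetical 256-bit card with DDR3-like vs GDDR5-like clocks:
print(bandwidth_gb_s(1800, 256))   # ~57.6 GB/s -- a fast core starves here
print(bandwidth_gb_s(4000, 256))   # ~128 GB/s  -- headroom a weak core can't use
```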
It's like with CPUs. At the moment, RAM speed (system RAM this time) has very little effect on the speed of a system, since you rarely ever max out the bandwidth. If you had a CPU one hundred times faster, RAM speed would make much more of a difference, since the CPU would otherwise be waiting around doing nothing for a much larger proportion of the time. -
:) I mean just the locations of the objects, basic movement of the picture, and so on. However, with the introduction of the new generation of graphics cards (GPUs; there is a reason they are called GPUs), as you are saying, the GPU also does a ton of work. Dynamic shadows and lighting were not possible back in the day, but now the GPU does these calculations.
-
masterchef341 The guy from The Notebook
I still think you're missing the idea. Try this book: Fundamentals of Computer Graphics by Peter Shirley, Michael Ashikhmin, and Steve Marschner (ISBN 9781568814698).
-
masterchef341 The guy from The Notebook
You can use matrix multiplication on the vertex coordinates of an object to get the vertex coordinates for that object after performing some basic operation on it, like rotating. That's spot on.
But you keep bringing up the idea of doing matrix multiplication on the picture, which is not spot on. -
:) Anyway, this took a lot of time! Let's talk about the Intel 520 SSD, almost about to be released.
-
masterchef341 The guy from The Notebook
This sounds spot on now. Perfect.
-
Awesome how the discussion has developed. Many answers have wonderfully fallen into place.
-
Ahh, I've remembered another simplistic statement whose truthfulness I wonder about.
Regarding overclocking, is it core overclock = crashes/bluescreens, while memory overclock = artifacts? I know core overclocking can also cause artifacts that can be picked up by stress software with artifact detection (like OCCT), but I'm wondering if this general rule of which-overclock-causes-what-effect is somewhat correct. -
Core overclock instability usually rears its head as lockups or crashes. Memory instability shows up as artifacting; push it too high and it can also cause lockups/BSODs. Core overclocking can cause some graphical anomalies too, though not necessarily artifacting: colored blotches on the screen, speckled colored pixels across the screen, etc.
-
Personal experience with my graphics card says that I'll artifact from a memory OC before I blue screen, but as soon as I go even one notch too high on the core, I BSOD and the drivers reset.
However, we all know that a correlation does not prove a theory true; it's just what happens in my case. -
I would say that in most cases it is true. However, both symptoms can sometimes come from either the memory or the core, so while the rule generally holds, it's not an absolute.
-
When someone recommends playing a game at a lower resolution, say 720p, are they recommending you play in windowed mode or fullscreen? Obviously I can do both and check the frame rate increase myself, but I'm wondering what the general tendency is. Fullscreen will improve fps by a reasonable amount while lowering the image quality, while windowed will maintain image quality but make the picture smaller. What does the majority tend to do? Try both and see which is best for them?
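For scale, the pixel math behind that recommendation is simple (fullscreen case):

```python
# A GPU-bound game's load scales roughly with the number of pixels drawn.
pixels_1080p = 1920 * 1080
pixels_720p = 1280 * 720
print(pixels_1080p / pixels_720p)   # 2.25 -> 720p shades 2.25x fewer pixels
```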
-
Man, just never play a game in windowed mode unless you are working at the same time...
-