I know most of you are thinking "noob!" right now, and I admit that my knowledge of computer hardware is borderline non-existent, so I come to the gurus of notebookreview for wisdom. What makes a gfx card perform well?
I used to think that VRAM was the only important thing. On the back of game boxes the requirements always say "512MB gfx card required" or something like that.
However, lately I've come to understand that that's only a small part of the equation; for instance, many of the 5000 and 4000 series cards have 1GB of VRAM yet vary widely in performance, and e.g. the GTX 460M has 1.5GB of VRAM and still performs on par with or worse than the 5870 with 1GB of VRAM.
And I suppose this is a rather broad question, so if someone has a link to some article that would be great!
-
Well, it's mainly the clock speeds and the shaders that count.
They don't exactly tell you the clock speeds when they advertise a notebook, but rather just the plain model number and VRAM (like 1.5GB GTX 460M). But really, just by looking at how they name it, you can pretty much figure out which ones are the high-end ones and which are the low-end ones.
-Waits for someone who can explain it better- <_< >_>
I googled and saw this. It gives a rough idea, I guess.
-
Shader count/clocks and memory bandwidth. Mostly... Drivers play a large part also.
-
Go HERE. Give it a minute, it can take a while to load.
It's not 100% accurate but does give a rough idea of how cards perform relative to each other on a grand scale. -
Yeah, I know of Notebookcheck's benchmark list, it's great; this was more just for personal knowledge. I've been trawling around the internet the last couple of months in search of a laptop to buy once I have the money lined up, and just figured it was time to educate myself a bit on computer hardware.
Anyway, thanks guys, I think I have a vague knowledge of the basics now.
-
I'm not nearly as into (or knowledgeable about) computers as I used to be. Right now the only things that matter to me are a card's benchmarks and how cool it runs. I'm a bit embarrassed to admit that clock speeds and shaders mean very little to me next to the overall benchmarks.
-
VRAM size is mostly useful for texture storage. More VRAM = higher detail textures = higher playable resolutions. It also allows AA and other nice effects to be enabled. But after a certain point, more VRAM doesn't help. For a 1366x768 display, about 512MB VRAM will run most everything. At 1920x1080 you should be looking for 1GB or so. Much more is overkill, less is a huge performance penalty and will likely limit which effects and settings you can enable.
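To put some rough numbers on why resolution alone doesn't eat much VRAM (it's really textures and AA that do), here's a quick back-of-the-envelope sketch in Python; the buffer counts and sizes are simplified assumptions, not exact figures for any card:

# Rough framebuffer estimate: front + back colour buffers plus a depth buffer,
# all 32-bit. Textures and AA samples are what actually fill up VRAM.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

print(round(framebuffer_mb(1366, 768), 1))   # ~12.0 MB
print(round(framebuffer_mb(1920, 1080), 1))  # ~23.7 MB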
After that, you start getting into VRAM access times and bandwidth. More bandwidth = faster drawing. A perfect example is the difference between the Radeon 5850 with GDDR3 vs the 5850 with GDDR5. It gets something like a 30%+ performance boost for just going with the GDDR5.
And then you have the chip clock and shader count. GPUs and graphics processing in general are massively parallel. The more shaders you have, the fewer clock cycles it takes to get things done, and the higher the chip clock, the less "real" time the same processing takes. For example, a 200MHz clock will do twice as much work with 100 shaders as a 100MHz clock will with the same 100 shaders.
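In other words, throughput scales (very roughly, and only within the same architecture) with shaders times clock. A toy Python illustration of the numbers above:

# Toy model: relative throughput ~ shader count * clock speed.
# Only meaningful when comparing cards of the same architecture.
def relative_throughput(shaders, clock_mhz):
    return shaders * clock_mhz

fast = relative_throughput(100, 200)  # 100 shaders at 200 MHz
slow = relative_throughput(100, 100)  # 100 shaders at 100 MHz
print(fast / slow)                    # 2.0 -> twice the work in the same time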
Don't try to compare the number of shaders on a card between companies, or even between generations. The GeForce 4xx series and the upcoming Radeon 6xxx series are looking to have seriously different performance even with the same shader counts and clocks, because of the efficiency and design of the shaders.
Google any of the words above for more in-depth understanding -
Nice explanation! I added some more info to my knowledge.
-
Most important factors to look for are:
1. bus width
2. memory type
3. shader count
4. clock speed
5. memory amount
i.e. a 128-bit GDDR5 card will always beat a 64-bit GDDR3 card, no matter how many shaders the latter has, how fast its clock speed is, or how much memory it carries (see the quick sketch below).
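Here's a quick Python sketch of that point; the memory clocks are made-up but typical values, just to show how bus width and memory type (GDDR5 moves roughly twice the data per clock of GDDR3) multiply together into bandwidth:

# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * effective memory clock (GHz).
# Effective clock = real clock * 4 for GDDR5, * 2 for GDDR3. The clocks below are
# illustrative assumptions, not specs for any particular card.
def bandwidth_gbps(bus_width_bits, effective_clock_ghz):
    return bus_width_bits / 8 * effective_clock_ghz

print(bandwidth_gbps(128, 0.9 * 4))  # 128-bit GDDR5 @ 900 MHz real clock -> 57.6 GB/s
print(bandwidth_gbps(64, 0.8 * 2))   # 64-bit GDDR3 @ 800 MHz real clock  -> 12.8 GB/s
-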
Memory amount is absolutely the last thing to worry about, especially when no modern GPU is coming with less than 512MB. -
-
For as many technically inclined people as there are on notebookreview, I'm surprised there hasn't been a more in-depth analysis. Personally I think video cards are a load of bull. They should be proprietary to the motherboard and its manufacturer. The video card industry is convoluted. There are workstation video cards, then gaming video cards, then the streaming/video graphics cards. I think most people haven't a clue which to get when they first get a laptop, which makes things even worse.
But beyond that, there are only two companies to choose from anyway that make video cards. Kinda ridiculous, really. With as much trouble as people seem to have with them, the choice would likely have to come down to the one that doesn't stop working. -
The reason there is a lot of choice is that there are different uses for them. Some don't care about the highest settings but want video conversion capabilities. Some don't care about anything but the most basic things. Very few want earth-shattering performance.
You only need to sit down for an hour and study the naming conventions, price ranges and uses. Really. You don't need a tremendous amount of info to understand the regular hierarchy of graphics cards per generation.
As for video cards, the most important things have been said already. Memory bandwidth (memory type, speed and memory bus width), memory size, number of cores and clock speeds, along with the current generation's hardware instruction capabilities, dictate their performance level. -
Right. Now SSDs are downright confusing...
-
Any modern video card is going to give high quality performance. This isn't the 90s where every crappy company in town had their "3d card" to sell (Creative Graphics Blaster, anyone?).
What makes a card good is how loud and hot it has to get to do so. -
What's important is how we drive into the future of these technologies: faster processors, bigger games, better textures, more effects, more polygons (or maybe even a move away from polys) and so on. AMD is saying that by the end of next year they will be selling their combined GPU/CPU processors, and they are targeting netbooks and notebooks with this product.
A lot of the problem is still that we are using old materials; there needs to be a shift to newer materials for better performance. You can only push old materials so far before they stop yielding better performance.
Another big problem is that when people buy a card or a console, they are expecting a huge leap in graphics or performance. When you think of the history of gaming, each generation brought new groundbreaking graphics, physics or gameplay. For a quick reference, think NES to SNES, then the age of the PS1 and N64, then we moved to the Dreamcast, PS2 and Xbox, and now the 360 and the PS3. Each one of these main generations has been a huge leap over the one before.
Where I'm going with this is that we have a certain expectation and it's not being met; the last 4-5 years have shown no improvement, or very little. I have the 5870M in my laptop and I had a PS3, and there's very little difference in the graphics IMO; they are sharper, but that's naturally because I'm on a PC. A lot of the members are going to say it's because of console ports, but half of it is the hardware.
We need a radically new bump in game complexity. I feel Oblivion, Mass Effect, Fallout and similar games are starting to bring in that new "generation". Living environments, unpredictable AI, open-world free roam, consequences: that was a huge bump toward making a game world more believable. Don't get me wrong, COD is a great experience and I'd put it on par with an action movie like Rambo, fun to play, but I'd compare something like Mass Effect more to a masterpiece movie that's more than shallow explosions.
All in all, DX11 isn't enough of a leap, or hasn't been implemented enough because DX9 is what the consoles use. But at the same time, Microsoft tried to push DX10 and DX11 to the masses too quickly; they should have focused on improving DX9. As it is, these
http://www.tweakguides.com/images/CrysisWar_6a.jpg
http://www.tweakguides.com/images/CrysisWar_6b.jpg
http://www.tweakguides.com/images/CrysisWar_6c.jpg
are hardly different at all. -
Just to point out, AMD is already selling single GPU/CPU processors - the current Xbox 360s have the Xenon and Xenos on the same die.
-
ATI cards are good... look at the 5870M... it tells the story... certainly for me.
-
I had a Voodoo3 3000 in my old PC, and later upgraded to a Voodoo 5 5500 (PCI no less).
It was also awesome, but quickly outshone by the GeForce and Radeon.
Heat and noise were a non-factor back then. -
And yeah, what doesn't seem to be emphasized enough in this thread is that Nvidia and ATI cards cannot be compared apples to apples. Across generations of cards by the same manufacturer, though, they are quite comparable. Their basic architecture has not changed much from the 8000 series to the present for Nvidia, or the 4000 series to the present for ATI, with the performance of their shaders only slightly tweaked along the way. -
The 6000 series is changing the shaders again, so that's only two years of ATI shaders that can be compared
Nvidia, sure, as long as you don't include the 400 series in the "present". They've only got 5 "generations" in a row that can be compared apples to apples