I mostly use an eGPU, but I wanted to see what effect memory performance has on the HD 3000 for LAN parties and such. I recorded benchmarks at 1600 MHz CAS 9, then flashed to 1866 MHz CAS 10.5 with Thaiphoon Burner. Since the HD 3000 shares system memory, I expected a significant performance increase. Below are my results:
Benchmark                     1600       1866       Gain
3DMark Vantage (graphics)     1623       1674       3.1%
3DMark06                      4930       5006       1.5%
RE5 (fps)                     31.5       33.5       6.3%
DMC4 (fps)                    40.3       42.8       6.2%
3DMark03                      12302      12803      4.1%
(why 3DMark03? still GPU limited!)
So a 16.7% memory bandwidth increase resulted in an average 4.2% performance increase. I also played around with only 1 stick of 1600 MHz, which has half the bandwidth of 2 sticks. Going from 1 stick to 2 I generally saw a 25% performance increase. Going off my other results, I'd speculate that 1333 to 1600 is a 5-6% performance increase.
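For anyone who wants to double-check, here's a rough sketch of the arithmetic (the per-test gains are copied from the table above; the peak-bandwidth figures just assume the usual 64-bit DDR3 channels, so treat them as theoretical ceilings rather than anything I measured):

```python
# Sanity check of the numbers above. Gains are taken from the table;
# bandwidth is the theoretical peak for 64-bit (8-byte) DDR3 channels.

gains = [3.1, 1.5, 6.3, 6.2, 4.1]                        # per-test % gains, 1600 -> 1866
print(f"average gain: {sum(gains) / len(gains):.1f}%")   # 4.2%

def ddr3_peak_gbs(mt_per_s, channels=2):
    """Peak bandwidth in GB/s: transfers/s * 8 bytes per 64-bit channel."""
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr3_peak_gbs(1600, channels=1))   # 12.8 GB/s, single channel
print(ddr3_peak_gbs(1600))               # 25.6 GB/s, dual channel
print(ddr3_peak_gbs(1866.67))            # ~29.9 GB/s, dual channel
print(f"bandwidth increase: {(1866.67 / 1600 - 1) * 100:.1f}%")   # 16.7%
```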
So the main conclusion is that the HD 3000 scales pretty poorly with increased memory bandwidth. As long as you are running dual channel, you'll see pretty much all your HD 3000 has to offer.
The secondary conclusion is that my memory kinda sucks for not even being able to do CAS 10 at 1866 MHz at 1.5V.
-
Do you think there would be any performance gain on a dedicated GPU going from 1333 MHz to 1866 MHz?
-
get a real GPU bro -_-
-
says the guy with the macbook?
-
No. Sandy Bridge has been shown not to scale much with faster memory, especially in games. Check out this link.
-
I'd rather see game FPS than a benchmark.
-
Karamazovmm:
indeed, as the X220 users have done.
there is quite a difference going from 1066 to 1866 using the iGPU on these systems
-
Benchmarks are standardized tests. Kept within the comparative scope, they are the best relative judge of performance.
-
Karamazovmm:
so far from reality
-
I would doubt it, because dGPUs don't use UMA and GDDR5 is faster than DDR3 anyway.
-
Well, in more CPU-bound games like SC2 there would be a much greater gain (an X220 owner tested that and posted it in the Lenovo section somewhere...)
-
Lmao ahaha
-
If you somehow managed to run out of dedicated VRAM, then memory speed might make a difference. The chance of that happening is very slim, though.
-
Feel free to elaborate.
-
I doubt the HD 3000 has dedicated RAM
-
Karamazovmm:
companies can rig the tests. ex: nvidia
they aren't related to game performance, just bragging rights
they don't have a use
they don't give meaningful info
-
I know the HD 3000 doesn't, but someone asked whether it would make any difference to a dedicated GPU. Since most dedicated GPUs still (sensibly) have the capability to use system RAM if they run out of their own, it is a situation where it could make a difference.
The difference will most likely be a normal slideshow versus a slightly faster slideshow, since you've run out of real VRAM, but eh.
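To put rough numbers on why spill-over hurts so much: once a dGPU overflows its VRAM it has to reach system RAM across PCIe, and that link is far slower than local GDDR5. A back-of-the-envelope sketch (the GDDR5 figure is just a generic midrange card, not any specific GPU):

```python
# Rough peak-bandwidth comparison; all figures are theoretical ceilings.
# A dGPU that overflows its VRAM must fetch from system RAM across PCIe,
# so the PCIe link caps those accesses.

def gbs(transfers_per_s, bytes_per_transfer):
    return transfers_per_s * bytes_per_transfer / 1e9

gddr5_local = gbs(4e9, 16)    # 64 GB/s: 128-bit bus at 4 GT/s effective (generic midrange card)
ddr3_dual   = gbs(1.6e9, 16)  # 25.6 GB/s: dual-channel DDR3-1600 system RAM
pcie2_x16   = 16 * 0.5        # 8 GB/s per direction: 500 MB/s per PCIe 2.0 lane x 16 lanes

print(f"local GDDR5:      {gddr5_local:.1f} GB/s")
print(f"system DDR3-1600: {ddr3_dual:.1f} GB/s")
print(f"PCIe 2.0 x16:     {pcie2_x16:.1f} GB/s")
```
-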
Synthetic benchmarks are rarely indicative of real-world performance. Such is the case with 3DMark, where everything is preloaded into RAM and then frames are rendered (actual pre-rendered frames, not on the fly as in games).
That is why games are usually best to benchmark with.
-
and not the timedemo run-throughs that almost always have nothing to do with the actual game (e.g., the Crysis island demo, the Metro 2033 tunnel benchmark)
-
ya... if your dGPU needs to use system RAM, you're screwed. no question about it.
-
Except that two of the benchmarks he used, DMC4 and RE5, actually do demonstrate in-game performance, and they also happen to show the largest gains. To be fair, Capcom has optimized their PC games very well from DMC4 onward.
Going from 1333 to 1866 might be worth the 10-15% fps boost if it were only $75 for 8GB.
-
Yes, you are correct. Displaying min, max, and average game framerates from a standard sample is much more useful than any synthetic benchmark.
-
that's not enough... they should be stating how much time it spends at any given framerate.
if a game runs at 1 frame/second for 0.1 seconds, it's a hell of a lot different than the game running at 1 frame/second for 90% of the time.
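something like this is what I mean (the frame times and the 30 fps cutoff are made up purely for illustration):

```python
# Toy example: the average fps looks fine while a big chunk of wall-clock
# time is spent below a playable threshold. Frame times are invented.

frame_times_ms = [16.7] * 570 + [100.0] * 30   # mostly 60 fps, with 10 fps hitches

total_s = sum(frame_times_ms) / 1000
avg_fps = len(frame_times_ms) / total_s

threshold_fps = 30
slow_s = sum(t for t in frame_times_ms if 1000 / t < threshold_fps) / 1000

print(f"average: {avg_fps:.0f} fps")                       # ~48 fps
print(f"time below {threshold_fps} fps: {slow_s:.1f}s of {total_s:.1f}s "
      f"({100 * slow_s / total_s:.0f}%)")                  # ~24% of the run
```
-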
What are you trying to say? The entire point of 3DMark is to use real-time rendering technologies (such as Direct3D and OpenGL). If the frames were pre-rendered, it would be a film/video. The only things pre-rendered are the textures (i.e., someone made them in Photoshop), which is also the case for games*. Just because it doesn't react to your input doesn't mean it isn't generated on the fly. A demo is a perfect example of this.
* There are a few games that use procedural generation to create textures, but this is an exception to the rule.
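If it helps, a trivial (entirely made-up) example of what procedural generation means here: the texture is computed from code at load time or runtime instead of being loaded from an artist-made image.

```python
import numpy as np

def checker_texture(size=256, tile=32):
    """Build a checkerboard texture procedurally instead of loading a
    hand-painted image from disk. Returns a (size, size) uint8 array."""
    y, x = np.indices((size, size))
    return (((x // tile + y // tile) % 2) * 255).astype(np.uint8)

tex = checker_texture()
print(tex.shape, tex.dtype)   # (256, 256) uint8
```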