If a dedicated GPU can process fast, then surely it can process slow too. I'm assuming that even if a dedicated GPU is tuned down, it will still consume more energy than an integrated GPU.
But my intuitive side tells me that if a dedicated GPU is tuned down, shouldn't it be able to use "less energy" to produce the same processing power that an integrated GPU produces with "more energy"?
Can anyone explain? Why have a laptop with both a dedicated and an integrated GPU, when the dedicated one could be tuned down to use the same amount of power as an integrated GPU? (not saying that it can)
-
You need to look at the electrical/electronic complexity of the two GPUs you are trying to compare.
That alone should answer your question.
A more complex unit, with a higher component count, running at a higher clock rate, will almost inevitably consume more electricity.
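As a rough illustration of why (a sketch only, not specific to any real chip, and every number below is invented purely to show the relationship): dynamic CMOS power scales roughly as activity x switched capacitance x voltage squared x clock, and a big dedicated GPU simply has far more switched capacitance than a small IGP.

```python
# Rough illustration only: dynamic CMOS power scales roughly as
#   P_dynamic ~ alpha * C * V^2 * f
# where alpha is the activity factor, C is the switched capacitance (which grows
# with transistor/component count), V is the core voltage and f is the clock.
# All numbers below are invented purely to show the relationship.

def dynamic_power(activity, capacitance_nf, voltage, clock_mhz):
    """Return dynamic power in watts for this simplified CMOS model."""
    capacitance_f = capacitance_nf * 1e-9   # nF -> F
    clock_hz = clock_mhz * 1e6              # MHz -> Hz
    return activity * capacitance_f * voltage**2 * clock_hz

# A small, simple IGP: little switched capacitance, modest clock.
igp = dynamic_power(activity=0.2, capacitance_nf=40, voltage=1.0, clock_mhz=500)

# A big dedicated GPU: far more transistors (capacitance) and a higher clock,
# so even at a similar voltage it burns a lot more.
dgpu = dynamic_power(activity=0.2, capacitance_nf=400, voltage=1.1, clock_mhz=700)

print(f"IGP  ~ {igp:5.1f} W")   # ~ 4 W with these made-up numbers
print(f"dGPU ~ {dgpu:5.1f} W")  # ~ 68 W with these made-up numbers
```
-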
Then is it not possible to bypass some of the components and run at a lower clock rate in order to act like a weaker GPU when needed?
-
Integrated doesn't have its own RAM, that's a good place to start. Also, as said above, integrated chips tend to be far less powerful versions, mostly because the lack of memory bandwidth would be a huge bottleneck for anything mid-to-high end (rough numbers below).
An integrated 5870 would be both power hungry and terrible.
Of course, Intel is about to make this a non-issue with Huron.
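To put back-of-envelope numbers on the bandwidth point (specs quoted from memory, so treat them as approximate): dual-channel DDR3-1333 system memory, which an IGP has to share with the CPU, peaks around 21 GB/s, while a desktop 5870's 256-bit GDDR5 has roughly 154 GB/s to itself.

```python
# Back-of-envelope bandwidth comparison (specs from memory; treat as approximate).
# An IGP shares dual-channel DDR3 system memory with the CPU; a desktop HD 5870
# has its own 256-bit GDDR5.

def bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits, channels=1):
    """Peak bandwidth in GB/s = transfers/s * bytes per transfer * channels."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

ddr3_shared = bandwidth_gb_s(1333, 64, channels=2)   # ~21 GB/s, shared with the CPU
hd5870_gddr5 = bandwidth_gb_s(4800, 256)              # ~154 GB/s, dedicated

print(f"Dual-channel DDR3-1333: {ddr3_shared:6.1f} GB/s (shared)")
print(f"HD 5870 GDDR5:          {hd5870_gddr5:6.1f} GB/s (dedicated)")
```
-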
The only known "workaround" for this is switchable graphics:
a weak, power-efficient IGP (Intel GMA or something) combined with a high-performance dedicated GPU (ATI/Nvidia). -
Anything is possible. You know it is. You also know that there are design tradeoffs, including ease of manufacture, cost of design, expected life cycle/saleability, and expected advantages in the marketplace.
It is important to remember that 90%+ of people who buy laptops just don't give a damn about leading-edge tech issues. So design effort goes into chips produced to serve that 90% of the market, which is not going to make the 10% of enthusiasts and "power users" very happy.
Even niche makers like Sager have to deal with the reality of mass-produced CPUs, GPUs, support chips and the production-run costs of motherboards. Other than what car makers call a halo effect, there is little to distinguish the tech capabilities of a $5000 Sager machine from a $1000 HP or Dell machine. -
Dedicated GPUs will never get down to the same power draw as integrated, even if underclocked, since they have far more internal cores. Although you could run a dedicated GPU at integrated speeds, you can't shut off parts of the chip to effectively make it a smaller chip.

The problem arises because the voltage won't go below a certain level (usually around 0.8 V), so no matter how low a clock rate you run the chip at, it will ALWAYS feed roughly 0.8 volts to all the cores, whereas an integrated chip runs at that voltage on far fewer cores, which equals less power. The integrated chip does have to run at a higher frequency because it doesn't have the same number of cores (I say "cores" loosely; in reality there's a lot more to graphics than simple cores, you have pixel shaders etc.), but it runs at a voltage suited to its clock speed, unlike the dedicated one, which would be fed far more than it needs.

If voltage weren't limited to a minimum, this problem wouldn't exist. It's actually more efficient to have many low-clocked cores than a few high-clocked ones, since the required voltage rises with clock speed and power scales with voltage squared times frequency, so more cores at a lower clock give you more throughput per watt.
This might not make sense at first, but once you understand how graphics cards work in a basic sense (lots and lots of low-clocked cores, compared to a CPU, which has relatively few very high-clocked cores), it should click.
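Here is a minimal sketch of that voltage-floor argument. Everything in it is assumed or invented: capacitance stands in for "number of cores", the 0.8 V floor is the rough figure mentioned above, and the leakage term is my own addition to make the arithmetic come out (the post above leaves it implicit that every core powered at 0.8 V leaks, no matter how slowly it is clocked).

```python
# Sketch of the voltage-floor argument above (all numbers invented).
# Dynamic power ~ C * V^2 * f, plus static leakage that scales with transistor
# count and voltage.  Below the minimum stable voltage V_MIN the clock can keep
# dropping, but V cannot, so the big chip keeps paying for all its transistors.

V_MIN = 0.8  # volts; assumed minimum stable core voltage

def gpu_power(capacitance_nf, ideal_voltage, clock_mhz, leak_a_per_nf=0.05):
    """Simplified total power: dynamic C*V^2*f plus voltage-proportional leakage."""
    v = max(ideal_voltage, V_MIN)                       # voltage is clamped at the floor
    dynamic = capacitance_nf * 1e-9 * v**2 * clock_mhz * 1e6
    leakage = leak_a_per_nf * capacitance_nf * v        # more transistors -> more leakage
    return dynamic + leakage

# Small IGP: few "cores" (low capacitance), clocked fairly high at a suitable voltage.
igp = gpu_power(capacitance_nf=40, ideal_voltage=0.9, clock_mhz=600)

# Big dedicated GPU downclocked until its throughput roughly matches the IGP
# (10x the cores at 1/10 the clock).  It would only *need* ~0.4 V at this clock,
# but it is clamped to V_MIN across ten times the transistors.
dgpu = gpu_power(capacitance_nf=400, ideal_voltage=0.4, clock_mhz=60)

print(f"IGP  at 600 MHz: {igp:5.1f} W")   # ~21 W with these made-up numbers
print(f"dGPU at  60 MHz: {dgpu:5.1f} W")  # ~31 W with these made-up numbers
```
-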
Interesting that you should bring up this concept, OP, since that's pretty much what the folks at Nvidia are about to deliver with Optimus. Laptops with Optimus switch automatically between the integrated GPU (to conserve power) and the discrete GPU when extra graphics power is necessary. Look for them to start gaining popularity this fall.
-
What I would love to see is core shutdown as seen on CPUs and, like cars that shut down cylinders they aren't using: if you don't want to use all of your GPU, why not have it shut down most of its shaders and cores? That is what needs to change.
-
You're assuming that "core shutdown" on a GPU would buy you anything.
It might not.
The shaders and vertex units on a GPU are way less complex than a core on a CPU. -
Intel currently uses this on its Nehalem/Westmere processors.
It IS coming in the next generation or two of graphics cards (or so I've heard, at least). HOWEVER, don't expect it to rival the relative power savings that CPUs can achieve with it, even on paper. -