Yeah, the IGP isn't available at all for me.
-
Then those opting for an Optimus graphics solution will get higher clocks on their CPUs.
Nice.
-
nope. since optimus needs the IGP to work, only those opting for the amd option will
-
-
davepermen Notebook Nobel Laureate
only if the cooling solution brings the gpu's temperature over to the cpu. if they're split in a way that the cpu wouldn't heat up, then yes, it could lead to higher cpu performance.
-
That's theoretical isn't it? I mean we're saying that having the integrated graphics would affect the CPU's Turbo frequency. Because media decoding is done by fixed-function units for most of the pipeline, and encoding is also done by the tiny encoding circuitry, the only time the graphics will heat up is when it's running 3D graphics. With CPU-only apps the GPU can be completely power-gated.
Even if the graphics did affect it, it's on the best 32nm process. -
davepermen Notebook Nobel Laureate
he's talking about discrete gpus => a fully disabled on-cpu gpu unit. if the discrete gpu's heat does not heat up the cpu (that's not theoretical, that's a question of ordering in the cooling chain), then yes, the cpu can run faster while the discrete gpu is active than it could while the on-cpu gpu is active.
it's just logical. -
Two words: Power Gate
-
davepermen Notebook Nobel Laureate
i don't get your point.
the point was that an external graphics solution could lead to higher cpu performance than intel's integrated gpu while the gpu is in use. which makes perfect sense.
i don't even get if you agree or disagree with that. -
What do you mean by "GPU in usage" then?
1. If you are gaming heavily you are likely bound by the GPU and probably not using the integrated graphics either
2. If using QuickSync, it's faster to do with the integrated one, and it uses nearly nothing in power and temperature
3. Same with multi-monitor, the platform for Sandy Bridge allows discrete and integrated graphics to work together to get support for 4 monitors
4. The notion that a hot part (like a discrete chip) won't affect another part of the laptop (through the chassis) even a tiny bit is ridiculous, even with separate cooling
5. Again, Power Gate. Perhaps you don't understand it, but a disabled chip and a power-gated chip are the same thing. If you aren't using the integrated graphics, in Sandy Bridge it's completely off, plain and simple.
6. Oh, yea I disagree. -
davepermen Notebook Nobel Laureate
he meant that when using the IGP (actually using it, gaming on it), the cpu can't clock as high as it would if you gamed with an additional gpu, thus having the IGP disabled.
and that makes perfect sense.
remember that the IGP is part of the cpu now, and directly affects the clocking. when the IGP computes something, the cpu gets slower. that is documented and planned that way. -
Soo, all this talk is theoretical?
Because if you are gaming on an IGP, even with the HD3000 graphics, you will be bound by the GPU, before CPU clock really does anything.
Even in Starcraft 2, it would be faster having Core 2 Duo 2GHz + HD5870 than Core i7 2820QM + Processor Graphics -
davepermen Notebook Nobel Laureate
no, it's not theoretical at all. in fact, i know quite a few friends who will be very interested in this balancing. they're working on a next-gen graphics solution, and use both gpu and cpu at the same time. it will be interesting, thus, to see what the balance between gpu and cpu tasks has to be on sandy bridge, as it suddenly isn't just "feed both 100% for best speed", since having both at full usage means they throttle each other down to slower speeds.
and what do i care about your solution about having some other system? i WILL get a sandy bridge, i WON'T get one with an additional gpu. and i WILL play the occasional game on it. thus i will have a slower cpu while gaming on it. unlike one, who would have an additional gpu.
you just talk around the topic, actually. way around it. his question was simple, the answer was yes. a discrete gpu will allow the cpu to use all its heat headroom for itself, and it doesn't have to share it with the IGP => the cpu can clock higher, the same height you would reach while tasking it with something non-graphics related (rendering, superpi, etc..) -
I was thinking of Clevo laptops mainly, as these have separate cooling systems for CPU and GPU. So we will see noticeably higher performance out of our CPUs, which is really nice.
-
Let me explain it further:
Core i7 Sandy Bridge + HD Graphics 3000 in Starcraft 2:
Low: 106 fps
Medium: 23 fps
High: 9 fps
Core i7 720QM + Mobility Radeon 5870 in Starcraft 2:
Low: 166 fps
Medium: 61.8 fps
High: 56 fps
You can see that even with the desktop Sandy Bridge CPU paired with HD Graphics 3000, the mobile Core i7 system significantly outperforms it in Starcraft 2, at all settings. At High, the difference is over 6x. Starcraft 2 is known to be a very CPU-intensive game, yet even at Low it's better to opt for the better GPU and slower CPU.
HD Graphics 3000 will be bottlenecking the 2600 chip.
Kinda funny arguing about such small CPU Turbo differences when the graphics will be holding it back so much. -
davepermen Notebook Nobel Laureate
we're not arguing at all, i just state that the cpu difference will be there (esp. in the system the OP mentioned), and thus another bonus of buying a system with an additional gpu. that could then mean one doesn't need to buy the highest-end cpu, but one of the cheaper ones.
btw, you compare stuff here that is not comparable at all. so your starcraft example doesn't make sense at all. -
Every 35/45/55W 2nd gen Core i7 chip has the exact same graphics, so you could get the lowest-end quad core and still be fine.
The bold part seems to indicate you'll somehow save money by opting for a discrete chip?
-
davepermen Notebook Nobel Laureate
i KNOW that the gpu is the most important thing for games. the OP knows, too. what he wanted to know is, if there is an ADDITIONAL BENEFIT OF AN INCREASED CPU POWER when the igp is not used.
WHICH THERE IS. this can lead to conclusions like "then i might not pay the massive premium for the highest cpu but get the second-highest just as well".
in the gaming benchmarks, the cpu was never at 100% of its possible gaming performance, as the benchmarks were done with the IGP enabled. so the cpu itself could deliver even better.
all the rest of yours is obvious stuff that is absolutely not related to the topic. -
i7 2820QM:
-3.1GHz >TDP Turbo for 4 cores
-2.3GHz Base clock
-2.7GHz =TDP Turbo for 4 cores
So you are talking about a 400MHz difference that cannot even be sustained.
Maximum gain is 14.8%, but with integrated graphics it'll nearly be bound by the graphics, made worse by:
-Clock speed doesn't scale perfectly; in games it's more like 40-50%
-The guaranteed clock speed is 2.7GHz
-No matter what you do, the above-TDP Turbo mode can't be sustained indefinitely, not even for 1 min. The >TDP Turbo modes have a maximum duration of 25 seconds. So you'll get a 400MHz gain for 25 seconds, if you have perfect conditions, yippie! -
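The arithmetic behind those numbers can be sketched in a few lines (the 40-50% game-scaling figure is the poster's own estimate, and the 45% value below is just an assumption picked from that range):

```python
# Rough arithmetic behind the turbo-headroom argument (illustrative only).
base_turbo = 2.7   # GHz, sustainable 4-core turbo (i7-2820QM)
peak_turbo = 3.1   # GHz, >TDP 4-core turbo

clock_gain = peak_turbo / base_turbo - 1
print(f"max clock gain: {clock_gain:.1%}")  # ~14.8%

game_scaling = 0.45  # assume games see ~45% of a clock increase
print(f"realistic in-game gain: {clock_gain * game_scaling:.1%}")  # ~6.7%
```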
davepermen Notebook Nobel Laureate
interestingly, games don't task the cpu 100% all of the time, but they might have peaks (most do), where the cpu suddenly becomes the bottleneck for some parts of the frame time (i'm involved in some gamedev stuff). ergo the turbo mode won't be triggered much, only when the cpu would be the bottleneck, which would otherwise result in some stuttering in the game. and exactly at these points, one can't clock the cpu high enough (and only for those moments).
thats why this technology actually exists, and that's why it's so massively an improvement in perceived performance (like ssds). it removes bottlenecks.
if the igp is disabled, the cpu can go faster for those short bursts, and will help.
so you get the 400mhz gains whenever you need them, which, in a game situation, is never for 25 seconds, but maybe for some parts of the slowest frames you have.
as said, your stuff is all nice and true, but still not important for the actual point -
Sandy Bridge's thermal system is based on a concept where the lag between power draw and heat buildup is exploited to increase performance. Because power usage rises instantly while the heat takes some time to ramp up, the CPU can increase clocks further.
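As a rough illustration of that thermal-lag idea, here's a toy first-order thermal model; every constant below is invented for illustration, and Intel's real turbo controller is far more sophisticated:

```python
import math

# Toy first-order thermal model of a >TDP turbo window.
# Temperature ramps as T(t) = overdrive * (1 - exp(-t / tau));
# the burst must end when T reaches the ceiling.
tau = 12.0        # thermal time constant in seconds (made up)
t_limit = 1.0     # normalized temperature ceiling
overdrive = 1.3   # heat input at >TDP turbo, relative to steady-state TDP

t_turbo = -tau * math.log(1 - t_limit / overdrive)
print(f"sustainable >TDP window: {t_turbo:.1f} s")  # ~17.6 s
```

The point of the model is just that power is instantaneous while temperature is an integral, so a short burst above TDP is safe until the thermal mass catches up.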
There might be some advantage as you say, but I'd bet it's nowhere near 400MHz. It's not like the integrated system would be unable to reach >TDP speeds. Even if there was a 400MHz difference in the beginning, the thermal spread would be better balanced since the GPU would have its own hotspot.
That's also not counting that higher frequency steps might require higher voltages which translates into greater thermals and diminishing returns.
We're splitting hairs. -
davepermen Notebook Nobel Laureate
you're splitting hairs. all he wanted to know if his thought is right that the igp, while gaming, can affect the cpu's performance. and not using the igp, thus, could result in better performance.
and the answer is yes. how much depends on the situation, of course. but the answer still can't be no. and that's what matters.
and btw, the bursts ARE that short. remember, a game repeats itself every 1/60th of a second (when you have 60fps). that means whatever happens in a game happens in those 16 milliseconds. and during those, part is logic update, part is physics update, part is geometry update, and part is sending the new information to the gpu. so it's filled with few-millisecond bursts (physics might consume 100% cpu over all the cores for 5 milliseconds, for example). on the other hand, while the system then waits some milliseconds for the frame to finish drawing (with vsync on), it can completely cool down the processors.
and as we learned, quick bursts and turning off are better for power consumption (read it up if you want). so the higher that quick burst, the better for the created heat and the consumed power, and the quicker the tiny step (in the example: physics) is processed. this would lead to a higher framerate, a more STABLE frame rate, and thus a better experience (even with vsync on).
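The frame-budget math above can be sketched quickly; the 5 ms physics burst is the hypothetical figure from the post, and the linear clock-to-time scaling is an assumption:

```python
# Frame-budget sketch: at 60 fps every frame has ~16.7 ms, and only the
# CPU-bound slices benefit from a short turbo spike (illustrative numbers).
frame_budget_ms = 1000 / 60             # ~16.7 ms per frame
physics_ms_at_2_7ghz = 5.0              # hypothetical CPU-bound burst
physics_ms_at_3_1ghz = physics_ms_at_2_7ghz * 2.7 / 3.1  # assume linear scaling

saved = physics_ms_at_2_7ghz - physics_ms_at_3_1ghz
print(f"burst shrinks by {saved:.2f} ms of a {frame_budget_ms:.1f} ms frame")
```

Whether ~0.6 ms per frame matters is exactly what the two posters are arguing about: it is small in absolute terms, but it lands on the frames that were slowest to begin with.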
but yeah, splitting hairs is what i got used to when IntelUser is in the game. -
Well the engineers making the chips work night and day for 1-2% differences. So when some people inflate the negatives its just bothersome. Yes and no answers are also boring. -
davepermen Notebook Nobel Laureate
it's not important if it's boring. wrong answers are bad. and that's where you came into play
-
Right, 10% gains for 30% of the time is big, and that's a correct answer...
The answer is a yes, but it's a no for whether it's a big gain. -
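For what it's worth, "10% gains for 30% of the time" works out even smaller when averaged Amdahl-style over total runtime (a back-of-envelope estimate, not a measurement):

```python
# If 30% of the runtime speeds up by 10%, total time shrinks to
# 0.70 + 0.30 / 1.10 of the original (Amdahl-style estimate).
boosted_fraction = 0.30
clock_gain = 0.10

new_time = (1 - boosted_fraction) + boosted_fraction / (1 + clock_gain)
speedup = 1 / new_time - 1
print(f"overall speedup: {speedup:.1%}")  # ~2.8%
```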
Situation 1- (i7-2820QM)
SB + IGP - Running Aero interface (hence the IGP is somewhat busy)
Run a CPU intensive 4-thread app - SB turbos as hard as it can, 3.1GHz for 12 seconds, then drops down as per the Anandtech review...
Situation 2 - (i7-2820QM + discrete graphics)
SB + discrete graphics - Running Aero interface (hence the discrete graphics is downclocked and downvolted, not generating much heat, and the heat isn't in the socket)
Run a CPU intensive 4-thread app - SB turbos as hard as it can, 3.1GHz for 14* seconds, then drops down as per the Anandtech review...
(* = speculative, could be anything up to 25 seconds...)
The above situation is where it is most likely to happen, but to be honest it isn't something you should be getting so excited about... and the i7-2820QM without a discrete card didn't drop below 2.7GHz, so it is always going to be faster than an i7-2630QM, as the fastest that one is ever going to go is 2.6GHz even with the best cooling in the world... -
davepermen Notebook Nobel Laureate
only true if you actually have any animation going on in aero. otherwise, aero is effectively using ZERO performance
so you have to win-tab during the cpu usage. even then, it shouldn't tax the igp enough to actually change anything (aero doesn't use most of the gpu's parts)
no, the situation where it can happen is with apps that use both cpu and gpu to the max. that can be games, that can be stuff like photoshop, or 3dsmax. anything using opencl or similar platforms to use both gpu and cpu.
i7 2820QM runs at 3.1GHZ under max load? bull?
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Macpod, Jan 3, 2011.