These tests of quad core IB CPUs, and these of the HD 4000, from notebookcheck.net, compare the mobile quad core IB processors with their SB quad core counterparts.
Similar tests were performed by laptopmag.com, with similar findings.
Key findings:
1) CPU scores show about a 10-20% increase in performance, depending on the task (exception: x264 HD encoding, with a 20-25% speed improvement). In notebookcheck's words: "if you are the owner of a Sandy Bridge notebook, you will likely only see little incentive for the change to Ivy Bridge."
2) 20-40% improvement in GPU performance. You can now play DirectX 11 titles at reasonable frame rates on an integrated GPU: World of Warcraft at 47 fps on "good" settings at FHD resolution; Batman: Arkham City at 51 fps on "high" settings at 1366x768, and 24 fps at FHD resolution.
3) Major disappointment (for me; perhaps I was not well informed): no improvement in battery life. The laptopmag.com article speculated that IB dual core SV and ULV CPUs might show some relative battery life improvement over their SB equivalents, but offered no substantiation of why that would be so.
Boo!
-
lovelaptops MY FRIENDS CALL ME JEFF!
-
User Retired 2 Notebook Nobel Laureate NBR Reviewer
-
In theory, yes. But if the extra capacity is used for higher clock speeds you end up with equal power requirements but higher processing speed.
-
I don't think IB will bring lower running temps - per tests at AnandTech, IB actually appears to run quite a bit hotter than SB.
The explanation is that the die area is much smaller while the TDP is approximately the same, which means much more heat per unit of surface area has to be dissipated. -
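To put rough numbers on that power-density point (die sizes below are approximate published figures for the quad-core parts, and the 45 W TDP is just a typical mobile quad rating, not a claim about any specific SKU):

```python
# Power density: roughly the same TDP pushed through a smaller die means
# more watts per mm² to dissipate. All figures are approximate.
TDP_W = 45.0
SB_DIE_MM2 = 216.0   # Sandy Bridge 4C die area, approx.
IVB_DIE_MM2 = 160.0  # Ivy Bridge 4C die area, approx.

print(TDP_W / SB_DIE_MM2)   # ~0.21 W/mm²
print(TDP_W / IVB_DIE_MM2)  # ~0.28 W/mm² -> harder to cool per unit area
```

So even at identical TDP, the 22nm part concentrates noticeably more heat into each square millimeter of silicon.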
The question is, does Sony offer the IB/Z with an i7-3612QM? If so, how will performance and battery life compare with the faster-clocked, dual-core IB i7?
-
Karamazovmm Overthinking? Always!
That is the whole idea behind modern CPU gating tech as well. -
I tried to leave a comment on notebookcheck but have no idea what the verification means: "Anti-Spam: Bitte NBC eingeben / Please enter NBC".
I see no image -
lovelaptops MY FRIENDS CALL ME JEFF!
The last two statements are questions - I'm only stating what I've heard and asking for someone more knowledgeable than I (won't be hard to find!) to set me straight on this issue. Thanks. -
Karamazovmm Overthinking? Always!
Shrinking the smallest feature of the CPU to 22nm led to better power efficiency, meaning there is less power leakage in the chip.
The tri-gate transistor builds on that idea as well: there is voltage leakage when you are running something in a small, constrained environment, so adding a third dimension to the equation keeps the voltage from leaking in great numbers.
SB, and I would guess IVB, consume more power than the Core 2 Duos and Arrandale, Clarkdale and whatever other -dale was available at the time, simply because they use smaller features in some parts of the CPU, so they can be clocked and run at higher voltages while still limiting the TDP. Consider that the first dual core CPU was basically two CPUs tied together with a bridge so that they could "talk" to each other. Now it's all on the same die, with more cores, a base GPU and more features - and yes, those features consume die space.
It's incredible.
Another thing they did, which has existed since the P4, is power gating. While it was being perfected it only lowered the clocks; now it lowers the clocks and shuts down cores that aren't in use. That is one of the reasons SB has the battery life it has.
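As a toy sketch of what gating buys over merely down-clocking idle cores (every wattage here is invented for illustration, not a real Intel figure):

```python
# Toy model of power gating: idle cores are shut off almost entirely
# instead of just being down-clocked. All wattages are made up.
ACTIVE_W = 8.0   # assumed power of one fully active core
IDLE_W = 2.5     # assumed power of an idle core that is only down-clocked
GATED_W = 0.5    # assumed residual power of a fully gated (shut-down) core

def package_power(total_cores, busy_cores, gating=True):
    """Approximate CPU package power with a given number of busy cores."""
    idle_cores = total_cores - busy_cores
    per_idle = GATED_W if gating else IDLE_W
    return busy_cores * ACTIVE_W + idle_cores * per_idle

# One busy core on a quad: down-clocking alone vs. gating the idle cores.
print(package_power(4, 1, gating=False))  # 15.5 (clocks lowered only)
print(package_power(4, 1, gating=True))   # 9.5 (3 cores shut down)
```

The point is simply that shutting down unused cores removes their idle draw entirely, which matters a lot for battery life in lightly threaded workloads.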
That is a basic summary of the tech in the processor; there is much more -
It may also do it faster, but that depends on whether IB can use fewer clock cycles to perform some instructions; it may, for example, have a better calculation unit and be able to do multiplications in fewer cycles.
Then there's the issue of increased frequency. Doing that is power expensive. The basic physics is that in order to raise the frequency, you need to raise the voltage, and when you raise the voltage the current also increases, which means the total power V*A grows roughly with the square of the voltage.
This makes it much more power efficient to add cores, which can work at the same (or lower) voltage and still be able to perform the same work, i.e. process the same data using the same or lower power.
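A back-of-the-envelope sketch of that trade-off, using the textbook dynamic-power approximation P ≈ C·V²·f (the voltage/frequency pairs below are invented for illustration):

```python
# Dynamic CPU power scales roughly as P = cores * C * V^2 * f.
# Raising frequency also forces a higher voltage, so power grows much
# faster than the speedup; adding a core at the same V and f doubles
# power for up to double the throughput. Numbers are illustrative.

def dynamic_power(voltage, freq_ghz, cores=1, cap=1.0):
    """Relative dynamic power: cores * C * V^2 * f."""
    return cores * cap * voltage**2 * freq_ghz

base = dynamic_power(1.0, 2.0)            # one core at 2 GHz, 1.0 V
faster = dynamic_power(1.3, 4.0)          # assume 4 GHz needs ~1.3 V
wider = dynamic_power(1.0, 2.0, cores=2)  # two cores at 2 GHz instead

print(faster / base)  # ~3.4x the power for 2x the single-thread speed
print(wider / base)   # 2.0x the power for up to 2x the throughput
```

Under these assumed numbers, doubling frequency costs roughly 3.4x the power, while doubling cores costs only 2x - which is the whole argument for going wider instead of faster.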
And here's where software comes in. Developers need to be able to handle concurrency. It's not hard, but many just can't be bothered with it. They still assume users can simply buy a faster computer if they run into performance issues. But the fact is that this doesn't work anymore; you'd need a nuclear power plant per computer to gain the performance some need. That's really sad, but not really the issue here. -
Karamazovmm Overthinking? Always!
I had delineated the rough model of thinking behind modern CPUs: the guideline for how turbo boost and power gating work in all modern CPUs, be they x86, ARM, or whatever IBM is designing at the moment.
While the idea behind continually shrinking the smallest features of the CPU is to reduce voltage leakage, this inherently creates a problem of its own: the leakage becomes so prevalent (contradictory enough?) as you go to smaller features that, in the end, it makes the prevailing model of silicon-based CPUs no longer worthwhile. That is one of the reasons we are seeking new substrates for CPUs.
IVB and SB share the same architecture with some tweaks, and those tweaks aren't in the pipeline design, thus voiding your claim that the CPU would be faster on its own.
The instruction sets only help if you code for them; basically, you have to design your programs around them to make use of them.
The multi core approach to software is a hard thing; it ain't that easy, you have to design your project for it from the start. That rules out most software development that shares the same core engine over a long time - a practice that is there to save costs and development time, for the obvious reason that you developed the core to last through many new iterations.
You are aware that the cores of many modern operating systems share designs very similar to what we had in the 80s and 90s. Granted, those are massive pieces of software, but it's true of most modern software that has been developed over the years.
I do agree that multi core designed software is there for a reason, and I agree with your thinking behind that, and with your theory that it's the future - it is. -
lovelaptops MY FRIENDS CALL ME JEFF!
Wow, I have to admit, this discussion has gotten far more technically "high brow" than I imagined it would. It is fascinating to hear smart, knowledgeable people debating the implications of the latest innovations in micro technology.
Remaining questions:
(hate to be so pedestrian)
1) Would an IVB dual core CPU likely use more power, less, or about the same as its SB equivalent when performing the mix of tasks the "average user" performs?
2) Same question, comparing quad core IVB to quad core SB.
3) Same question, comparing an IVB dual core to an IVB quad core (at the lowest clock speed).
4) Of the IVB mobile CPUs, would any/most generate more or less heat than their SB equivalents?
Sorry for so many questions, but I've tried to carve them out so you can focus on just the particular head-to-head comparisons listed. Each of them has a particular reason for being asked, but spelling those out would just make the post longer, and I think the reasons will be obvious to many. Taken as a group, they help one decide whether to grab a good deal on an SB-based laptop or wait for IVB and pay 10-15% more, at least in the beginning. -
Karamazovmm Overthinking? Always!
The answers are already there... but let's go.
First, let me define what I think the average user does:
- Browses the internet, watches Flash videos, plays Flash games (those are CPU bound and a Core Duo can handle them perfectly fine; the IVB gain is minimal)
- Office work, like editing documents, doing some presentations and spreadsheets (usually more HDD intensive, with quite low demands on the CPU; aside from that, such apps are often badly coded and have lots of hiccups)
- Watches movies on the PC (not intensive at all on the CPU)
- Plays some games (some FPS here and there, and some casual games like Plants vs. Zombies; it can be intensive or not, and the GPU change would matter a lot here)
1) IVB should be better in terms of power usage, and since it's already clocked higher and there is the performance jump from the shrink and the optimizations, it should be faster. Will anyone notice it? Dunno.
2) Same answer
3) The dual cores usually have much better battery life. The gap narrowed greatly with SB, and we have yet to see how it fares now, but basically quads gave you less battery life; there is a good article about this at AnandTech - search for it.
4) Nope, the TDP is the same -
Let's say you calculate Pi to 1,000,000 decimals on both a quad core and a dual core system that are otherwise identical. The dual core system will use two cores and take twice as long as the quad core. The quad core will use twice as much power FOR THE CPU while doing it, but the load on the rest of the system is more or less identical.
In other words, both CPUs will do the job using the same energy, but since the quad core finishes faster, the total energy for the system will be less.
So, if quad core *systems* usually drain more battery than dual core, it's probably for other reasons.
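That "race to idle" arithmetic can be made concrete; every wattage below is invented purely for illustration:

```python
# Race to idle: the quad core draws more CPU power but finishes sooner,
# so CPU energy is equal and the rest of the system (screen, RAM, disk)
# runs for half the time. All wattages are made-up examples.
REST_OF_SYSTEM_W = 10.0  # assumed display + RAM + disk + chipset draw

def total_energy_wh(cpu_watts, hours):
    """Whole-system energy in watt-hours for a fixed compute job."""
    return (cpu_watts + REST_OF_SYSTEM_W) * hours

dual_wh = total_energy_wh(cpu_watts=15.0, hours=2.0)  # 2 cores, takes 2 h
quad_wh = total_energy_wh(cpu_watts=30.0, hours=1.0)  # 4 cores, takes 1 h

print(dual_wh)  # 50.0 Wh
print(quad_wh)  # 40.0 Wh: same CPU energy (30 Wh), half the platform energy
```

Both runs spend exactly 30 Wh in the CPU; the quad's advantage comes entirely from shutting the rest of the platform down an hour earlier.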
Another thing: a good OS should shut down cores as much as possible, so a quad core at idle should be able to shut down 3/4 of its cores, while a dual core can only shut down 1/2. Modern CPUs are also able to idle the remaining core when it's not needed, which more or less neutralizes this effect.
But given the choice of more cores or higher frequencies, I'd pick more cores any day, mostly *because* they're more energy efficient. Old programs that don't need more than one core, or can't be bothered with multi-threading, will run fast enough on one core as long as the frequency isn't lowered too much. And as Mr MM said, they are not very CPU intensive anyway so it wouldn't help much. But programs such as media encoding/decoding, compilers and other CPU intensive number crunching can double their performance with twice as many cores.
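The limit on how much those extra cores help can be sketched with Amdahl's law; the serial fractions below are assumed values, not measurements of any real program:

```python
# Amdahl's law: speedup on N cores is capped by the serial fraction s
# of the program. Encoding-style jobs have a tiny serial fraction and
# scale well; a mostly single-threaded legacy app barely benefits.

def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup of a job with the given serial fraction on N cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

print(amdahl_speedup(0.05, 4))  # encoder-like job: ~3.48x on 4 cores
print(amdahl_speedup(0.90, 4))  # legacy, mostly serial job: ~1.08x
```

This is why media encoders and compilers nearly double with each core count doubling, while old single-threaded programs see almost nothing.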
And lastly, any OS worth mentioning supports preemptive multitasking; even in the late 60s this was fairly common. This means that the same code also supports multiple cores. You need some driver software for the CPU to handle the cores, but the code running the instructions already supports the fact that it can be interrupted at any time (i.e. preemptive multitasking) and its consequence: that global data can change at any time. There are big gains from multitasking, and even from multi-threading within one process, even without multiple cores. So the use of multiple threads is much older than multi core CPUs. -
Karamazovmm Overthinking? Always!
You forgot that the scaling of quads and dual cores is not symmetrical, so it doesn't follow that the quads have 2x the processing power of the dual cores. If that were true, we would see even more cores than the 8 we have now for consumers and 12 for servers.
If there is only one process you can't multitask; a process is a line of code that will be computed. If not, I think you are referring to Windows' notion of a process -
For example, video encoding is very well suited for multi-threading. The picture is typically divided into 8x8 squares, and each square can be submitted to a core for processing. Since a picture has many squares, it can use pretty many cores to increase speed almost linearly. In fact, the Badaboom video encoder can use the, usually hundreds, of Cuda cores in nVidia GPUs to dramatically increase encoding speed. These cores are not handled by the normal OS, but by the nVidia driver, but theoretically, they're still cores running one thread each. -
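A minimal sketch of that tiling idea (a pure-Python stand-in for real per-block encoder work, not an actual codec, and the tile math here just sums pixels):

```python
# Block-parallel processing: split a frame into 8x8 tiles and hand each
# tile to a worker, the way an encoder can farm macroblocks out to cores.
from concurrent.futures import ProcessPoolExecutor

TILE = 8

def process_tile(tile):
    # Stand-in for real per-block work (DCT, quantization, ...):
    # here we just sum the pixel values in the tile.
    return sum(sum(row) for row in tile)

def split_tiles(frame):
    """Yield 8x8 tiles of a frame given as a list of pixel rows."""
    for y in range(0, len(frame), TILE):
        for x in range(0, len(frame[0]), TILE):
            yield [row[x:x + TILE] for row in frame[y:y + TILE]]

if __name__ == "__main__":
    frame = [[1] * 32 for _ in range(16)]  # tiny 32x16 "frame" of 1s
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_tile, split_tiles(frame)))
    print(len(results), sum(results))  # 8 tiles, total 512
```

Because each tile is independent, the work scales almost linearly with worker count, which is exactly why hundreds of GPU cores can be thrown at it.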
Here is a question: what are the chances of the less power hungry IB CPUs fitting into something the size of the old P series, or slightly bigger? Or is the heat too much?
-
The old P used Atom Z-series CPUs, running with a TDP of 2.4W. So even the ULV versions of IB will be way too hot to run in such a chassis.
-
Karamazovmm Overthinking? Always!
There was added overhead - or I dunno if there still is - due to the communication between the cores; the first Core Duo chips were basically 2 cores slapped together on the same die, connected via a bridge.
The idea behind my definition of process is that most code executes in line, or on stacks.
For example, to make this happen: 2 + 2 = 4
This simple thing is composed of five processes: the user input of the number 2, the user input of the arithmetic operator, the user input of the number 2 again, the user input of the arithmetic operator again, and the calculation. If you count the display of the result, there are six processes, which would be the correct amount.
However, with multi-threading - several processes in tandem - you are bound to what you just described in the video decoding example: separation into clusters of computational power. And guess what? Haswell or Rockwell may be the first mainstream Intel chips to make use of clusters. -
But yes, I'm using the process definition of the Windows OS, but that's also the same process definition used in Linux, Unix and probably very close to 100% of all computers using the CPUs we're discussing. So I fail to see the problem with using it.
However, I'm curious about which definition you use. You described some of it, and it seems pretty academic. And I guess that's useful when designing future chips.
For example, I've tried the OpenMP parallel programming API to make the threading a bit more transparent. Are you saying that Haswell or Rockwell will have specific instructions for dividing a large job into such clusters? -
I can't help but comment on one thing: "concurrency is not hard"? When did that happen? Perhaps from a hardware designer's perspective it's "not hard" (or at least not too hard: it doesn't care about the semantics too much, though these days even they do), but from a software perspective it means finding completely different algorithms for solving computationally intensive problems, and some problems are basically impossible to split across several threads at the same time. Sure, for tasks where this split is easy, a decent modern programming platform will let you achieve these things in an easy way (though deadlocking and starvation are still problems one can hit - but I guess those will just be classified as "minor bugs" by most modern managers, since they happen only "occasionally").
Also note that with the increase in the number of software developers (which is a good thing), their academic background and understanding of concurrent, parallel processes gets lower on average; thus we get crappier code all the time, with more "bugs" (or rather, core design problems).
Anyway, back to the thread topic: the speed/power ratio is exactly why I'd like to see the next Z with a ULV i7. It would give me the performance I need while improving battery life - perhaps I could carry it around for 2-3 days without having to plug it in, with the extra sheet battery? -
^That's a quite correct description. I just disagree about the goodness of increasing the number of developers. And that's exactly for the ramifications you mentioned: Lower academic background and crappier design.
Personally, I'm sick of hearing things like: "You mustn't make a too complicated design, because other developers might not be able to maintain it". What that means is that I'm not allowed to make a core design that supports concurrency, leaving only small parts, such as large matrix manipulation, to be processed concurrently.
And this ties right back to energy efficiency, which is what Ivy Bridge is mostly about. In a modern computer, and even more so in a server, the CPU is the main energy consumer; I don't have exact numbers, but I'd guess 70-80+% at full load. This means that software design is becoming increasingly important to energy efficiency.
And this takes us back to those developers with lower academic backgrounds. The same developers stopping those of us with higher goals from doing concurrent designs are also writing extremely inefficient implementations. I've seen it way too often: instead of solving some operation in constant or linear time, it becomes exponential, and no computer in the world, present or future, will be able to provide reasonable response times.
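A classic illustration of that gap, computing the same Fibonacci number two ways - no CPU generation closes a gap like this, only better design does:

```python
# The same result computed in exponential time vs. linear time.

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_linear(n):
    # Linear time: one pass with two accumulators.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_linear(90))  # returns instantly
# fib_naive(90) would run for centuries on any processor, IB included.
```

A new CPU generation buys a few percent; replacing the exponential algorithm buys factors of millions.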
Millions are wasted - in developer hours, but also in new hardware, cooling and electric bills - because of this crappy design and these crappy developers. A new generation of IB CPUs can save a few percent, but a better design can save more than 10x, probably exponentially more.
Just my 2c, interested in other opinions. -
Karamazovmm Overthinking? Always!
Yes, my definition of process is pretty academic.
The Core Duo design was basically a bridge between 2 cores, each with its own dedicated cache; an external cache shared by both cores was implemented later. The problem here is that if a program allowed for multiple threads, it would create overhead.
The idea of the cluster has only appeared on some Intel slides; I currently don't know how it will be implemented, but it's one way to circumvent the voltage leakage.
Sorry I didn't answer earlier - I rarely check the Sony forum.
Ivy Bridge vs Sandy Bridge: benchmarks from reputable source
Discussion in 'VAIO / Sony' started by lovelaptops, Apr 24, 2012.