Is the advancement from the C2D to I3, I5, I7 considered significant?
How often do processors have significant advancements?
-
This has been discussed in many threads such as this one.
It's called Sandy Bridge, and it won't have nearly as big an impact as the jump from Penryn to Arrandale did. Basically, check out that thread. The processors themselves will be more efficient and higher clocked. -
The "significant" advancements occur each generation. Core 2 Duo / Core 2 Quad was one generation. The Core i3/i5/i7 are all part of another generation. In general, it takes about 2-3 years for a new generation.
You will often find minor enhancements within a generation every 6-12 months. Those minor enhancements include releasing new models with slightly higher specs (higher clock speed, more cache, more cores) and/or smaller manufacturing process (lower heat, lower power consumption). These minor enhancements are certainly nice, but will not give you the major jump in performance that a generational leap will give you. -
moral hazard Notebook Nobel Laureate
How long till the next leap?
-
About 2 1/2 years for the full generational leap (dual cores to quad, then six, then eight cores, or 12 cores if they decide to skip eight; I forget which article I read this in).
Maybe another year for the first higher-end Sandy Bridge, probably a server and then a desktop processor... I wonder what they will call it... i8? -
so it does seem like an inevitable move towards multicores?
-
That's the way we've been heading for the past few years.
-
Well, Intel is following what they call their "Tick-Tock" model. Each tick and tock occurs annually: on the tick they release a die shrink of the existing architecture, and on the tock they release a new microarchitecture.
-
H.A.L. 9000 Occam's Chainsaw
Exactly. Tick-Tock-Tick-Tock
It's a very good model IMO.
-
For now, yes. Increasing clocks was hitting a bottleneck.
-
And not only that, but increasing clocks was showing diminishing returns rather quickly. Getting a CPU to clock from 3.0 GHz to 3.6 GHz wasn't really bringing that much real-world improvement. However, going with multiple cores was less expensive than trying to scale clock speeds, and it yielded better improvement in applications that could use the cores. The only challenge then was to get software developers to write their software in a way that could take advantage of those cores (multi-threaded applications).
We're getting there. We're a lot farther ahead than we were 3 years ago with software and games being multi-threaded. Not perfect yet, but the fact that just about EVERYONE has a multi-core system forces software developers to make a lot of progress towards writing their apps in a multi-threaded manner. -
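To show what writing an app "in a multi-threaded manner" actually looks like, here is a minimal C++ sketch that splits a big summation across a few worker threads; the data size and the choice of 4 workers are just illustrative assumptions, not taken from any real application.

#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 10000000;           // size of the (made-up) data set
    std::vector<int> data(n, 1);

    const unsigned workers = 4;               // e.g. one thread per core
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Each thread sums its own slice, so every core has work to do.
            std::size_t begin = static_cast<std::size_t>(w) * n / workers;
            std::size_t end   = static_cast<std::size_t>(w + 1) * n / workers;
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& t : pool) t.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "sum = " << total << '\n';   // prints 10000000
}

On a dual- or quad-core CPU each slice can run on its own core at the same time, which is exactly the kind of win the posts above are talking about; on a single-core chip the same code still works, it just gains nothing.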
H.A.L. 9000 Occam's Chainsaw
Another good point. Speed did not scale linearly with clock speed increases. All higher clocks tend to do is increase heat, so that's why the market moved to SMP. Bring down the clock speeds, just add more cores, and make software multi-thread aware. Then, when your lithography node is small enough, increase the clocks. -
It depends on what you mean by significant advancements. On paper, processor specifications with each successive generation are always impressive because that's what the manufacturer wants you to think. Overall I would encourage you to be skeptical of the improvements touted by manufacturers, given that the smallest improvement is trumpeted as revolutionary.
Real world advancements are uncommon. Usually they happen because Intel/AMD realizes that their current platform is uncompetitive garbage or not in tune with ongoing software trends or consumer wants. Intel's release of the Pentium M (their first purpose-built mobile processor) to replace the troubled and sluggish Pentium 4 was not only a significant advancement, it was also a reaction to AMD's competing solutions edging out Intel's leading product.
The i3/5/7 series were also significant in that they fully embraced multiple physical/logical cores for the purposes of improving multi-threaded performance rather than increasing clock speed. Real-world performance increased greatly as a result; this design decision was also a reaction to the tendency of consumers to multi-task.
Minor improvements come out with almost every subsequent product revision. For example, the transition from Merom to Penryn (Txxxx to Pxxxx) could be considered minor due to its moderate improvements in power consumption and heat output (TDP). -
We usually just see evolutionary changes from one generation to the next. It is not often that one generation gets demolished by the one that succeeds it. For example, I would never have bought a Core i7 if I already had a Core 2 Quad instead of a Core 2 Duo; that would be a waste of money. Processors from one generation to the next always bring some advancement. It can be as simple as a more efficient redesign or a die shrink, or as big as something like the on-die memory controller of the Athlon 64 (which was otherwise almost just an Athlon XP), or the move from NetBurst to Core 2. NetBurst was doing really badly at the time, so anything new from Intel that wasn't NetBurst would have been great. In all honesty, the move from Core 2 to Core i is not one of the big ones.
That depends on what diminishing returns you are talking about. Purely increasing the clock speed of a processor does not really show diminishing returns if you are utilizing all of its clock cycles. However, attempts at pushing the clock speed higher do show diminishing returns.
But the move to multicore really isn't about clock speed; it is about not being able to get enough of an increase in the processing power of a single core. When you reach a certain point, and that point seems to have been around the time the Core 2 Duo was released, two cores on one piece of silicon deliver a lot more processing power than one big core, given the same real estate and power budget.
The big issue with that, though, was that you now had two separate cores, so a lot of software support was needed to slowly take advantage of them, and that continues to this day as more and more cores with Hyper-Threading become the norm. Currently the Gulftown processors have 6 cores. A program has to be able to use all 6 cores to see the potential of that, and there aren't a whole lot of applications that do; it needs to take full advantage of 12 threads (counting Hyper-Threading) to wring the last 10% out of that processor. So now we have the issue of software not being able to take full advantage of the available processing power, which from now on seems to be a never-ending battle. So although going multicore is the easiest way to get more processing power out of the same amount of silicon and electricity, it is harder to exploit than getting the most out of a single core.
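As a rough illustration of the "use however many cores you have" problem described above, here is a small C++ sketch that sizes its worker pool from std::thread::hardware_concurrency(), which reports logical cores (so 12 on a Gulftown with Hyper-Threading). Everything else in it is an assumption made up for the example.

#include <iostream>
#include <thread>
#include <vector>

int main() {
    // Logical cores: 2 on a Core 2 Duo, 12 on a 6-core Gulftown with HT.
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 2;            // the call may return 0 if unknown

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < threads; ++i) {
        pool.emplace_back([i] {
            // Real work would go here; each worker just reports in
            // (output from different threads may interleave).
            std::cout << "worker " << i << " running\n";
        });
    }
    for (auto& t : pool) t.join();

    std::cout << "used " << threads << " threads\n";
}

The same binary only benefits from 6 cores or 12 hardware threads if the work can actually be divided that many ways, which is exactly the software battle this post describes.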
Yes, like Hyper-Threading. If it was so amazing the first time around, why did it get nixed for Core / Core 2?
"But it turns a dual core into a quad core!!!!" -
Performance-wise, I don't think the jump from Core 2 to the i series was anything major; the jump from single core to dual core was much bigger. Overnight, people could get almost double the processing power.
I would like to see a price/performance comparison study (hint for notebookreview). For example, does a typical i3 give much more than the ubiquitous Q6600?