Is the advancement from the C2D to I3, I5, I7 considered significant?
How often do processors have significant advancements?
-
This has been discussed in many threads such as this one.
It's called Sandy Bridge and it won't have nearly as big an impact as the jump from Penryn to Arrandale was. Basically, check out that thread. The processors themselves will be more efficient and higher clocked. -
You will often find minor enhancements within a generation every 6-12 months. Those minor enhancements include releasing new models with slightly higher specs (higher clock speed, more cache, more cores) and/or smaller manufacturing process (lower heat, lower power consumption). These minor enhancements are certainly nice, but will not give you the major jump in performance that a generational leap will give you. -
moral hazard Notebook Nobel Laureate
How long till the next leap?
-
About 2 1/2 years for the full-on generational leap (dual cores to quad, then six, then eight cores, or possibly twelve if they skip eight; I forget which article I read this in).
Maybe another year for the first higher-end Sandy Bridge, probably a server and then a desktop processor. I wonder what they will call it... i8? -
So it does seem like an inevitable move towards multicore, then?
-
That's the way we've been heading for the past few years.
-
Well, Intel is following what they call their "Tick-Tock" model. Each tick and tock occurs roughly annually. On the tick, they release a die shrink of the current architecture, and on the tock, they release a new microarchitecture on that process.
-
H.A.L. 9000 Occam's Chainsaw
It's a very good model IMO.
-
-
And not only that, but increasing clock speeds was showing diminishing returns rather quickly. Getting a CPU from 3.0 GHz to 3.6 GHz wasn't bringing much real-world improvement. Going with multiple cores, however, was less expensive than trying to scale clock speed further, and yielded better gains in applications that could use the cores. The remaining challenge was getting software developers to write their software in a way that could take advantage of those cores (multi-threaded applications).
We're getting there. We're a lot farther ahead than we were 3 years ago, with more software and games being multi-threaded. Not perfect yet, but the fact that just about EVERYONE has a multi-core system is forcing software developers to make real progress towards writing their apps in a multi-threaded manner. -
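To make the point above concrete, here is a minimal sketch (not from the thread; the workload and names are made up for illustration) of the same CPU-bound job run serially and then spread across worker processes, which the OS can schedule on separate cores:

```python
# Toy CPU-bound workload: count primes below `limit` by trial division.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes in [2, limit) the slow, CPU-hungry way."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [20_000] * 4  # four independent pieces of work

    # Serial: one core does everything, one chunk after another.
    serial = [count_primes(c) for c in chunks]

    # Parallel: each worker process can land on a different core.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(count_primes, chunks))

    assert serial == parallel  # same answers; the parallel run just finishes sooner
```

The point of the sketch: the hardware gives you the cores for free, but the program only benefits because the work was deliberately split into independent chunks, which is exactly the software effort the posters are describing.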
H.A.L. 9000 Occam's Chainsaw
-
It depends on what you mean by significant advancements. On paper, processor specifications with each successive generation are always impressive because that's what the manufacturer wants you to think. Overall I would encourage you to be skeptical of the improvements touted by manufacturers, given that the smallest improvement is trumpeted as revolutionary.
Real-world advancements are uncommon. Usually they happen because Intel/AMD realizes that their current platform is uncompetitive garbage or not in tune with ongoing software trends or consumer wants. Intel's release of the Pentium M (their first purpose-built mobile processor) to replace the troubled and sluggish Pentium 4 was not only a significant advancement, it was also a reaction to AMD's competing solutions edging out Intel's leading product.
The i3/5/7 series were also significant in that they fully embraced multiple physical/logical cores for the purposes of improving multi-threaded performance rather than increasing clock speed. Real-world performance increased greatly as a result; this design decision was also a reaction to the tendency of consumers to multi-task.
Minor improvements come out with almost every subsequent product revision. For example, the transition from Merom to Penryn (Txxxx to Pxxxx) could be considered minor due to its moderate improvements in power consumption and heat output (TDP). -
But the move to multicore really isn't about clock speed; it's about not being able to squeeze enough additional processing power out of a single core. Past a certain point, which seems to have been reached around the time the Core 2 Duo was released, putting two cores on one piece of silicon delivered a lot more processing power than one big core, given the same die area and power budget.
The big issue with that, though, is that with separate cores you need a lot of software support to take advantage of them, and that effort continues to this day as more cores with Hyper-Threading become the norm. Currently the Gulftown processors have six cores (twelve logical cores with Hyper-Threading). A program has to use all six cores to see the chip's potential, and there aren't a whole lot of applications that do; it would need to take full advantage of all twelve threads to wring the last 10% out of that processor. So now we have the opposite problem, software not being able to fully use the available processing power, which from here on looks like a never-ending battle. Going multicore is the easiest way to get more processing power out of the same amount of silicon and electricity, but extracting all of it is harder than it was with a single core.
"But it turns a dual core into a quad core!!!!" -
Performance-wise, I don't think the jump from Core 2 to the i series was anything major; the jump from single core to dual core was much bigger. Overnight, people could get almost double the performance.
I would like to see a price/performance comparison study (hint for NotebookReview). For example, does a typical i3 give much more than the ubiquitous Q6600?
Discussion in 'Hardware Components and Aftermarket Upgrades' started by JWBlue, Sep 2, 2010.