The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    How often do processors have significant advancements?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by JWBlue, Sep 2, 2010.

  1. JWBlue

    JWBlue Notebook Deity

    Reputations:
    85
    Messages:
    844
    Likes Received:
    9
    Trophy Points:
    31
    Is the advancement from the Core 2 Duo (C2D) to the i3/i5/i7 considered significant?

    How often do processors have significant advancements?
     
  2. Xonar

    Xonar Notebook Deity

    Reputations:
    1,457
    Messages:
    1,518
    Likes Received:
    13
    Trophy Points:
    56
    This has been discussed in many threads such as this one.

    It's called Sandy Bridge and it won't have nearly as big of an impact as the jump from Penryn to Arrandale was. Basically, check out that thread. The processors themselves will be more efficient and higher clocked.
     
  3. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    The "significant" advancements occur each generation. Core 2 Duo / Core 2 Quad was one generation. The Core i3/i5/i7 are all part of another generation. In general, it takes about 2-3 years for a new generation.

    You will often find minor enhancements within a generation every 6-12 months. Those minor enhancements include releasing new models with slightly higher specs (higher clock speed, more cache, more cores) and/or smaller manufacturing process (lower heat, lower power consumption). These minor enhancements are certainly nice, but will not give you the major jump in performance that a generational leap will give you.
     
  4. moral hazard

    moral hazard Notebook Nobel Laureate

    Reputations:
    2,779
    Messages:
    7,957
    Likes Received:
    87
    Trophy Points:
    216
    How long till the next leap?
     
  5. Crimsoned

    Crimsoned Notebook Deity

    Reputations:
    268
    Messages:
    1,396
    Likes Received:
    3
    Trophy Points:
    56
    About 2 1/2 years for the full-on generational leap (dual core to quad to six to eight cores, or possibly twelve if they skip eight; I forget which article I read this in).
    Maybe another year for the first higher-end Sandy Bridge, probably a server and then a desktop processor. I wonder what they will call it... i8?
     
  6. trvelbug

    trvelbug Notebook Prophet

    Reputations:
    929
    Messages:
    4,007
    Likes Received:
    40
    Trophy Points:
    116
    So it does seem like an inevitable move towards multicore?
     
  7. Crimsoned

    Crimsoned Notebook Deity

    Reputations:
    268
    Messages:
    1,396
    Likes Received:
    3
    Trophy Points:
    56
    That's the way we've been heading for the past few years.
     
  8. damaph

    damaph Notebook Guru

    Reputations:
    0
    Messages:
    54
    Likes Received:
    0
    Trophy Points:
    15
    Well, Intel is following what they call their "Tick-Tock" model, alternating annually. On the tick, they release a die shrink of the existing architecture, and on the tock, they release a new microarchitecture on that process.
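As a rough illustration of the cadence (codenames and years from Intel's public roadmaps of that era; treat the dates as approximate), the pattern alternates between process shrinks and new microarchitectures:

```python
# Sketch of Intel's Tick-Tock cadence, circa 2006-2011.
# tick = die shrink of the existing microarchitecture,
# tock = new microarchitecture on the existing process.
cadence = [
    ("tock", "Core (Conroe/Merom)", "65 nm", 2006),
    ("tick", "Penryn",              "45 nm", 2007),
    ("tock", "Nehalem",             "45 nm", 2008),
    ("tick", "Westmere",            "32 nm", 2010),
    ("tock", "Sandy Bridge",        "32 nm", 2011),
]

for step, arch, process, year in cadence:
    print(f"{year}: {step:4s} -> {arch} on {process}")
```

The table makes the rhythm visible: each shrink (tick) de-risks the process before the next architectural overhaul (tock) lands on it.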
     
  9. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    Exactly. Tick-Tock-Tick-Tock :) It's a very good model IMO.
     
  10. Bullit

    Bullit Notebook Deity

    Reputations:
    122
    Messages:
    864
    Likes Received:
    9
    Trophy Points:
    31
    For now yes. Increasing clocks was hitting a bottleneck.
     
  11. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    And not only that, but increasing clocks was showing diminishing returns rather quickly. Getting a CPU to clock from 3.0 GHz to 3.6 GHz wasn't really bringing that much real-world improvement. However, going with multiple cores was less expensive than trying to scale clock speeds, and yielded better improvement in applications that could use the cores. The only challenge was then getting software developers to write their software in a way that could take advantage of those cores (multi-threaded applications).

    We're getting there. We're a lot farther ahead than we were 3 years ago with software and games being multi-threaded. Not perfect yet, but the fact that just about EVERYONE has a multi-core system forces software developers to make a lot of progress towards writing their apps in a multi-threaded manner.
     
  12. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    Another good point. Speed did not scale linearly with clock speed increases. All higher clocks tend to do is increase the heat levels, so that's why the market moved to SMP: bring down the clock speeds, add more cores, and make software multi-thread aware. Then, when your lithography node is small enough, increase the clocks again.
     
  13. Bog

    Bog Losing it...

    Reputations:
    4,018
    Messages:
    6,046
    Likes Received:
    7
    Trophy Points:
    206
    It depends on what you mean by significant advancements. On paper, processor specifications with each successive generation are always impressive because that's what the manufacturer wants you to think. Overall I would encourage you to be skeptical of the improvements touted by manufacturers, given that the smallest improvement is trumpeted as revolutionary.

    Real-world advancements are uncommon. Usually they happen because Intel/AMD realizes that their current platform is uncompetitive garbage, or out of tune with ongoing software trends or consumer wants. Intel's release of the Pentium M (their first purpose-built mobile processor) to replace the troubled and sluggish Pentium 4 was not only a significant advancement, it was also a reaction to AMD's competing solutions edging out Intel's leading product.

    The i3/5/7 series were also significant in that they fully embraced multiple physical/logical cores for the purposes of improving multi-threaded performance rather than increasing clock speed. Real-world performance increased greatly as a result; this design decision was also a reaction to the tendency of consumers to multi-task.

    Minor improvements come out with almost every subsequent product revision. For example, the transition from Merom to Penryn (Txxxx to Pxxxx) could be considered minor due to its moderate improvements in power consumption and heat output (TDP).
     
  14. Trottel

    Trottel Notebook Virtuoso

    Reputations:
    828
    Messages:
    2,303
    Likes Received:
    0
    Trophy Points:
    0
    We usually just see evolutionary changes from one generation to the next. It is not often that one generation gets demolished by the one that succeeds it. For example, I would never have bought a Core i7 if I had already had a Core 2 Quad rather than a Core 2 Duo; that would be a waste of money. Processors from one generation to the next always have advancements. These can be as simple as a more efficient redesign or a die shrink, or as big as the on-die memory controller of the Athlon 64 (which was otherwise almost just an Athlon XP), or the move from NetBurst to Core 2. NetBurst was doing so badly at the time that anything new from Intel that wasn't NetBurst would look great. In all honesty, the move from Core 2 to Core i is not one of those big ones.

    That depends on which diminishing returns you are talking about. Purely increasing the clock speed of a processor does not really show diminishing returns if you are utilizing all of its clock cycles. However, attempts at pushing the clock speed higher do show diminishing returns.

    But the move to multicore really isn't about clock speed; it is about not being able to get enough increase in the processing power of a single core. At a certain point, which seems to have been around the time the Core 2 Duo was released, putting two cores on a piece of silicon gave a lot more processing power than one big core, given the same die area and power budget.

    The big issue with that, though, was that you now had two separate cores, so a lot of software support was needed to slowly take advantage of them, and that effort continues to this day as more and more cores with Hyper-Threading become the norm. Currently the Gulftown processors have 6 cores. A program has to be able to use all 6 cores to see that potential, and there aren't a whole lot of applications that can; it would need to take full advantage of 12 threads to wring the last 10% out of that processor. So now we have the issue of software not being able to use all the available processing power, which from here on looks like a never-ending battle. Although going multicore is the easiest way to get more processing power out of the same amount of silicon and electricity, it is harder to extract the full potential of many cores than of a single one.
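The "wring the last 10% out" intuition lines up with Amdahl's law, which bounds the speedup from N cores when only a fraction of the work can run in parallel. A quick sketch (the 90%-parallel figure below is purely illustrative, not a measurement of any real workload):

```python
# Amdahl's law: upper bound on speedup with n cores when a fraction p
# of the work (0 <= p <= 1) is parallelizable and the rest is serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel program tops out well short of n-times faster.
for cores in (1, 2, 4, 6, 12):
    print(f"{cores:2d} cores, 90% parallel: {amdahl_speedup(0.9, cores):.2f}x")
```

With 90% of the work parallelizable, 12 cores yield only about a 5.7x speedup, which is why each additional core buys less than the one before it.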

    Yes, like Hyper-Threading. If it was so amazing the first time around, why did it get nixed for Core/Core 2?

    "But it turns a dual core into a quad core!!!!"
     
  15. Bullit

    Bullit Notebook Deity

    Reputations:
    122
    Messages:
    864
    Likes Received:
    9
    Trophy Points:
    31
    Performance-wise, I don't think the jump from Core 2 to the i series was anything major; the jump from single core to dual core was much bigger. Overnight, people could get almost double the performance.

    I would like to see a price/performance comparison study - hint for NotebookReview. For example, does a typical i3 give much more than the ubiquitous Q6600?