The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to ensure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Why aren't CPU's getting faster?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by KylePR, Oct 19, 2016.

  1. KylePR

    KylePR Notebook Geek

    Reputations:
    15
    Messages:
    80
    Likes Received:
    7
    Trophy Points:
    16
    Not sure if this applies to the desktop market as well, but from what I can tell, notebook CPUs haven't really gotten any faster in the last 4 years. My old computer had an Ivy Bridge i7-3610QM, which seemed to be on the higher end for the professional/multimedia/gaming range. Taking a look at today's options, even for higher-end gaming notebooks the Skylake i7-6700HQ seems to be really commonplace, but as far as performance goes, it's really only about 20% faster than the Ivy Bridge processor.

    Meanwhile, GPUs seem to have become way better, especially with Pascal. But even before Pascal was released, the 900M range was close to twice as fast as the 600M range.

    Why are GPUs improving so much faster than CPUs? Is it because there hasn't really been a need for more processing power? Is it because the industry is focused on pushing 4K?

    Curious to hear everyone's thoughts.
     
  2. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    They are faster, when the whole platform is considered.

    You also have to do your homework and not buy (or compare to) the commonplace offerings (from either generation you're comparing). In any case, the i7-6700HQ is old tech (circa 2015). The latest (higher-end Skylake) iterations are faster, more efficient and cheaper (now) too.

    GPUs needed to get better because they were (and still are) far from plateauing in power usage, heat output and actual, usable performance (especially on battery power).

    Today's CPUs need a better-balanced system: the fastest and largest RAM, the fastest and biggest-capacity SSD (PCIe x4, where applicable) and the best OS (Win10 x64 Pro) are what let the latest processors shine, and put the older generations in their place (i.e. in the 'history' pile).

    I agree the 4-year-old CPU is still a capable performer today. But that doesn't mean the latest platforms don't get more work done in less time, with less (battery) power, less noise and less weight... even if the nominal CPU performance is very similar.

    With everyone I see with their heads stuck in their handheld 'toys', it is little wonder that notebook/desktop progress is not at the forefront at this time (I see this changing though...).
     
  3. djembe

    djembe drum while you work

    Reputations:
    1,064
    Messages:
    1,455
    Likes Received:
    203
    Trophy Points:
    81
    I think the difference between the progress of central processors and graphics development comes down to 2 things: the increasing difficulty of shrinking semiconductor nodes, and parallelism. Intel realized years ago that they couldn't continue to gain performance by pushing clock speed, so they have instead been using multiple cores and smaller node sizes to gain performance improvements. But many tasks assigned to a central processor lean heavily on one thread, which limits the benefit of adding more cores, since significant parts of the workload can't be shared across them. As a result, CPU improvements have mainly come from tuning tricks (boosted speed when not all cores are in use and/or there is thermal room, optimizations in manufacturing, etc.) and from making the components smaller so more will fit on a chip. But the manufacturing processes required to continue shrinking nodes take increasingly long to develop and test, which means processors themselves can't get much faster until the next node shrink.

    Graphics are also affected by the node-shrink slowdown, but have the advantage of dealing with almost entirely parallel workloads, which means they can continue to increase performance by adding more cores. So graphics can continue to grow more powerful while central processors progress more slowly.
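
    As a rough illustration of that single-thread bottleneck, here is a minimal Amdahl's-law sketch in Python; the parallel fractions and core counts are made-up examples, not measurements of any real chip:

        # Amdahl's law: overall speedup is capped by the serial fraction of a workload.
        def amdahl_speedup(parallel_fraction, cores):
            serial_fraction = 1.0 - parallel_fraction
            return 1.0 / (serial_fraction + parallel_fraction / cores)

        for p in (0.50, 0.90, 0.99):          # illustrative parallel fractions
            for cores in (4, 8, 64):
                print(f"{p:.0%} parallel, {cores:2d} cores -> {amdahl_speedup(p, cores):5.1f}x")

        # A half-serial (CPU-like) workload never exceeds 2x no matter how many cores
        # are added, while a 99%-parallel (GPU-like) workload keeps scaling with cores.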
     
    alexhawker likes this.
  4. Galm

    Galm "Stand By, We're Analyzing The Situation!"

    Reputations:
    1,228
    Messages:
    5,696
    Likes Received:
    2,949
    Trophy Points:
    331
    CPUs were also around and being researched long before GPUs. That's part of the reason the optimizations etc. are not yielding as many gains. The late 90s and early 2000s saw CPUs booming in performance, which is what's happening to GPUs now. Also, I believe 7nm transistors are about the smallest physically possible, so we are running out of room to shrink.
     
  5. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    The other big issue is that there is no immediate need. I only have an i7-3820 and nothing I do normally stresses the CPU much. I just do not have to have a faster system CPU.
     
  6. StormJumper

    StormJumper Notebook Virtuoso

    Reputations:
    579
    Messages:
    3,537
    Likes Received:
    488
    Trophy Points:
    151
    CPUs, as others mentioned, have reached a development peak until a technological change occurs in CPU fabrication. GPUs are starting their rise but will hit a ceiling as well. Only with a major evolution in CPU processing will we see more gains.
     
  7. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    This. There is an actual wall when it comes to reducing voltage to reduce the heat generated. We are getting close to the limit dictated by the properties of the materials currently used in semiconductors: if you reduce voltages much further, your transistors won't be able to reliably switch between a 0 and a 1 100% of the time. Pretty much every CPU manufacturer is aware of this; new materials or new ways of making semiconductors will be needed to overcome it. As long as the voltage stays similar, so does the heat output, which makes clock speed hit a kind of plateau. There is still some manoeuvring room, but couple this with the mobile market and it drives research towards power consumption and heat output rather than raw performance.

    There used to be a time when you could simply shrink the process, reduce voltage and crank the clock speed significantly while keeping the same thermal envelope. Not anymore. GPUs are still on overall larger processes and have more leeway in terms of thermal envelope. Couple that with the fact that they basically have to be good at one thing, and you have the potential to tune your designs to the specific roles of a GPU, whereas a CPU has to do many different things, which usually leaves less room for fine-tuning.
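
    To put rough numbers on that plateau, here is a small Python sketch of the usual dynamic-power relation (power ~ C·V²·f); the capacitance, voltage and clock figures are arbitrary placeholders, so only the ratios matter:

        # Dynamic switching power scales roughly as C * V^2 * f: linear in clock speed,
        # quadratic in voltage. Once voltage stops dropping with each shrink, extra
        # clock translates almost directly into extra heat.
        def dynamic_power(switched_capacitance, voltage, frequency):
            return switched_capacitance * voltage ** 2 * frequency

        baseline   = dynamic_power(1.0, 1.00, 3.0e9)   # reference chip: 1.0 V at 3 GHz
        same_volt  = dynamic_power(1.0, 1.00, 4.0e9)   # +33% clock, voltage stuck at 1.0 V
        lower_volt = dynamic_power(1.0, 0.80, 4.0e9)   # +33% clock with an old-style voltage drop

        print(f"clock bump, same voltage: {same_volt / baseline - 1:+.0%} power")   # about +33%
        print(f"clock bump at 0.8 V:      {lower_volt / baseline - 1:+.0%} power")  # about -15%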

    There is a lot of research being done on new materials; IBM and Intel are pouring a lot of money into that front, but going from research to something viable at large scale, and financially, is an impressive challenge in itself. Once you've got something nice out of research, you usually work out the kinks at a small scale, and when it comes time to scale things up there are often a boatload of new problems to fix. R&D can take a surprising amount of time to get something out the door and shipped to consumers.
     
    toughasnails and custom90gt like this.
  8. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,189
    Likes Received:
    17,897
    Trophy Points:
    931
    I would say the biggest reason is a lack of competition.
     
  9. StormJumper

    StormJumper Notebook Virtuoso

    Reputations:
    579
    Messages:
    3,537
    Likes Received:
    488
    Trophy Points:
    151
    That, and the $$ for R&D will be quite high before consumers get to see it. Maybe in the future another CPU maker will challenge Intel, but until then it takes a lot of $$ nowadays to get going, especially in a CPU market that has already hit a peak.
     
  10. TomJGX

    TomJGX I HATE BGA!

    Reputations:
    1,456
    Messages:
    8,707
    Likes Received:
    3,315
    Trophy Points:
    431
    Also, Intel keeps focusing on improving and enlarging the iGPU, which leads to CPU performance increases being much smaller than they could be...

    Sent from my LG-H850 using Tapatalk
     
    Papusan and hmscott like this.
  11. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    That's the real reason: we have only so much headroom for power and thermal cooling, and Intel is wasting it on iGPUs.
     
    temp00876, Papusan and TomJGX like this.
  12. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    I wouldn't really say waste, since iGPUs do have their uses to non-gamers and other people who don't need a dGPU in their system.

    If your use case is actually bottlenecked by current CPU offerings with iGPUs, you can still buy CPU-only parts from Intel or AMD. You're going to pay for it, though.
     
  13. tribaljet

    tribaljet Notebook Consultant

    Reputations:
    30
    Messages:
    141
    Likes Received:
    63
    Trophy Points:
    41
    Software needs to start taking newer instruction sets into consideration as well. There is still a very large software majority targeting SSE2 for compatibility's sake, which is understandable on its own, but there is already software shipping multiple builds for newer hardware. That can increase development time to an extent, but beyond that it's mostly a matter of end-product size, which becomes a moot point with ever-increasing storage capacity. On the other hand, I'm rather curious about what comes after typical silicon on the hardware side.
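
    As a sketch of what that kind of hardware-aware dispatch can look like, the Python snippet below picks a build based on the CPU flags reported in /proc/cpuinfo; it assumes Linux, and the build names are purely illustrative:

        # Choose a code path based on which SIMD instruction sets the CPU advertises.
        # Real software does the same thing at a lower level: separate SSE2/AVX2 builds,
        # compiler function multi-versioning, or cpuid checks in C.
        def cpu_flags():
            try:
                with open("/proc/cpuinfo") as f:      # Linux-only assumption
                    for line in f:
                        if line.startswith("flags"):
                            return set(line.split(":", 1)[1].split())
            except OSError:
                pass
            return set()

        def pick_build(flags):
            if "avx2" in flags:
                return "avx2 build"                   # hypothetical fast path
            if "sse4_2" in flags:
                return "sse4.2 build"
            return "sse2 build"                       # lowest common denominator

        print(pick_build(cpu_flags()))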
     
  14. Eurocom Support

    Eurocom Support Company Representative

    Reputations:
    293
    Messages:
    505
    Likes Received:
    970
    Trophy Points:
    106
    Higher CPU frequencies generate more electromagnetic noise, and most of the potential performance gain is lost to "thermal loss". As laptops get smaller and smaller, it is impossible for laptop vendors using mobile CPUs to push them to higher speeds and/or overclock them. This is why Eurocom uses desktop CPUs and makes its laptops bigger, like the EUROCOM Panther 5 series, which supports LGA2011 CPUs running up to 200W.
     
  15. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Nice advertisement ;)
     
    tilleroftheearth likes this.
  16. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Never underestimate the power of "if it works, don't fix it". Software developers are expensive, and unless there are plenty of complaints about performance that can be solved by using the newer instruction sets, they're still likely to use whatever the IDE(s) spit out.
     
  17. John Ratsey

    John Ratsey Moderately inquisitive Super Moderator

    Reputations:
    7,197
    Messages:
    28,840
    Likes Received:
    2,165
    Trophy Points:
    581
    Personally, I would call a machine with a 200W CPU a portable desktop and not a laptop.

    John
     
    tilleroftheearth likes this.
  18. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Sure it's a waste; the iGPU should be on its own carrier, not robbing power and thermals from the CPU.

    Intel thought they would force it down everyone's throat by making it a non-removable part of the CPU carrier. Greed.

    It's not supposed to be there. Please take it off the CPU carrier and give it its own place in the chipset; then it's an option, not a parasite. :)
     
    Papusan likes this.
  19. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    That's how it used to be. Motherboards optionally came with graphics chipsets built into them. My family's Vista-era HP Pavilion Elite desktop had an nVidia-made chipset built into the motherboard, for example.

    Can't remember who first started integrating CPU and GPU, though iirc AMD made it pretty popular with the APUs, and Intel's iGPUs soon caught up. Anyway, for the majority of computer buyers (including you, I suspect), current and near-future CPU performance is plenty with today's options. However, if you still feel like you don't want to buy a GPU-carrying CPU, you can still buy the higher-end Intel chips, or if you're more on a budget AMD's FX lineup still exists (hell, I'm typing this on a computer with the FX-6300 ;) ). Or, you can always start up your own CPU company that focuses purely on CPUs only :p.

    I mean, if you (as in your workload) really do demand more CPU power than you can currently buy, you might have a point. Though I suspect most of the time, your CPU is processing HALT instructions.
     
    hmscott likes this.
  20. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    @Jarhead

    This thread is about "Why aren't CPU's getting faster?"

    I answered the question supporting @TomJGX: it's the wasted silicon for the iGPU on the CPU carrier that's held us back.

    I then followed up suggesting it was Intel's greed motivating them to force their iGPU onto the CPU carrier, locking out any other third party chipsets or add-ons that provide low power GPU solutions.

    When AMD and Intel remove the iGPU's, we can grow the CPU performance significantly again.
     
    Papusan likes this.
  21. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    ...Plus all the other factors mentioned in this thread, like materials engineering, economics, competition, and actual need.
     
    hmscott likes this.
  22. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    No, I think those contributing factors are all in the noise.

    Intel's forcing of the iGPU onto the CPU carrier to lock out other vendors is the main thing that has frozen CPU development: physically limiting real estate, limiting the power budget, and capping thermal headroom.
     
  23. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Locking out vendors...?

    Wait, you mean to tell me there could have been an i5 with Radeon iGPU graphics?? ;)
     
    hmscott likes this.
  24. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Unless a user needs a (headless) server, a GPU is needed, and most times an iGPU is 'all' that's needed too.

    Except for specialized uses (gamers, video editors, etc.), an iGPU in a powerful (i7 quad-core, non-'U') CPU is more than welcome. Who needs those smelly, hot, power-hungry GPUs roasting all the other components anyway?

    ;)

    This is progress. Really. really.

    The stuck in yesteryear crowd will get the benefits too while they're kicking and screaming as they're being propelled to the future.

    Who wants/needs GPU's? Not I.

    An iGPU from Intel gives me a platform that not only advances me in the mobile space, but also on desktops too (leaving a few dozen workstations running 24/7 costs money too).
     
    Jarhead likes this.
  25. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yes, but the iGPU does steal space, power, and thermals from the CPU when it's on the same die, and that limits CPU performance growth in that space.

    Which is the topic of this thread. :)

    Not "Hey we all need GPU's, lets just put them on the CPU so we don't lose'em, now where did my GPU go... oh, yeah, it's here right next to the CPU!"

    You 2 @Jarhead and @tilleroftheearth are missing the point of the thread, and creating a circular argument of no interest to the subject at hand.

    The GPU was external, and at current iGPU power levels it could just as easily have been kept as part of the chipset, off the CPU carrier, with sufficient display power for all the situations you mention where a powerful discrete GPU isn't needed.

    Growth of CPU performance is limited by the co-existing iGPU on the same carrier.
     
    Papusan likes this.
  26. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    I'm not disagreeing that the iGPU is using up die space that could be used for CPU hardware.

    I'm disagreeing with the relative importance of that, and your assertion that it's "greed being forced down our throats" (again, the FX series being a counterexample). I'd imagine that the limitations of silicon are of greater importance.
     
    hmscott likes this.
  27. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    The space limitations of silicon are what I am talking about, along with the power and thermal limitations of that space being shared/split between an iGPU and a CPU, limiting CPU growth in that same space.

    When I have tried to increase the power of the iGPU - tune it for performance - the CPU performance suffers in direct proportion.

    When the iGPU is active on an Optimus laptop, where the iGPU and CPU are both powered, the CPU runs 10°C hotter at full load than a CPU without the iGPU enabled and powered on.

    The iGPU is in direct competition for CPU resources, and vice versa.

    Moving the lower-power GPU from the motherboard to the CPU carrier locked other vendors out of providing and growing their low-power chipset GPU solutions.

    There could have been a whole class of low-power GPUs developed by vendors other than Intel to provide video and display connectivity, but Intel locked them all out.

    And, as a result of the iGPU, Intel foisted Optimus on us.

    I don't know which is worse, Optimus or no growth in CPU performance. It's a tough call :)
     
    Last edited: Oct 24, 2016
    Papusan likes this.
  28. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    No, it's not stealing anything of consequence.

    For those that use GPU's, the cpu is giving all it can.

    For most that use the iGPU, it gives much more than it 'costs' (thermals, power, $$$).

    The circular argument is presented by you, I imagine.

    I don't care about performance as an absolute 'number' or 'score'. My metric is much more important: productivity.

    With today's CPU's, I am more productive away from the office and away from AC power than I've ever been. Including any power robbed from the CPU by the iGPU.

    I won't be happy until Intel integrates not only the iGPU fully (so as to never need an external GPU...), but also the RAM (yeah; X-Point), Storage (yeah; X-Point), WiFi, Sound, and any other 'module' I rely on today.

    Will the CPU be less than it could be on its own? Sure. But the platform as a whole will be what Kaby Lake is to the ENIAC of so long ago.

    A CPU doesn't do anything on its own. And a mobile platform can't do it 'all' either. On a mobile platform, Intel's interpretation works best for anyone remotely in my shoes.

    Gamers, tweakers and 'score' keepers may be lost in this skirmish, but the end goal Intel has chosen is the right one not just for today, but for the foreseeable future too (well past when I'll close my eyes for the last time).

    Btw, Intel didn't 'lock out' anyone by having an iGPU onboard its processors. What did that is called progress. A platform that gives me a few % more 'raw' processor power but a lot less battery life (because of the GPU) is not that progress.

    In my use, when my current workloads hit anything over 70% (sustained) or so of available processor power on the platform I'm on; it is time for a processor upgrade (or, wish for one). Being able to use the CPU at 80, 90 and 100% doesn't give me ~14%, ~29% or ~43% more productivity.

    So to get 5% more 'processor' but lose an iGPU is worthless (yeah; to me).

    The layout of a modern processor is not just about the processor (and it hasn't been for a long, long time already). It is about the balance of what it will do in a complete system/platform.

    Not what it will do in a BM 'score'. ;)



     
  29. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,710
    Messages:
    29,842
    Likes Received:
    59,625
    Trophy Points:
    931
    But you get a faster iGPU :D Maybe a good reason for upgrade? :oops:
     
    hmscott likes this.
  30. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    So, you're ultimately looking for a SoC attached to a monitor/keyboard/mouse/battery? :p
     
  31. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Nah, just directly attached to my brain. lol...

     
  32. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Better hope it doesn't BSOD then ;)
     
  33. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Nah, by then, they'll be bulletproof. Besides, who wants to compute forever? :)
     
    alexhawker and Jarhead like this.
  34. gull_s_777

    gull_s_777 Notebook Consultant

    Reputations:
    34
    Messages:
    244
    Likes Received:
    5
    Trophy Points:
    31
    I have a 5-year-old 2760QM, and the 6700HQ still doesn't seem like enough of an improvement in processing power.
    On the other hand, its on-chip 530 graphics are now faster than my GT 555M.
     
    TomJGX and hmscott like this.
  35. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah. You need to compare to something higher up the food chain. In its time, the four-year-older 2760QM was positioned higher in the lineup than the i7-6700HQ is today.

    See:
    http://forum.notebookreview.com/threads/why-arent-cpus-getting-faster.797265/#post-10366383

    See:
    http://www.cpubenchmark.net/compare.php?cmp[]=2586&cmp[]=884&cmp[]=878


    The passmark scores for the more directly comparable i7 2670QM above show a ~35% improvement for multithreaded performance.

    While that may seem low, consider that if you were being paid the same rate per hour today and could finish an hour of previous work in 39 minutes, your effective hourly pay rate has actually increased by almost 54%.

    Even using the CPU's you choose to compare, a 21% performance increase translates to an almost 28% pay rate increase. Even with a nominally 'lower end' CPU.

    Processing power (aka 'Productivity') has everything to do with CPU's getting faster.

    Thinking that progress isn't being made is sticking your head in the sand.
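
    For what it's worth, the arithmetic behind those pay-rate figures works out if the quoted percentages are read as time saved rather than raw throughput (not quite the same thing); a quick Python sanity check:

        # If a job that used to take an hour now takes (1 - saved) of the time,
        # the effective work-per-hour rate rises by 1 / (1 - saved) - 1.
        def effective_rate_increase(time_saved):
            return 1.0 / (1.0 - time_saved) - 1.0

        for saved in (0.35, 0.21):
            minutes = 60 * (1 - saved)
            print(f"{saved:.0%} less time -> {minutes:.0f} min per old hour, "
                  f"{effective_rate_increase(saved):+.0%} effective rate")
        # 35% less time -> 39 min and about +54%; 21% less time -> ~47 min and about +27%.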


     
  36. OverTallman

    OverTallman Notebook Evangelist

    Reputations:
    111
    Messages:
    397
    Likes Received:
    273
    Trophy Points:
    76
    But then buying a new laptop to replace your quad-core SB to get that performance gain doesn't seem economical to the budget-minded.

    True, those who don't have a budget limit like you won't give a damn about cost/performance; you'll just buy the best available anyway. But not everyone can spend like that.
     
    hmscott likes this.
  37. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Everyone has a budget, including me.

    I showed why I do give a damn about cost/performance. And why even a lower-end 'current' notebook is better than yesteryear's offerings.

    Those that are not using these as the 'tools' they are, shouldn't spend their money (the games won't be 'better'). Those that don't have the budget to buy something current, shouldn't (as in; don't get into debt). Those that realize that time is money (if they are actually making $$ with their systems); better to trade what you can of the disposable stuff (yeah; $$$$) and get back more of the important stuff (yeah; time).

    A lack of money is not a budget. It is an arbitrary limit, at best.

    In general, buy what you need. With computers? Buy what you can afford (always).

     
  38. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Even with his performance-vs-time metrics, I doubt tiller could justify buying an i7-6950X despite a possibly-unlimited budget.
     
    hmscott and tilleroftheearth like this.
  39. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    And you'd be right.

    See:
    http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-6950X+@+3.00GHz


    See the single thread PM rating (2102)? If (my) software ever catches up to current processor capabilities and actually leverages multithreaded processor performance more linearly, I'd be all over that 'old' Broadwell E (yeah; over six months old already) CPU. ;)

    See:
    http://www.cpubenchmark.net/compare.php?cmp[]=2792&cmp[]=2598&cmp[]=2543


    But the right workload will make even that processor a sweet deal. Just not right for me.

    (The link above is telling of how much 'work' can be done with how much TDP is indicated. Too bad the 47W i7 5960HQ isn't more readily available in systems I'd be willing to purchase - above an almost 20W hotter desktop class processor, not bad...). ;)

     
  40. OverTallman

    OverTallman Notebook Evangelist

    Reputations:
    111
    Messages:
    397
    Likes Received:
    273
    Trophy Points:
    76
    Nah that's too noob, better shoot for a Xeon E5-2699 v4!
    MOAR CORES MOAR THREADS HURRRR!!!11!1!!1
    :p
     
  41. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I think you missed my post above?

    Anyway, the Xeon E5-2699 v4 is even worse in single-core performance than the i7-6950X.

    Even though it costs almost double.

    I don't buy on spec's, 'scores' or price points.

    I buy for what makes my workloads, flow.

    (Yeah, I realize I'm using PM for 'scores' with these CPU's. But I'm not taking them verbatim. I am just using PM to show the relative strength between CPU's - and on that PM aligns very close with my experiences/testing of different platforms).

     
  42. OverTallman

    OverTallman Notebook Evangelist

    Reputations:
    111
    Messages:
    397
    Likes Received:
    273
    Trophy Points:
    76
    Depends on the situation: if strong cores are needed (e.g. most single-machine work), the i7-6950X is better; if every thread can be kept busy (e.g. an office server), the Xeon E5 is a lot better.

    Not like I care anyway, they're way out of my budget.
     
  43. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    From MIT Technology Review, 6 years ago:

    Why CPUs Aren't Getting Any Faster

    "In releasing Sandy Bridge, Gwennap observes, Intel has little to tout in terms of improved CPU performance:

    Sure, they found a few places to nip and tuck, picking up a few percent in performance here and there, but it is hard to improve a highly out-of-order four-issue CPU that already has the world’s best branch prediction.

    Instead, Intel is touting the chips’ new integrated graphics capabilities and improved video handling, both of which are accomplished with parts of the chip dedicated to these tasks–not the CPU itself, which would be forced to handle them in software and in the process burn up a much larger percentage of the chip’s power and heat budget."

    "Gennap explains that here, paradoxically, the key to conquering the power wall isn’t more power–it’s less. Fewer watts per instruction means more instructions per second in a chip that is already running as hot as it possibly can:

    The changes Intel did make were more often about power than performance. The reason is that Intel’s processors (like most others) are against the power wall. In the old days, the goal was to squeeze more megahertz out of the pipeline."
    https://www.technologyreview.com/s/421186/why-cpus-arent-getting-any-faster/

    Why aren't CPUs getting any faster? Heat, power, and the real estate the iGPU steals from the CPU, and that was 6 years ago.

    Nothing has changed except we keep losing real estate, power, and thermal budget to the iGPU.

    It's gotta stop, or the AMD CPU power will catch up with the Intel CPU power :D

    Intel has to pop that iGPU out onto its own chip, and restore those resources back to the CPU for CPU performance growth.
     
    Last edited: Oct 26, 2016
    TomJGX and jclausius like this.
  44. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    The real question is when they'll pop that iGPU out (assuming they will). Skylake's CPU performance is considerably better than Sandy Bridge's, yet the iGPU is still present.
     
    hmscott likes this.
  45. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,710
    Messages:
    29,842
    Likes Received:
    59,625
    Trophy Points:
    931
    Yees. Push out 6 Core i7 with <No iGPU CRAP> intended for DTR laptops. 110- 115w TDP is ok. Of course higher TDP is preferred :D AND PUSH OUT THE LOW POWERED UNWANTED BGA CHIPS, ONLY FOR THE SMALLER NOTEBOOK SEGMENT INTENDED FOR CAFE/web browsing - email :D Good times coming :p I hope this isn't a wet dream :oops:
     
    hmscott likes this.
  46. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, given the extra real estate, power, and thermal headroom recovered from a vacated iGPU, it's all possible :)
     
    Papusan likes this.
  47. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Everyone, including me, is repeating the same thing. Now we're even quoting quotes from someone else from 2010. :)

    Is the iGPU taking resources that could possibly be used to make a technically faster (i.e. higher 'scores') CPU? Sure. That is a given.

    Does it matter? In real-world usage? Using real-world applications and a modern O/S? I'd say no. Not by a long shot. The performance gained from an iGPU-less design is greatly offset by the loss of the iGPU for many, if not most, users (and here I'm including me).

    Where power and heat are not a problem (i.e. the desktop space), there are choices available to go CPU-only on the CPU...

    On mobile platforms, power, heat and a sense of 'balance' are more important than sheer boasting of the 'best' processor 'scores'. As I've already pointed out, the score is easy to make look better when multithreaded (synthetic?) applications are the baseline. In the real world though, today, October 26, 2016, they're not the only (and often not the most important...) aspect of performance that matters.

    And if mobile and performance is still the goal? The mobile Xeon's fill that niche nicely, atm.

    See:
    http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E3-1575M+v5+@+3.00GHz


    With 4 Cores and 8 Threads, there are less than a handful of people on this forum that could tax these processors with their everyday workflows.

    And yeah, the processor above comes with built in graphics too (for the betterment of the processor too, as we all know...). ;)

    I've been accused of being an Intel 'fanboy', many times. I'm nobody's 'fanboy'.

    What Intel has proven to me over and over is that their vision of current and future computing is matching my own, even with my much more limited experiences. Their SSD's were not (and still are not) touted for their outstanding 'scores' or other marketing type BS. Their processors give everyone what they actually want/need, if chosen appropriately (price/cost not considered - what can I say? Keep saving...). What Intel is doing is giving us tech, as they're able, for real world solutions. Not for best 'scores'. Not for bragging rights.

    Just products that work as they should (and spec'd for) in the environments that are most applicable to their customer base.

    That Xeon chip linked above is an example of a real-world engineering feat. With 8MB of cache, 128MB of eDRAM, support for up to 64GB of DDR4 RAM and up to 16 lanes of PCIe 3.0 connectivity, all in a 45W envelope, nothing else screams mobile 'real world performance' like what Intel is already offering (since early 2016).

    I think the minds at Intel have a better vision of what should be done, rather than what can be done. In the name of (mobile) performance. ;)
     
    hmscott, ellalan and Jarhead like this.
  48. Vyga

    Vyga Notebook Enthusiast

    Reputations:
    5
    Messages:
    31
    Likes Received:
    21
    Trophy Points:
    16
    I think CPUs are not getting much faster because they are really close to their performance limit. Intel/AMD would go bankrupt if they instantly released a chip that couldn't be improved anymore and would be useful for 20+ years, so they improve it by very little, adding 1 or 2 new features and approaching that top performance limit as slowly as they can.

    The limit is simple physics. You could clock a CPU as high as 10 GHz (jk, jk), but it would need a hell of a cooling system. How much heat can a surface as big as 5x5 cm transfer to even an unlimited-size heatsink without damaging the contact area between chip and heatsink? A "very" limited amount. Could you increase the chip's surface area to transfer more heat? No. Increasing chip size would require higher voltage to keep the chip running, and higher voltage would result in even more heat. So that, my friend, is the maximum limit of CPU performance.

    The only thing they can do is tweak CPU structures, add ever more cores and work on smaller-nm technologies to help reduce voltage and, with it, heat output. Then again, a smaller node reduces chip size, and a smaller chip can transfer less heat to the heatsink due to the smaller contact area. Even if the chip were the size of a needle tip and used 0.00001 mV, it wouldn't be able to dissipate all the heat it produces running at, say, 5 GHz through such a small area.

    So, in short, CPUs aren't improving drastically because there is a limit to the heat that can be transferred through a given surface area, and they are close to that limit.

    GPUs, on the other hand, aren't running anywhere near 3 GHz (~1800 MHz max atm?), so they have room for improvement for another 10 years or so before they hit the wall of heat transfer limits.
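
    To put a very rough number on that heat-per-area argument, here is a tiny Python sketch; the wattages and die areas are made-up ballpark figures, so only the ratio matters:

        # The constraint is heat flux: watts of dissipation divided by die contact area.
        def heat_flux_w_per_cm2(watts, die_area_cm2):
            return watts / die_area_cm2

        small_cpu_die = heat_flux_w_per_cm2(90, 1.5)    # ~90 W through a ~1.5 cm^2 die
        large_gpu_die = heat_flux_w_per_cm2(180, 4.5)   # ~180 W through a much larger die

        print(f"CPU: ~{small_cpu_die:.0f} W/cm^2, GPU: ~{large_gpu_die:.0f} W/cm^2")
        # Twice the watts spread over three times the area: the bigger die runs at a
        # lower heat flux, which is part of why GPUs still had clock headroom.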
     
    Last edited: Oct 29, 2016
  49. yotano21

    yotano21 Notebook Evangelist

    Reputations:
    67
    Messages:
    570
    Likes Received:
    332
    Trophy Points:
    76
    I want a faster cpu so that I can start my porn faster.

    LETS ALL MAKE PROCESSORS GREAT AGAIN!!








    *I'm Donald Trump and I approve this message
     
    TomJGX likes this.