The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.
 Next page →

    CPU isn't improving fast enough

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by jsteng, Jan 26, 2010.

  1. jsteng

    jsteng Notebook Consultant

    Reputations:
    3
    Messages:
    179
    Likes Received:
    2
    Trophy Points:
    31
    CPUs were postulated to double in speed every 18 months.

    This held true in the 80s, 90s, and early 2000s.

    In 2004, I had an Athlon at 2.5GHz (Pentiums were around 2.4~2.8GHz). So how come, in the last 5 years, CPUs didn't quadruple in speed, and instead engineers opted to make Hyper-Threaded quad cores?

    It is true that engineers have added lots of improvements since then, such as bigger caches and faster instruction execution. But 5+ years have passed; we should be running dual cores at 5GHz at the very least by now.

    I personally prefer fewer cores at higher speeds to more cores at lower speeds. Reason: performance does not scale linearly when you add multiple cores, but it does scale linearly on a single core when you increase the clock speed. Second reason: not all software is written "perfectly" for multiple cores/threads.
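    To make that scaling point concrete, here is a minimal back-of-the-envelope sketch in Python comparing the ideal gain from doubling the clock against the Amdahl's-law gain from adding cores; the 70% parallel fraction is an assumed illustrative number, not a measurement:

        # Minimal Amdahl's-law sketch: more cores vs. a higher clock.
        # The 70% parallel fraction is an assumed figure, not a benchmark.

        def amdahl_speedup(parallel_fraction, cores):
            # Ideal speedup when only part of the work can use extra cores.
            serial = 1.0 - parallel_fraction
            return 1.0 / (serial + parallel_fraction / cores)

        p = 0.70  # assumed parallelizable fraction of the program

        # Doubling the clock speeds up the whole program, serial part included.
        print("2x clock:", 2.0, "x")

        # Adding cores only helps the parallel 70%.
        print("2 cores :", round(amdahl_speedup(p, 2), 2), "x")  # ~1.54x
        print("4 cores :", round(amdahl_speedup(p, 4), 2), "x")  # ~2.11x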
     
  2. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
    The reason is mainly efficiency. Theoretically we could make extremely high-clocked single-core CPUs which might match some dual-core CPUs, but the power draw and heat output of those CPUs would just be inefficient and impractical. With the whole "be eco-friendly" trend that most of modern society is adopting, efficiency is an image most companies want to have.

    And yes, not all software can fully take advantage of multiple cores, or even multiple threads. However, it's the way the industry is going, so software devs are going to have to learn sooner or later.
     
  3. jsteng

    jsteng Notebook Consultant

    Reputations:
    3
    Messages:
    179
    Likes Received:
    2
    Trophy Points:
    31
    I see.. so the Main Reason is Efficiency..

    "Throw in more core, as long as power efficiency is not compromised and run some Benchmark (that utilizes all the cores) and publish it on the web to show off this i88 CPU with 88 cores is gonna cook at 888,888CPU Marks ..."
     
  4. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    Actually, one core of an i7 at 2.0GHz leaves a Pentium 4 at 3.2GHz in the dust (yes, the 3.2 stock exists; I have one in my room).
     
  5. Lithus

    Lithus NBR Janitor

    Reputations:
    5,504
    Messages:
    9,788
    Likes Received:
    0
    Trophy Points:
    205
    Moore's law doesn't say that a processor will double in speed. It says that the transistor count will double every two years.
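    For scale, here is a minimal sketch of what "doubling every two years" means for the transistor budget; the ~100M starting count for a 2004 desktop CPU is an assumed round figure, purely for illustration:

        # Transistor count doubling every ~2 years (Moore's law), illustrative only.
        # The 100M starting count for a 2004 CPU is an assumed round figure.

        count = 100e6  # assumed transistors in 2004
        for year in range(2004, 2011, 2):
            print(year, f"{count / 1e6:.0f}M transistors")
            count *= 2

        # 100M -> 200M -> 400M -> 800M: the extra budget goes into cores and
        # cache rather than (necessarily) into clock speed.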
     
  6. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    Moore's law actually talks about brute power, meaning that you count cores, clock speed, and operations per clock to obtain the total number of operations per second a processor can do.
    And if we count that way, it has actually been going a little faster than Moore's law in the last few years, in fact...
     
  7. dtwn

    dtwn C'thulhu fhtagn

    Reputations:
    2,431
    Messages:
    7,996
    Likes Received:
    4
    Trophy Points:
    206
    I'm surprised no one mentioned that core speeds don't exactly match up either. An Athlon at 2.5GHz is going to get its clock cleaned by a C2D at 2.5GHz, even in single-threaded applications. Clock for clock, CPUs are getting more efficient and faster as well.

    Edit: Looks like granyte did.
     
  8. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    Well, if I check out, for example, the fully CPU-based raytracing application here:

    Arauna Real-Time Ray Tracing, then I notice how much CPUs improve.

    Clock speed is one measurement, but a very unimportant one by now.

    And yes, in the case of 100% CPU usage, a quad core is simply 4x as fast as a single core. So the i9 will be 6 times as fast as a single-core i-processor (which, so far, doesn't exist, as there's no i1).

    With the i-series, that's not exactly true anymore due to the load-balanced dynamic clocking. But anyway, if you fix the GHz, then it's true.

    This is why I often measure max-perf = clockrate * corecount.

    Hyperthreading adds a bit to it (but mostly it adds snappiness, as it allows the OS to react while the CPU is at full load), etc., of course.


    All in all: no, CPUs still improve a lot, and that's great.
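    As a minimal sketch of that max-perf = clockrate * corecount rule of thumb (the CPU figures below are rough illustrative values, not official specs):

        # "Potential performance" rule of thumb from the post above:
        # max_perf ~= clock rate (GHz) * core count. Values are illustrative.

        def max_perf(ghz, cores):
            # Peak "GHz-equivalents"; deliberately ignores IPC and bottlenecks.
            return ghz * cores

        print("single core @ 3.2 GHz:", max_perf(3.2, 1), "GHz potential")
        print("quad core   @ 2.0 GHz:", max_perf(2.0, 4), "GHz potential")

        # The quad wins on potential throughput, but only if the software can
        # actually keep all four cores busy.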
     
  9. newsposter

    newsposter Notebook Virtuoso

    Reputations:
    801
    Messages:
    3,881
    Likes Received:
    0
    Trophy Points:
    105
    CPUs are already faster than hard drives and memory.

    How many wait states do you really want your machine to sit in??
     
  10. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    Yeah, which is why the SSD thread is at the top and so well loved. The CPU isn't the bottleneck anymore in most cases.
     
  11. HPDV6700

    HPDV6700 Notebook Consultant

    Reputations:
    1
    Messages:
    262
    Likes Received:
    3
    Trophy Points:
    31
    My 2.2GHz P4 in the Dell Dimension takes 55 watts and runs hot just to watch a YouTube video or some Google Maps Street View... That is not very efficient at all. Yet the T1350 in the Latitude is slightly slower (1.87GHz), uses 27 watts, and blows that P4 out of the water in every way.
     
  12. streetxdreamer

    streetxdreamer Notebook Enthusiast

    Reputations:
    2
    Messages:
    33
    Likes Received:
    0
    Trophy Points:
    15
    Engineers went other routes rather than just keep increasing clock speeds because of bottlenecks in other areas.

    Sure, a Ferrari is fast, but it can't go fast if it's stuck in traffic.
     
  13. Thaenatos

    Thaenatos Zero Cool

    Reputations:
    1,581
    Messages:
    5,346
    Likes Received:
    126
    Trophy Points:
    231
    People seriously need to stop relating CPU power/performance to clock speed. It's 2010, for crying out loud; even in 2005 I had a mobile 1.6GHz AMD Turion that would keep up with, and sometimes beat, my overclocked desktop Hyper-Threading Pentium 4. An OK analogy is to look at car engines today vs. 10 years ago: there are 4-cylinders making the power that V6s and some V8s used to. Processors have become a lot more efficient, and with the combination of newer architectures and instruction sets, each pulse of a processor can now do more than ever.
     
  14. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
    I blame company marketing. It's like video RAM on graphics cards/GPUs. The companies are trying to find a toned-down, simple, "dumb" metric for consumers to easily rank products in terms of performance. For CPUs, companies decided clock speed was that value; for GPUs, VRAM was that value. IMO it's just the industry trying to dumb things down for the average Joe and failing at it... epically.
     
  15. rgathright

    rgathright Notebook Geek

    Reputations:
    32
    Messages:
    93
    Likes Received:
    0
    Trophy Points:
    15
    Look at the Intel Core 2 Duo E8600... at a clock speed of 3.33GHz the processor can run with no CPU fan while processing SETI@home work units on my workbench. The last of the line for Socket 775; Intel seems to have woken up one morning and said, "No more, let's move on to quad core now." Yet we all have to agree that Intel left a lot of power on the table with this series!

    My point is that I believe, with AMD leaving the picture in disgrace after the whole quad-core debacle, Intel is not releasing its fastest anymore. It has focused instead on streamlining its production costs.

    P.S. Does anyone remember this news release in 2009 and Intel's quick retraction of the story? Well, they finally got around to manufacturing the product overseas, and releases are expected in 2010. A full year of delay for a simple chip; what else is being conveniently delayed by Intel? http://techreport.com/discussions.x/16294
     
  16. Thaenatos

    Thaenatos Zero Cool

    Reputations:
    1,581
    Messages:
    5,346
    Likes Received:
    126
    Trophy Points:
    231
    We run E8650s here at work for CAD, and they are cool, great-running chips. It would have been nice to have 2 extra cores, but so far we are OK on CPU power even after 2 or so years.
     
  17. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    I don't like the "core war" either, but that is the only way to go. Each process generation offers not quite a 50% reduction in power consumption, more like 35-40%. If you keep doubling clock speed, you'll eventually reach a point where nobody will buy the CPU. To compound that, doubling clock speed requires more transistors, which means even more power.

    The truth is that clock speed is the only way to increase performance consistently in all apps, but as the phrase "there's no free lunch" says, something has to be compromised in order to increase clocks.

    Clock speed is like the speed limit on a highway, and the cores are the number of lanes. Everyone would prefer that the speed limit be increased, but that's not all that good when looking at the overall picture.
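    A minimal sketch of that power trade-off, using the standard dynamic-power relation P ~ C * V^2 * f; the 20% voltage bump assumed for a doubled clock is an illustrative guess, not a datasheet value:

        # Dynamic CPU power scales roughly as P ~ C * V^2 * f.
        # All numbers below are illustrative assumptions, not measurements.

        def rel_power(cap, volt, freq):
            return cap * volt ** 2 * freq

        base = rel_power(1.0, 1.0, 1.0)

        # Doubling frequency usually also needs a voltage bump (assume ~+20%).
        double_clock = rel_power(1.0, 1.2, 2.0)

        # Doubling cores roughly doubles switching capacitance at the same V and f.
        double_cores = rel_power(2.0, 1.0, 1.0)

        print("2x clock:", round(double_clock / base, 2), "x power")  # ~2.88x
        print("2x cores:", round(double_cores / base, 2), "x power")  # 2.0x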
     
  18. anothergeek

    anothergeek Equivocally Nerdy

    Reputations:
    668
    Messages:
    1,874
    Likes Received:
    0
    Trophy Points:
    55
    Are you kidding me? I had a QX9300 and more power than I knew what to do with. The i7 has made it into laptops, offering 4 cores with Hyper-Threading and the revolutionary Turbo function that overclocks and shuts down cores on the fly. The i7 720/820/920 have effectively eliminated the age-old duo vs. quad conundrum. And Arrandale (i5) provides a lower-TDP option that still includes Hyper-Threading, with more than enough power for mainstream users.

    Conroe slaughtered on a completely different level, but Nehalem was still a vast improvement in many ways. Then came Lynnfield and Calpella, and the six-core i7 980X is right around the corner!
     
  19. HPDV6700

    HPDV6700 Notebook Consultant

    Reputations:
    1
    Messages:
    262
    Likes Received:
    3
    Trophy Points:
    31
    LOL...
    My little Core Solo is like a 4-cylinder engine: it maxes out often, but gets the job done. :D

    But I would not mind owning a 6-cylinder one day... :(
     
  20. stefanp67

    stefanp67 Notebook Consultant

    Reputations:
    238
    Messages:
    264
    Likes Received:
    0
    Trophy Points:
    30
    The last I heard was that AMD is about to release 8- and 6-core CPUs in Q2 of this year, so I wouldn't say AMD left in disgrace; they just did a tactical regrouping.
     
  21. thinkpad knows best

    thinkpad knows best Notebook Deity

    Reputations:
    108
    Messages:
    1,140
    Likes Received:
    0
    Trophy Points:
    55
    Good thing you weren't the frontman of the Pentium 4 research team; oh wait, they already had a guy like you running it. Are you saying you still think high-clocked single cores are better than lower-clocked multi-cores that are more efficient clock for clock? Clock speed doesn't scale linearly with performance in many cases, actually: a Pentium M 770 can easily overtake a 3.5GHz Prescott Pentium 4, and the Pentium M runs at a (according to some people) measly 2.13GHz. And according to this theory, my T7700 should only have the same power as my Northwood 2.4GHz Pentium 4 HT. It has been proven time and time again that clock speed isn't everything.
     
  22. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    99% of people don't even need a "fast" CPU...
     
  23. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    And that holds back the 1% of us who do. :)
     
  24. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Performance does not scale 100% with a faster single core either. The problem is that main memory can't feed the core fast enough. This, among other delays, is why a multithreaded app on a dual core is more efficient than the same app running on a single core at 2x the clock.
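    A minimal back-of-the-envelope sketch of that "memory can't feed the core" point (the 100 ns DRAM latency and 3 GHz clock are assumed round figures):

        # Roughly how many cycles a core sits idle on a main-memory miss.
        # 100 ns DRAM latency and a 3 GHz clock are assumed round figures.

        dram_latency_ns = 100
        clock_ghz = 3.0  # 3 GHz = 3 cycles per nanosecond

        wasted_cycles = dram_latency_ns * clock_ghz
        print(f"~{wasted_cycles:.0f} cycles stalled per cache miss")

        # Raising the clock makes each miss cost *more* cycles; a second core
        # can keep doing useful work while the first one waits.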
     
  25. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
    All you need is an ethical/moral rights cause on your side like visible minorities do and you can get what you want regardless of numbers! :D
     
  26. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    lol... (10Char).
     
  27. Simpler=Better

    Simpler=Better Notebook Consultant

    Reputations:
    74
    Messages:
    165
    Likes Received:
    0
    Trophy Points:
    30
    My 1.6GHz Atom-powered netbook is blazing fast compared to my 2.3GHz P4 at work.
     
  28. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    The Pentium M is single core, btw. :) Cores aren't everything either.

    Neither does it scale 100% on 2x the cores. A clock speed increase, however, will benefit ALL applications, while cores have to be optimized for. Going dual core was easy, because threading for two cores wasn't so difficult, but as we go to quad and beyond it makes less and less sense.

    The only reason multithreading on 2x cores at 1x clock seems better than 1x core at 2x clock is that, with the latter, the single program is already running 2x faster.

    The lesson here is that focusing on one idea never works. They've got to do all of them: clock speed increases, cache and memory improvements, architectural enhancements, more cores, and software optimization. Multi-core is hyped just as much as clock speed was back in its day.
     
  29. thinkpad knows best

    thinkpad knows best Notebook Deity

    Reputations:
    108
    Messages:
    1,140
    Likes Received:
    0
    Trophy Points:
    55
    This is why Turbo Boost exists. And to the person who said the Pentium M was single core: yes, I obviously know it is single core; I used them for years. All I'm saying is that the core clock speed doesn't need to physically double to double theoretical performance; it can stay the same or even be lower and still double the performance of the predecessor. You also have to consider which programs can actually use 2 or more threads. The CPU, in everyday applications at least, is not a bottleneck, and I don't think it ever will be, since RAM will never be able to feed instructions and be accessed faster than the CPU. In the long term, that is what processor cache is for: high-priority, commonly accessed, on-demand instructions. You can run a CPU with no cache; there were many in the 90s, and even now you can.

    I agree with sgogeta4 completely: most of us could still be content using "ancient" Core 2 Duos, Core Duos, or even the Pentium M if it were 64-bit.
     
  30. DetlevCM

    DetlevCM Notebook Nobel Laureate

    Reputations:
    4,843
    Messages:
    8,389
    Likes Received:
    1
    Trophy Points:
    205
    So who cares?

    Honestly...

    The average home user could live with a Pentium M or an ultra low voltage CPU.
    They don't need all the processing power.

    The people who do, for Photoshop, Music production etc. can always get a better CPU.

    And performance-wise, applications that are demanding get rewritten for multi-core support, so that's no problem.

    I can't see a problem at all. So development has slowed down; so what?

    As long as it doesn't stop or go backwards (Atom, cough, cough) we don't need to worry.
     
  31. rgathright

    rgathright Notebook Geek

    Reputations:
    32
    Messages:
    93
    Likes Received:
    0
    Trophy Points:
    15
    The 1% tend to create the demand for what the 99% will want tomorrow.

    On the subject of the Intel Atom processor, couldn't we see a sizeable performance increase with a higher FSB on these netbooks? :rolleyes:

    Do they expect us to be thrilled when they finally release a 1333MHz-FSB processor in the netbook arena? :mad:
     
  32. rapion125

    rapion125 Notebook Evangelist

    Reputations:
    15
    Messages:
    353
    Likes Received:
    0
    Trophy Points:
    30
    Actually, the Atom is quite an engineering feat. It has the same performance as a 3.2GHz P4 and uses only 7W of power.

    Also, multiple cores do scale linearly if the program is designed well. Look at Cinebench: a quad core gets four times as many points in multi-core rendering mode as in single-core rendering mode. Look at multi-core-optimized video encoding programs: if a quad core and a dual core have the same clock-for-clock efficiency and clock speed, the quad core accomplishes the task in half the time of the dual core.
     
  33. DetlevCM

    DetlevCM Notebook Nobel Laureate

    Reputations:
    4,843
    Messages:
    8,389
    Likes Received:
    1
    Trophy Points:
    205
    On the single-core models, a 1.6GHz Atom is less than half as powerful as a 1.6GHz Pentium M; that's no feat.

    Who cares about energy consumption at this stage?

    If you want less power consumption... why not use a mobile phone processor?

    Low energy consumption in general is a good development, but only if you are not getting sub-par performance, and that's what the Atom gives you.
     
  34. Thaenatos

    Thaenatos Zero Cool

    Reputations:
    1,581
    Messages:
    5,346
    Likes Received:
    126
    Trophy Points:
    231
    That and more cache would make it a totally different chip. Too bad it's hard for me to actually work on one.
     
  35. Bog

    Bog Losing it...

    Reputations:
    4,018
    Messages:
    6,046
    Likes Received:
    7
    Trophy Points:
    206
    Indeed, multiple cores do scale linearly in terms of performance if and only if there is a symmetric thread load. However, that is not the case for the vast majority of applications out there.

    davepermen seems to enjoy ignoring this important aspect of software when it comes to measuring performance. We've had this debate before.
     
  36. jsteng

    jsteng Notebook Consultant

    Reputations:
    3
    Messages:
    179
    Likes Received:
    2
    Trophy Points:
    31
    Most of you missed the point of my original post.

    With all the improvements such as bigger caches, better microcoding (fewer cycles to execute an instruction), lower power and the like, surely a current-tech 2.0GHz CPU can beat a 5-year-old-tech 2.5GHz CPU on any playing field.

    You should compare fewer-core CPUs vs. more-core CPUs based on the SAME technology:
    i.e., compare an i7-class CPU @ 2.0GHz base frequency (that can boost up to 3GHz) vs. a "similar technology" single-core CPU running at 8GHz?

    FAT chance.

    Like someone said, only 1% need the high performance. But in the future, the 99% will be using what the 1% have now. You can thank the lazy programmers and their FATWARE: they make programs slower, creating the need for faster computers to run them.

    Whether to believe multi-core will scale up linearly or not is the big debate.
    We need an Intel engineer to answer that, not some rigged-up benchmark.

    As for me, I've seen the results in everyday applications since 1995. I only went multi-core in 2006, as that was the only choice available.
     
  37. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    In your original post, you are mixing up GHz with performance. I think anything current will 'beat' a 2004 CPU; it doesn't matter what GHz speed it's running at. The end result is all that matters.

    I prefer multi-cores for the endless 'headroom' they seemingly give the computer (or 'snappiness', put another way). I agree that higher clock rates always help, but when my systems idle with over 900 threads in task manager, more cores = more responsiveness in my experience. And more responsiveness makes me more productive by making the mechanical side of using a computer seemingly disappear (when things happen in 'real time', as you need them).

    As for the lazy programmers who give us FATWARE, I say bring it on. I'd much rather work in today's apps/programs than what was written at the beginning of the last decade; no matter how much faster those were back then (because of smaller code), I'm still more productive today.

    Gains are not to be measured solely on a linear basis like GHz, the whole computing ecosphere needs to be taken into account too.
     
  38. thinkpad knows best

    thinkpad knows best Notebook Deity

    Reputations:
    108
    Messages:
    1,140
    Likes Received:
    0
    Trophy Points:
    55
    Now that I think of it, why don't they just shrink the 90nm Pentium Ms down to something like 45 or even 32nm? They'd probably run as cool as Atoms, cost a little more due to the 2MB cache on the latest ones, and be great for netbooks. All I know is that a 2GHz Pentium M can smoothly play back 720p content; not too shabby.
     
  39. Bog

    Bog Losing it...

    Reputations:
    4,018
    Messages:
    6,046
    Likes Received:
    7
    Trophy Points:
    206
    The Atom uses a different architecture to achieve a physically smaller chip. The Pentium M has no such allowances, and the TDP of most P-Ms is 25-27W; even if you reduced it drastically, you probably couldn't match the low cooling requirements of the Atom.

    A cheap Atom-like CPU with Pentium M-level performance wouldn't be too much to ask from Intel... but that is technology. It's never enough.
     
  40. rgathright

    rgathright Notebook Geek

    Reputations:
    32
    Messages:
    93
    Likes Received:
    0
    Trophy Points:
    15
    I liked the Pentium Ms as well.

    I think that scaling them down would result in the performance we are looking for, but with 2MB of cache they would still yield fewer chips per wafer of silicon.

    Profitability rules at Intel.

     
  41. thinkpad knows best

    thinkpad knows best Notebook Deity

    Reputations:
    108
    Messages:
    1,140
    Likes Received:
    0
    Trophy Points:
    55
    Well, what is better then: higher TDP but better performance, or great battery life but only when you're doing virtually nothing on your netbook? There are compromises for everything, and you have to sacrifice something for performance; they'd still get pretty decent battery life. Besides, so what if the price of netbooks went up a little? I think computers should be more of a luxury than a commodity, like in the 90s.
     
  42. anothergeek

    anothergeek Equivocally Nerdy

    Reputations:
    668
    Messages:
    1,874
    Likes Received:
    0
    Trophy Points:
    55
    Remember, Conroe was basically two Pentium Ms on the same die. I was preaching the Pentium M while they were still cramming P4s into the majority of notebooks. Thankfully we're over the clock-speed wars, right, OP?
     
  43. Bog

    Bog Losing it...

    Reputations:
    4,018
    Messages:
    6,046
    Likes Received:
    7
    Trophy Points:
    206
    It depends on what your design goals are. Better performance is always welcome, but unfortunately compromises must be made in order to meet other, perhaps more important design criteria. In the case of the Atom, Intel's intention was clearly to minimize heat output and power consumption.

    Do you mean Yonah?
     
  44. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    You got that wrong. Both Core 2 Quads and Phenom IIs get about 3.5x on 4 cores. The Core i7 only gets 4x because Hyper-Threading increases it further.

    http://www.techwarelabs.com/intel-core-2-quad-q9400-processor/5/
    http://www.techwarelabs.com/wp-content/gallery/amd-phenom-955/cine_955.png

    The Atom uses 2.5W in the netbook variant.

    I think not. Two Pentium Ms was Yonah, and even that CPU had enhancements and wasn't a straightforward 2x Pentium M.
     
  45. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    The Atom project was originally meant to scale down to fit on smartphones. Think about how far they'd have to scale down the Pentium M to fit it on smartphones. It would still be a larger die, too.
     
  46. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    That's like saying 32-bit can't address 4GB of RAM, only 3.5GB or so.

    Any quad core can reach 4x the performance of an identical single core IF NO OTHER BOTTLENECK GETS HIT FIRST, just like any 32-bit OS can address exactly 4GB of RAM.

    And yes, I've written apps that are 100% computational workload with nearly no memory accesses (everything in the CPU's local caches). There, scaling is exactly linear for quad cores, and surely for 6 and 8 cores too. Beyond that, inter-core communication may start to become more and more of a bottleneck if you don't split the workload well.

    To see the end result of such near-perfectly scaling software, check the raytracing pic on my webpage.


    But the first quad cores from Intel especially had huge memory bandwidth limitations when using all 4 cores, thus often resulting in <4x performance gains.
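    A minimal sketch of that kind of purely computational, cache-resident workload (assuming a machine with at least 4 cores; the timings are machine-dependent, but it should scale close to linearly until some other bottleneck appears):

        # Compute-bound work with almost no memory traffic scales close to
        # linearly across cores, as described above. Timings will vary.
        import time
        from multiprocessing import Pool

        def burn(n):
            # Pure arithmetic loop that stays in registers / L1 cache.
            acc = 0
            for i in range(n):
                acc += i * i
            return acc

        if __name__ == "__main__":
            work = [5_000_000] * 8  # eight equal chunks of work

            t0 = time.time()
            [burn(n) for n in work]      # all chunks on one core
            t1 = time.time()
            with Pool(4) as pool:        # same chunks across four cores
                pool.map(burn, work)
            t2 = time.time()

            print(f"1 core : {t1 - t0:.2f}s")
            print(f"4 cores: {t2 - t1:.2f}s  (~4x faster if nothing else is the bottleneck)")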
     
  47. Greg

    Greg Notebook Nobel Laureate

    Reputations:
    7,857
    Messages:
    16,212
    Likes Received:
    58
    Trophy Points:
    466
    Hey, if you don't like it go to college and focus on computer engineering and computer architecture. :p

    Seriously...there is only so much one can do with today's technology
     
  48. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    Yeah, help graphene processors come out. They should be able to handle near-terahertz clock rates :) It would be awesome :)

    But we'll surely get them to feel slow again, too :)
     
  49. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    Yes, of course. But that is the reality. The vast, I mean VAST, majority of programs don't scale linearly. Be it Amdahl's law, memory bandwidth, interprocessor bandwidth, or suboptimal code, they just don't.

    The problem is when people present a quad core as being "4x" while treating a dual core with 2x the clock speed, or even 1.9x the clock speed, as somehow less. There's no Amdahl problem with frequency, no interprocessor bandwidth problem, no suboptimal multi-threaded code problem. And zero optimization effort will still give you performance from frequency increases.

    The problem was never that clock speed is one way to increase performance; the real problem was that everybody kept treating that one vector of performance as everything.

    Same with the "core war" now. Cores aren't the silver bullet for performance, just as frequency wasn't. Everything else is just as important as cores and frequency: small frequency increases, faster memory access, more cores, more cache, faster caches, better architecture; they all combine.

    If I can put it in a negative way, using one number to measure performance is essentially being LAZY.
     
  50. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205

    Sure. I just wanted to make sure people understand that one more core adds its clock rate, in gigahertz, to the POTENTIAL performance; it's not a more complicated equation than that.

    BUT you're 100% right that you always hit some bottleneck somewhere, and more often than not, today, it's not the CPU's computational workload that is the bottleneck, neither in single-threaded nor in multithreaded apps. Memory bandwidth, cache sizes, disk accesses, etc. are very often the main bottlenecks today. Increasing clock speed or adding cores doesn't help at all then.
     
 Next page →