The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    i5 2500k Overclock vs. i7 4710hq

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Coprhead66, Aug 12, 2015.

  1. Coprhead66

    Coprhead66 Notebook Enthusiast

    Reputations:
    0
    Messages:
    10
    Likes Received:
    1
    Trophy Points:
    6
    Hey everyone,

    I'm currently selling my older desktop and buying a refurbished / open-box Alienware 15 or 17. I tend to play games like Archeage that are processor intensive, and I'd like to know how my overclocked desktop 4.1 GHz i5 2500k would compare to the i7 4710hq or the i7 5700 for gaming purposes...

    Thanks!
     
  2. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Your i5 at 4.1GHz is stronger than a 4710HQ or 5700HQ can ever become.

    Assume the i7s (at full turbo) would act like your i5 chip would at ~3.6GHz.
     
  3. Coprhead66

    Coprhead66 Notebook Enthusiast

    Reputations:
    0
    Messages:
    10
    Likes Received:
    1
    Trophy Points:
    6
    Thank you.

    And crap!
     
  4. Tsunade_Hime

    Tsunade_Hime such bacon. wow

    Reputations:
    5,413
    Messages:
    10,711
    Likes Received:
    1,204
    Trophy Points:
    581
    You will always be handcuffed with a "laptop," unfortunately. Even with laptops using desktop CPUs, you will be limited by chassis space/cooling; MXM cards line up very well against desktop cards, but not against the top-tier ones, as laptop power circuitry will be the limiting factor.
     
  5. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Considering a 2500K with adequate cooling can usually get darn close to 5 GHz, I doubt any current laptop CPU can match its gaming performance.
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Buying an older platform will be a step down in performance vs. your OC'd desktop from 2011.

    But raw cpu performance isn't all there is to a newer platform. There may be benefits that you don't anticipate yet.

    See:
    http://www.cpubenchmark.net/compare.php?cmp[]=804&cmp[]=2243&cmp[]=2565

    The link above shows a couple of things that are interesting.
    • With a base clock 800MHz lower, a turbo clock 200MHz lower, and at half the TDP, the Haswell-based platform handily outperforms the SNB platform of yore.
    • At the same TDP, the latest i7 Skylake platform is ~80% faster than the i5 SNB you have and ~25% faster in single core performance too.
    • Both of these latest platforms are outperforming the i5 even though it has four physical cores and the newer ones have only two.
    • And these differences do not even begin to describe the capabilities of a system based on the very newest platforms.
    Yes, raw (cpu) performance may be lower. But I am betting that on a fully optimized and balanced setup, a current platform (especially Skylake, if you can wait that long) will give you similar performance or better vs. what you were used to on your DT setup. All with the best battery life (for the lighter tasks you'll use it for), lower heat and noise too.

    The i7-4710HQ may well be more than enough processor for your games (given a good enough gpu). It will be a downgrade in other workloads vs. your OC'd DT, but is still a very good move into a mobile platform.

    See:
    http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1123

    See:
    http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1123&page=13


    The above two links (especially the second one) show how little difference OC'ing makes in games. Just make sure the games tested are similar to the ones you play.

    Good luck.
     
  7. Dufus

    Dufus .

    Reputations:
    1,194
    Messages:
    1,336
    Likes Received:
    548
    Trophy Points:
    131
    Errr, no. The newer ones in your link have 4C/8T vs. the 4C/4T of the 2500K.

    Here's a comparison with the i5-6600K, which, like the 2500K, doesn't have HTT:
    http://www.cpubenchmark.net/compare.php?cmp[]=804&cmp[]=2243&cmp[]=2570

    So, using PassMark, the 4710HQ is better than the latest 4C/4T Skylake DT CPU.

    Normalizing the 2500K and 6600K (as both are overclockable), we get a whopping 14% increase in performance if they both overclock by the same amount. If the 2500K overclocks better, then even less.
     
  8. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Dufus, thanks for the correction (sorry, was too tired...). I don't know how I was confused there...

    The 4710HQ may be slightly better (when the 'score' is so close, let's just say equivalent) overall, but it is still ~14% faster in single-thread performance.

    And even OC'd that performance difference will only grow. Overall, the better platform is still the latest one. At least from these preliminary numbers we're comparing.
     
  9. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    It's possible that RAM is the reason the 4710HQ passes the Skylake chip. I would blame Hyper-Threading; however, that should give the 4710HQ a larger increase. Clock for clock, Skylake should be something closer to a minimum of 30% over Sandy Bridge:
    SB = 100%
    IB = 110%
    HW = 110 * 1.07 = 117.7%
    BW = 117.7 * 1.05 = 123.585%
    SL = 123.585 * 1.05 = 129.76% (rounded to two decimals)
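    Put in numbers, the compounding above is just multiplication of per-generation gains. A quick sketch (the multipliers are the rough estimates from this post, not measured values):

```python
# Rough per-generation IPC multipliers (estimates from the post above,
# not benchmark data): IB +10%, HW +7%, BW +5%, SL +5% over the prior gen.
gains = [("Ivy Bridge", 1.10), ("Haswell", 1.07),
         ("Broadwell", 1.05), ("Skylake", 1.05)]

ipc = 100.0  # Sandy Bridge baseline = 100%
for gen, mult in gains:
    ipc *= mult  # generational gains compound multiplicatively
    print(f"{gen}: {ipc:.2f}%")
```

    Running it reproduces the chain above, ending near 129.76% for Skylake.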
     
  10. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    AnandTech found Skylake's average clock-for-clock improvement over Sandy Bridge was about 25%, so 30% is too high a minimum.
     
  11. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Was this with equal-spec RAM and everything? I wanted a mostly equal testing environment earlier, but I haven't gone hunting. 25%... blah, that likely means either Skylake has no improvement over Broadwell, or Ivy Bridge has a lower minimum boost over SB than I'm giving it credit for.
     
  12. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    They were normalized with DDR3L-1866 CL9:

    http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/23

     
  13. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
  14. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    As has been proven on other sites, using the i7-6700K with DDR4 RAM below about 2400MHz/2667MHz modules/settings cripples the platform.

    'Normalizing' here is not offering us an apples-to-apples comparison that makes any sense in the real world. It is seeing how the new CPU works with introductory DRAM modules vs. the fully optimized and mature DDR3 RAM available for the old platforms (as also mentioned in the article, btw).

    Apples to apples is comparing an optimized Haswell (or the already upgraded/optimized and tricked out Haswell Devils Canyon i7-4790K...) to an optimized Skylake platform with the fastest, lowest latency RAM they can get today. Anything else is just feeding the 'Intel is holding back performance because they have no competition' mantra.

    They themselves state that the PI of 207.3 for the DDR3-1866 C9 is 'super high' and will tend to better the much, much lower PI of 142.2 of the DDR4-2133 DRAM chips, yet they continue to test on assumptions that make no sense when moving to a new platform (i.e. the misplaced and blind belief that strict 'apples to apples' makes sense in all situations).

    What is most interesting is that even with all these ridiculous restrictions placed on the testing, Skylake still shines, overall (even if it is a small victory in absolute numbers).

    The 'apples to apples' we should be testing between platforms is not artificial and arbitrary restrictions on either platform being tested (why don't they test an SNB/IB/HW/HW.DC/ with CL15 RAM settings...).

    We already have the apples-to-apples tests ready for us: the actual workloads we want to run. Games, productivity software, even synthetic benchmarks are fair game here. Crippling one of the platforms in the name of fairness is not showing us anything (once again...) that translates into the real-world results people will see when they set the new platforms up properly, not crippled...

    When I am hired to do a job, it is never to continue in the footsteps of who I am replacing. It is to provide the results asked of my skills and time. And judging the final results is all that matters in the end, not how much better I was at doing the (wrong) steps faster...

    Similarly, if a true test of real-world applicability for a new platform is performed, it should be allowed free rein to use any and all advantages it has to show its worth vs. the old/ancient ways...

    Otherwise? If the testing of platforms is this seriously flawed, the testing and conclusions should be discarded and testing should be done by each individual vs. their previous setup as I have been recommending for years...

    But either way, the quarter hour it took to read that AnandTech article was still a waste of time if you're looking for advice on whether to upgrade to the new platform or not (and worse if you don't know to question its testing premise and act accordingly).
     
    Last edited: Aug 13, 2015
  15. Dufus

    Dufus .

    Reputations:
    1,194
    Messages:
    1,336
    Likes Received:
    548
    Trophy Points:
    131
    Get some sleep, chances are the internet will still be there when you wake up. ;)

    Absolutely.

    Personally I think a lot of the disappointment is due to the stagnation in frequency increases which would help boost performance gains.
     
    D2 Ultima likes this.
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Frequency was important when it could be counted in MHz. Today, other internal cpu processes are easier to exploit than brute force freq increases. Especially when we're over ~2.5GHz per core or over ~4GHz sustained for all cores.

    Fixed hardware functions dramatically speed up common/basic code. Caches optimized for hit ratio/latencies, and multiple levels of them (L4...), make today's processors much more efficient and notably faster than yesteryear's. And tomorrow's hardware will similarly overshadow today's best efforts.

    Everything has a balancing point, and with silicon the world reached that in frequency years ago. Sure, we can OC past that range with some older platforms. But they are still the same old CPUs that will prematurely fail sooner or later, especially if used at that performance level for extended periods. And they use more power, put out more heat and are likely noisier too. And they still feel sluggish when compared to a modern CPU/platform that is better optimized for the latest O/S' and peripheral hardware we use today (and yes, to me, they do feel sluggish simply navigating the O/S vs. the latest Broadwell platforms I've used).

    That is why I want a proper testing of the new platforms. Instead of all that navel gazing that review sites seem to offer now.

    When I buy/test a new platform, I set it up at the highest level of performance I know how to at that time (without OC'ing - I want this stable and dependable) and see if my workflows benefit and if so, how much.

    Seeing if memory access is faster, if IPC has increased, or if O/C'ing gives me more performance is not really important during this type of real-world testing. I am not buying this to put it in a museum; it is only a tool, after all. And in the end it is either a better tool than what I have today, or it isn't. That is the question on everyone's mind, but we get silly graphs with same clocks, same RAM, and lame conclusions instead.

    Okay, those graphs are interesting as additional information... but when served up as the main piece they leave much to be desired.

    When we ditch silicon for something newer, we'll reset the GHz baseline (and the freq race) once again. Until then, looking at frequency as an overall indicator of performance is just an attempt to simplify and even minimize the obvious enhancements going on elsewhere in newer platforms, whether those enhancements are on the cpu itself, the companion chipset or the latest peripherals it requires, or, as is usually the case; as a whole (the whole is greater than the sum of the parts).
     
    Last edited: Aug 14, 2015
  17. Dufus

    Dufus .

    Reputations:
    1,194
    Messages:
    1,336
    Likes Received:
    548
    Trophy Points:
    131
    We need to look at all aspects, not just one or the other. For instance an increase in IPC of 15% coupled with a frequency increase of 30% will result in a performance increase of ~50%.
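    Since performance scales roughly as IPC × frequency, the two gains multiply rather than add (a simplification, of course; real workloads vary):

```python
# Combined gain when IPC and frequency improve together:
# performance ~ IPC * frequency, so the gains compound multiplicatively.
ipc_gain = 1.15   # +15% IPC
freq_gain = 1.30  # +30% frequency
total_gain = ipc_gain * freq_gain - 1.0  # ~0.495, i.e. ~50%, not 45%
print(f"combined gain: {total_gain * 100:.1f}%")
```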
     
  18. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Actually, as I've already stated, when we use synthetic tests to predict actual performance/productivity we usually fall flat on our faces. Even while manipulating the tests to make it seem like this is a science, when it's not (review sites aren't scientists - they're businesses).

    The example you give is based on theory, which is based on our incomplete knowledge of the real capabilities and actual overhead of how the CPU/chipset/platform does the work internally, and extrapolating it to any/all workflows is a flawed conclusion too.

    In addition, most any system needs to run at or close to its default speeds to hit the specs it was designed for. Artificially running it (and the associated peripherals, like RAM) at significantly different values to get those synthetic IPC increases skews the results and isn't doing anybody any favors when we're trying to understand the differences (not the sameness) of two different platforms.

    I agree 100% we need to look at all aspects. And our actual workflows do just that. Sure, we won't know specifics, but in the end, who really cares? The online review sites will provide more of that fluff than we'll ever need to know.

    A client once said to me that in his line of work, a sustained (as long as the computer was doing actual work) 1% increase in real productivity was worth a $10K one time upgrade of his main system (and he had a few to upgrade back then and he meant $10K for each of them). At a time when a mere video card cost $2K+. And he had the foresight to predict that as time went on, that 1% productivity increase would be harder, not easier to accomplish.

    What I learned from him was that single aspects like IPC, frequency, RPM or capacity were not that important when considered in isolation. Worse was to try to predict how they interacted with each other. Later, I developed my testing methods based on the hands-on revelations I experienced when comparing the theory to the actual.

    Things like higher-RPM drives were not always faster than slower ones. Nor was the supposed superiority of denser drive platters obvious in their first or second iterations. Or, that single-platter drives were better than two-platter drives.

    Or, SSD's circa 2009 and lasting almost two years... were far inferior to mere HDD's for my workflows (yeah; those were my VRaptor days... :) ).

    Or, RAM spec'd far above the chipset's capabilities making a difference in older systems - even if it downclocked to the specs the older systems required.

    Or, O/S and yes, platform upgrades that made more difference to productivity than any review site ever hinted at.

    None of the above needed comparing 'apples to apples' to an arbitrary standard. Yet, without testing for myself, I would not have known whether some were worthwhile upgrades or not - regardless of what 'official' reviews might have suggested (agreeing or disagreeing with my results/experience).



    I'll mention this again; we know the results we want to improve (our actual gaming, productivity, or other workflows...) vs. what we have now.

    Testing and comparing to anything else is superfluous at best.

    At its worst, it will lead us to wrong conclusions, possibly inappropriate decisions, and either lost performance potential or wasted $$$$ in the long run.

    The review sites love to have us miss that basic truth. After all, the more they write today, the more potential they have to attract more advertising dollars tomorrow.

    I don't have that constraint - I can still test systems fairly, even if they're different.

    And for anyone else that needs an answer to 'is the newest platform better than what I have now', I suggest they do the same too.



     
  19. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    There are real-world tests and apples-to-apples tests. A real-world test is someone's PC vs. someone else's PC, where you don't control anything (even if information about each system is gathered and recorded for the purpose of the tests). For example, I could go a step further and say "because I run <insert large number of programs etc in background> before I launch a game, my CPU is being used up a bit more, and because my game is in borderless windowed for multitasking, my vRAM is being used up significantly more", and claim that someone running a game fullscreen with nothing else in the background is a pointless scenario to me.

    But I digress. I'm going to make it as clear as I can possibly make it, why I wanted apples-to-apples, normalized-hardware tests:
    I wanted to find out the DIRECT IPC benefit, without other factors like RAM/GPU/driver set/OS/storage/etc. influencing anything. Once this is found out and used as a near constant, you can determine other reasons for potential performance problems more easily. You yourself just showed that RAM is a big factor if you use too slow a speed on the platform. How would that have been determined if they didn't remove all other factors before testing the RAM? Now, the benefit is that they could likely have used a Skylake board with DDR4 support and another with DDR3L support and thus remove the CPU difference, but you couldn't easily do that with, say, one board using DDR3 on Haswell and another using DDR4 on Skylake, without accounting for the difference in Skylake's IPC.

    It's how you eliminate potential factors when testing the differences between two pieces of hardware. Does it matter real-world? If I bought a haswell system and then a skylake system later and tried playing games on them, would those tests mean much to me? No, they wouldn't. But if my new shiny skylake machine wasn't doing as well in say... BF4, which RAM has a decent effect on, as my haswell was? How would I determine what was wrong? I'd have to get information from people who HAVE done per-hardware tests, even if that information trickled through people who didn't do the tests themselves by word of mouth.

    Don't get me wrong: real world applications/testing is very important. It just isn't the only kind of testing that is necessary or matters.
     
    Dufus likes this.
  20. Dufus

    Dufus .

    Reputations:
    1,194
    Messages:
    1,336
    Likes Received:
    548
    Trophy Points:
    131
    @D2 Ultima I understand your methodology and Tiller's. We all have our own criteria.

    @tilleroftheearth While I agree with you on performance being relative to all aspects of a system, we do have some differences, it seems. Personally, I try to maximize the performance of my system, and that usually means working outside specification, while you seem to indicate a preference to work within specification - well, sometimes. For instance, the spec for the i7-6700K's DDR4 is 2133MT/s; anything outside of that is out of spec. Why? Because that is what Intel has validated. Also, TDP is rated at the base frequency, so there is no guarantee of maximum turbo at the highest loads. Unfortunately, it seems some hardware isn't even able to deliver at default spec; such is the way things seem to be going.

    What happens when you use memory that is faster than 2133MT/s? Well, you make your own validation, same as with overclocking. Also, TDP is not the maximum power the CPU can use, and top turbo is not the maximum frequency the CPU can use; if it were, then unlocked CPUs would not be provided ;). Given suitable hardware, we might be able to run above default specs, giving increased performance while having very little impact on the life of the product - other than to extend it by providing a platform whose performance will still be acceptable for a longer time than if left to run at spec. Of course, we all have our own criteria and cost vs. performance levels.