The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Mobile Skylake launching September 2015

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Cloudfire, May 20, 2015.

  1. alexhawker

    alexhawker Spent Gladiator

    Reputations:
    500
    Messages:
    2,540
    Likes Received:
    792
    Trophy Points:
    131
    It's not about cost effectiveness. It's about whether it's even reasonable to pursue or offer on the market.


    Sent from my iPhone using Tapatalk
     
  2. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    So, who is the "technological" leader on mobile laptop cooling? They all throttle, but who throttles the least?

    Surface?

    Sent from my A0001 using Tapatalk
     
  3. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Currently, it's MSI, I think. The GT72/GT80 are monsters at cooling. They however still have the HQ CPU problems.

    Previously it was Alienware by far. They didn't "all throttle" before. It was easy to get an Alienware to never throttle due to heat unless you overclocked it by a good bit.
     
  4. djembe

    djembe drum while you work

    Reputations:
    1,064
    Messages:
    1,455
    Likes Received:
    203
    Trophy Points:
    81
    To further expand on the point made by D2 & alex, it obviously makes no sense to develop a product that very few people will buy due to its poor value for performance. However, if you want the best, newest, and most advanced technological features in your system, you can still have them. Just fund the research, development, and manufacture of your ideal system. Then you'll get a nearly perfect computer that's tuned to your exact preferences, all for only $15,000-$20,000.
     
  5. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66

    [IMG]

    ~9 heatpipes + 4 heatsinks + 2 fans. Looking at the limited space they have (compared to desktops), they did well.

    But is that the only viable solution for laptop cooling? Brute force? More fans, more heatsinks, and more heatpipes.
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    It is the only viable solution most consumers can get today.

    The manufacturers? They have to hold something in reserve...
     
    D2 Ultima likes this.
  7. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    To be fair, the GPUs do not really need that much. The hottest things on the 900M series are the VRMs, because nVidia purposely reduced their number, and with heatsinks (and a cooling solution) designed for them, they'd work just fine, even OC'd. The CPU is what needs the most cooling, but those huge fans and all that fin/exhaust area are top notch for it. The problem is that the CPU line for that machine sucks like a bucket of ticks at a dog show for performance purposes.

    Also, there are only 3 heatsinks that I see there; two for the GPUs and 1 for the CPU (which is what is supposed to happen, for that kind of machine).
     
  8. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,710
    Messages:
    29,842
    Likes Received:
    59,625
    Trophy Points:
    931
    Never liked a design where the processor must share a heatsink with the GPU.
     
    Ashtrix likes this.
  9. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I understand, but I guess they made good use of their space either way.
     
  10. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    Oh, I see. Well, they used their thickness well.

    I think I counted wrong; I didn't even notice the one in the back for the CPU.

    But, I'm more curious about innovation in the thin-and-light category. For DTR/gaming laptops, it's simple: just make the system thicker and add square inches of copper until you're happy. But, in the mobile space, there's a pretty wide range of cooling solutions from terribad to useable: what's the key factor there? I haven't noticed anything special: almost all of them are just a single heatpipe from the CPU to an axial fan with a little heatsink.
     
  11. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    If you mean thin/light gaming grade, they're mostly all more than a single heatpipe, but if you mean ultrabook (non-gaming) I don't know. What I DO know is that the cooling systems in the thin/light gaming notebook categories cannot handle the CPU heat or pre-Maxwell GPU heat, and as such the machines were intended to both get hot and loud AND to throttle below their default spec. The current-gen Maxwell cards are so cool that they can put 970Ms in most of the thinner machines, but it still doesn't allow them to cool the CPU effectively, and the GPUs still run fairly hot compared to other machines. What SHOULD happen is that OEMs and ODMs need to make their machines a bit thicker and adjust the cooling solution for the parts, OR they need to put weaker parts (like midrange GPUs and ULV CPUs) in there to avoid excess heat. But they won't, and people keep buying them and proving with their wallets that it's fine to sell machines designed to perform under-spec, so we're getting into this dark age of gaming laptops XD
     
  12. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,710
    Messages:
    29,842
    Likes Received:
    59,625
    Trophy Points:
    931
    It does not help if one third of the buyers complain about the weak cooling in a gaming laptop when the other two thirds of buyers never notice the problem and never complain. So the OEMs continue to create trash.
     
  13. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Complaining about the products is pointless unless they return it and make a big stink about it elsewhere. But they don't. The most we get are people finding NBR and trying to fix what's fundamentally broken. The rest, as you say, don't even notice the issues, or assume "well it's a laptop, I know my desktop runs better" etc or some crap.

    What we need is for people who discover issues to call them out with the same level of quality they expect from desktop parts, and we need it to happen BEFORE people mass-buy the products. We need to get other people to stop hyping up the terrible products too.
     
    Papusan likes this.
  14. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,710
    Messages:
    29,842
    Likes Received:
    59,625
    Trophy Points:
    931
    This isn't pretty. It's ugly.
    http://forum.notebookreview.com/threads/bios-a05-cpu-not-turbo-boosting.778514/
     
    D2 Ultima likes this.
  15. T2050

    T2050 Notebook Deity

    Reputations:
    280
    Messages:
    1,699
    Likes Received:
    93
    Trophy Points:
    66
    If Skylake means smaller computers due to lower thermals, then manufacturers are only going to put in just enough cooling to cool it. In the end it will all be proportional anyhow.
     
    D2 Ultima and Starlight5 like this.
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Huh? Care to expand/explain what you mean by the above?
     
  17. Seanwhat

    Seanwhat Notebook Evangelist

    Reputations:
    42
    Messages:
    327
    Likes Received:
    41
    Trophy Points:
    41
    He's saying we won't get cooler notebooks; manufacturers will just spend less on cooling components in the new line of notebooks because they can, resulting in notebooks being no cooler than they are today.
     
    D2 Ultima likes this.
  18. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
  19. Cakefish

    Cakefish ¯\_(ツ)_/¯

    Reputations:
    1,643
    Messages:
    3,205
    Likes Received:
    1,469
    Trophy Points:
    231
    http://www.fudzilla.com/news/processors/38225-first-skylake-core-i7-6700k-scores-out

    Take with a grain or two of salt, of course. It is just a rumour/leak.

    If true, however, it's really interesting. Broadwell already has a 5% IPC advantage over Haswell refresh. If Skylake really does have an average 6.7% IPC boost over Haswell refresh, then it suggests that the new architecture is focused heavily on power efficiency rather than raw performance.

    Good news for notebook enthusiasts, bad news for desktop enthusiasts.
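
    As a quick back-of-envelope sketch of those numbers (assuming, per the leak, that both percentages are measured against Haswell refresh), the remaining step from Broadwell to Skylake would be tiny:

        # Rough arithmetic on the rumoured IPC figures above; both are
        # relative to a Haswell-refresh baseline of 1.00.
        broadwell = 1.05    # ~5% IPC over Haswell refresh
        skylake = 1.067     # ~6.7% average IPC over Haswell refresh (leaked)

        step = (skylake / broadwell - 1) * 100
        print(f"Skylake over Broadwell: ~{step:.1f}% IPC")   # ~1.6%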
     
  20. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    alexhawker likes this.
  21. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    DDR4 sucks with timings though. It's a downgrade from DDR3 (and more expensive) right now.
     
  22. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    But, if we're honest, RAM timings are a non-issue for real-world performance. I mean, should we use PCIe x16 slots or PCIe x8 slots for those add-in USB 2.0 cards? :rolleyes: The system bottleneck is somewhere else....

    But, hahah, did I overclock my DDR3 1866MHz CL9 sticks to 2200MHz CL9? Absolutely and at 1.7V no less, haha.
     
  23. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    This doesn't show it. DDR3's range for 2133MHz is what, CL9 through CL11? DDR4 STARTS at like CL15-16. And with mobile, we always have more latency. We'll end up with stuff like DDR4 2133MHz CL18 or something. No matter how you slice it, doubling RAM timings can be noticeable in some instances. And WinRAR and Adobe show some benefit from the tighter timings etc.

    Also, again, for this worse RAM, we're going to have to pay more.
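
    To put those CL figures in absolute terms, here's a minimal sketch (the speed/timing pairs are just the examples mentioned in this thread, not measured modules); first-word latency is CL divided by the I/O clock, and the I/O clock is half the MT/s rate:

        # First-word CAS latency in nanoseconds.
        def cas_latency_ns(mt_per_s, cl):
            io_clock_mhz = mt_per_s / 2     # DDR transfers twice per clock
            return cl / io_clock_mhz * 1000

        for name, speed, cl in [
            ("DDR3-2133 CL10", 2133, 10),
            ("DDR4-2133 CL15", 2133, 15),
            ("DDR4-2133 CL18 (hypothetical mobile)", 2133, 18),
        ]:
            print(f"{name}: {cas_latency_ns(speed, cl):.1f} ns")
        # ~9.4 ns vs ~14.1 ns vs ~16.9 ns at the same transfer rate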
     
  24. superparamagnetic

    superparamagnetic Notebook Consultant

    Reputations:
    402
    Messages:
    252
    Likes Received:
    56
    Trophy Points:
    41
    It's only worse in that one aspect. DDR4 has lower voltages, which means less power, which means longer battery life. DDR4 also has higher density. There are already 16GB DDR4 modules coming into mass production, whereas 16GB DDR3 modules are a niche (and therefore expensive) product.

    For a lot of people those two are bigger advantages that outweigh the looser timings. Besides that, pricing will eventually fall and cross below DDR3's. DDR2 was pricier than DDR1 initially but then prices flipped. Same thing with DDR3, and the same will happen with DDR4. Timings will come down too as manufacturing gets better.

    There's always some growing pains with new technology, but DDR3 is at a dead end.
     
    ikjadoon, Starlight5 and Seanwhat like this.
  25. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Nope XD.

    Doesn't help much. DDR3 vs DDR3L was what, 1 minute of battery life extra? 1.5V to 1.35V? 1.2V isn't going to make more of a jump that's noticeable in any way.

    Yes, this is true... for desktops. The -U chips have low RAM limits (I believe 16GB is max for most of them) and 2 x 8GB DDR3 in dual channel is not anywhere near being hard to find.

    I was specifically speaking about the context given (mobile platform; more specifically the Skylake-U models) which is why I did not list any benefits of DDR4.

    This is correct. I don't think we should never move to DDR4. But putting DDR4 for cheap, low-power CPUs in machines not meant to be any kind of workhorse is just an expensive, pointless endeavour. Right now. If DDR4 was cheaper, and the timings weren't so bad for the natural speed, then we'd have no problems. But as it stands, we aren't yet at that point, which is why making it for low end mobile chipsets is just a waste. They should stick with DDR3L for a while in the ULV market. The gaming market doesn't NEED it now either, but it would be less of a bad thing to include them there (unless the timings are really as bad as I predicted earlier; 18-18-18-40 or so for 2133MHz) but that's because those boards/chipsets can handle 32GB/64GB of RAM and decent DDR4 RAM at higher voltages and better timings might be available. For example, I found a set of DDR4 2400MHz 12-13-13-35 RAM that runs at 1.35v (rather than 1.2v default) for sale earlier that I recommended to a friend. I was amazed DDR4 could perform so well already; but to get those kinds of things into mobile platforms will be MUCH harder.
     
  26. Seanwhat

    Seanwhat Notebook Evangelist

    Reputations:
    42
    Messages:
    327
    Likes Received:
    41
    Trophy Points:
    41
    Tbh RAM isn't a bottleneck and never will be (pretty sure it never has been). No one's seeing any performance increase going from DDR3 to DDR4.
     
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Because there is a performance DECREASE; DDR4 is worse than DDR3 right now
     
  28. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    Nope. XD

    Basically nobody will notice the difference between this "worse RAM" and current DDR3. Whoever told you that is a loony and should never be listened to again, lol.

    Nope. XD

    You don't understand how RAM's power consumption relates to standby battery life; give this a read. Because it's volatile, you have to keep refreshing it...even when the device is "sleeping". Whoever taught you that it's just 1 minute extra is also a loony; ask for your money back, lol. Anyways, your voltages are wrong: DDR4L is 1.05V (from DDR3L's 1.35V).
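
    As a very rough sketch of why memory self-refresh shows up in standby time (every number below is an illustrative assumption, not a measurement of any real laptop):

        # Illustrative standby estimate: battery energy divided by standby draw.
        battery_wh = 50.0

        def standby_hours(platform_w, ram_w):
            return battery_wh / (platform_w + ram_w)

        # Assume ~0.5 W for the rest of the platform in standby, and compare a
        # higher-power vs lower-power memory configuration in self-refresh.
        print(standby_hours(0.5, 0.3))   # ~62 hours
        print(standby_hours(0.5, 0.1))   # ~83 hours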
     
  29. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Oh, the differences can be felt. It won't be comparable for the most part, as users won't be able to test good timings vs bad timings etc in the same laptop, but it can be felt in compression programs and such. Again: just because you don't feel it and most people don't feel it doesn't mean it doesn't exist. It's the same argument I make against the HQ chips, and nobody can counter the fact that their problems do exist (whether or not 99% of people encounter them).

    No, actually, I used standard tests of "put laptop on and let it be idle until poof" using Ivy Bridge-based laptops (which can use both DDR3 and DDR3L, unlike Haswell and Broadwell and Skylake). DDR3 vs DDR3L is almost nothing.
    Next, you're showing me LPDDR3 vs DDR3 and DDR3L. At what point did I mention LPDDR3? I have not spoken about it; I have simply stated that DDR3L was worse than DDR3 because its lower voltage meant it had to have worse timings/speed ratios, as the best ratios require more voltage. I also mentioned the fact that DDR4 is the same way right now, as by default they have MUCH worse timings than DDR3 and even DDR3L at the speeds they're sold at, and since laptop sticks by default are usually worse than desktop sticks, I am not putting it past them to consider that if desktops' high quality DDR4 RAM is "2133 @ 15-15-15-36" then laptops might get "1866-2000 @ 18-18-18-40" or something similar. Since DDR3L 1866 @ 10-10-10-27 already exists, nearly doubling the timings can make a difference in even some everyday programs.

    Finally, at no point did I BEGIN to mention DDR4L. I said DDR4 is 1.2v and DDR3L is 1.35v and DDR3 is 1.5v, and that DDR4 needs 1.35v to get the good speed/timings to match what DDR3 can do.

    As a side note: I very rarely look at studies and theoretical things for real world effects. If someone can take a laptop and put two different sets of hardware in it for a comparison "all other things held equal" then I'm going to take that over every other study that shows "potentially" or "used a tablet" or anything that doesn't affect the current focus (which is lower-end to midrange and entry level gaming (like the AW13) laptops).
     
  30. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    Again, if you honestly can feel the difference in the massive GBs you're constantly compressing/decompressing using WinRar, you're in the minority.

    I mention LPDDR3 because that's what most high-end Ultrabooks use. And that's also why I mentioned DDR4L. It's probably what most high-end Ultrabooks will use.

    Again, I'm not saying there is no difference...just that most people (easily over 99%) who buy Ultrabooks....don't give a rat's butt about WinRar compression speeds.

    Agreed: real-world tests are the only true metric, but studies are good for future predictions and teasing out minor details.
     
    Dobbs95 likes this.
  31. T2050

    T2050 Notebook Deity

    Reputations:
    280
    Messages:
    1,699
    Likes Received:
    93
    Trophy Points:
    66
    There isn't anything wrong with either at this point in time, and it likely will not matter for another year, as we are at the transition stage when it doesn't matter too much and the gains or differences are minimal. We're transitioning from a mature DDR3 tech that has gone through speed and voltage upgrades to new DDR4 tech at the beginning of its life cycle. In a couple of years DDR4 will become more relevant, especially once DDR3 support is removed from processors' integrated memory controllers.
     
  32. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I mentioned WinRAR as it's the one test that stood out for me in memory (7zip and other compression formats are the same deal). It's not that people are buying a notebook to use that, but if someone upgraded from a 2012 notebook, grabbed a 2016 notebook, and noticed some things they're accustomed to doing are a couple of seconds slower... they usually want to know wtf is going on. It's just what happens. It's still in the minority; most PC users on the whole don't get a feel for anything and are quite willing to spend minutes letting Chrome load or dealing with a laggy system while watching YouTube videos. When you look at them and go "I couldn't use this" they look at you funny like something is wrong with you, like "what else is there? I'm not paying $5000 for this to be more smooth", not realizing that if they had spent $100-$200 more than what they did for that thing and cut out the beer and smokes for a month or so (and yes, I'm using VERY often-seen real-world examples here, from many many many people I've seen and spoken to over the last few years since I got into gaming and then enthusiast machines; this is not an exaggeration nor a "lumping together"; most users can easily afford better in a similar way and simply don't care to), they'd have something that'd do just as well as what a $3000 machine does, because the limit for a good working machine is pretty low for most mundane tasks.

    I haven't seen any ultrabooks with LPDDR3, or anything that suggests LPDDR3 from most things. And you mention "high end" here; I'm not talking about "high end", which is why the higher price for worse gear is a big deal (and DDR4L *WILL* be more expensive than DDR4, just like DDR3L was; when I bought my computer I paid extra for DDR3L thinking it had benefits, only to realize later that Haswell *MUST* use DDR3L anyway and I basically just blew money). And even then, talking about "high end" hurts your deal more. Because high end means expensive and quality, and DDR4L having even worse performance than the already-bad DDR4 notebook performance I predict, coupled with its even higher cost... it just isn't a good idea. I cannot see, right now, why it makes sense. Keep DDR3L (or LPDDR3 as you say some use) and use that until DDR4 matures. Even the desktop chipsets have DDR4 and DDR3L support, so you can buy the mobo you want (even though DDR3L in itself kills performance because it won't use higher voltage and the chipset won't use much higher voltage, resulting in a push for DDR4's "high speeds" to the masses that can't find good DDR3L sticks).

    99% of people in general see little difference. But just because something isn't noticed doesn't mean it should exist/be that way.

    Predictions and all that are great, but if predictions about something following an already existing trend clash with already existing data about that trend, then I'm skeptical. I'd need to see it directly break the logical data trend based on already-existing real-world experiences before I accept it. And that's not a bad stance, when the direction things are moving in results in little to no choice about what to get.
     
  33. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    That whole "standby" argument again. Grr. Nobody cares about standby times. Most laptops can last days already on standby. If you're going to sleep your laptop for more than a day, then use hibernate. That's what it's there for. Then it can sit indefinitely. No power required at all.
     
    alexhawker and TBoneSan like this.
  34. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    You act like RAM is the only part in a laptop, lol. Any 2016 laptop will OUTRUN a similarly-specc'd 2012 laptop literally only because of CPU performance increases, which are the predominant factor in WinRar.

    Nobody will experience this "mysterious" performance decrease by purchasing a 2016 laptop, lol.

    "most PC users on the whole don't get a feel for anything and are quite willing to spend minutes letting chrome load or dealing with a laggy system while watching youtube videos and"

    This is not caused by slightly higher than average RAM latency. I'm dying of laughter here, man. I get it: I like my machines to be in tip-top performance, too, but you are just wrong here, :(

    You haven't seen any Ultrabooks that use LPDDR3? That's like saying, you haven't seen a quad-core CPU in years: hard to believe you pay attention to hardware news.

    Asus Zenbook UX305, Acer Aspire R 13, HP Spectre x360, Lenovo LaVie Z, HP EliteBook Folio 1020 G2, etc.

    We have this same silly debate every time a new RAM standard comes out: it'll pass, just like it did with DDR2....and DDR3.

    If your system is using DDR4L, it's not made for performance.

    Of course....please, let us wait for data that compares 2016 systems with DDR4 and 2012 systems with DDR3. I'm dying to see which one is faster in WinRar....

    Grr...you conveniently forgot all the points for having longer standby we made in the other thread. Instead of rehashing it all, just re-read here (where LPDDR3's longer standby can give you another day away from a charger) and here (where standby is much more convenient than hibernation).
     
  35. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I did clarify to say I haven't seen anything that suggests "LPDDR3", which is true. I haven't looked hard at ultrabooks.

    My whole point is, paying more for less is not a good idea, and I don't understand why the STANDARD should be pushed on us when the chipset supports another standard. I could sit here giving points about when RAM is felt and when it isn't, and about who can feel it and who can't (and despite what you think, there are applications and instances that are RAM-bound such that a 1st-gen i3 and a 4th gen i7 perform the same with equivalent RAM) but the whole thing I've been saying is that DDR4 for this market is a terrible idea right now as-is because there are only downsides (like more cost and MUCH higher latencies) without any benefits to show for it. Which is a flat out waste in my eyes. You can't downgrade me and charge me more, that makes no sense. You upgrade and charge more, or you downgrade and charge less.
     
    Starlight5 and ikjadoon like this.
  36. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
  37. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    TomJGX, Starlight5 and D2 Ultima like this.
  38. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Someone was trying to tell me two days ago how revolutionary it was that a new unlocked CPU was out. I told him we'd had that for many years and it was nothing new, and that if it has HQ power limitations it's a pointless waste of $$ as a chip.

    He sat trying to convince me that previous unlocked chips weren't any good because desktops had more options to overclock.

    I was like
    [IMG]
     
    TomJGX and Starlight5 like this.
  39. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    ^Is that Swaggy P?!
     
  40. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I honestly have no idea but someone keeps sending that to me on twitter when I post utterly retarded experiences with people, so I used it here XD
     
  41. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    Not "X" chips, but "K" chips. They probably have some reason to change the name...why not use the XM marker that they've used since Sandy Bridge?

    Or, you can wallow in your pessimism, too, haha: seems appropriate. :D
     
  42. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    No difference; in the past, desktop -K was the same thing as mobile -XM/-MX until Intel went full retard (BGA) for mobile. Hopefully the name change means Intel will stop charging Extreme Edition pricing ($1000) for the mobile unlocked chips, but I ain't keeping my fingers crossed.
     
  43. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    "X" signifies "extreme" chip, which is the most top-end chip for the platform. There's always only one "X" chip. i7-980X and its replacement 990X, i7-3960X and its replacement 3970X, i7-4960X, i7-5960X, i7-920XM, i7-2920XM and its replacement 2940XM, i7-3920XM and its replacement 3940XM, i7-4930MX and its replacement 4940MX. Those are the ones I know off hand. The i7-965 and 975 were also "extreme" editions but lacked the "X" in the name.

    Basically it means "best binned, best for OCing" for the platform. A "K" simply means "unlocked multiplier", and for the mobile platform it is no different from the previous "XM" and "MX" chips. Except that this time it has an "H" in it, which means "soldered" (despite Intel's website saying "high graphics"), and thus it is very likely to fall into a TDP limit like all current HQ chips do. If it does, then it means the unlocked multiplier is beyond worthless, as the voltage required to hit high clockspeeds means the TDP increases significantly enough that OCing and stressing is a vastly different story.

    To put it in perspective: my 4800MQ at 3.5GHz running TS bench hits around 45-47W or so, with a nice undervolt. Setting the voltage back to default and raising the multiplier just 300MHz (which is necessary to use it without a BSOD) makes TS bench draw over 60W, and streaming up to 68W if I push it just a bit. Now try to get that chip to hit 4-4.2GHz and see what happens if you run something that eats up wattage. Even if it gets locked to 57W instead of 47W, it's likely going to downclock heavily under any kind of stress.
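
    As a rough sanity check of that jump, using the usual dynamic-power approximation P ∝ f·V² (the voltages below are assumed example values, not actual 4800MQ readings):

        # Dynamic CPU power scales roughly with frequency times voltage squared.
        def scaled_power(p0_watts, f0_ghz, v0, f1_ghz, v1):
            return p0_watts * (f1_ghz / f0_ghz) * (v1 / v0) ** 2

        # ~46 W at 3.5 GHz with an undervolt (assume ~0.95 V effective), then
        # 3.8 GHz at an assumed higher stock-ish voltage of ~1.05 V:
        print(scaled_power(46, 3.5, 0.95, 3.8, 1.05))   # ~61 W, same ballpark as above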

    What Octiceps and I are saying however is that people are going gaga over the announcement of a mobile "K" chip, when they have existed for no less than the last 5 years already. It's like having a line of cars that comes with a GPS in their top of the line model, and they're excited that the 2015 model has a GPS when they've been using them since 2010. It's nothing new, nothing revolutionary, nothing to be really excited about (especially since the Clevos are using desktop CPUs already anyway)
     
    TomJGX, Ashtrix and TBoneSan like this.
  44. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Mobile Extreme Edition has existed since the Core 2 Extreme X7800 (released 7/07), so it's been 8 years.
     
  45. ikjadoon

    ikjadoon Notebook Deity

    Reputations:
    224
    Messages:
    1,633
    Likes Received:
    45
    Trophy Points:
    66
    Well, OK, if what you're saying is accurate, this might be a step back. In Skylake, there is no XM part, if this leak is correct; only K-series.

    [IMG]

    In fact, there is a higher numbered part and it's not unlocked:

    i7-6820HK (the only mobile unlocked chip)
    i7-6920HQ

    Hmmm...well, if Skylake's desktop results are anything to go by (where the i7-5775C beats the i7-6700K in gaming results, which may be a result of the 128MB eDRAM cache, as some speculate), then, for strictly gaming purposes, the i7-6920HQ may be the stronger chip (unless your game is being bottlenecked at a presumably 4C/8T 3.5GHz chip).
     
  46. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Well Skylake hasn't released any but the 6600K and 6700K so far, according to Ark.intel's website. When it comes out, we'll see.

    As for Broadwell beating the 6700K, it might very well be RAM-specific environments due to DDR4 being worse out of the box. The way to do proper testing would be to throw Haswell, Broadwell and Skylake in a system that is otherwise equal. Same storage drive, RAM clocked to the same speeds at the same latencies, etc. But I haven't seen any benchmarks using that. As much as we had the discussion of the negligible effect of RAM before, there is that one article showing that the difference between 1600MHz CL8 and 1600MHz CL11 was up to a few FPS in some games. With DDR4's loose timings, it could spread more. Especially in games that aren't in a CPU bottleneck state, where the IPC from Skylake would help it.

    In other words, we need a complete-control scenario, where the only difference is the CPU/motherboard.
     
  47. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    This has already been done. Skylake's dominance (too strong? :) ) is obvious in many tests, as is Broadwell's i7-5775C in many benchmarks (including high-performance GPU setups...).

    Skylake doesn't need to be proven anymore by online reviewers. Users need to evaluate them in their workloads to measure and appreciate the difference themselves.

    See:
    http://www.tweaktown.com/reviews/72...700k-cpu-z170-chipset-gt530-review/index.html



    See:
    http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed


    What the above specific reviews and many other similar ones indicate is that the time to upgrade for almost anyone is past due. At least at stock speeds. If they need the most performance possible, OC'ing is an option for the older systems, yes. But staying with an old platform that, OC'd, matches or even slightly exceeds the latest available processors doesn't take into account the rest of the capabilities the new platforms offer. The whole is greater than the sum of the parts and Skylake, even with these initial offerings, is a solid platform with greatly enhanced capabilities over almost anything before it - even with no OC'ing added into the mix (but especially with the proper RAM installed).

    As you can see, the tests you wish for have been done already.

    Now, all that is left to consider is whether keeping the old system around for another few months/years is the better way to go in the long term.



     
  48. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    o rly
    Seems like they DIDN'T drop it. At all, in fact. That substantiates a discrepancy in the apples to apples tests.

    Don't get me wrong; it's obvious Skylake's IPC etc is better. But for the times when Broadwell is winning? The RAM might very well be a factor. It is what it is.
    I want full apples-to-apples tests. All chips locked to 4.2GHz, all memory locked to 2133MHz or 2400MHz at the SAME TIMINGS (I know that there's DDR4 RAM from Kingston that out-of-the-box can use XMP to hit 2400MHz 12-12-13-35, so DDR3 can do that too to be the same speed/timings), same tests run, same motherboard for 4790K and 5775C as well. All RAM dual channel. Same GPU and driver installs (literally the same GPU; I want it to swap out from each machine) and if it's a nVidia card, I want a custom vBIOS where the GPU voltage is constant and clockspeeds remain fully constant and don't fluctuate under load so we don't get any oddities in the tests. I expect as-consistent-as-possible tests to remove every single other variable EVER excepting the CPUs. Then we'll see what the real difference between them is. That's how you do "Apples to Apples" comparisons.
     
    Starlight5 and Ashtrix like this.
  49. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Oh, okay. The tests you want don't correlate to my real world tests that matter to me and that is why they are effectively meaningless (to me, again).

    As I've indicated (maybe without enough emphasis), the whole is greater than the sum of its parts, and I test whole platforms vs. my previous platforms to see any real productivity gains before I drop real $$$$ to upgrade the workstations I'm responsible for.

    What you seem to want is something different/synthetic, and that doesn't get me on board enough to rely on results based on arbitrary parameters that may help or hurt different platforms at different stages and at different loads.

    I really think the performance envelope of the currently released Skylake processors has been thoroughly mapped vs. old tech in the many (over a dozen) reviews I've read on them so far, and those reviews have answered your questions, albeit in a not-so-blunt way.

    Later versions of DDR 'X' RAM are always better performing than the previous versions at the right clock speeds. Latency cannot be seen as a single spec - it is a calculated number that depends on many other specs that make it up. Basically, the total absolute time taken is what is important - not the 'same timings' different DRAM can exhibit on a spec sheet. If this were not so, we would still be using DDR RAM or worse.

    Likewise, when you lock down other parameters, you are affecting the internal workings and synergies of the cpu and matching chipset. This makes no sense to me.

    It is akin to saying two cars are fast. But the fastest one will be determined if we cripple them both to some unknown degree (but not equally).

    I know it goes against most people's common knowledge on this forum to compare dissimilar systems. But I don't care about or measure what difference it makes in synthetic or gaming (which to me is the same thing) benchmark 'scores'. I measure productivity, and productivity doesn't care what tools are used. The results speak for themselves (always).

    We don't need to agree on any of this - I'm just giving the reasoning behind my viewpoint.

    Let's just say that I'm satisfied with the tests I've read on Skylake and you're not. Cool. :cool:



     
  50. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    That's what I said though. I want direct tests to prove what Skylake's IPC improvements are.

    If Skylake's IPC improvements are discovered and documented directly, then returning systems to normal/standard configs and seeing Skylake fall behind or pull heavily ahead means that there are factors other than just the CPU affecting things.

    For example: If with equal GPU and RAM configurations (leaving only the CPU's IPC at fault) Skylake pulls ahead and falls behind in varying tests (with consistency) then that means something in the chipset or the cache is at fault. If it does not, but relaxing the apples to apples environment does, then it means that likely RAM is the cause, which means it'd be beneficial for users to fine-tune their RAM as much as possible.

    Understand? I work by process of elimination, and the end result will provide the best scenario for real-world. If Skylake is falling behind somehow in any test, then that means there is a bottleneck somewhere in the system. If too many variables are different across the systems, then that means you can't find the bottleneck. Finding bottlenecks is the key to removing bottlenecks. You can't solve a problem without knowing the cause, no?
     