It's not about cost effectiveness. It's about whether it's even reasonable to pursue or offer on the market.
-
-
So, who is the "technological" leader in laptop cooling? They all throttle, but who throttles the least?
Surface?
-
Previously it was Alienware by far. They didn't "all throttle" before. It was easy to get an Alienware to never throttle due to heat unless you overclocked it by a good bit. -
-
~9 heatpipes + 4 heatsinks + 2 fans. Looking at the limited space they have (compared to desktops), they did well.
But, is that the only viable solution for laptop cooling? Brute force? More fans, more heatsinks, and more heatpipes. -
tilleroftheearth Wisdom listens quietly...
The manufacturers? They have to hold something in reserve... -
Also, there are only 3 heatsinks that I see there: two for the GPUs and 1 for the CPU (which is what is supposed to happen, for that kind of machine). -
-
-
I think I counted wrong; I didn't even notice the one in the back for the CPU.
But, I'm more curious about innovation in the thin-and-light category. For DTR/gaming laptops, it's simple: just make the system thicker and add square inches of copper until you're happy. But in the mobile space, there's a pretty wide range of cooling solutions from terribad to usable: what's the key factor there? I haven't noticed anything special: almost all of them are just a single heatpipe from the CPU to an axial fan with a little heatsink. -
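For what it's worth, the "key factor" largely reduces to a steady-state thermal budget: chip temperature is roughly ambient plus power times the total thermal resistance of the heatpipe/fin/fan stack. A rough back-of-envelope sketch with made-up but plausible numbers (nothing here is measured from a real machine):

```python
# Back-of-envelope thermal budget for a thin-and-light (illustrative numbers only).
# Steady state: T_chip ~= T_ambient + P * R_theta, where R_theta is the total
# thermal resistance (die -> heatpipe -> fins -> air) in degC per watt.

def max_thermal_resistance(t_limit_c, t_ambient_c, power_w):
    """Largest total thermal resistance that still keeps the chip under t_limit_c."""
    return (t_limit_c - t_ambient_c) / power_w

# Hypothetical 15 W ULV chip, 30 degC ambient, 90 degC throttle point:
print(max_thermal_resistance(90, 30, 15))  # 4.0 degC/W budget for the whole cooler
# Ask that same cooler to hold a sustained 25 W load instead:
print(30 + 25 * 4.0)                       # 130 degC on paper -> guaranteed throttling
```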
-
It does not help if one third of the buyers complain about the weak cooling in a gaming laptop while the other two thirds of buyers of the same laptop never notice the problem and never complain. So the OEMs continue to create trash.
-
What we need is for people who discover issues to call them out with the same level of quality they expect from desktop parts, and we need it to happen BEFORE people mass-buy the products. We need to get other people to stop hyping up the terrible products too. -
http://forum.notebookreview.com/threads/bios-a05-cpu-not-turbo-boosting.778514/ -
If Skylake means smaller computers due to lower thermals, then manufacturers are only going to put in just enough cooling to cool it. In the end it will all be proportional anyhow.
-
tilleroftheearth Wisdom listens quietly...
-
-
September it is..
http://www.neowin.net/news/microsoft-to-host-event-on-september-4-showcasing-new-windows-10-devices
-
http://www.fudzilla.com/news/processors/38225-first-skylake-core-i7-6700k-scores-out
Take with a grain or two of salt, of course. It is just a rumour/leak.
If true, however, it's really interesting. Broadwell already has a ~5% IPC advantage over the Haswell refresh. If Skylake really does average a 6.7% IPC boost over the Haswell refresh, then it suggests that the new architecture is focused heavily on power efficiency rather than raw performance.
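If both of those rumoured figures are measured against the same Haswell refresh baseline, the implied per-clock gain of Skylake over Broadwell is tiny; a quick sanity check of the arithmetic:

```python
# Implied per-clock gain of Skylake over Broadwell, taking both rumoured
# figures as relative to the same Haswell-refresh baseline.
broadwell_vs_haswell = 1.05    # +5% IPC (rumoured)
skylake_vs_haswell   = 1.067   # +6.7% IPC (rumoured)

skylake_vs_broadwell = skylake_vs_haswell / broadwell_vs_haswell
print(f"{(skylake_vs_broadwell - 1) * 100:.1f}%")  # ~1.6% - barely a generational bump
```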
Good news for notebook enthusiasts, bad news for desktop enthusiasts. -
http://www.pcper.com/news/Processors/Report-Intel-NUC-6th-Gen-Skylake-Specifications-Leaked
The Skylake NUC (which just uses laptop parts in a small-form-factor case) specs were leaked....and DDR4 is coming to Skylake-U!
Wasn't expecting that. No Thunderbolt 3/USB Type-C or USB 3.1, though. -
DDR4's timings suck, though. Right now it's a downgrade from DDR3, and it's more expensive on top of that.
-
The system bottleneck is somewhere else....
But, hahah, did I overclock my DDR3 1866MHz CL9 sticks to 2200MHz CL9? Absolutely and at 1.7V no less, haha. -
Also, again, for this worse RAM, we're going to have to pay more. -
superparamagnetic Notebook Consultant
For a lot of people those two are advantages big enough to outweigh the looser timings. Besides, pricing will eventually fall below DDR3's. DDR2 was pricier than DDR1 initially, but then prices flipped. The same thing happened with DDR3, and the same will happen with DDR4. Timings will come down too as manufacturing gets better.
There are always some growing pains with new technology, but DDR3 is at a dead end. -
Doesn't help much. DDR3 vs DDR3L was what, 1 minute of extra battery life? 1.5V to 1.35V? 1.2V isn't going to make a jump that's noticeable in any way.
Yes, this is true... for desktops. The -U chips have low RAM limits (I believe 16GB is max for most of them) and 2 x 8GB DDR3 in dual channel is not anywhere near being hard to find.
I was specifically speaking about the context given (mobile platform; more specifically the Skylake-U models) which is why I did not list any benefits of DDR4.
This is correct. I don't think we should never move to DDR4. But putting DDR4 for cheap, low-power CPUs in machines not meant to be any kind of workhorse is just an expensive, pointless endeavour. Right now. If DDR4 was cheaper, and the timings weren't so bad for the natural speed, then we'd have no problems. But as it stands, we aren't yet at that point, which is why making it for low end mobile chipsets is just a waste. They should stick with DDR3L for a while in the ULV market. The gaming market doesn't NEED it now either, but it would be less of a bad thing to include them there (unless the timings are really as bad as I predicted earlier; 18-18-18-40 or so for 2133MHz) but that's because those boards/chipsets can handle 32GB/64GB of RAM and decent DDR4 RAM at higher voltages and better timings might be available. For example, I found a set of DDR4 2400MHz 12-13-13-35 RAM that runs at 1.35v (rather than 1.2v default) for sale earlier that I recommended to a friend. I was amazed DDR4 could perform so well already; but to get those kinds of things into mobile platforms will be MUCH harder. -
Tbh RAM isn't a bottleneck and never will be (pretty sure it never has been). No one's seeing any performance increases going from DDR3 to DDR4.
-
-
Basically nobody will notice the difference between this "worse RAM" and current DDR3. Whoever told you that is a loony and should never be listened to again, lol.
You don't understand how RAM's power consumption relates to standby battery life; give this a read. Because it's volatile, you have to keep refreshing it...even when the device is "sleeping". Whoever taught you that it's just 1 minute extra is also a loony; ask for your money back, lol. Anyways, your voltages are wrong: DDR4L is 1.05V (from DDR3L's 1.35V). -
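To put rough numbers on why the voltage steps matter at all: dynamic DRAM power scales roughly with the square of the supply voltage (all else being equal), so 1.5V down to 1.05V is a much bigger cut than 1.5V to 1.35V ever was. A hedged back-of-envelope that ignores refresh-rate, density and process differences:

```python
# Rough V^2 scaling of DRAM power with supply voltage. Ignores refresh rate,
# density, process node and I/O differences, so treat as ballpark only.
def relative_power(v_new, v_old=1.5):
    return (v_new / v_old) ** 2

print(round(relative_power(1.35), 2))  # DDR3L vs DDR3: ~0.81 -> ~19% less
print(round(relative_power(1.20), 2))  # DDR4  vs DDR3: ~0.64 -> ~36% less
print(round(relative_power(1.05), 2))  # DDR4L vs DDR3: ~0.49 -> ~51% less
```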
No, actually, I used standard tests of "put the laptop on and let it idle until poof" using Ivy Bridge-based laptops (which can use both DDR3 and DDR3L, unlike Haswell, Broadwell and Skylake). DDR3 vs DDR3L is almost nothing.
Next, you're showing me LPDDR3 vs DDR3 and DDR3L. At what point did I mention LPDDR3? I have not spoken about it; I have simply stated that DDR3L was worse than DDR3 because its lower voltage meant it had to have worse timings/speed ratios, as the best ratios require more voltage. I also mentioned the fact that DDR4 is the same way right now, as by default they have MUCH worse timings than DDR3 and even DDR3L at the speeds they're sold at, and since laptop sticks by default are usually worse than desktop sticks, I am not putting it past them to consider that if desktops' high quality DDR4 RAM is "2133 @ 15-15-15-36" then laptops might get "1866-2000 @ 18-18-18-40" or something similar. Since DDR3L 1866 @ 10-10-10-27 already exists, nearly doubling the timings can make a difference in even some everyday programs.
Finally, at no point did I BEGIN to mention DDR4L. I said DDR4 is 1.2v and DDR3L is 1.35v and DDR3 is 1.5v, and that DDR4 needs 1.35v to get the good speed/timings to match what DDR3 can do.
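For reference, the timings being argued over convert to absolute first-word latency like this (CL divided by the memory clock, which is half the quoted data rate); the last line is the hypothetical worst-case laptop DDR4 bin from above, not a real part:

```python
# First-word CAS latency in nanoseconds: CL / memory clock, where the memory
# clock is half the quoted data rate (e.g. DDR3-1866 runs at 933 MHz).
def cas_ns(data_rate_mts, cl):
    return cl / (data_rate_mts / 2) * 1000

print(round(cas_ns(1866, 10), 1))  # DDR3L-1866 CL10 -> ~10.7 ns
print(round(cas_ns(2133, 15), 1))  # DDR4-2133 CL15  -> ~14.1 ns
print(round(cas_ns(2000, 18), 1))  # hypothetical laptop DDR4-2000 CL18 -> ~18.0 ns
```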
As a side note: I very rarely look at studies and theoretical things for real world effects. If someone can take a laptop and put two different sets of hardware in it for a comparison "all other things held equal" then I'm going to take that over every other study that shows "potentially" or "used a tablet" or anything that doesn't affect the current focus (which is lower-end to midrange and entry level gaming (like the AW13) laptops). -
I mention LPDDR3 because that's what most high-end Ultrabooks use. And that's also why I mentioned DDR4L. It's probably what most high-end Ultrabooks will use.
Again, I'm not saying there is no difference...just that most people (easily over 99%) who buy Ultrabooks....don't give a rat's butt about WinRar compression speeds.
Agreed: real-world tests are the only true metric, but studies are good for future predictions and teasing out minor details. -
There isn't anything wrong with either at this point in time, and it likely will not matter for another year, as we are at the transition stage when it doesn't matter too much and the gains or differences are minimal. We're transitioning from a mature DDR3 tech that has gone through speed and voltage upgrades to the new DDR4 tech at the beginning of its life cycle. In a couple of years DDR4 will become more relevant, especially once DDR3 support is removed from processors' integrated memory controllers.
-
I haven't seen any Ultrabooks with LPDDR3, or anything that suggests LPDDR3 in most spec sheets. And you mention "high end" here; I'm not talking about "high end", which is why the higher price for worse gear is a big deal (and DDR4L *WILL* be more expensive than DDR4, just like DDR3L was over DDR3; when I bought my computer I paid extra for DDR3L thinking it had benefits, only to realize later that Haswell *MUST* use DDR3L anyway and I basically just blew money). And even then, talking about "high end" hurts your case more. High end means expensive and quality, and DDR4L having even worse performance than the already-bad notebook DDR4 performance I predict, coupled with its even higher cost... it just isn't a good idea. I cannot see, right now, why it makes sense. Keep DDR3L (or LPDDR3, as you say some use) until DDR4 matures. Even the desktop chipsets have DDR4 and DDR3L support, so you can buy the mobo you want (even though DDR3L in itself kills performance, because it won't run at higher voltage and the chipset won't push much higher voltage, resulting in DDR4's "high speeds" being pushed to the masses who can't find good DDR3L sticks).
99% of people in general see little difference. But just because something isn't noticed doesn't mean it should exist/be that way.
Predictions and all that are great, but if predictions about something following an already existing trend clash with already existing data about that trend, then I'm skeptical. I'd need to see it directly break the established real-world trend before I accept it. And that's not a bad stance when the direction things are moving in leaves little to no choice about what to get. -
-
Nobody will experience this "mysterious" performance decrease by purchasing a 2016 laptop, lol.
"most PC users on the whole don't get a feel for anything and are quite willing to spend minutes letting chrome load or dealing with a laggy system while watching youtube videos and"
This is not caused by slightly higher than average RAM latency. I'm dying of laughter here, man. I get it: I like my machines to be in tip-top performance too, but you are just wrong here.
You haven't seen any Ultrabooks that use LPDDR3? That's like saying, you haven't seen a quad-core CPU in years: hard to believe you pay attention to hardware news.
Asus Zenbook UX305, Acer Aspire R 13, HP Spectre x360, Lenovo LaVie Z, HP EliteBook Folio 1020 G2, etc.
We have this same silly debate every time a new RAM standard comes out: it'll pass, just like it did with DDR2....and DDR3.
If your system is using DDR4L, it's not made for performance.
Of course....please, let us wait for data that compares 2016 systems with DDR4 and 2012 systems with DDR3. I'm dying to see which one is faster in WinRar....
-
My whole point is, paying more for less is not a good idea, and I don't understand why the STANDARD should be pushed on us when the chipset supports another standard. I could sit here giving points about when RAM is felt and when it isn't, and about who can feel it and who can't (and despite what you think, there are applications and instances that are RAM-bound such that a 1st-gen i3 and a 4th gen i7 perform the same with equivalent RAM) but the whole thing I've been saying is that DDR4 for this market is a terrible idea right now as-is because there are only downsides (like more cost and MUCH higher latencies) without any benefits to show for it. Which is a flat out waste in my eyes. You can't downgrade me and charge me more, that makes no sense. You upgrade and charge more, or you downgrade and charge less. -
All you "I want a desktop replacement" peeps should rejoice a little bit today:
http://arstechnica.com/gadgets/2015...clockable-skylake-cpu-for-laptops/?comments=1
Intel's releasing an unlocked mobile CPU on Skylake (mobile K-series, it seems). -
-
He sat trying to convince me that previous unlocked chips weren't any good because desktops had more options to overclock.
I was like
-
^Is that Swaggy P?!
-
-
Or, you can wallow in your pessimism, too, haha: seems appropriate. -
-
Basically it means "best binned, best for OCing" for the platform. A "K" simply means "unlocked multiplier", and for the mobile platform, is no different than the previous "XM" and "MX" chips. Except that this time it has a "H" in it, which means "soldered" (despite Intel's website saying "high graphics") and thus it is very likely to fall into a TDP limit like all current HQ chips do. If it does, then it means the unlocked multiplier is beyond worthless, as the voltage required to hit high clockspeeds means the TDP increases significantly enough that OCing and stressing is a vastly different story.
To put it in perspective: my 4800MQ at 3.5GHz running TS bench hits around 45-47W or so, with a nice undervolt. Setting the voltage back to default and raising the multiplier just 300MHz (which is necessary to use it without a BSOD) makes TS bench draw over 60W, and streaming up to 68W if I push it just a bit. Now try to get that chip to hit 4-4.2GHz and see what happens if you run something that eats up wattage. Even if it gets locked to 57W instead of 47W, it's likely going to downclock heavily under any kind of stress.
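The jump from the mid-40s to high-60s in wattage lines up with the usual first-order model for dynamic CPU power, P ≈ C·V²·f: a higher multiplier needs disproportionately more voltage, and power rises with the square of it. A rough sketch that scales the undervolted baseline above; the voltages are illustrative guesses, not measured values:

```python
# First-order dynamic power model: P ~ C * V^2 * f. Scale a measured baseline
# to guess power at a higher clock/voltage. Voltages here are illustrative only.
def scaled_power(p_base_w, v_base, f_base_ghz, v_new, f_new_ghz):
    return p_base_w * (v_new / v_base) ** 2 * (f_new_ghz / f_base_ghz)

# Baseline: ~46 W at 3.5 GHz with an undervolt (assume ~0.95 V effective).
print(round(scaled_power(46, 0.95, 3.5, 1.05, 3.8)))  # stock-ish voltage at 3.8 GHz -> ~61 W
print(round(scaled_power(46, 0.95, 3.5, 1.15, 4.2)))  # a 4.2 GHz OC -> ~81 W, way past a 57 W cap
```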
What Octiceps and I are saying however is that people are going gaga over the announcement of a mobile "K" chip, when they have existed for no less than the last 5 years already. It's like having a line of cars that comes with a GPS in their top of the line model, and they're excited that the 2015 model has a GPS when they've been using them since 2010. It's nothing new, nothing revolutionary, nothing to be really excited about (especially since the Clevos are using desktop CPUs already anyway) -
Mobile Extreme Edition has existed since Core 2 Extreme X7800 (released 7/07) so it's been 8 years
-
Well, OK, if what you're saying is accurate, this might be a step back. In Skylake, there is no XM part, if this leak is correct; only K-series.
In fact, there is a higher numbered part and it's not unlocked:
i7-6820HK (the only mobile unlocked chip)
i7-6920HQ
Hmmm...well, if Skylake's desktop results are anything to go by (where the i7-5775C beats the i7-6700K in gaming results...which may be a result of the 128MB eDRAM cache, as some speculate), then, for strictly gaming purposes, the i7-6920HQ may be the stronger chip (unless your game is being bottlenecked at a presumably 4C/8T 3.5GHz chip). -
Well, Intel hasn't released any Skylake chips other than the 6600K and 6700K so far, according to the Intel ARK website. When the rest come out, we'll see.
As for Broadwell beating the 6700K, it might very well be RAM-specific environments due to DDR4 being worse out of the box. The way to do proper testing would be to throw Haswell, Broadwell and Skylake into systems that are otherwise equal. Same storage drive, RAM clocked to the same speeds at the same latencies, etc. But I haven't seen any benchmarks doing that. As much as we had the discussion about the negligible effect of RAM before, there is that one article showing that the difference between 1600MHz CL8 and 1600MHz CL11 was up to a few FPS in some games. With DDR4's loose timings, the spread could be bigger. Especially in games that aren't in a CPU bottleneck state, where the IPC from Skylake would help it.
In other words, we need a complete-control scenario, where the only difference is the CPU/motherboard. -
tilleroftheearth Wisdom listens quietly...
This has already been done. Skylake's dominance (too strong a word?) is obvious. As is the i7-5775C Broadwell's, in many benchmarks (including high-performance GPU setups...) and many tests.
Skylake doesn't need to be proven anymore by online reviewers. Users need to evaluate them in their workloads to measure and appreciate the difference themselves.
See:
http://www.tweaktown.com/reviews/72...700k-cpu-z170-chipset-gt530-review/index.html
See:
http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed
What the above specific reviews and many other similar ones indicate is that the time to upgrade for almost anyone is past due. At least at stock speeds. If they need the most performance possible, OC'ing is an option for the older systems, yes. But staying with an old platform that, OC'd, matches or even slightly exceeds the latest available processors doesn't take into account the rest of the capabilities the new platforms offer. The whole is greater than the sum of the parts and Skylake, even with these initial offerings, is a solid platform with greatly enhanced capabilities over almost anything before it - even with no OC'ing added into the mix (but especially with the proper RAM installed).
As you can see, the tests you wish for have been done already.
Now, all that is left to consider is whether keeping the old system around for another few months/years is the better way to go in the long term.
-
Don't get me wrong; it's obvious Skylake's IPC etc is better. But for the times when Broadwell is winning? The RAM might very well be a factor. It is what it is.
I want full apples-to-apples tests. All chips locked to 4.2GHz, all memory locked to 2133MHz or 2400MHz at the SAME TIMINGS (I know that there's DDR4 RAM from Kingston that out-of-the-box can use XMP to hit 2400MHz 12-12-13-35, so DDR3 can do that too to be the same speed/timings), same tests run, same motherboard for the 4790K and 5775C as well. All RAM dual channel. Same GPU and driver installs (literally the same GPU; I want it swapped between the machines) and if it's an nVidia card, I want a custom vBIOS where the GPU voltage is constant and clockspeeds remain fully constant and don't fluctuate under load so we don't get any oddities in the tests. I expect as-consistent-as-possible tests to remove every single other variable EVER excepting the CPUs. Then we'll see what the real difference between them is. That's how you do "Apples to Apples" comparisons. -
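A minimal sketch of what that kind of single-variable test matrix could look like on paper; every name and value here is a hypothetical placeholder, and the harness simply refuses to compare configs that differ in anything other than the CPU platform (and the RAM type it forces):

```python
# Sketch of an "apples to apples" matrix: everything identical except the CPU
# platform under test. All values are hypothetical placeholders, not real data.
CONTROLLED = ["ram_timings", "gpu", "gpu_vbios", "storage", "driver"]

configs = [
    {"cpu": "i7-4790K", "ram": "2x8GB DDR3-2400", "ram_timings": "12-12-13-35",
     "gpu": "same physical card", "gpu_vbios": "fixed clocks/voltage",
     "storage": "same SSD image", "driver": "same driver build"},
    {"cpu": "i7-5775C", "ram": "2x8GB DDR3-2400", "ram_timings": "12-12-13-35",
     "gpu": "same physical card", "gpu_vbios": "fixed clocks/voltage",
     "storage": "same SSD image", "driver": "same driver build"},
    {"cpu": "i7-6700K", "ram": "2x8GB DDR4-2400", "ram_timings": "12-12-13-35",
     "gpu": "same physical card", "gpu_vbios": "fixed clocks/voltage",
     "storage": "same SSD image", "driver": "same driver build"},
]

baseline = configs[0]
for cfg in configs[1:]:
    for field in CONTROLLED:
        # RAM *type* is dictated by the platform; speed and timings still have to match.
        assert cfg[field] == baseline[field], f"uncontrolled variable: {field}"
print("matrix is apples to apples (CPU/platform is the only free variable)")
```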
tilleroftheearth Wisdom listens quietly...
Oh, okay. The tests you want don't correlate to the real-world tests that matter to me, and that is why they are effectively meaningless (to me, again).
As I've indicated (maybe without enough emphasis), the whole is greater than the sum of its parts, and I test whole platforms vs. my previous platforms to see any real productivity gains before I drop real $$$$ to upgrade the workstations I'm responsible for.
What you seem to want is something different/synthetic, and that doesn't get me on board enough to rely on results based on arbitrary parameters that may help or hurt different platforms at different stages and under different loads.
I really think the performance envelope of the currently released Skylake processors has been thoroughly mapped vs. old tech in the many (over a dozen) reviews I've read on them so far. And they have answered your questions, albeit in a not-so-blunt way.
Later versions of DDR 'X' RAM are always better performing than the previous versions at the right clock speeds. Latency cannot be seen as a single spec - it is a calculated number that depends on many other specs that make it up. Basically, the total absolute time taken is what is important - not the 'same timings' number different DRAM can exhibit on a spec sheet. If this were not so, we would still be using DDR RAM or worse.
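That point can be made concrete: absolute latency is CL divided by the actual clock, so a higher-clocked stick with numerically "worse" timings can land at the same or better absolute latency while also delivering more bandwidth. Generic, illustrative JEDEC-ish numbers, not specific parts:

```python
# Absolute CAS latency in ns = CL / (data rate / 2); bandwidth grows with data rate.
def cas_ns(data_rate_mts, cl):
    return cl / (data_rate_mts / 2) * 1000

print(round(cas_ns(1600, 9), 1))   # DDR3-1600 CL9  -> ~11.3 ns
print(round(cas_ns(2400, 14), 1))  # DDR4-2400 CL14 -> ~11.7 ns, with ~50% more bandwidth
print(round(cas_ns(2133, 15), 1))  # DDR4-2133 CL15 -> ~14.1 ns (the early JEDEC kits)
```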
Likewise, when you lock down other parameters, you are affecting the internal workings and synergies of the cpu and matching chipset. This makes no sense to me.
It is akin to saying two cars are fast. But the fastest one will be determined if we cripple them both to some unknown degree (but not equally).
I know it goes against most people's common sense on this forum to compare dissimilar systems. But I don't care about or measure what difference it makes in synthetic or gaming (which to me is the same thing) benchmark 'scores'. I measure productivity, and productivity doesn't care what tools are used. The results speak for themselves (always).
We don't need to agree on any of this - I'm just giving the reasoning behind my viewpoint.
Let's just say that I'm satisfied with the tests I've read on Skylake and you're not. Cool.
-
That's what I said though. I want direct tests to prove what Skylake's IPC improvements are.
If Skylake's IPC improvements are discovered and documented directly, then returning systems to normal/standard configs and seeing Skylake fall behind or pull heavily ahead means that there are factors other than just the CPU affecting things.
For example: If with equal GPU and RAM configurations (leaving only the CPU's IPC at fault) Skylake pulls ahead and falls behind in varying tests (with consistency) then that means something in the chipset or the cache is at fault. If it does not, but relaxing the apples to apples environment does, then it means that likely RAM is the cause, which means it'd be beneficial for users to fine-tune their RAM as much as possible.
Understand? I work by process of elimination, and the end result will provide the best scenario for the real world. If Skylake is falling behind somehow in any test, then that means there is a bottleneck somewhere in the system. If too many variables are different across the systems, then you can't find the bottleneck. Finding bottlenecks is the key to removing bottlenecks. You can't solve a problem without knowing the cause, no?