Hello all!
I have been scraping together the cash, and I am in the market for a new laptop. I would like to keep saving for a bit of time, so I would like to know of any rumored hardware releases from today to the end of 2015.
The areas I am most curious about are the following: Broadwell implementation, impending changes to M.2, NVIDIA's mobile GPUs, and USB Type-C adoption.
Will there be a decent amount of laptops with true Broadwell i7 processors (i.e. not ULV) by 2015's final day?
I jumped the gun when mSATA was announced, and I insisted upon my brother buying the greatest and most expensive mSATA drive on the market for his new laptop. Shortly afterward came M.2's PCI-e compatibility, which circumvented SATA III's bottleneck. Are there any planned storage and speed enhancements that would justify holding off on buying a laptop with a dedicated PCI-e x4 M.2 bay? I read somewhere that current Haswell motherboards can support at most one PCI-e x4 bus, but I have yet to truly research that.
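To put numbers on that bottleneck, here is a quick back-of-the-envelope comparison using the published interface ceilings (spec figures, not real-world drive speeds):

```python
# Back-of-the-envelope interface ceilings (spec numbers, not measured drive speeds).
SATA3_GBPS = 6.0          # SATA III line rate in Gb/s
SATA3_ENCODING = 8 / 10   # 8b/10b encoding overhead
PCIE3_LANE_MBPS = 985     # usable PCIe 3.0 throughput per lane, ~MB/s (128b/130b)

sata3_mbps = SATA3_GBPS * SATA3_ENCODING * 1000 / 8   # Gb/s -> usable MB/s
pcie3_x4_mbps = PCIE3_LANE_MBPS * 4

print(f"SATA III ceiling:    ~{sata3_mbps:.0f} MB/s")   # ~600 MB/s
print(f"PCIe 3.0 x4 ceiling: ~{pcie3_x4_mbps} MB/s")    # ~3940 MB/s
```

That roughly 6x gap in interface ceiling is why a PCIe x4 M.2 bay matters even though mSATA and SATA-mode M.2 drives top out at the same ~600 MB/s wall.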
Generally, will NVIDIA be greatly updating their offering? Also, as I never looked into mobile graphics cards before last week, I have no idea how NVIDIA schedules hardware releases. Would someone mind explaining their release plan?
Finally, apart from Apple and Google, will there be many quality computers that adopt USB Type-C? I would like to ensure that I am not left behind in the update to that standard due to the fact that it fixes one of the most irritating "first world" issues (i.e. fumbling about to plug a cable into something). In all seriousness, though, the upgrade in speed and power is rather desirable; to have a mobile phone that supports Type-C but a laptop stuck with USB 3.0 would be quite as irksome as having an HD+ laptop display, a FHD television, and a qHD phone.
Thank you for your time and help,
Tristan D.
EDIT: Will Skylake actually be released, and will the fact that it is so heavily based on Broadwell allow for more than ridiculous ULV processors to be released during its first few months?
-
tilleroftheearth Wisdom listens quietly...
Everything is based on everything else... but Skylake based on Broadwell? No.
Keep saving...
Wait until the Black Friday sales or even New Year day sales (it seems you have the patience and no pressing need to get a computer now...).
What we have now will either be cheaper or completely outclassed by the new platforms available or at least introduced then.
Buying Broadwell today is highly recommended - if you need a system right now.
Otherwise? Skylake is the platform to be saving for now. -
Starlight5 Yes, I'm a cat. What else is there to say, really?
Crackow, wait and see. If early Skylake models meet your expectations upon release, you'll buy one; if not, you'll be able to buy Broadwell cheaper. I recently purchased a Haswell-based notebook at half price and feel great about it.
-
I suppose it would have been a good recommendation to say either go with some outgoing Haswell board, stick with SATA3, etc., since the improvements with Broadwell are pretty much insignificant once you get towards sustained loads - the power-saving functionality is the same, the motherboards are generally the same, and so on. For similarly clocked boards, it's also doubtful that Skylake will actually improve significantly on Broadwell - there are going to be ceilings blown on desktop, but for the lower-threshold power draw, the differences might turn out to be very small in practice. There will likely be a certain amount of improvement for sustained loads, up to the point where the same TDP limit kicks in on, say, 15 or 25W kits. But the idle level and low burn will be pretty much identical, just as the sustained loads will be (of course, this is a kind of qualified guess).
But anyway - for some reason the Haswell setups don't seem to actually fall in price all that much, and the laptop kits are generally becoming better and better as production costs for materials get lower. Same with the contents of the chassis - things have happened with the quality of wires, ribbon-cable placement, and so on - it doesn't seem to pay off for manufacturers to save pennies on bad-quality connectors when warranty returns pile up within a year, etc. There are also certain firmware and hardware improvements to some bus functionality from the second release of Haswell that carry over. So that alone would be a reason to go with Broadwell.
So it might be an idea to jump on a cheaper but newer kit if you're buying one now. And then wait with the huge purchase until pci-e based storage and graphics cards take off, and then deliberately pick a laptop chassis where you actually can change the components freely, when that turns up on the market. Which.. admittedly... might be a while.
..or what do you think, tiller - are we looking at a Clevo shell with open pci-e slots for graphics, and an emerging market for mobile pci-e peripheral cards next year? -
I'm currently in a similar space: buying soon, but reading to see what changes might be imminent vs. longer term.
It seems to be worth waiting for Skylake, given that it should be out the door very soon. With regards to storage, PCIe isn't as huge a deal as it's made out to be (my opinion though, not claiming fact here), but the other interesting development in storage is the emergence of NVMe. It seems that manufacturers are still figuring it out, so the gains haven't yet been substantial. I'm not sure that either (in the storage area) is worth delaying substantially for. I can't seem to find much news at all about an upcoming NVIDIA refresh. There's some talk about a 990M, but it's likely to be out of my price range (and probably yours too if you're scraping together cash). USB Type-C is on my wishlist, but I haven't read as much chatter as I would expect. In principle it's very exciting, but I have a feeling we'll be waiting a number of months or a year before the really interesting uses, such as charging your laptop or connecting a monitor with one cable, become commonplace.
One thing to keep in mind about the new USB standard is that there are two separate pieces: USB 3.1 (the speed increase) and USB Type-C (the new connector). If one or the other is more important to you, make sure you get that particular one, rather than assuming the two are the same thing. -
I will definitely wait. I am not usually prone to making such hasty judgement calls, and I have not needed to buy a new laptop since roughly seven years ago. My old one is sputtering to a halt, but it still works. My phone is fine enough for most of what I do anyway. I just have never really been up to date with technology, for I could never afford to be. Now that I almost have enough money to afford a really decent computer, I was getting swept away in my excitement.
On a separate note, has Intel been performing poorly lately? I keep reading about their apparent desire to keep mobile products from being upgradeable and about poor results from mobile processors. Is there a lack of decent competition that is allowing them to rest on their laurels and barely upgrade their products? -
tilleroftheearth Wisdom listens quietly...
With regards to Intel holding back: yes, no doubt they are (because they can). But poor results from mobile platform options? I don't see it. Each and every upgrade in the last half dozen years has shown improvements all around. CPU performance is not the be-all and end-all for a mobile platform (if it is, then you need a desktop-based platform like a Clevo or a Eurocom Panther setup).
See:
http://forum.notebookreview.com/thr...nd-also-double-pcie-m-2.778269/#post-10042596
The cumulative upgrades in performance and features are easily seen (to me) from a generation or two's difference on an otherwise identically set up system. And those 5% to 10% gains (on the low end, and up to 30% to 40% on the high end, depending on the workload tested) really add up after a very few generations.
Not to mention the increased battery life, cooler and quieter running systems and the directly linked dependability/reliability and availability of those systems because they do run cooler - even at max performance workloads.
Sure, if there were a circa-2004 AMD breathing down Intel's neck today, we would see what performance computing really meant in 2015. But just because we are not seeing leaps and bounds per platform iteration doesn't mean the performance increases are anything to scoff at either.
It may seem I am contradicting myself (and in the same post too...) by saying the improvements each generation are significant and also saying that a balanced ('maxed') out platform today will be sufficient for the next half to full decade to come (for basic tasks, of course). But there is no contradiction. A well balanced platform today can comfortably attack and conquer almost any task (except for video editing... if that is your full time job) today and for a few years from now. As time goes by though, even if it isn't useful to you anymore (because you could afford a new system then...) it will still be useful to others for those same basic (and maybe not so basic) tasks for a very long time.
Why? Because a properly balanced system today has the fewest bottlenecks of one component holding back another than at any other time in computer history.
The 5-10 years I'm estimating may even expand to 15-20 years for true basic use. -
NVMe is a tech that will need to mature with time. It's new and has its inherent advantages, but I don't know that any of the current-gen SSDs would really benefit, and they still need to figure out how to properly cool an M.2 PCIe drive anyhow, because those suckers get hot and throttle with any sustained heavy load applied.
-
tilleroftheearth Wisdom listens quietly...
NVMe is PCIe (effectively)...
See:
https://en.wikipedia.org/wiki/NVM_Express
Specifically, it is the next step after SATA/SAS/FC.
An SSD can be connected through PCIe and not be NVMe compliant - but an NVMe compliant SSD needs a PCIe connection. -
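As a rough illustration of that distinction on Linux (a sketch only; it assumes the standard sysfs layout, and the helper name is made up): NVMe drives register their own controller class, while a PCIe-attached SSD speaking AHCI shows up through the ordinary ATA/SCSI block layer instead.

```python
# Sketch: NVMe controllers appear under /sys/class/nvme on Linux.
# A PCIe SSD that is NOT NVMe-compliant (AHCI over PCIe) will not show up here.
from pathlib import Path

def list_nvme_controllers(sysfs_root="/sys/class/nvme"):
    """Return NVMe controller names like ['nvme0'], or [] if none exist
    (or if this isn't a Linux system with sysfs mounted)."""
    root = Path(sysfs_root)
    if not root.is_dir():
        return []
    return sorted(entry.name for entry in root.iterdir())

print(list_nvme_controllers())
```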
While NVMe does require PCIe, it is important to remember the distinction (one is a bus standard, the other is a storage protocol that runs over it). The focus of each is slightly different as well.
No matter how you consider it, I agree with HTWingNut that the biggest problem seems to be cooling these drives. One of the nice things about SSDs in general was that they ran so much cooler than mechanical drives, but newer drives seem to be running quite hot. -
tilleroftheearth Wisdom listens quietly...
I think my post above made that distinction clear? The focus is the same: connect a storage subsystem to the cpu...
Newer drives do run much warmer than SSD's of 3/4/5 years ago - but they finally bring the promise of an SSD's performance to fruition - along with higher power usage and much higher performing controllers too.
That is why M.2 SSD's have throttling issues (especially with properly implemented PCIe connections) - their performance doesn't materialize out of thin air; more power=more performance (and heat). -
The original poster mentioned specifically about lamenting purchasing mSATA vs. M.2, so I thought they might appreciate understanding the subtle differences between the PCIe and NVMe. I'll certainly drop it here though because I realize I'm veering away from the original question.
Back to the original question: I'm glad that you're waiting for a bit. That's what I'm doing with Skylake so close (rumors say September for mobile SKUs). Maybe by then we'll have gotten an update on whether NVIDIA will be doing a Maxwell refresh. It's been nearly a year since the 900 series came out, so it's quite possible we'll hear something soon. A refresh won't be as big as a new architecture, so I don't anticipate personally waiting that out.
Your subsequent question as to whether Intel is resting on its laurels is a good one. Unfortunately, we'll never know if it was physics or just a lack of competition. I think tilleroftheearth is very correct in stating that while 5-10% doesn't seem like much, it does add up quite a bit over time. Given that your current computer is seven years old, you're in for quite a treat when you do settle on the new one! -
Hmm, this is interesting. I know about these new standards but haven't done much research - so you can get USB 3.1 and USB Type-C separately? I thought they were the same thing. I'll have to do some research.
On another note, I'm also waiting to hear about 3K/4K 17-inch screens, aside from the hardware mentioned in the threads above. However long I've been waiting, I doubt I'll wait past the new year. -
USB Type-C is just a type of connector. USB 3.1 is the standard for data I/O. USB 3.1 just will likely use Type-C primarily.
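To make the split concrete, here is a small spec-sheet calculation (the encoding overheads are from the published USB specs; either data rate can, in principle, sit behind either connector shape):

```python
# Spec-sheet ceilings for the USB data rates, independent of connector shape.
def usable_mbps(line_rate_gbps, encoded_bits, payload_bits):
    """Line rate minus encoding overhead, converted to MB/s."""
    return line_rate_gbps * (payload_bits / encoded_bits) * 1000 / 8

usb30   = usable_mbps(5, 10, 8)      # USB 3.0 / 3.1 Gen 1 uses 8b/10b encoding
usb31g2 = usable_mbps(10, 132, 128)  # USB 3.1 Gen 2 uses 128b/132b encoding

print(f"USB 3.0:       ~{usb30:.0f} MB/s")    # ~500 MB/s
print(f"USB 3.1 Gen 2: ~{usb31g2:.0f} MB/s")  # ~1212 MB/s
```

So a Type-C port wired to a USB 3.0 controller gets you the new plug but none of the new speed, which is exactly why the two shouldn't be conflated when shopping.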
And I don't know why, for the love of your favorite ethereal being, anyone would want a 3k or 4k LCD on a laptop. -
I know it's a choice, but personally, after using 4K LCDs from 5" handhelds up to 15" laptops, I can say it really is a gimmick - on laptops especially. They draw more power and tend to be glossy touch-only, but most importantly, scaling is horrid in Windows. If you're going for 4K, you will likely have to run it at 1080p anyhow so you can see anything. So why take on the added cost and power consumption, not to mention the lack of matte non-touch options?
-
Starlight5 Yes, I'm a cat. What else is there to say, really?
HTWingNut, add GPU performance hit to the list.
-
Re. power draw, see this, for example: http://www.tomshardware.com/reviews/p320h-ssd-pci-express,3344-10.html
I.. believe the reason why the power draw is generally so high, is that the pci-e standard interfaces they use specify 4x pci-e as "up to 25w" while there's actual traffic on the bus. Which.. would likely be pretty much all the time. But I don't think it's technically impossible to design pci-e interfaces for asynchronous reads and writes that would drop the power consumption fairly low, without actually losing all that much transfer speed. Or, it seems likely to me at least that we'll get "consumer" type ssds for pci-e interfaces eventually that have much lower power draw than the ones available at the moment. -
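The lane-downshift idea above can be sketched like this (illustrative only: the per-lane figure is the usable PCIe 3.0 throughput from the spec, and the helper is hypothetical, not any real controller's logic):

```python
# Illustrative sketch: pick the narrowest link width that still covers the
# current throughput demand, so idle/light loads could run on fewer lanes.
PCIE3_LANE_MBPS = 985  # usable PCIe 3.0 throughput per lane, ~MB/s

def lanes_needed(target_mbps, widths=(1, 2, 4, 8)):
    """Smallest supported link width whose ceiling still covers target_mbps."""
    for width in widths:
        if width * PCIE3_LANE_MBPS >= target_mbps:
            return width
    return widths[-1]

# A burst at 3500 MB/s needs the full x4 link, but a sustained 900 MB/s
# stream could in principle run on a single lane:
print(lanes_needed(3500))  # 4
print(lanes_needed(900))   # 1
```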
tilleroftheearth Wisdom listens quietly...
Yeah, the bus interface specs make a difference, but more power usually equals more performance and vice versa.
We'll see lower power usage if/when new tech is introduced (usually; a smaller node, 3D nand, more efficient processor and/or DRAM).
The 'up to' is not the fault of the PCIe specs. It is inherent in the base hardware available today (as seen with the Intel 750, with its ~22W+ power draw and high airflow requirements for proper/highest operation). -
Right. But the actual "dram" reads and writes on the flash-ram hardware are as cheap as they always were. ..and I read somewhere that the interfaces have alternative modes with lower power drain, at "only" up to some.. 2Gb/s on sustained loads, for example. That would be... 10w, on two pci-e lanes at peak? Where you don't actually need to go that high either. So it at least seems unlikely that you couldn't design a pci-e bus interface that makes use of up to 8 lanes for short amounts of time, and then drops back to two lanes without losing response times or queue depth. Or that you couldn't design an interface that would use the l2 states through normal power-saving settings, and get something like.. maybe 1ms extra wake-up for the secondary lanes, etc?
...Are we absolutely sure that there aren't any pci-e ssds like that on the market already? -
tilleroftheearth Wisdom listens quietly...
There may be low power and/or super efficient PCIe SSD's available now, but I have not seen them and frankly, would not want them.
The power states you're describing remind me of mobile Arrandale platforms with the H55 chipset (if I'm remembering correctly)... those early platforms with great power saving features (circa 2010) were great for sipping power, but they were also plagued with the poorest SSD response times ever.
I certainly do not want to go back to anything like that. Especially with a 4x PCIe lane 'performance' SSD.
I don't think there is any way around it with current tech. More power, more performance. More efficiency, much lower performance.
I am sure Intel learned their lesson with those early chipsets and they won't likely go back to that model soon. Neither would any other manufacturer want to be crippled by such a setup either.
Especially for a high-performance part like a PCIe x4 SSD.
While the 1ms latencies you suggest seem low, they quickly added up to make the system (back then) perform worse than a HDD in those Arrandale platforms. Good idea, but already tried, I think. -
1ms wake up is not going to make anything slower than HDDs. Normal HDDs have at least 10ms+ seek time.
The H55 power management and later SSD DevSlp, however, can get to 50ms and way beyond. -
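To put rough numbers on the wake-up argument (ballpark latencies only, and assuming the worst case where every serial read pays the full wake penalty):

```python
# Worst-case effect of an added wake-up delay on serial 4K random reads.
# Base latencies are ballpark assumptions: ~0.1 ms per SSD read,
# ~10 ms average seek+rotate per consumer HDD read.
def iops(latency_ms):
    """Serial random reads per second at the given per-read latency."""
    return 1000 / latency_ms

ssd          = iops(0.1)         # ~10000 IOPS: ballpark SSD
ssd_1ms_wake = iops(0.1 + 1.0)   # ~909 IOPS: much slower, but still well above an HDD
ssd_50ms     = iops(0.1 + 50.0)  # ~20 IOPS: a DevSlp-scale delay on every read loses to an HDD
hdd          = iops(10.0)        # ~100 IOPS: ballpark HDD

print(f"{ssd:.0f} / {ssd_1ms_wake:.0f} / {ssd_50ms:.0f} / {hdd:.0f}")
```

Which supports both points here: a 1ms wake-up alone can't drop an SSD below HDD territory, but the 50ms-plus stalls of early power-management schemes certainly could.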
http://pcisig.com/sites/default/fil..._Substates_with_CLKREQ_31_May_2013_Rev10a.pdf
Here we are.. I remembered wrong. There's something called L1 substates, L1.1 and L1.2, at least in the specification, where exit should be measured in a very few µs (and not much else, since an SSD has no mechanical parts and so on - and while it shouldn't interfere with other requests, which is what you run into with an HDD, you can't idle it regularly without knowing that every read will be delayed..). So maybe the L1.2 state could be useful in driver/controller software for an SSD, in a way that it wouldn't be for a graphics card or an HDD..?
Upcoming Hardware
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Crackow, Jul 14, 2015.