Gigabyte and Aorus notebooks from Computex 2016:
Notably:
- new Gigabyte Aero 14: very thin, 14" 1440p screen, large battery (94.24Wh, 10 hours), for now with just GTX 965M/970M
- updated Aorus X7: 120 Hz 17" screen, expected to get Pascal GPU (GTX 1080M or 1080)
-
I doubt the full 1080 will be in that Aorus. Maybe, and I mean maybe, a 1080M. If they put a 1070M in that 14", it would be a nice little laptop, and performance at 1440p would be good.
-
They already dared to put a desktop GTX 980 in the previous version of that chassis (the Aorus X7 DT).
Not saying it's such a good idea, but they sure like to try.
Indeed, that's what I was thinking - with a GTX 1070M it would be quite the beast - 1.9 kg, 2 cm thick, 14" 1440p, 10 hours of battery, and about GTX 980 performance.
Shame, though, that there is no Thunderbolt 3 and no G-Sync, and they kept the damn Optimus (though that's kinda understandable here; one of their main selling points is super-long battery life for this market segment).
With G-Sync and a MUX it would be a much better chassis. -
We probably need Prema here. But does it affect gameplay or benchmark scores? I mean, it's using boost, so it'll spike anyway.
Fire Strike did pretty well; could you run Unigine? -
Have you tried forcing max performance via the NVIDIA Control Panel, by chance?
-
You really don't need to do that, and it just makes the GPUs run a lot hotter in the end.....
Although I am curious about this "wrecks XTU" with a BGA CPU, though.....
Side Note:
Well, it looks like the 1080N is kicking butt and taking names. Glad I waited. BGA, here I come!
Err..... Ummmm, I mean 1080N or 1080M SLI, hopefully.... here I come. Haha
http://www.3dmark.com/compare/3dm11/11336882/3dm11/11287728
http://www.3dmark.com/compare/fs/8718575/fs/8760023
http://www.3dmark.com/compare/fs/8725145/fs/6480443 -
I only asked for the sole purpose of seeing if it'd keep at least boost clocks, since he's getting random throttle even at stock. If it's a broken vBIOS though that of course won't help.
I think... you must have a fever there. *gives soup*. What do you mean by waited, though? You have 980M SLI already, no? -
You couldn't possibly betray our kind...
-
It's technically not throttling (per NVIDIA), but in reality it is.
Not sure where you have been, but all vBIOS files are broken out of the gate. Or set to Boost 2.0, 3.0, or whatever the next revision is going to be.
What does having a fever have to do with me wanting 1080s for my laptop? Or a new laptop with 1080s in it?
Side note: Pascal works a bit differently than what you are normally used to.... -
That's what I'm hoping to buy it with. Alternatively, the P640RE refresh or the MSI 640 refresh. It has to be a 14" 1070M for me. They'll slap a 1080M in if they can. I'm not dead set on one GPU, lol
Sent from my iPhone using Tapatalk -
It'll most likely be the 1070M in the Aorus. Still a very nice card! I'll be quite jealous of those with it.
However, I'd honestly prefer a 120Hz 1440p screen, that way I can dial it down to 1080p/120Hz if I wish. That is the sweet spot for gaming, imo.
-
Ionising_Radiation Δv = ve*ln(m0/m1)
I'd like to see where the 1060M stands - hopefully somewhere around the desktop 970. Then I can get decent framerates on some of my games...
-
These machines are too small for a 1080M; they would run hot. A 1070M will already run hot, but should be manageable.
-
Thanks so much for sharing this with us here
It's a bit disappointing that the 1080 mobile is really a 1070, given that NVIDIA/rumors said there weren't going to be different GPUs for desktop and laptop.
So far the test results match 1070 performance, not a full 1080.
Why couldn't this just be a 1070? Why fake us out by calling it a 1080?
There is nothing wrong with fitting a full 1070 in a laptop as the top end, for a chassis whose power delivery and cooling are designed for the 1070 rather than a full 1080.
Then offer the 1080 in a laptop that can actually run a full 1080.
I don't understand the need for this deception. -
I know this much, though I still wanted to cover all bases. Not like I'm getting one of them anytime soon, or that I could edit my own vBIOSes to any great effect.
Nothing? I wasn't talking about the 1080s. I was talking about the bolded and italicized "BGA here I come!" bit, where I somehow believe you are not thinking straight, as you're one of the last people I'd expected to hear that from. -
Ivy has one Clevo laptop with a 1080M on preorder, with an ETA of July 29th. Do any of you know which Clevo model this is? Here is a picture: http://pc-konsulten.se/wp-content/uploads/2016/05/IVY-17P-Front-1500x1500.jpg
The keyboard looks different than the current DM models. I really hope a refresh of the DM models will have a different keyboard than what they have right now, with a 1080. -
That is not a Clevo. That is an MSI.
-
Thank you. I thought I recognized the keyboard from somewhere else. I was confused, because Ivy has previously only sold Clevos.
-
How can there be deception when they haven't even announced anything yet?
Sent from a 128th Legion Stormtrooper 6P -
I've been saying the 1080M will be like a 1070 since the beginning. It's always like that - top-tier mobile performs like second-tier desktop. It should technically be better, though.
And by "like," I mean close to it - not exactly or identical. -
Isn't a Pre-Order for a 1080 laptop an announcement?
Isn't the measured performance of that advertised 1080 actually performing at the level of a 1070?
So a 1070 is being sold as a 1080, that's deception. -
Yet, it seems to be performing identically to a 1070 desktop GPU. Almost like they tuned it to do so.
They didn't need to use a 1080 as the base for that level of GPU performance.
They could have used a 1070, saved money - reducing the sale price - and ended up with the same performance. -
It was a pipe dream to expect desktop 1080 performance in a notebook. Isn't the "notebook 980" not quite a desktop 980?
Sorry, I said I was leaving this thread; you guys are truly insane - it's mind-boggling, the fact-devoid pontification.
Sent from a 128th Legion Stormtrooper 6P -
Sorry the facts didn't fit your pontification, and that you needed to pull in additional irrelevant splatter to cover your exit.
The point is indeed that they couldn't fit a full 1080 into that frame, so why did they say they did?
Why label it a 1080 when it's performing as a 1070 would?
That's my point, factually based, and clearly stated - I don't know how you thought your response was valid, it was completely off base.
Please don't leave, you might learn something
-
Did nvidia promise performance equal to a desktop 1080 in notebooks? It's a simple yes or no.
So nvidia basically said we're using desktop 1080 metal in notebooks? Where?
If you're talking about this: http://videocardz.com/60853/nvidia-to-offer-desktop-graphics-in-notebooks
"Our contacts say" just isn't good enough. We need direct quotes from nvidia.
Sent from a 128th Legion Stormtrooper 6P -
That's not a relevant fact is it?
Nvidia hasn't released any info on the Pascal mobile lineup.
The Ivy laptop says it's a 1080M, not a 1080; that would have been a good fact to bring up, as it suggests it's less than a 1080 - which it is, performing like a 1070.
I am merely pointing out that the 1080 cut down is performing like a stock 1070, so why the 1080-ish naming, as that is deceptive. If it's performing like a 1070, then use a 1070 as the base for the GPU and call it a 1070.
A 1070 would have been a better choice for a laptop frame with the power and cooling to fully utilize a 1070, but unable to fully utilize a 1080.
That's my point. Fit the part in there that fits, don't shoehorn in and then cut down a more expensive part to try to convince us it's better than it is. It's performing like a 1070, call it a 1070.
I was quite clear about that in my first post, re-read it. -
You're projecting your own logic about how they should release notebook-specific GPUs onto the scenario, which isn't realistic. They've always been separate SKUs with performance differences from the desktop SKUs. There's no reason to expect that to change.
Sent from a 128th Legion Stormtrooper 6P -
Yes there is a reason to expect a change, with the introduction of a full 980 Desktop GPU for mobile.
That's the factor that led to the rumors that NVIDIA might offer full desktop GPUs in place of the deceptively named 'm' versions.
I for one am tired of the use of the 'm' GPU's between desktop and laptop. It's always been a joke, calling a laptop GPU a 980m when it's really a 970-ish performer.
If they can't fit a desktop 1080 into laptops that had a 980m, so what. Fit a 1070 in there instead.
I don't want another 1080 "desktop mobile" class GPU, just call it a 1080.
Leave out the 1080Ms; we don't need the deceptive naming. I know it's a 1070 performer whether it's called a 1070 or a 1080M.
We need to reserve the 1080 name for the laptops that can actually provide the power and cooling required to fully realize its performance.
Give us full 1070s instead, and charge us their reduced price.
-
So you're really complaining about the naming scheme, even though the performance difference and which metal is used is still going to be the same? It seems extremely trivial.
Sent from a 128th Legion Stormtrooper 6P -
Hence the deception comments. Call it what it is, don't come up with a name that sounds like something it's not - and charge us more for it.
That's the problem, if it was just a name to make us feel better that would be one silly thing to do.
They don't do it to make us feel better, they do it to take more of our money.
So yes, it is a big deal. $599 - $379 = $220, about 58% more for a deceptive name and no more performance, and a $440 cost increase for an SLI setup.
That's a damn big deal
And, the worst part is, we get charged far more for the mobile board than the desktop board. Something like 50%-85% more depending on when you compute the difference.
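The markup arithmetic quoted above can be checked with a quick script (prices are the poster's own figures; the "60%" quoted elsewhere in the thread rounds up from about 58%):

```python
# Rough check of the mobile-vs-desktop price gap quoted above.
# Figures are the poster's own: $599 mobile part vs $379 desktop part.
mobile_price = 599
desktop_price = 379

premium = mobile_price - desktop_price        # absolute difference per card
premium_pct = premium / desktop_price * 100   # relative markup
sli_extra = 2 * premium                       # extra cost for a two-card SLI setup

print(f"premium: ${premium} ({premium_pct:.0f}% over the desktop part)")
print(f"SLI setup pays an extra ${sli_extra}")
# premium: $220 (58% over the desktop part)
# SLI setup pays an extra $440
```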
That's the cost of the deception, which we all end up paying. -
Come on folks why worry about it being called a 1080m (probably) and performing like a 1070?
There's a lot of marketing behind a new product and these things are not new.
If it performs like that it's ok in my book as long as they don't charge more than the 980m when it launched.
Now I hope I can fit it in my MSI GT60
-
If it's based on a 1080 part, I don't see how the 1080m won't cost more than a 980m.
If it's based on a 1070 part cost, it's going to be right below the same price.
The 1080 costs a lot more than a 980, and a 1070 costs just below it. Market pricing has dropped rapidly on the Maxwell parts and will continue to; I am talking about the pre-Pascal pricing for the 980. -
@johnksss has said it before. He is a bencher first. He does not care whether he is using BGA or LGA as long as he is scoring up top.
-
Technically, MXM GPUs cost just a little more than their desktop brethren (from a BOM perspective). It's the OEMs that charge more for these kinds of designs, because of their limited availability. NVIDIA wins by selling chips; it's a bit different than desktops. It's not fair, I know, and I don't agree with that pricing disparity either. Unfortunately we have an almost-monopoly situation...
-
That is more Mr. Fox and Papusan. Whatever parts I get in my hands, I'm benching. For me it's hardware points, which take a great deal of "hardware" to accumulate.
Side note:
As to the full 1070...... it would just perform like a 1060.
-
You guys also have to remember re-branding is extremely profitable. They'll never unleash Pascal's (or whatever generation's) full capabilities in the first release. Intentionally dialing back the performance of the 1080M and still keeping GDDR5 saves them TONS of money and earns them even more next year, when GDDR5X replaces it and they introduce HBM2.
It seems like next year we'll see pretty much all of Pascal unleashed. -
Well okay. Didn't know that, but it makes sense as benchmarks are what you love. Mind if I ask why switch from four Titan X to two 1080s? Better benching at 4K?
I. HIGHLY. Doubt. We're going to see HBM2 on mobile from nVidia. At least not until Volta. We ALL know they're going to milk us hard. -
@D2 Ultima
I switched from 4 980s to 4 Titan Xs, to 1 980 Ti, to 2 1080s. Why? Because it's a going-up process, for starters... And I guess you haven't heard yet, but NVIDIA has stopped supporting more than two cards in games. So having more than two cards would now be for benching and not gaming. It's why the new SLI bridges are now for only two cards. Also, I ran my Titan Xs as high as I could get them, so now they are outdated. Time to move on. Benching is a very, very, very expensive habit to be in, with no real gains except world recognition. In other words... a very expensive hobby.
Side note:
Example:
Buying a game is like 79 bucks, and I hear a bunch of people complaining about that. So when an upgrade costs 1300 dollars, people tend to play the price-per-watt card. Or the "I'm waiting till they come out with the GTX 2080" card. And when it finally comes around, they want to wait for the 3080. Pretty funny, really.
I'm not one of those in the example. I buy when I feel like it, not when I can afford to..... -
I'm with Johnksss. But if all GPUs go BGA, I'll just buy a Mac.
Screw BGA GPUs.
I've got an 18-core Xeon, so I don't care about meeting CPU needs elsewhere. -
I heard. I made a comment about this on your post on T|I if I remember correctly talking about how there isn't enough bandwidth but instead of XDMA-like design they just doubled the SLI bridge up and discarded 3-way and 4-way because they couldn't market NVLink to consumers.
I guess I mistakenly thought you were a bencher first, gamer second. Well, I can't claim to know all that much about you; I rarely see you post on either this forum or T|I. I should really stop making assumptions and just ask outright.
Yeah, this I never understood. It's one thing if you're fine with what you've got now, or even if new cards are known to be about a month away. But to see new things come out and then decide to barely hang on for another year and a half I've never understood.
This however, I would love to be in such a position for. Maybe one of these days. -
I already have a MacBook Pro i7, but will not be benching that any time soon.. lol. It might break on the first go-round.
And with the new BGA GPUs, they have more space to work with... I think at that point you might get your full-fledged GPU... eventually. Speculation, of course.
Side note:
This is not my only machine; I am not limited to using only one computer. I have at least 10, so there is always that. And I don't need to be overclocked to bill my clients or surf the internet. They can almost all play games just fine, should I have time to be doing that all day instead of working and paying bills.
That is correct. I never said I was a gamer first. Ever.
NVIDIA sells fewer cards if people only buy two instead of 4. And NVLink has nothing to do with the game makers porting games for only 1 to 2 cards. NVIDIA left it open that, should game makers port in support for more than two cards, NVIDIA would activate the link for it. Pascal and up only.
They could possibly be trying to shut it down using the newer drivers for older cards as well... Who knows. -
Ionising_Radiation Δv = ve*ln(m0/m1)
As a matter of fact, with respect to BGA GPUs - the MXM GPUs are BGA anyway. They're just on a separate card, plugged into the motherboard.
I just want an MXM 14" - imagine a 1070 (or M - I don't get Nvidia's new naming) stuffed into a 14" laptop, with 16 GB 1866 MHz DDR3L RAM and a 4920MX.
Oh, wait - DDR4 (which is pointless in the real world, except for power savings) and Skylake are out already? I meant 16 GB 2133 MHz DDR4 and a... 6820HK?
I miss the 'M' moniker for mobile CPUs. It meant rPGA, good overclocking and it meant mobile. Now we have this 'H' nonsense. Bah. Screw you Intel and Apple for making ultra-thin laptops completely locked down.
I will miss my W230SS' rPGA slot when I finally upgrade it (probably next year) to a 1160M notebook.
Sent from my HTC One_M8 using Tapatalk -
Did anyone else notice just how expensive this 1080 laptop is?
http://pc-konsulten.se/produkt/ivy-gaming-laptop-17p-config/
And, why does it already have a discount off the "list" price?
It's priced about $2550 USD discounted from $2900 USD.
That's in the price range of single 980 Desktop Mobile laptops, not single 980m laptops.
It seems the 1080m is more overpriced than the 980m at launch, before discount
-
Now I am even more confused. I don't think I'll ever understand you. But okay, to each their own. I don't need to understand to interact with you I guess.
Truthfully, most people stopped at two. There's a very small set of people who actually go above 2, and even then they'd stop at 3 if for the purpose of gaming more often than not. I don't think it's THAT many lost sales for them, but a promise of better SLI support for 2-way might make more sales.
As for NVLink: the reason the new engines and design techniques are supporting multi-GPU less and less is that the bandwidth between the cards is too low, as lots of techniques (especially VR tech) require data from the previously rendered frame. This breaks AFR, because the card doesn't have the last frame rendered, but the one before it. SMAA T2x doesn't work in multi-GPU for this reason, and TAA is extremely bad in multi-GPU as well. The solution is more bandwidth between the cards (for example, if you force SLI in Unreal Engine 4, which doesn't support SLI officially, it'll work, but at 1440p or above you usually notice negative scaling unless you're running the cards at PCIe 3.0 x16/x16, which requires a minimum of a $500 USD CPU; it also means tri-SLI is absolutely impossible as far as benefits go, since until Skylake-E with 48 PCIe CPU lanes launches, PCIe 3.0 x16/x16/x16 cannot happen). Couple that with games having less development time for PC (I've heard from a friend in the industry that most AAA multiplatform titles only start working on the PC port maybe 2-3 months before launch... and this is on games with 2-3 year development cycles) and you end up with a situation where making it work is not worth the $$ for them.
The reason I mentioned NVLink is that the fix would be to use the PCIe bus for bandwidth. XDMA is AMD's design to let the memory transfer data between the cards easily enough using the PCIe interface speed of 16 GB/s, without requiring a CrossFire bridge (Hawaii - R9 290/290X - and onward support it). Even if it doesn't hit that high, it's much better than the 1 GB/s of the default SLI bridge, and it'd fix the problem. But they can't make money off of it. NVLink's entire point is to improve GPU interconnectivity, but it requires (as far as I know) a proprietary connector, so adding it to the mainstream would split the market too sharply: if you bought an NVLink motherboard you'd only ever be able to use NVIDIA Pascal cards with it. So instead of designing the cards in a similar fashion to XDMA and fixing the bandwidth and stutter issues, they simply made a bridge that uses both SLI fingers and clocked it higher than existing bridges (400 MHz clock for the default, 650 MHz for the HB bridge, plus double the lanes of connection; top-level math comes up to 3.25 GB/s of bandwidth, not counting any other bandwidth-granting tricks). In doing so they cut loose 3-way and 4-way SLI and said they'd focus on 2-way alone. If they really wanted, we could be having BETTER 3-way and 4-way multi-GPU configurations right now. But they just don't.
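The top-level bridge math in the paragraph above can be written out explicitly. Note the baseline 1 GB/s figure and the clock ratios are the poster's numbers, not official NVIDIA specs:

```python
# Back-of-the-envelope HB-bridge bandwidth, per the figures quoted above.
# Assumption: bandwidth scales linearly with clock and lane count.
base_bandwidth_gbs = 1.0   # classic single SLI bridge, ~1 GB/s at 400 MHz
base_clock_mhz = 400
hb_clock_mhz = 650
hb_lane_factor = 2         # HB bridge uses both SLI fingers

hb_bandwidth_gbs = base_bandwidth_gbs * (hb_clock_mhz / base_clock_mhz) * hb_lane_factor
print(f"HB bridge: ~{hb_bandwidth_gbs:.2f} GB/s vs ~16 GB/s for PCIe 3.0 x16")
# HB bridge: ~3.25 GB/s vs ~16 GB/s for PCIe 3.0 x16
```

Which is why the post argues an XDMA-style approach over the PCIe bus would have been the bigger win.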
If they're going to lock it out for previous generations... that'd be something pretty devious.
Edit: Somebody tell hmscott that the US is cheaper than everywhere else, and such rip-off prices are normal after translating to USD. -
Ahh, I already bought the one with the 980 for the same price last week...
Part of me wishes I had waited, and the other part of me still wants to go the eGPU route. -
You won't ever really understand, because you only have a Clevo P370SM3 laptop....
Have you ever owned a high-end board that had the SLI chip onboard? If so, you would understand that that didn't work well either, and less than 1 percent of the world bought into it.
We are all still waiting on the "real" results for this HB bridge, to determine if it is in fact that much better than the normal bridge with two connectors per card, which is what I'm using now.
All this stuff looks good on paper and all, but that is not real-world testing. And until that gets done, it's all just speculation.
Done with that for now, since it is totally off topic.
Still hoping for normal 1080M/1080N SLI cards, though -
Firstly, what I was saying about bandwidth isn't coming from my personal laptop's usage.
Second, I don't know what an "SLI chip onboard" is. Unless you mean a PLX chip, in which case I know of it and its issues but have never used one. If not, an explanation would help.
I don't mind being told I'm wrong, but I'd like a how. Even if you've got to send me a PM if you don't want to derail this thread, that'd be much appreciated. In the future I can change what I say. -
Taxes are included in that price. I think it's pretty cheap for the first batch, especially here in Sweden.
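A quick sanity check of the tax-free price, starting from the ~$2550 discounted figure quoted earlier in the thread and assuming Sweden's standard 25% VAT rate (an assumption on my part; the listing isn't itemized):

```python
# Estimate the ex-VAT price of the Ivy 17P listing.
# $2550 is the discounted USD price quoted earlier in the thread;
# the 25% VAT rate is an assumption (Sweden's standard rate).
price_incl_vat_usd = 2550
vat_rate = 0.25

price_excl_vat_usd = price_incl_vat_usd / (1 + vat_rate)
print(f"ex-VAT: ~${price_excl_vat_usd:.0f}")
# ex-VAT: ~$2040
```

That lands right around the "roughly $2000" figure mentioned for the price without taxes.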
Edit: The price in USD without taxes would be roughly $2000. -
No, I mean two PLX chips, like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157327&Tpk=X79 Extreme11 -
WHAT KIND OF ABOMINATION IS THAT SORCERICAL COWNESS
Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's
Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.