Looks like those reports of a supposed GTX '990M' weren't complete nonsense after all.
Nvidia GeForce GTX 980 (990M) for notebooks may be based on the desktop GTX 980
-
But the article points back to this thread as its source
-
Repost
-
thegreatsquare: Is MXM dead? -
They don't just list the speculation thread as a source, they put "Own" right above it. That TDP range sounds ridiculous though... 200W TDP?!?!? Guess we won't need to use our heaters in the winter anymore
-
All speculation. More speculation. I don't doubt there will be a new "flagship" Maxwell GPU, but it won't be a mobile 200W card and it will be MXM. Making a flagship card solder-only is dumb. Besides, Clevo and Alienware build their own MXM cards, so what's to keep them from making an MXM one?
-
It'll probably be GM204 with more cores (up to 2048), giving us about 30% more performance than the 980M.
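Rough math, if those specs pan out: the 980M has 1536 shaders, so a full-GM204 2048-shader part would be 2048 / 1536 ≈ 1.33x on paper, i.e. roughly 30% more at similar clocks, before TDP and memory differences eat into it.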
It would be interesting to see a hybrid card with HBM. -
Alienware is back in the MXM game? Am I missing something? Also, they don't build them; they contract someone to make them. MSI builds its own modules, and most vendors contract them as well. As far as I know, Clevo doesn't build its modules on its own but relies on someone else (ASUS/Pegatron), though I'm not very certain.
-
Maybe the R2's will be. Perhaps they learned from their mistakes. Doubt it, though.
-
more cores for video editing and 3d work. yummy
-
++msi
-
What nonsense. Remember how the Maxwell 860M was forced to be solder-only...? You think they won't do it again? The ODMs are at the mercy of nVidia, especially with AMD being out of the game for so long... They will do what nVidia says. Of course, if you need proof of this, ask @Prema about the system BIOS overclock block he discovered.
-
I'm in touch with Prema quite a bit. I know a lot of the crap that goes on, between him and a couple of other contacts. But to have the 970M and 980M on MXM and then a soldered-only 990M? Whaaa!? Then again, I wouldn't put it past Nvidia these days. That would just be icing on the cake. You want a 990M? You gotta buy a new laptop. No MXM for you!
-
Those idiots selling 990M pre-orders for 1500 EUR are going to be in one hell of a mix-up if it's soldered, lol.
-
Alienware is no longer manufacturing MXM cards.
-
All current high-end laptops would need an all-new motherboard design. I don't see that going over well with the OEMs that offer those laptops.
-
Clevo is launching their new products at the end of this month with the 970M & 980M, so we know it must be MXM. Unless they plan to launch an additional, never-before-seen lineup, it would be suicide for the company to invest in obsolete hardware/processes.
Board meeting: "Hey guys, I have a great idea. Let's bankrupt ourselves. Sounds cool, right?"
I imagine NVIDIA will slowly transition into another type after 2016 (e.g. MXM 3.0c / 3.1 or something). -
Yeah, I just don't see it happening. The 860M thing was so they could exhaust their Kepler chips.
-
The days of unsoldered chips didn't end with nVidia and Intel at the same time for no reason.
-
Alienware builds nothing.
Compal and Wistron build the Alienware models. -
Ah, I thought that the other thread about the new chip was dead. @Ethrem might as well lock this one since it's basically a repost.
-
thegreatsquare, there isn't only one path for GPU upgrades, and there are a lot of MXM slots to fill with upgrades.
I doubt Nvidia would leave money on the table; they will provide SLI/single MXM upgrades.
Nvidia can come up with MXM 985Ms that SLI faster than a single 990M.
-
985M in SLI with a 17-inch 4K screen, a Skylake CPU, and some PCIe 3.0 SSDs? Sold!
-
Phase, first: 4K wouldn't be fun to feed for intense games (low FPS), at least until 2016+. Use DSR with the internal 1080p screen for now for the best gaming experience.
18.4", it's bigger
PCIe lanes in Skylake are divided up weirdly: the chipset effectively gets only 4 lanes' worth of bandwidth direct to the CPU, and its other 16+ lanes share that link into the CPU. Maximum shared throughput is about 3.5 GB/s (down from ~4 GB/s theoretical), and that's from only one device; more devices feeding that link will incur more overhead loss.
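For the numbers (a rough sketch, assuming the chipset link is DMI 3.0, which is roughly a PCIe 3.0 x4 equivalent): 4 lanes × 8 GT/s × 128/130 ≈ 3.94 GB/s raw each way, which works out to roughly 3.5 GB/s usable after protocol overhead, and everything hanging off the chipset (storage, USB, LAN) shares that one link.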
First gen Skylake will be fun, even better in some respects, but not fully realized.
Unless it is, but we won't know until it ships
-
The new chip is meant to fill the gap for the most powerful cross-platform engine... UE4... It is optimized based on the last frame... SLI won't ever be officially supported, and everything else will be an extremely buggy hack...
-
Don't forget the 62 FPS cap built into UE4.
UE4: It seemed like a good idea at the time...
Genius.
Does the UE4 engine support SLI?
https://answers.unrealengine.com/questions/21746/does-the-ue4-engine-support-sli.html
No SLI support with Unreal Engine 4
https://forums.geforce.com/default/topic/812392/no-sli-support-with-unreal-engine-4/ -
This quote doesn't make any sense:
Don't a lot of games rely on data from previous frames, and why can't that info be shared between cards? -
No, actually, it's how AFR works. Tech that relies on previous-frame frame buffer data needs to know what every single frame did in order to work. The frame buffer on GPU 1 is not the same as on GPU 2 because they render different frames, so tech like SMAA's temporal filter and MFAA can't be activated, as they'd be "guessing" what to work from on every frame. nVidia made TXAA work in SLI though, so I don't know why MFAA won't. Silly nVidia.
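A toy sketch of the problem (just illustrative Python with made-up names, not how any driver actually does it):

# AFR: the two GPUs take turns rendering whole frames.
def gpu_for_frame(n):
    return n % 2  # even frames on GPU 0, odd frames on GPU 1

# A temporal technique working on frame N wants the frame buffer of frame N-1.
def previous_frame_is_on_other_gpu(n):
    return gpu_for_frame(n) != gpu_for_frame(n - 1)

print(all(previous_frame_is_on_other_gpu(n) for n in range(1, 11)))  # True: every single frame

Every frame would need the other card's frame buffer shipped over before it could even start, which is exactly what the bridge can't keep up with.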
Also, the frame buffer data is too much to transfer over the SLI bridge. Octiceps can explain more about it. Know how AMD's XDMA tech on the R9 290/290X/390/390X/Fury/Fury X can do CrossFireX without a CrossFire connector? That's what needs to happen for frame buffer data to be transferable quickly enough; otherwise PCIe communication is also too slow. nVidia cards aren't designed for it; AMD cards theoretically can do it. Also, doing this method of rendering (which would be SFR) would kill the vRAM access bandwidth improvements of multi-GPU configurations. Maybe that's why HBM is becoming a huge thing right now, coinciding with DX12/Vulkan.
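Back-of-envelope: a 32-bit 1080p frame buffer is 1920 × 1080 × 4 ≈ 8.3 MB, so at 60 FPS that's about 0.5 GB/s, and a 4K buffer pushes it to roughly 2 GB/s; the classic SLI bridge is usually quoted at around 1 GB/s, so you can see why it doesn't fit (those bridge figures are approximate, mind).
-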
I'm so gutted that UE4 has these limitations. Especially the FPS cap.
-
When did UE4 get a 62fps cap? I am dead certain UT4's alpha ran at ~120fps for me when I tried it last year
-
It's probably a simple .ini file change. Many UE3 games on PC also defaulted to a 62 FPS cap. Hardcoded numbers are usually 30 and 60.
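For reference, the usual UE3 keys and the UE4 equivalents look something like this, though the exact section names and defaults can differ per game and engine version:

; UE3 (*Engine.ini; section name varies per game)
bSmoothFrameRate=TRUE
MinSmoothedFrameRate=22
MaxSmoothedFrameRate=62

; UE4 ([/Script/Engine.Engine] in Engine.ini; there's also a t.MaxFPS console variable for hard caps)
bSmoothFrameRate=true
SmoothedFrameRateRange=(LowerBound=(Type=Inclusive,Value=22),UpperBound=(Type=Exclusive,Value=62))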
-
I hope you're right. I don't know for certain. I want the new Doom at more than 60 FPS, otherwise it's going to spoil our reunion.
-
Yeah, SLI was a neat trick, but I would take a single powerful GPU over it 999 out of 1000 purchases.
-
That's why you buy that powerful GPU...
THEN SLI. -
The benefits of SLI are still there, you just have to look harder to find them lately.
-
Oh believe me, I know that better than most. I've long since stopped telling people to SLI anything under a 980Ti. If you can buy a second 970, sell your current one and buy a 980Ti instead. If you're aiming to buy two 980s, get one 980Ti and save up for a second one. I'm seeing more and more games that barely take advantage of SLI or give low returns from it, and all the UE4 games coming out soon will never support SLI either.
-
I'm having a hard time seeing how, in this day and age, a very popular engine fails to support multi-GPU rendering. Ever since the moment I slotted in two 3dfx Voodoo 2 12MB cards, I've been a strong believer in multi-GPU rendering. I hope Nvidia gets creative and finds something other than AFR; that's what the DirectX 12 multi-GPU rendering features suggest anyway (such as stacked VRAM and both GPUs acting more as one), if I'm not mistaken.
-
Exactly, AFR is the problem. It's a hacky inefficient way of doing multi-GPU from the '90s. SLI/CrossFire worked fine for some years after release when all games were relatively simple DX9 affairs but it just doesn't mesh with modern engines anymore.
-
I wonder what Nvidia's stance is on that. Having the developers of one of the most popular engines openly state on their website that one has to forgo SLI to enjoy Unreal Engine 4 has to create some buzz in the boardrooms at Nvidia HQ, no?
-
Yeah, that is curious, isn't it, especially considering Epic is one of Nvidia's biggest partners.
-
Why are there two threads speculating about NVIDIA GPUs this year?
-
Unlike Epic, NotebookReview supports SLI technology for its threads, perhaps?
-
I thought about merging them but the OP contains the information on the rumored GPU while you have to go to page 70 or so of the other to get it.
nVidia is likely not bothered at all by the loss of SLI technology in UE4 by the way. It forces people to buy a more expensive video card rather than buying two cheaper cards. It's a win for nVidia. -
To me it feels like another nail in the coffin for SLI as a legitimate setup or upgrade path. It's a loss for Nvidia as far as I'm concerned, since it's another reason for me not to SLI my 980Ti. I know I'm not the only ex-SLI fan that feels this way. SLI feels more illegitimate than ever since Maxwell released. Frame pacing, scaling issues and a lack of support from both Nvidia and devs are at the heart of the problem.
-
SLI and CF are buggy disasters these days. I miss the Voodoo days. But still, it is more likely that a gamer will spend the extra money for a more powerful video card, especially considering how much horsepower that engine actually uses when it is pushed, or they'll have to put up with poor performance. Time will tell. They managed to get Ether One working at mostly 30FPS on the PS4 using UE4 so it may all be about the optimization.
-
Yeah, I agree. I was playing MGSV Ground Zeroes the other day. It worked fine, except for some inexplicable drops to about 45-50 FPS that were fairly common. My utilization wasn't going up; I wasn't maxing the CPU or GPU... it was sitting at around 70% GPU utilization, maybe 80% sometimes.
Solution? Overclock the GPUs. Boom, instant 60 FPS flat. Apparently that game won't use over 70-80% of each GPU, so OCing them granted extra power to the game. Silly. UE4, Unity 5, id Tech 5, no like SLI. Who knows how Source 2 is going to like it. Maxwell has features that are disabled in SLI, and it works worse in SLI without custom vBIOSes.
I've been telling everyone. If you plan to SLI or Crossfire any card, buy a 980Ti first. The only person I let off the hook is someone who owns a 980 and wants to buy another 980, because selling a 980 to buy a 980Ti is kind of... I can understand not wanting to do that. -
Why? I totally understand it. @TBoneSan did this, he got rid of 980 SLI for a 980 Ti. GM200 is exactly 50% bigger than GM204 any way you slice it, so at identical clocks it is 50% faster. A 50% tangible improvement is more than can be said for SLI nowadays, what with the heat, power, microstutter, and awful driver/game support. Fewer headaches, and not to mention cheaper.
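For reference: the full GM200 in the Titan X has 3072 shaders, 96 ROPs, a 384-bit bus and 3 MB of L2 versus GM204's 2048 / 64 / 256-bit / 2 MB, so every ratio is 1.5x. The 980 Ti is a slightly cut GM200 at 2816 shaders, so in practice it lands a touch under that.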
-
Well, you understand it and I do too, but other people aren't so... understanding. They're more readily willing to dump a 970 for a 980Ti.
Either way, I make it clear that everyone should aim for a 980Ti rather than a 980 (overpriced) or a 970 (broken by design). I can recommend an R9 390 over the 970 easily, but it's difficult when someone wants to buy a ~$500 GPU. There's not enough extra power to justify a 980 or R9 390X for the cost, people don't wanna buy an R9 290X anymore, and the 980Ti may be slightly out of reach for them. But then I'll go "if you're buying this now planning to get another in the near future, hold off and save up" and that's a hard sell too.
Blah, I dislike the GPU market right now. I've been seeing some people have random issues with Fiji cards too; random low utilization.
And now a potentially soldered-only 990M that probably won't even have a notebook to go into.