Nobody knows. GTX 960 perhaps?
-
-
Also, to anyone wondering what these will cost: AMD doesn't price stuff like the Titans. I'd expect the 4GB 390X to cost $600-700 at launch, and the 8GB model $100-200 extra. BTW, the difference between the Sapphire Tri-X 4GB and 8GB cards is $50 at Newegg, even after the rebate on the 4GB version (I tried to compare Vapor-X cards but couldn't find the 4GB on sale).
-
-
-
-
-
Edit: Never mind, Watch Dogs doesn't use 6GB at 1440p. I didn't read all of it, but I think you didn't mention that in Mantle, and soon DX12, it is possible to double the VRAM in Crossfire. Also, you said that memory bandwidth does double, which normally is not the case (again, it becomes possible in Mantle and DX12).
-
I also stated that with second monitors plugged in, and playing windowed mode etc, your vRAM increases.
So... while 8GB is a large stretch, above 4GB is DEFINITELY useful at 1080p and 1440p. Just that you can't put 5GB or 6GB on a 256-bit card. The next step up is 8GB.
But it is not a waste. It's perfectly useful. And more importantly, when more games like Shadow of Mordor come out, people will be happy they have over 4GB. Especially the aforementioned users with two or more screens.
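To put a number on the 256-bit point: GDDR5 hangs off 32-bit channels, so a 256-bit card has eight of them, and with the same capacity on every channel the sensible configurations jump straight from 4GB to 8GB. A minimal sketch of that arithmetic (the 0.5GB/1GB-per-channel figures are assumptions based on the common configurations of the time, not anything from this leak):

```python
# Back-of-the-envelope: symmetric memory capacities on a 256-bit GDDR5 bus.
# Assumes every 32-bit channel carries the same capacity (0.5GB or 1GB),
# which is how consumer cards were normally configured.
BUS_WIDTH_BITS = 256
CHANNEL_WIDTH_BITS = 32
channels = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS          # 8 channels

for gb_per_channel in (0.5, 1.0):
    print(f"{gb_per_channel}GB per channel x {channels} channels = "
          f"{gb_per_channel * channels:.0f}GB total")
# -> 4GB or 8GB; 5GB or 6GB would need an uneven split across channels
```
-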
With the new low-level APIs VRAM stacking in multi-GPU is possible, yes, but requires app-specific SFR implementation by the dev. It is not a one-size-fits-all app-agnostic solution like AFR currently is, where the game engine doesn't even know whether it is running on single or multiple GPUs and everything is handled by the driver.
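A conceptual sketch of that difference, as a Python toy model (none of these classes or calls are a real driver or D3D12/Mantle API; the names are invented purely to show where VRAM gets duplicated versus partitioned):

```python
# Toy model only -- no real driver or API calls; "resources" are byte counts.

class GPU:
    def __init__(self, name):
        self.name = name
        self.vram_used_mb = 0

    def upload(self, size_mb):
        self.vram_used_mb += size_mb

def afr(gpus, scene_mb):
    """Driver-managed AFR: whole frames alternate between GPUs, and the game
    engine doesn't know how many GPUs exist, so every GPU needs its own full
    copy of the scene -> VRAM does not stack."""
    for gpu in gpus:
        gpu.upload(scene_mb)              # duplicated on each GPU

def explicit_sfr(gpus, scene_mb):
    """Explicit SFR under a low-level API: the engine splits each frame and
    can partition resources per GPU -> VRAM can effectively stack, but only
    if the developer implements the split for their specific renderer."""
    share = scene_mb / len(gpus)          # idealised even partition
    for gpu in gpus:
        gpu.upload(share)

a, b = GPU("gpu0"), GPU("gpu1")
afr([a, b], scene_mb=3000)
print(a.vram_used_mb, b.vram_used_mb)     # 3000 3000 -> no stacking

c, d = GPU("gpu0"), GPU("gpu1")
explicit_sfr([c, d], scene_mb=3000)
print(c.vram_used_mb, d.vram_used_mb)     # 1500.0 1500.0 -> stacked pool
```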
-
-
The reason SFR (split frame rendering) doesn't double bandwidth, if you've ever used it in the past, is that you're never going to get a perfect 50-50 division of workload between the GPUs on each frame; it is highly variable depending on the frame contents. This is also why SFR doesn't scale and perform as well as AFR in terms of raw FPS, but it has much better frame time performance (there is no microstutter). For VR, it is probably going to be SFR or one GPU per eye, depending on the app, since AFR has too much latency.
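A toy calculation of why that matters for scaling versus frame pacing (the per-half render times below are made-up numbers, only there to show that a screen-space split rarely lands on an even 50-50):

```python
# Made-up costs (ms) to render the top and bottom halves of four frames.
# In a real scene the split is rarely even: sky is cheap, geometry is not.
frames = [(4.0, 12.0), (5.0, 11.0), (9.0, 9.5), (3.0, 14.0)]

# SFR: both GPUs work on the same frame, which is done when the slower half
# finishes, so scaling depends entirely on how balanced the split is.
for top, bottom in frames:
    single = top + bottom                 # one GPU renders both halves
    sfr = max(top, bottom)                # two GPUs, one half each
    print(f"single {single:5.1f} ms | SFR {sfr:5.1f} ms | "
          f"scaling x{single / sfr:.2f}")
# Scaling falls well short of 2x whenever the halves are uneven, but frames
# still come out strictly in sequence -- no AFR-style microstutter.
```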
-
That's debatable and very subjective
-
And the tech for one GPU per eye exists already; it happens in nVidia's shutter glasses tech for stereoscopic 3D. If you run a game that does NOT support SLI but SLI is enabled in the machine (yes, even Unity engine games like Paranautical Activity, etc.), then the second GPU renders the second set of frames for the 3D image, resulting in essentially zero performance drop but a proper 3D implementation in the game.
So as far as VR goes, SLI and CrossfireX are going to be the go-to environments for users.
My problem is that SFR kills the memory bandwidth improvements that SLI grants, which means that each card is going to essentially have to fend for itself. But this could be why HBM is coming into play, even though I've pretty much determined that ~160GB/s is about all you need before increasing memory bandwidth yields little to no performance gains. But who knows? Maybe the degree of hurt that DX12 can put on GPUs (you know, in like 3-4 years, when people use it to actually make games) will make games benefit from extra memory bandwidth more. But we're far from DX12 benefits, regardless of what people think. By that time, we should worry about Pascal/Volta, and by then HBM should be a definite thing, and mobile gaming may or may not be a steaming pile of regurgitated meowmix by that time. -
-
Listen. Firstly. GTX Titan Black = 336GB/s. R9 290X = 320GB/s.
Titan Black's memory bandwidth wins, but it did poorly at 4K. Memory bandwidth has little to do with performance at that resolution; GPU power and architecture are the key. It's why the GTX 980 with its measly 224GB/s of memory bandwidth (257GB/s or so with Maxwell's compression optimizations) TROUNCES the Titan Black at 4K and generally anything above 1080p, even though at 1080p the Titan Black is generally better (unless you OC a 980 to within an inch of its life).
Secondly, SLI does not benefit memory write speed, but access speed is nearly doubled because each card deals with the memory in its own frame buffer (though the contents are copied across each card's buffer). It is a definite improvement, one which AMD's CrossfireX improves on due to the lower overhead of using the PCI/e interface instead of a bridge (R9 290 and 290X cards only). It is an improvement that will be killed with Split Frame Rendering, because even though each card would have less in its buffer to actually work with, it is producing each frame on its own and its access times etc are thus the same as a single GPU configuration. We've already established that amount of information in vRAM =/= how much bandwidth is needed, or how much of the memory controller is loaded.
Anyway, as I said: if shutter glasses tech can do "one GPU per eye", then you can bet it can work for VR with SLI, and AMD could make it work for CrossfireX. -
-
Second, the 370 is a 110W card. The same as the R7 260X, which happens to be what the R9 M280X was based on.
So this is probably the R9 M380 or something. A midrange, 60-75W card which will probably match or beat GTX 880M.
I'd say it's a great improvement on the efficiency scale.
-
More shots fired: more leaks.
All I can say is GODDAMN THIS THING CAN'T GET HERE SOON ENOUGH -
HBM at 640GB/s huh... so double that of R9 290X, but less than SLI Titan Blacks, and much less than SLI Titan X.
Things could be interesting... especially with Crossfire's memory bandwidth improvements.
NOW IF ONLY AMD WOULD ALLOW IT TO WORK IN WINDOWED MODE
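For reference, the usual bandwidth arithmetic behind those numbers (the 4096-bit width is from the leak; the per-pin data rates are simply back-solved to hit the quoted figures, not confirmed specs):

```python
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin)
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(512, 5.0))     # R9 290X: 512-bit GDDR5 @ 5 Gbps -> 320 GB/s
print(bandwidth_gb_s(384, 7.0))     # Titan Black / Titan X: 384-bit @ 7 Gbps -> 336 GB/s
print(bandwidth_gb_s(4096, 1.25))   # leaked 390X: 4096-bit HBM @ 1.25 Gbps -> 640 GB/s
```
-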
Dumb question. Is that 6+8 pin OR 8+8 pin, or 6+8 AND 8+8 pin?
-
Looks like either or.
-
8+8 is a scary thought though, because it means TDP is over 300W.
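The reasoning behind that, using the standard PCIe power budgets (75W from the slot, 75W per 6-pin connector, 150W per 8-pin):

```python
# Maximum board power by connector configuration, per the PCIe power limits.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

configs = {
    "6+8 pin": SLOT_W + SIX_PIN_W + EIGHT_PIN_W,    # 300W ceiling
    "8+8 pin": SLOT_W + EIGHT_PIN_W + EIGHT_PIN_W,  # 375W ceiling
}
for name, watts in configs.items():
    print(f"{name}: up to {watts}W")
```
-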
This means that 336GB/s on a Titan X would likely be closer to 380GB/s or thereabouts with the optimizations.
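That's the same ratio as the 980 example earlier in the thread. A minimal sketch of the arithmetic, treating the compression gain as a flat multiplier (the ~1.15x factor is inferred from the 224 -> ~257GB/s figures quoted above, not a measured number):

```python
# Effective bandwidth if delta colour compression is modelled as a flat gain.
compression_factor = 257 / 224        # ~1.15x, inferred from the 980 figures

for card, raw_gb_s in (("GTX 980", 224), ("Titan X", 336)):
    effective = raw_gb_s * compression_factor
    print(f"{card}: {raw_gb_s} GB/s raw -> ~{effective:.0f} GB/s effective")
# Titan X lands around 385 GB/s, in line with the ~380 GB/s guess above.
```
-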
If it delivers on performance I don't give the furry crack of a rat's behind what the TDP is.
Would actually like to see a Lightning version with 6+8+8 pins -
-
Too bad so sad I guess lol
If this 390X can deliver stock 970 SLI performance that's good enough -
4096-bit lol
And here we are, mobile cards at 256-bit.
-
If 300W is the default operating TDP of those things, then AMD hasn't actually improved much at all, because if nVidia's Titan X is even 50W less, then imagine if they made a 300W TDP card to begin with. HBM or not, AMD would get roasted over an open fire. They need to work some optimizations into GCN.
Anyway, we'll see. If 8 + 8 is required only for heavy overclocking-type cards, and 6 + 8 is for normal + OC headroom, things might be better.
But we'll see.
-
Well if the leaks are true and it's 50% faster than 290X at the same TDP, that's still a tremendous improvement in power efficiency.
Also, I wouldn't buy into Maxwell's efficiency too much; I mean, both of my 970s become 210W+ cards when running BF4, LOL (granted, heavily overclocked with a +20 mV voltage bump, but you get the idea).
Edit: Yeah, OK, probably not the best argument there, but for a 15% performance improvement, TDP went up by almost 50% (145W to 210W+). I guess that's what I was trying to highlight.
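Putting rough numbers on both points (all figures are the ones quoted in this thread, taken at face value):

```python
# perf/W gain = (relative performance) / (relative power)
def perf_per_watt_gain(perf_ratio, power_ratio):
    return perf_ratio / power_ratio

# Leaked 390X vs 290X: 1.5x the performance at the same TDP.
print(perf_per_watt_gain(1.5, 1.0))        # 1.5x better perf/W

# Heavily overclocked 970: ~15% more performance for 145W -> 210W.
print(perf_per_watt_gain(1.15, 210 / 145)) # ~0.79x, i.e. worse perf/W than stock
```
-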
-
*sighs* It's threads like these that give me a sense of relief in knowing that tech enthusiasts still thrive somewhere... <3
Thank you for giving me hope <3 -
-
-
On that note, when I was playing Wolfenstein: The New Order I actually had to downclock my card by about 30MHz to get full stability because of the voltage/boost table crossover thing. This was a different issue though, and is caused by voltage not ramping up fast enough with core clock. Sometimes it would take a good minute for voltage to ramp up, so the core would be running at 1380 MHz on 1.015V.
So yeah, that's why I really couldn't care less for Maxwell's faux efficiency.
-
In that regard you could simply force problem games to keep your GPU at max speeds and be done with it; I do that for games that'll downclock one card or the other now and then, to keep stuttering nonexistent.
-
-
Do you know that the R9 290X TDP is 290W? With 2816 shaders. 10W more and you get 4096 shaders? How is that not an improvement? It's a massive improvement on the efficiency scale: 45% more shaders for 3% more TDP. And the shaders are also clocked 50MHz higher.
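Spelling out that arithmetic with the numbers above (official 290X specs, leaked 390X specs taken at face value):

```python
# Shader count and TDP as quoted above (290X official, 390X leaked).
r9_290x = {"shaders": 2816, "tdp_w": 290}
r9_390x = {"shaders": 4096, "tdp_w": 300}

shader_gain = r9_390x["shaders"] / r9_290x["shaders"] - 1   # ~0.45 -> +45%
tdp_gain    = r9_390x["tdp_w"]   / r9_290x["tdp_w"]   - 1   # ~0.03 -> +3%

print(f"+{shader_gain:.0%} shaders for +{tdp_gain:.0%} TDP")
```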
Nvidia may have more room for another GPU with GM200, but both previous Titan cards were 250W. GTX 780 Ti was also 250W but with more cores (+2 SMX), because they disabled the power-hungry FP64 cores.
Looking at the GM200 die vs GK110, it's only slightly bigger than GK110, meaning GM200 doesn't have many more cores than GK110's 2880. Titan X is at 3072 cores. The GM200 die can fit at most an SMM or two more than Titan X, around 3200 cores. That won't beat AMD by any big margins anyway.
R9 390X, and Titan X/980 Ti with an SMM or two more, is what we are stuck with at least for another year, until 16nm is here.
-
I thought 3072 was already the full GM200?
-
Could be, but at least nothing more substantial will come out from Nvidia until a new architecture is here anyway. Best case scenario:
Titan: 2560 cores
780 Ti: 2880 cores Full GK110
Titan X: 3072 cores Full GM200?
980 Ti: 3200 cores Full GM200?
You can see from this picture that the GM200 isn't much bigger than GK110.
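For the core counts in that list, the per-cluster sizes are the relevant constants: a Kepler SMX is 192 cores and a Maxwell SMM is 128. A quick sketch of how the listed figures map to cluster counts (the 980 Ti entry is the speculated number from the list above, not a confirmed spec):

```python
# Cores per shader cluster: Kepler SMX = 192, Maxwell SMM = 128.
chips = {
    "GK110 (780 Ti / Titan Black)": (2880, 192),
    "GM200 (Titan X)":              (3072, 128),
    "GM200 (speculated 980 Ti)":    (3200, 128),
}
for name, (cores, per_cluster) in chips.items():
    print(f"{name}: {cores} cores = {cores / per_cluster:.0f} clusters")
# 2880/192 = 15 SMX, 3072/128 = 24 SMM, 3200/128 = 25 SMM (one SMM more)
```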
-
Why would 980 Ti have more shaders than Titan X? Surely you don't think a halo card like Titan would feature a cut down GM200, unless there will be a second Maxwell Titan? It's more likely 980 Ti will have fewer shaders than Titan X.
How would Titan X and 980 Ti both be full GM200 if the latter has 1 SMM more? Full means nothing disabled.
-
Next, the Titan Black was 2880 shaders, just like the 780 Ti. The two Titan cards were not the same: the Titan had 1 shader cluster disabled, going from 2880 to 2688, just like the 670 had 1 shader cluster disabled (1536 --> 1344). The Titan Black and 780 Ti were both full GK110. IF the Titan X is not full GM200, then we might see something stronger indeed, but I dunno how much nVidia is willing to hold back this time around.
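The cluster arithmetic behind those figures (all Kepler, so 192 cores per SMX):

```python
# One disabled Kepler SMX removes 192 cores.
SMX_CORES = 192

full_gk110 = 2880                # 15 SMX: GTX 780 Ti, Titan Black
print(full_gk110 - SMX_CORES)    # 2688: original Titan, 1 SMX disabled

full_gk104 = 1536                # 8 SMX: GTX 680
print(full_gk104 - SMX_CORES)    # 1344: GTX 670, 1 SMX disabled
```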