The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.
← Previous page | Next page →

    R9 370 Details leaked - Finally a new architecture from AMD?

    Discussion in 'Gaming (Software and Graphics Cards)' started by Cloudfire, Mar 11, 2015.

  1. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Nobody knows. GTX 960 perhaps?
     
    BigDRim likes this.
  2. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    If you are going for a Crossfire setup it would be better to get the 8GB cards, but these cards are meant for 4K+ resolutions, so I'm not even sure if anything more than 4GB would be useful at 1440p.

    Also, to anyone wondering what these will cost: AMD doesn't price stuff like the Titans. I would expect the 4GB 390X to cost $600-700 at launch, and the 8GB model $100-200 extra. BTW, the difference between the Sapphire Tri-X 4GB and 8GB cards (tried to compare Vapor-X cards but couldn't find the 4GB on sale) is $50 at Newegg (even after the rebate on the 4GB version).
     
    Last edited: Mar 15, 2015
    Cloudfire likes this.
  3. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    That'd be sad. Mobile variant, assuming it is AMD's new flagship, would only be on the level of 965M/870M/780M which is last-gen performance and barely better than 7970M/8970M/M290X.
     
  4. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    It won't be a flagship. Trinidad will be used in the M380 and M380X, replacing the Bonaire-based M280X, yet it should still be faster than the M290X. M385 and M385X might be Tonga chips (hopefully with improved efficiency thanks to GloFo 28nm). As for the M390X, who knows, maybe they are preparing a full GM204 competitor based on the desktop 380.
     
  5. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Nope, mobile GPUs are limited to 256-bit unless they somehow get HBM. So either Trinidad or Fiji w/HBM (impossible since the desktop card has a stock WC). And Tonga mobile is already confirmed to be an overheating joke; even if GloFo 28nm improves thermals, performance is still well behind GM204.
     
    Cloudfire likes this.
  6. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Read my vRAM guide
     
  7. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Why are mobile GPUs limited to 256-bit? Trinidad is just not gonna be a flagship; it won't even outperform Tonga at stock clocks, and even at 850MHz Tonga is already faster than the 970M. If full Tonga, besides more cache and a 384-bit bus, also has 48 ROPs (we don't know, we haven't seen a full Tonga yet), it might even perform close to a 980M with efficiency improvements. But the flagship M390X, whenever it is ready, most likely won't be any of the GPUs we know, possibly Grenada (as long as it's not a Hawaii refresh), so there is no reason not to give it HBM.
     
  8. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    If the 8GB 390X is going to cost upwards of $800 then AMD won't be seeing a lot of buyers, unless Titan X is the only GM200 part that nVidia releases and it goes for $1349. I'm not going to lie when I say $800 is about the max I'm willing to pay for an 8GB 390X Lightning, and even then that's a bit steep for my liking.
     
  9. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Why? 4GB is enough for most things at 1440p; 4K+ is a different story.
     
  10. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    MXM space limitation, hence we've never seen any 384-bit mobile GPUs. Stacked DRAM is the way to go if you want bigger than 256-bit bus on mobile.

    Tonga is a 256-bit GPU and even at full strength only comparable with 970M. Rest of stuff is pure speculation at this point.
     
  11. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    The prices of 4GB cards would most likely go down once the 8GB variant launches (unless they decide to launch at the same time), so I doubt we would see an $800+ card from AMD. $700-750 is what I am expecting, but if the 8GB version is using second-gen HBM it could be a bit more expensive.
     
  12. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    That response is why you should read my guide.
     
    Cloudfire likes this.
  13. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Well here's to hoping an 8GB 390X Lightning goes for $700 or less :D
     
    Cloudfire likes this.
  14. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    You just listed a few rare cases (mainly Shadow of Mordor) where the VRAM usage is more than 4GB. Cases like Watch Dogs with Ultra Textures using 6GB at 1440p are rare, and that alone doesn't warrant the extra cost of the graphics cards for many people. But when it comes to Crossfire setups I would recommend getting the extra memory, just because the core is less of a bottleneck and VRAM size does become more important at higher resolutions.

    Edit: Never mind, Watch Dogs doesn't use 6GB at 1440p. I didn't read all of it, but I don't think you mentioned that in Mantle, and soon DX12, it is possible to double the VRAM in Crossfire. Also you said that memory bandwidth does double, which is not the case normally (again, it is possible in Mantle and DX12).
     
    Last edited: Mar 15, 2015
  15. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    I listed places at 1080p for one.

    I also stated that with second monitors plugged in, playing in windowed mode, etc., your vRAM usage increases.

    So... while 8GB is a large stretch, above 4GB is DEFINITELY useful at 1080p and 1440p. Just that you can't put 5GB or 6GB on a 256-bit card. The next step up is 8GB.

    But it is not a waste. It's perfectly useful. And more importantly, when more games like Shadow of Mordor come out, people will be happy they have over 4GB. Especially the aforementioned users with dual screens or more.
     
    Cloudfire likes this.
  16. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Mantle is just about dead now, Vulkan is essentially its successor.

    With the new low-level APIs VRAM stacking in multi-GPU is possible, yes, but requires app-specific SFR implementation by the dev. It is not a one-size-fits-all app-agnostic solution like AFR currently is, where the game engine doesn't even know whether it is running on single or multiple GPUs and everything is handled by the driver.

    Actually, in regular AFR like what we have now, effective memory bandwidth is multiplied. In SFR it is not.
     
    D2 Ultima likes this.
  17. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    Sure, Vulkan will be replacing Mantle, but there are still Mantle games coming out (Vulkan is just too recent and it will take some time until games with it are released), so there is a possibility for some of them to have SFR. Also, now that the capability is there, we will see more devs using SFR from now on.

    If you are talking about a single frame, no it isn't. Each GPU uses its memory bandwidth for its own purposes only, so while you get 2 frames from both GPUs and want to count that as double the memory bandwidth, it really isn't. In SFR each GPU only has to deal with a portion (half in the case of 2) of the frame, so it is "effectively" multiplied (because there is less traffic on each bus).
     
  18. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    I'm talking about AFR (alternate frame rendering), where GPU0 renders odd-numbered frames (f1, f3, f5, ...) and GPU1 renders even-numbered frames (f2, f4, f6, ...) or vice-versa. So for every two frames, each GPU is essentially doing half the work.

    The reason SFR (split frame rendering) doesn't double bandwidth, if you've ever used it in the past, is that you're never gonna get a perfect 50-50 division of workload on each GPU per frame. It is highly variable depending on the frame contents. This is also the reason that SFR doesn't scale and perform as well as AFR in terms of raw FPS, but it has much better frame time performance (there is no microstutter). For VR, it is probably going to be SFR or one GPU per eye, depending on the app, since AFR has too much latency.
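    For anyone who wants that scheduling spelled out, here's a minimal toy sketch of the difference (plain Python, purely illustrative, not any driver's actual scheduler): AFR hands out whole alternating frames, while SFR slices one frame across the GPUs, and the slices are equal in pixels but not necessarily equal in work, which is exactly why SFR scaling varies with frame content.

```python
# Toy model of AFR vs SFR work distribution across two GPUs.
# Purely illustrative -- not any vendor's actual scheduler.

NUM_GPUS = 2

def afr_gpu_for_frame(frame_index):
    """AFR: whole frames alternate between GPUs (0, 1, 0, 1, ...)."""
    return frame_index % NUM_GPUS

def sfr_slices(frame_rows, num_gpus=NUM_GPUS):
    """SFR: one frame is cut into horizontal slices, one slice per GPU.
    Equal rows per GPU does not mean equal rendering work -- an explosion
    covering the bottom half loads that GPU much harder."""
    slice_height = len(frame_rows) // num_gpus
    return [frame_rows[i * slice_height:(i + 1) * slice_height]
            for i in range(num_gpus)]

print([afr_gpu_for_frame(f) for f in range(6)])  # [0, 1, 0, 1, 0, 1]

frame = [["pixel"] * 8 for _ in range(4)]        # a tiny 4-row "frame"
print([len(s) for s in sfr_slices(frame)])       # [2, 2] rows per GPU
```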
     
    Last edited: Mar 15, 2015
  19. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    AFR still doesn't reduce the frame latency due to the time it takes for the bus to transfer data. And sure, SFR doesn't split bandwidth 50/50 (hence I said it multiplies and not doubles), thanks to explosions and other effects happening on only one part of the screen, but it still reduces frame latency. And while you don't get as good average FPS scaling as you would with AFR, you are still getting better minimums, no stuttering, no dropped frames and the like. Better frame times do improve perceived gameplay more than a better average framerate.
     
    Last edited: Mar 15, 2015
  20. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    That's debatable and very subjective
     
  21. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Gonna be honest with you right now, frame latency is not really a problem in AFR except at very low FPS counts; sub-40fps or so.

    And the tech for one GPU per eye exists already; it happens in nVidia's shutter glasses tech for Stereoscopic 3D. If you run a game that does NOT support SLI but SLI is enabled in the machine (yes, even Unity engine games like Paranautical Activity, etc.), then the second GPU renders the second set of frames for the 3D image, resulting in zero performance drop but a proper 3D implementation in the game.

    So as far as VR goes, SLI and CrossfireX are going to be the go-to environments for users.

    My problem is that SFR kills the memory bandwidth improvements that SLI grants, which means that each card is going to essentially have to fend for itself. But this could be why HBM is coming into play, even though I've pretty much determined that ~160GB/s is about all you need before increasing memory bandwidth yields little to no performance gains. But who knows? Maybe with the degree of hurt that DX12 can put on GPUs (you know, in like 3-4 years, when people actually use it to make games), games will benefit from extra memory bandwidth more. But we're far from DX12 benefits, regardless of what people think. By that time, we should worry about Pascal/Volta, and by then HBM should be a definite thing, and mobile gaming may or may not be a steaming pile of regurgitated meowmix.
     
  22. Link4

    Link4 Notebook Deity

    Reputations:
    551
    Messages:
    709
    Likes Received:
    168
    Trophy Points:
    56
    No, 160GB/s is not enough for anything above 1080p. There is a reason why 290X Crossfire is better for 4K than 980 SLI, and that's much better memory bandwidth, not just better Crossfire scaling. And what do you mean by SFR killing the memory bandwidth improvements of SLI? SLI doesn't even improve memory bandwidth. Stereo 3D is a different story, but for anything else AFR doesn't improve memory bandwidth at all; you just get more frames, so it reduces the intervals between frames but not the latency it takes for the GPU to receive data from the CPU, complete the frame, and then output the result. SFR does though, and with latency being an issue in VR headsets, the less there is the better. You don't want to turn your head and then notice the camera turning afterwards. Sure, there are other factors adding to the overall latency, but any improvement is a plus.
     
  23. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    You just spoke utter mess, LOL.

    Listen. Firstly. GTX Titan Black = 336GB/s. R9 290X = 320GB/s.
    Titan Black's memory bandwidth wins, but it did poorly at 4K. Memory bandwidth has little to do with that resolution and GPU power; architecture is the key. It's why the GTX 980 with its measly 224GB/s (257GB/s or so with Maxwell's optimizations) memory bandwidth TROUNCES the Titan Black at 4K and generally anything above 1080p, even though at 1080p the Titan Black is generally better (unless you OC a 980 to within an inch of its life).
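    (Quick sanity check on those figures, if anyone wants it: peak GDDR5 bandwidth is just bus width times effective data rate. The specs below are the commonly quoted ones, taken as assumptions, not measurements.)

```python
# Peak GDDR5 bandwidth in GB/s = bus_width_bits * effective_rate_Gbps / 8.
def gddr5_bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    return bus_width_bits * effective_rate_gbps / 8

print(gddr5_bandwidth_gbs(384, 7.0))  # GTX Titan Black: 336.0 GB/s
print(gddr5_bandwidth_gbs(512, 5.0))  # R9 290X:         320.0 GB/s
print(gddr5_bandwidth_gbs(256, 7.0))  # GTX 980:         224.0 GB/s (before compression)
```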

    Secondly, SLI does not benefit memory write speed, but access speed is nearly doubled because each card deals with the memory in its own frame buffer (though the contents are copied across each card's buffer). It is a definite improvement, one which AMD's CrossfireX improves on due to the lower overhead of using the PCIe interface instead of a bridge (R9 290 and 290X cards only). It is an improvement that will be killed with Split Frame Rendering, because even though each card would have less in its buffer to actually work with, it is producing each frame on its own, so its access times etc. are the same as a single-GPU configuration. We've already established that the amount of information in vRAM =/= how much bandwidth is needed, or how much of the memory controller is loaded.

    Anyway, I said, if shutter glass tech can do "one GPU per eye", then you can bet it can work for VR tech with SLI, and AMD could make it work for CrossfireX.
     
  24. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Just curious where you get this information from. Is it from personal experience/testing of multi-GPU configurations?
     
  25. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Remember it's the R9 370 we are talking about, not the 370X.

    Second, the 370 is a 110W card. The same as the R7 260X, which happens to be what the R9 M280X was based on.
    So this is probably the R9 M380 or something. A midrange, 60-75W card which will probably match or beat the GTX 880M.

    I'd say it's a great improvement on the efficiency scale.
     
    Link4 likes this.
  26. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Link4, Cloudfire and TomJGX like this.
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    HBM at 640GB/s huh... so double that of R9 290X, but less than SLI Titan Blacks, and much less than SLI Titan X.

    Things could be interesting... especially with Crossfire's memory bandwidth improvements.

    NOW IF ONLY AMD WOULD ALLOW IT TO WORK IN WINDOWED MODE
     
  28. Any_Key

    Any_Key Notebook Evangelist

    Reputations:
    514
    Messages:
    684
    Likes Received:
    316
    Trophy Points:
    76
    Dumb question: is that 6+8 pin or 8+8 pin, or are there both 6+8 and 8+8 pin versions?
     
  29. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Looks like either or.
     
  30. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Titan Black and Titan X have the same memory bandwidth. 384-bit GDDR5-7000 at 336 GB/s.

    It's 6+8 or 8+8.

    8+8 is a scary thought though because it means TDP is over 300W.
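    (For the arithmetic behind that: the PCIe spec allows 75W from the slot, 75W from a 6-pin and 150W from an 8-pin, so the connector choice caps the board power. A quick sketch:)

```python
# Maximum board power allowed by the PCIe power-delivery spec, in watts.
PCIE_SLOT = 75
SIX_PIN   = 75
EIGHT_PIN = 150

def max_board_power(*aux_connectors):
    """Slot power plus whatever auxiliary connectors the card carries."""
    return PCIE_SLOT + sum(aux_connectors)

print(max_board_power(SIX_PIN, EIGHT_PIN))    # 6+8 pin: 300 W ceiling
print(max_board_power(EIGHT_PIN, EIGHT_PIN))  # 8+8 pin: 375 W ceiling
```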
     
  31. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Yes, this is true, but Maxwell's memory optimizations are a thing and not easily measurable. 980Ms at 5GHz and 780Ms at 5GHz had different memory bandwidths in the CUDA benchmark that went around during the 970 fiasco. The Maxwell chips, despite having the same bus width and clock speeds, were without a doubt faster.

    This means that 336GB/s on a Titan X would be likely closer to 380GB/s or thereabouts with the optimizations.
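    (A back-of-envelope version of that estimate, using the ~15% uplift implied by the 224 → ~257 GB/s GTX 980 figure mentioned earlier; the real compression ratio is workload-dependent, so treat the factor as an assumption.)

```python
# Effective bandwidth estimate = raw bandwidth * assumed compression factor.
# The ~1.15x factor is inferred from the 224 -> ~257 GB/s GTX 980 figure
# quoted above; actual gains vary per game.
ASSUMED_COMPRESSION = 257 / 224   # ~1.15

def effective_bandwidth_gbs(raw_gbs, factor=ASSUMED_COMPRESSION):
    return raw_gbs * factor

print(round(effective_bandwidth_gbs(224)))  # GTX 980: ~257 GB/s
print(round(effective_bandwidth_gbs(336)))  # Titan X: ~386 GB/s, close to the ~380 guess
```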
     
  32. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    If it delivers on performance I don't give the furry crack of a rat's behind what the TDP is.

    Would actually like to see a Lightning version with 6+8+8 pins :D
     
  33. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    True, Maxwell has memory optimizations so the same bandwidth goes further

    What if it can't overclock worth a damn like Hawaii?
     
  34. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Too bad so sad I guess lol

    If this 390X can deliver stock 970 SLI performance that's good enough
     
  35. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    4096-bit lol

    And here we are, mobile cards at 256bit :p
     
    TomJGX, James D and Mr Najsman like this.
  36. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    If 300W is the default operating TDP of those things, then AMD hasn't actually improved much at all, because if nVidia's Titan X is even 50W less, then imagine if they made a 300W TDP card to begin with. HBM or not, AMD'd get roasted over an open fire. They need to work some optimizations into GCN.

    Anyway, we'll see. If 8 + 8 is required only for heavy overclocking-type cards, and 6 + 8 is for normal + OC headroom, things might be better.

    But we'll see.

    Yeah, and their memory clock is only 1250MHz effective, and we're on 5000MHz effective, and desktops are 7000MHz effective. We'll see how things play out in the future.
     
  37. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    Well if the leaks are true and it's 50% faster than 290X at the same TDP, that's still a tremendous improvement in power efficiency.

    Also I wouldn't buy too much into Maxwell's efficiency; I mean, both of my 970s become 210W+ cards when running BF4 LOL (granted, heavily overclocked with a +20 mV volt bump, but you get the idea)

    Edit: yeah, OK, probably not the best argument there, but for a 15% performance improvement, TDP went up by almost 50% (145W to 210W+). I guess that's what I was trying to highlight.
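    (Worked out from the numbers in the post above, taken at face value: ~15% more performance for ~45% more power means perf-per-watt drops by roughly a fifth when overclocked like that.)

```python
# Perf-per-watt change implied by the overclocked GTX 970 figures above.
stock_perf, stock_power = 1.00, 145   # reference 970: 145 W TDP
oc_perf,    oc_power    = 1.15, 210   # ~15% faster, ~210 W as described

extra_power  = oc_power / stock_power - 1                     # ~0.45
relative_ppw = (oc_perf / oc_power) / (stock_perf / stock_power)

print(f"{extra_power:.0%} more power")        # 45% more power
print(f"{relative_ppw:.0%} of stock perf/W")  # 79% -> ~21% efficiency loss
```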
     
    Last edited: Mar 16, 2015
    TomJGX likes this.
  38. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    So? HBM still has a ton more bandwidth. It's new tech, I'm sure clock speed will go up as it matures. It's already been known for a while now that GDDR5 has reached the end of the line and topped out at ~8 GHz.
     
  39. LunaP

    LunaP Dame Ningen

    Reputations:
    946
    Messages:
    999
    Likes Received:
    1,102
    Trophy Points:
    156
    *sighs* It's threads like these that give me a sense of relief in knowing that tech enthusiasts still thrive somewhere... <3

    Thank you for giving me hope <3
     
    TomJGX and octiceps like this.
  40. hfm

    hfm Notebook Prophet

    Reputations:
    2,264
    Messages:
    5,296
    Likes Received:
    3,048
    Trophy Points:
    431
    It's good news, but that graph's Y axis is scaled somewhat happily...
     
  41. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Just like all GPU marketing slides, they "forgot" to start at zero :rolleyes:
     
    TomJGX, LunaP and D2 Ultima like this.
  42. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Yeah I know Maxwell's efficiency is basically at stock. But even so, it's headroom we're looking at. A 300W TDP at stock can barely OC, but a 250W TDP (even if only 15%) can OC a bit. Etc etc.

    Pshh. We exist. We're dying, but we exist. XD
     
  43. n=1

    n=1 YEAH SCIENCE!

    Reputations:
    2,544
    Messages:
    4,346
    Likes Received:
    2,600
    Trophy Points:
    231
    You can say that about pretty much all marketing slides when they want to show off how impressive their "gains" are over "leading competitors"...


    Point taken. Although I'll also point out that Maxwell's efficiency comes from its aggressive dynamic throttling algorithm, which can be a real pain in the ass especially for less demanding games. The so-called "low utilization bug" kicks in because the game doesn't peg the GPU enough for it to go full boost, and when the boost and voltage tables start crossing over you get random crashes without forewarning.

    On that note, when I was playing Wolfenstein: The New Order I actually had to downclock my card by about 30MHz to get full stability because of the voltage/boost table crossover thing. This was a different issue though, and is caused by voltage not ramping up fast enough with core clock. Sometimes it would take a good minute for voltage to ramp up, so the core would be running at 1380 MHz on 1.015V.

    So yeah that's why I really couldn't care less for Maxwell's faux efficiency
     
    Last edited: Mar 16, 2015
  44. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    In that regard you could simply force problem games to keep your GPU at max speeds and be done with it; I do that for games that'll downclock one card or the other card now and then to keep stuttering nonexistent.
     
  45. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Maxwell overrides it... It doesn't work like Kepler, where you can lock the core and run with it; it does what it wants even with a modded vBIOS. At least that's the behavior I've noted with my cards and Prema's vBIOS, and his doesn't have a power limit at all, so it's not that.
     
  46. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Whaaaaat?
    Do you know that the R9 290X TDP is 290W? With 2816 shaders. 10W more and you get 4096 shaders? How is that not an improvement? It's a massive improvement on the efficiency scale. 45% more shaders for 3% more TDP. And the shaders are also clocked 50MHz higher. :)

    Nvidia may have more room for another GPU with GM200, but both previous Titan cards were 250W. The GTX 780 Ti was also 250W but with more cores (+2 SMX), because they disabled the power-hungry FP64 cores.
    Looking at the GM200 die vs GK110, it's only slightly bigger than GK110, meaning GM200 doesn't have many more cores than GK110, i.e. 2880 cores. Titan X is at 3072 cores. The GM200 die can fit at most 1 SMM more than Titan X, 3200 cores. That won't beat AMD by any big margin anyway.

    The R9 390X and a Titan X/980 Ti with 1 SMM more is what we are stuck with, at least for another year, until 16nm is here.


    The whole idea behind stacked VRAM or HBM is to reduce power by running the chips slower and at the same time increase bandwidth because of the wide bus. I'd rather use 4096-bit @ 500MHz than 256-bit @ 1250MHz any day of the week. That way you get the best of both worlds: better bandwidth and lower heat/power.
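    (The arithmetic behind both points, for anyone following along; the bus widths and clocks are the rumored/typical figures from this thread, not confirmed specs. Note that at a 500MHz DDR rate a 4096-bit bus works out to 512 GB/s; the 640 GB/s figure in the leak implies a slightly higher per-pin rate.)

```python
# 1) Shaders-per-watt: R9 290X vs the rumored 390X figures in this thread.
shaders_290x, tdp_290x = 2816, 290
shaders_390x, tdp_390x = 4096, 300
print(f"{shaders_390x / shaders_290x - 1:.0%} more shaders "
      f"for {tdp_390x / tdp_290x - 1:.0%} more TDP")   # 45% more for 3% more

# 2) Wide-and-slow HBM vs narrow-and-fast GDDR5.
#    HBM transfers 2 bits per clock per pin (DDR), GDDR5 transfers 4.
def bandwidth_gbs(bus_bits, clock_mhz, bits_per_clock_per_pin):
    return bus_bits * clock_mhz * bits_per_clock_per_pin / 8 / 1000

print(bandwidth_gbs(4096, 500, 2))   # HBM,   4096-bit @  500 MHz: 512.0 GB/s
print(bandwidth_gbs(256, 1250, 4))   # GDDR5,  256-bit @ 1250 MHz: 160.0 GB/s
```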
     
    TomJGX likes this.
  47. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    I thought 3072 was already the full GM200?
     
  48. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Could be, but at least nothing more substantial will come out from Nvidia until a new architecture is here anyway. Best case scenario:

    Titan: 2560 cores
    780 Ti: 2880 cores Full GK110

    Titan X: 3072 cores Full GM200?
    980 Ti: 3200 cores Full GM200?

    You can see from this picture that the GM200 isn't much bigger than GK110.
    [Image: GM200 vs GK110 die size comparison]
     
  49. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    *Titan: 2688 shaders

    Why would 980 Ti have more shaders than Titan X? Surely you don't think a halo card like Titan would feature a cut down GM200, unless there will be a second Maxwell Titan? It's more likely 980 Ti will have fewer shaders than Titan X.

    How would Titan X and 980 Ti both be full GM200 if the latter has 1 SMM more? Full means nothing disabled.
     
    Last edited: Mar 16, 2015
  50. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    That's true, but then GCN proves to only be Kepler-capable? Think about it: if the R9 290X with 2816 shaders was a bit weaker than the 780 Ti and Titan Black cards at 2880 shaders, and they now need 4096 shaders to romp with 3072 shaders on Maxwell, it means they improved on power consumption, but not on the actual strength of the shaders. As sad as it is, AMD needs to do better. Even more so if the R9 390 and R9 390X cards only attain that kind of power consumption drop partially due to HBM cutting power draw.

    Next, the Titan Black was 2880 shaders just like the 780 Ti. The two Titan cards were not the same: the Titan had 1 shader cluster disabled, from 2880 to 2688, just like the 670 had 1 shader cluster disabled (1536 --> 1344). The Titan Black and 780 Ti were both full GK110. IF the Titan X is not full GM200, then we might see something stronger indeed, but I dunno how much nVidia is willing to hold back this time around.
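    (To put rough numbers on that "strength of shaders" point: these are just the core-count ratios from the cards being discussed, not performance measurements, and the 390X count is still a rumor.)

```python
# Shader-count ratios behind the "GCN needs more shaders than Maxwell" argument.
shaders = {
    "R9 290X (Hawaii)":      2816,
    "780 Ti / Titan Black":  2880,
    "R9 390X (rumored)":     4096,
    "Titan X (GM200)":       3072,
}

# Hawaii vs GK110: near parity in shader count between roughly comparable cards.
print(round(shaders["R9 290X (Hawaii)"] / shaders["780 Ti / Titan Black"], 2))  # 0.98

# Rumored Fiji vs GM200: ~33% more shaders to compete, if the leak holds.
print(round(shaders["R9 390X (rumored)"] / shaders["Titan X (GM200)"], 2))      # 1.33
```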
     
← Previous page | Next page →