I know only a few games benefit from the 8GB of VRAM that the 880M and 980M have, but what about other applications?
I know that extremely detailed 3D modeling applications can eat it up, but what about more common software like Adobe Premiere, After Effects, or Photoshop?
Is having 8GB of VRAM overkill in the real world?
I mean, obviously a Photoshop or Premiere project with 100 layers full of 8K images would use it up, but that's not realistic.
I'm asking other content creators whether there's a benefit to having 8GB of VRAM for editing photos or videos, NOT for 3D modeling. I'd like to find a way to use my 8GB of VRAM to show people that it can be useful.
Thanks for your input.
-
8GB is overkill. I doubt any game will use more than 3GB at 1080p with my 980M.
-
That being said, 8GB is indeed overkill for gaming, but since it's the next step up from 4GB (they can't do 6GB at full speed without cutting the card down to a 192-bit memory bus) there isn't any other choice, and any game that uses even slightly over 4GB, all the way up to 8GB, will just benefit from caching and the extra memory, etc.
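The bus-width point boils down to a little arithmetic: GDDR5 chips each have a 32-bit interface, so the bus width fixes how many chips run at full speed, and therefore which total capacities are possible. A rough sketch, assuming 0.5GB chips and an optional "clamshell" (double-stacked) mode — the chip density is an assumption for illustration:

```python
# Sketch: why full-speed VRAM sizes are tied to memory bus width.
# Assumptions: GDDR5-era 32-bit chips of 0.5 GB each, optionally
# doubled up in clamshell mode (two chips per 32-bit channel).

def vram_options(bus_width_bits, chip_gb=0.5):
    """Return the full-speed VRAM capacities (GB) a bus width supports."""
    chips = bus_width_bits // 32             # one 32-bit chip per channel
    return sorted({chips * chip_gb, chips * chip_gb * 2})

print(vram_options(256))  # [4.0, 8.0] -> 4GB or 8GB at full speed on 256-bit
print(vram_options(192))  # [3.0, 6.0] -> full-speed 6GB needs a 192-bit bus
```

So on a 256-bit card like the 980M, the jump from 4GB really does go straight to 8GB.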
-
THIS. IS. oh wait nvm I'm not on a desktop forum lol
On a more serious note though:
Watch Dogs, FC4, and COD AW can all suck up 7+GB of vram at 4K. Incidentally all three have worse than dog poo optimization. Now compare those 3 games to Crysis 3. -
Are the games playable at those settings is the real question
-
Dying Light is sort of playable:
Same with COD AW:
Although to be fair, COD AW is one of those games that simply goes "you haz 12GB vram? OM NOM NOM" in terms of how it treats vRAM, so 7.3+GB is simply a reflection of terribad optimization. -
Dying Light on 980 has too many dips below 30 FPS for me to consider that playable. Looking at the line vs. Titan X, performance clearly isn't limited by VRAM (I don't see any absurd dips in min).
And about AW, exactly as you said. Look at 690 hanging in there with a measly 2GB.
My prediction is 8GB VRAM will be safe for the duration of this console gen. -
I wasn't asking about games at all. I know the 8GB will come in handy in the future. I was asking about any applications that can use the GPU for acceleration. Video editors or photo apps or Illustrator or flash or something that isn't using 3d models.
BTW some Pascal GPUs will have 32GB of VRAM... now that's overkill -
^ooops
@octiceps: I was only referring to Titan X in terms of Dying Light being somewhat playable, but yes it doesn't seem to be vram limited.
I think 8GB is more than enough if you're using a single panel. However if you're doing 4K surround you probably will need that 12GB (as well as 3 Titan X's, so glhf with tri-SLI lol) -
-
Of course they could have been hard about it and used a 512-bit memory bus and shoved an 8GB buffer on it, and we'd have gotten better memory speeds to boot. Might have made it slightly more worth that huge price tag.
BUUUUUUT nVidia. -
I don't know whether to laugh or SMH whenever I see nVidiots... I mean, nVidia fans cheer at AMD falling behind the curve and hope their next release is a huge flop. Then again, these are typically (and I do mean typically) people who can only afford mid-range stuff, but still buy nVidia anyway because they think it's somehow more "premium", not realizing that mid-range is where AMD offers MUCH better bang for buck.
/rant -
"Nvidiots" ROFL!
-
I think the only true apples-to-apples comparison would be to take cards that offer multiple VRAM configs (780 3GB vs. 6GB, 290X 4GB vs. 8GB, 7970 3GB vs. 6GB), test them under identical conditions, and see if the minimum framerates tank on the lesser-VRAM card. Similar cards based on the same chip could also be used, of course, e.g. 780 Ti vs. Titan Black. -
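The comparison described above boils down to computing minimum framerate from frametime logs for each VRAM config. A minimal sketch of that calculation — the math is straightforward, but the frametime numbers below are made up purely for illustration:

```python
# Sketch: compare average and minimum FPS between two VRAM configs of
# the same card. Hypothetical frametime logs (milliseconds per frame);
# big spikes on the smaller-buffer card would show up as a tanked min.

def fps_stats(frametimes_ms):
    """Return (average FPS, minimum FPS) from a list of frametimes in ms."""
    fps = [1000.0 / ft for ft in frametimes_ms]
    return sum(fps) / len(fps), min(fps)

card_3gb = [16.7, 16.9, 45.0, 50.0, 17.1]   # hypothetical: VRAM-swap stutters
card_6gb = [16.7, 16.9, 17.2, 17.0, 17.1]   # hypothetical: smooth

for name, log in [("780 3GB", card_3gb), ("780 6GB", card_6gb)]:
    avg, low = fps_stats(log)
    print(f"{name}: avg {avg:.0f} FPS, min {low:.0f} FPS")
```

Averages can look fine on both cards while the minimums tell the real story, which is why min FPS is the number to watch in these tests.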
-
8GB is probably overkill, but my 2GB GTX 680m has over 800MB used up running AC3 plus a bunch of other Windows programs open on two 1080p displays soooooo that's a little uncomfortably close to 2GB.
-
That's like saying "OMFG BF3 2011 last-gen game uses 1600MB VRAM maxed out on my 980M 4GB what am I gonna do!!!"
-
Okay...? Well anyway, point is, while 8GB may end up being overkill, I'd sure prefer to have at least 4GB.
-
Windows won't use up your vRAM unless it's a 3D app; it will use up your system RAM instead.
-
Depends. If you have Optimus it's using the iGPU so system RAM. If you don't have Optimus, Aero uses VRAM on the dGPU. WDDM memory usage in W8 is higher than in W7 plus you don't have the option to disable Aero to free up more RAM/VRAM.
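For anyone who wants to check what's actually resident on the dGPU, `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader` will report it on NVIDIA cards. A small sketch parsing that output — the sample line is made up for illustration:

```python
# Sketch: parse one line of nvidia-smi CSV output to see how much dGPU
# VRAM is in use. On an Optimus laptop this only covers the dGPU; the
# iGPU's framebuffer comes out of system RAM and won't show up here.

def parse_vram(csv_line):
    """Return (used_mib, total_mib) from a 'used, total' nvidia-smi CSV line."""
    used, total = (field.strip().split()[0] for field in csv_line.split(","))
    return int(used), int(total)

sample = "812 MiB, 2048 MiB"   # hypothetical: a 2GB 680M with desktop apps open
used, total = parse_vram(sample)
print(f"{used}/{total} MiB ({100 * used / total:.0f}%) in use")
```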
-
Hey look guise. MORE vRAM
http://www.overclockers.co.uk/showproduct.php?prodid=GX-098-BG&groupid=701&catid=1914&subcat=1576
(Yes, that is a pre-OC'd, 24GB vRAM Titan X) -
I always thought games like these were the instigators behind creating new video cards with high vRAM.
Shame they aren't as graphically great as their predecessors.
I would say to the OP that 8GB of vRAM is about 2-3 years away from game engines actually using it, graphically speaking. -
-
I am reminded of the fiasco people talked on these forums years ago about discrete mobile mid-range GPU's with very high amounts of vRAM but a very constricted bandwidth.
At the time, most people thought that high vRAM meant better performance, but it was constantly pointed out that the interface limitations and low bandwidth would prevent efficient utilization of all that vRAM.
Namely, someone stated that by the time the vRAM was filled, performance would drop like a stone because the GPU couldn't process the data quickly enough.
So I don't think vRAM plays too crucial a role when it comes to performance... rather, bandwidth and bus width are more essential.
The 9600M GT with GDDR3, for instance, was able to handle games just fine with 512MB of RAM... whereas a slower DDR2 version of the same GPU was 30% slower, despite having 4x more vRAM in some iterations.
My thought is that GDDR5 might run into similar problems when it comes to high amount of vRAM but constricted bandwidth and bus width... I'm specifically referring to Titan X.
Hence, I think a 'balance' of sorts can be achieved.
Namely, if AMD's HBM pans out properly, then even 4GB of vRAM of HBM will offer more than enough performance for discrete graphics cards and will probably outperform 8GB GDDR5 - especially at higher resolutions.
vRAM seems to play a role when it comes to very large/detailed textures, high-resolution gaming, and potential multi-monitor support... but for laptops, where the majority of gaming is likely to be done on the laptop itself, I doubt there would be much of a difference between 4GB HBM and 8GB GDDR5 performance-wise, except at high resolutions where HBM has more bandwidth to play with, ergo it can process the data a lot faster than GDDR5 while using less vRAM.
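The bandwidth gap being argued here is simple arithmetic: bus width in bytes times effective data rate. A sketch using the commonly cited figures for Titan X's GDDR5 and first-generation 4-stack HBM (treat the HBM numbers as the announced specs of the time, not measured results):

```python
# Sketch: peak memory bandwidth = (bus width in bits / 8) * data rate (GT/s).
# Figures: Titan X = 384-bit GDDR5 at 7 GT/s effective; first-gen HBM as
# announced = four 1024-bit stacks (4096-bit total) at 1 GT/s.

def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(384, 7.0))    # Titan X: 336 GB/s
print(bandwidth_gbs(4096, 1.0))   # 4-stack HBM1: 512 GB/s
```

HBM gets there with a very wide, slow bus rather than a narrow, fast one, which is also why it takes up so much less board space.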
Plus, GDDR5 is more restrictive than HBM for future prospects. HBM also occupies much less space.
And architecture also plays a part in this, as well as how well the games/programs have been optimized in the first place. -
VR will require 6GB or more. 8GB/12GB definitely isn't overkill there. Pretty sure the Titan X was designed with developers in mind.
I wonder if one day cards will have like 128GB vRAM frame buffers, lol.
"I can game with the NVIDIA Titan ABCDXYZ with 128GB vRAM on a 15K monitor at 120 FPS!" Windows 25 and scaling issues persist. -
-
Oh dear god not that 10x faster BS again. Let me just quote 2 posts from another forum because I'm too tired to debunk the 10x faster myth:
-
-
btw sorry if that was a bit blunt, I've posted the exact same thing like at least 20 times this week on different forums so I'm a bit on the edge lol
HBM should give the 390X an edge over Titan X in 4K gaming, assuming the core performance is there. -
Also, have there been any Tesla GPUs based on Maxwell yet? I wonder if Pascal is going to be Tesla-only, and Volta might be the next step for consumers. Just a thought. -
Maxwell's DP is crippled from the ground up, so I've a feeling there will not be any Maxwell Teslas.
I'm sure James Clerk Maxwell won't be too happy about the lack of Teslas. -
So at 4K it should behave similarly performance-wise (adjusting for drivers and game optimizations)... and since the R9 390X has been reported to have 1.9 (practically 2) times the bandwidth of the Titan X, I'm thinking that in 4K gaming the HBM R9 should easily overrun the Titan, whether it has 4GB of vRAM or 8GB.
But I agree that when it comes to the desktop arena, Nvidia seems to have barely been able to catch up to (or somewhat lags behind) AMD in the bandwidth area, and architecturally they aren't as good as GCN at higher resolutions.
In the mobile sector however, Maxwell is a lot more efficient as it scales better performance-wise and in power use compared to GCN.
It's a shame AMD was unable to come out with anything new that's better in the mobile sector...
Though, one thing that nags at me with the Tonga variant of AMD's top-end mobile GPU is that no laptop has come properly EQUIPPED to handle the chip in question... therefore, there's a lot of throttling going on and insufficient power to the GPU, so it can't be properly tested at its maximum. -
Here is where Titan X stands in a synthetic test:
-
No. It's all architecture. The fact that Hawaii murdered Kepler at 4K with less bandwidth, but only trades blows with Maxwell at 4K (which it loses to at 1080p, despite having ~60GB/s more bandwidth than it), means that architecture is the absolute deciding factor here. nVidia sucks at making cards for higher resolutions, and that's the long and short of it. Maxwell came out and is still barely able to keep up at 4K with cards it schools at 1080p... which, while an improvement over how Kepler dominated at 1080p, equalized at 1440p, and then lost at 4K, is still unimpressive for high resolutions. Even if the R9 390X cards lose a bit at 1080p to a Titan X, as long as they demolish it at 3K and 4K, I will happily recommend them to everybody who wants high resolutions. Especially if they're cheaper. I'd like some of nVidia's marketing to take a dive because of raw results, because they need to start making cards for the consumer again. And for the love of all hell, nobody gives a crap about Boost 1.0, 2.0, 3.0, or whatever .0 version they get to.
nVidia lost in the early stages of Kepler vs the 7970 cards, but the top end cards (780Ti and Titan Black) took the highest memory bandwidth. AMD went to HBM technology, so it's pointless for nVidia to even fight. But as we all know, raising memory clocks is a small linear increase in FPS... sometimes... and isn't really all that important as far as gaming goes. But if GPU-accelerated work is a thing and memory matters, the R9 390X will be a winner. Or the R9 390 might be the real winner, being almost as strong with the same HBM applications. As far as I recall though, in Fermi and GTX 200 series (I never could find the name of that architecture anywhere) they were winners in bandwidth. #GTX285WithA512BitMemoryBusWasOP
We don't know how good the M390X will be yet, so we'll wait and see. But I do agree it isn't looking all that good. -
-
GT - Tesla
GF - Fermi
GK - Kepler
GM - Maxwell
GP - Pascal
GV - Volta
etc. -
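The list above is just "architecture letter plus chip number" — a trivial sketch of decoding it (the `decode` helper is made up; the mapping is from the list):

```python
# Sketch: Nvidia's chip codename scheme. The letter after "G" names the
# architecture; the digits identify the specific chip (lower = bigger die).

ARCH = {"T": "Tesla", "F": "Fermi", "K": "Kepler",
        "M": "Maxwell", "P": "Pascal", "V": "Volta"}

def decode(chip):
    """Decode a codename like 'GM204' into (architecture, chip number)."""
    return ARCH[chip[1]], chip[2:]

print(decode("GM204"))  # ('Maxwell', '204') -> the 970/980 chip
print(decode("GK110"))  # ('Kepler', '110')  -> the 780/Titan chip
```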
Well yeah, I knew that, but every time I searched for "GTX 200 architecture" I kept finding dead ends that only explained the launch date and just said "GT200".
-
That's because all the desktop GTX 200 GPUs used the GT200 chip, unlike what we have now where 960 is GM206, 970 and 980 are GM204, and Titan X is GM200. They just didn't tell you what the T stood for.
But yeah, it's confusing. Tesla was Nvidia's first unified shader architecture and actually debuted 2 generations before in the GeForce 8 Series, but Nvidia didn't switch to its current naming scheme until the GeForce 200 Series. So the 8 and 9 Series cards, although also Tesla, were codenamed G8x and G9x instead of GTxxx. -
To make things even more confusing, GTX was also used as a suffix for the high performance cards during the Geforce 7, 8, and 9 series. Thankfully these days it's either GT or GTX, none of this additional GTS and GS bullcrap.
Although had XFX made ATi cards back in the day, I imagine the XFX X1900 XTX DD edition would've won just on the name alone.
(honestly someone should just grow a pair and release an XXX edition card already, although I guess you could argue Sapphire's Tri-X cards are technically just that) -
Dem suffixes...
Don't forget Ultra, XT, LE, SE, GTO, GSO, GTX+, GX2. And oh god, GeForce 4 MX.
And that's just for Nvidia alone.
VRAM in Applications
Discussion in 'Gaming (Software and Graphics Cards)' started by Phase, Mar 17, 2015.