All the vRAM.
Jokes aside, since this debate will continue to rage, I figure we could use a thread dedicated to it.
Confused about just how much you might need? Ask away, and someone far more knowledgeable than me might answer!
-
Okay. If this is the case, I am going to openly link it. I wrote a guide on how vRAM works, how much you need, what it bottlenecks, and what it does not.
It is a full, comprehensive guide. It should contain all you need to know and more, and if there is anything you wish to ask, feel free to do so.
The Video RAM Information Guide -
A good example of why this is used is BioShock Infinite. That game uses relatively little VRAM despite the world being fairly large, but you get bits of microstutter on occasion when the engine streams parts of the world on demand from the HDD.
Other times, the game is simply badly coded. Wolfenstein demands copious amounts of VRAM, presumably for seamless area transitions, yet you still get loads of annoying texture pop-in.
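Just to make the trade-off concrete, here's a toy sketch (my own illustration, nothing from either game's actual code): stream a chunk from disk the moment the player needs it and that frame eats the load time as a hitch; keep it preloaded in VRAM and the frame stays smooth, at the cost of far more memory held resident.

// Toy model of on-demand streaming vs. preloading (illustration only).
#include <chrono>
#include <cstdio>
#include <thread>

// Pretend reading one world chunk from the HDD takes ~30 ms.
void load_chunk_from_disk() {
    std::this_thread::sleep_for(std::chrono::milliseconds(30));
}

int main() {
    using clock = std::chrono::steady_clock;

    // Case 1: chunk already resident in VRAM (preloaded) -> frame is just the normal ~16 ms.
    auto t0 = clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(16)); // regular frame work
    double preloaded_ms = std::chrono::duration<double, std::milli>(clock::now() - t0).count();

    // Case 2: chunk streamed on demand inside the frame -> 16 ms plus a ~30 ms hitch (microstutter).
    auto t1 = clock::now();
    load_chunk_from_disk();                                     // blocking load
    std::this_thread::sleep_for(std::chrono::milliseconds(16)); // regular frame work
    double streamed_ms = std::chrono::duration<double, std::milli>(clock::now() - t1).count();

    std::printf("preloaded frame: %.1f ms, streamed frame: %.1f ms\n", preloaded_ms, streamed_ms);
}

Real engines hide most of that by streaming on a background thread, but when the load can't finish in time you still feel it as microstutter or see it as pop-in.
-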
It at least runs fairly well though. -
The answer to this is 8GB if you want to play console ports, end of story.
-
-
-
Is it lazy coding? Absolutely. But exactly when did console ports have great optimization when brought to the PC? I can't think of such a time.
The reality is that while the consoles are woefully underpowered compared to a modern midrange gaming PC, they have that damn unified GDDR5, so we are at a distinct disadvantage until we make up for it by increasing our VRAM, or until developers start actually porting properly. Which solution sounds more viable? I think Ubisoft has shown the answer to that with Watch Dogs.
-
-
Ubisoft are saints compared to WB's Shadow of Mordor.
-
-
-
Yep, nothing appears to be wrong with SoM apart from it being a total VRAM hog. -
-
I am looking at the Asus G751 but I'm put off by it having less VRAM than the competition. I'm not tech savvy about VRAM, but is Maxwell 4GB "better" than Kepler 4GB?
-
-
HT, hold me, I'm scared this Asus will have problems if games keep going down this more-than-4GB path in the future.
-
-
I've been good with 4GB for a while now, and would say that 4GB is all one needs, as HTWingNut suggests.
As for what one may want to "future proof" their system (a phrase we gamers love to throw around), I think 6GB would be ideal. But if you plan to upgrade within two years, there's no point.
P.S. As 1080p will be the standard for mobile gaming for at least one more year, I am speaking in reference to 1080p. Higher resolutions obviously require more VRAM. -
-
-
I know it's not a huge difference, but it is a difference. That's all I was saying.
-
Can't wait to play SD with this.
Save 10% on Sleeping Dogs: Definitive Edition on Steam -
-
It seems the Definitive Edition has higher population density. It also has lower system requirements, so optimization could've been improved. I'll probably pick it up unless reviews stink as it's only $15 for me, which is cheaper than getting all the DLC for the original.
LOL it's funny--Square Enix and its studios are just about the only companies I trust right now. Their track record over the last few years has been really good. DX:HR, Sleeping Dogs, Hitman: Absolution, and Tomb Raider have all been very solid, well-optimized games. -
Yeah, it still kinda sucks when running the max AA settings though, and even with it there are still jaggies. Downsampling also didn't work with it. And Wei looked like Wol Smoth sometimes, so I think they fixed his facial proportions too. I'm sure reduced high-res texture sizes are in there somewhere; that helps performance. In DX:HR they added and improved shadows, so I'm sure that made it into this one too. The reduced price with all DLC included is also a plus. They may not have fixed the 3D Vision issues though, unfortunately, since they didn't on DX:HR (no 3D options in the in-game menu, so people have to control and view it with just the 3D hotkeys).
-
That is a good point indeed... but if I had $15 I'd buy Binding of Isaac: Rebirth (for $10.04) and the Shadow Ops DLC for Beat Hazard (for $5.00)
-
When you downsample, you're supposed to turn AA off, unless you mean through profiles. But I think using DS on it would actually get rid of the jaggies, unlike its AA.
-
LOL, what's the point? Downsampling is OGSSAA and Sleeping Dogs' in-game supersampling is also OGSSAA, so it's the same thing. GeDoSaTo doesn't work with DX11 games, so you can't use the Lanczos sharpening filter. Hopefully DSR comes out for Kepler at the end of the month like it's supposed to (and isn't locked to GTX GPUs only), as its only notable feature is the adjustable 13-step Gaussian filter. Normal downsampling and upsampling in Windows uses a bilinear filter, which blurs/softens the image.
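For anyone wondering what the filter part actually means, here's a rough sketch of my own (not NVIDIA's or GeDoSaTo's implementation): when you downsample, each output pixel is a weighted average of source samples, and the "filter" is just the set of weights. A bilinear-style resize effectively averages the nearest samples, while a Gaussian spreads the weight over more taps, and the width you pick trades sharpness against smoothing.

// Illustration: a downsampling "filter" is just a set of tap weights.
#include <cmath>
#include <cstdio>
#include <vector>

// Build a normalized 1-D Gaussian kernel with the given number of taps.
std::vector<double> gaussian_kernel(int taps, double sigma) {
    std::vector<double> w(taps);
    double sum = 0.0;
    const int mid = taps / 2;
    for (int i = 0; i < taps; ++i) {
        const double x = i - mid;
        w[i] = std::exp(-(x * x) / (2.0 * sigma * sigma));
        sum += w[i];
    }
    for (double& v : w) v /= sum; // normalize so the weights add up to 1
    return w;
}

int main() {
    // Bilinear-style 2x downsample: each output pixel is the plain average of
    // the two nearest source samples, i.e. weights 0.5 and 0.5.
    std::printf("bilinear-style taps: 0.50 0.50\n");

    // A 13-tap Gaussian pulls in more neighbours; a small sigma stays sharp,
    // a large sigma smooths/softens the result more.
    const double sigmas[] = {1.0, 3.0};
    for (double sigma : sigmas) {
        std::vector<double> k = gaussian_kernel(13, sigma);
        std::printf("13-tap gaussian, sigma=%.1f:", sigma);
        for (double v : k) std::printf(" %.2f", v);
        std::printf("\n");
    }
}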
-
You have multiple downsampling settings with GeDoSaTo; you don't have that many with just SD alone. Downsampling is also faster than the in-game supersampling AA setting, from what I see in some games I already have. GFE has a very small list of games it works with, because GFE doesn't detect every game, and then you have to wait on Nvidia to add support, which happens at a terrible pace as you can see from their other software. GeDoSaTo works with anything that isn't higher than DX9, but the author is currently working on DX11 support. Even if GeDoSaTo were DX11-compatible right now, I think the way SD handles its settings might still keep it from working. But we'll see. Is Nvidia's DSR already working for SD?
-
DSR is only available on the desktop GTX 980 and 970 cards right now. Support for older and mobile GPUs remains to be seen.
-
Just made an angry post on the Asus ROG forums threatening them that I'd buy from MSI instead if they didn't release 6GB/8GB versions of their G751. D:
-
Attaboy, JinKizuite. Now you should post on game forums threatening to take your business elsewhere if these devs don't optimize properly.
-
The G751 is just fine at only 4GB; less memory means better overclocking.
-
No it doesn't.
-
The PS4 has 8GB of GDDR5, but the full 8GB is not available for gaming. I read an article saying the PS4 and Xbox can use 5-5.5GB of VRAM for gaming. I'm just interested in how much VRAM GTA 5 will consume.
-
Thing is, developers can use 5.5GB as VRAM and 512MB as ordinary RAM for the game (the PS4's memory is unified/shared), and consoles are not expected to hit "ultra settings". Sure, developers could require only a 3GB card, but then people would complain that the textures don't look any better than the consoles', and "lazy developers not taking advantage of the PC" would get thrown around everywhere.
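Rough back-of-the-envelope version of that split, using the numbers from the posts above (the exact reservation varies and isn't an official figure, so treat these as placeholders):

// Rough unified-memory budget math (figures from the posts above, not official specs).
#include <cstdio>

int main() {
    const double total_gb     = 8.0;  // unified GDDR5 pool
    const double game_vram_gb = 5.5;  // reported as usable by the game for graphics
    const double game_ram_gb  = 0.5;  // slice the game uses as ordinary RAM
    const double os_gb        = total_gb - game_vram_gb - game_ram_gb; // what the OS keeps

    std::printf("game VRAM: %.1f GB, game RAM: %.1f GB, OS reserve: %.1f GB\n",
                game_vram_gb, game_ram_gb, os_gb);
}
-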
4GB minimum to survive through this console generation.
-
Notebookcheck seems to have benched Shadow of Mordor with a 3GB 970M at ultra getting 51 fps. Has the ultra texture pack been released yet, or is this still just high? If so, what's up with that? It hardly seems like it would stutter from lack of VRAM at that frame rate.
-
-
As much as I can!
It's fun to plan a whole scene in 3ds Max, import it part by part into the game engine, and give those dumb objects a life through scripts.
That's from the game developer's point of view.
From the gamer's point of view, an 8GB standard opens up (for the developers) a lot of options in texture size and variety, plus scene complexity and size, which equals a more realistic look. (Someone mentioned 4K?)
In other words, it lets developers sacrifice the art less for performance... they can "get wild on the canvas".
That's in the gamers' interest too.
And I'm not talking about bad console ports... pure PC gaming. -
I'm fine with a game requiring lots of memory if it frees up the artists to create tons of varied, unique assets with amazing level of detail. Totally understandable. But so far, we have not seen that with these newer titles that use up ridiculous amounts of VRAM without a corresponding generational leap in fidelity, which is why they seem poorly optimized.
If Witcher 2 can look so good with <1GB, if RAGE can render its entire world without a single repeated texture using ~1.5GB, if Crysis 3 can fit state-of-the-art technology and visuals within 2GB, then what excuse do these devs have? -
Yeah, right on Octiceps. I think it's going to take a heavy hitter like Star Citizen to show the world what a game requiring huge resources should look like.
Of course it's a PC exclusive, but I'm pinning some hopes on it jump-starting another renaissance period in PC gaming. -
-
vRAM size, or even storing textures and static information in VRAM rather than in main memory, has absolutely nothing to do with framerate.
Bad developers use bad programming practices, and continue to use bad practices when making "PC ports". What happens here is that, because of the way the current console architecture is set up, the holy grail of visual fidelity has suddenly become VRAM hacks that frequently run on the CPU, so all the information has to be resubmitted anyway after the calculation is done. But hey, at least all the assets are stored in what is defined as video RAM. And this is so convenient that they're now bringing the convention over to "PC" as well.
That this actually gives you a performance hit is of no concern, of course. What's really going on is that video RAM is becoming fast enough that you can store and transfer information to and from it quickly enough to not really need dedicated main system RAM.
So everything is slower. And suddenly you need more dedicated video-card RAM. Because reasons.
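To put a number on the "resubmitting" complaint, here's a deliberately crude CPU-only sketch of my own (plain memcpy standing in for the transfer, no real graphics API involved): copying a static asset across once and leaving it resident is a one-off cost, while pushing the same data across every frame pays that cost over and over.

// Crude illustration: upload-once-and-reuse vs. re-upload-every-frame.
#include <chrono>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    const std::size_t asset_bytes = 64u * 1024u * 1024u; // 64 MB of "static" asset data
    const int frames = 100;

    std::vector<char> system_copy(asset_bytes, 1); // what the game keeps in system RAM
    std::vector<char> gpu_side(asset_bytes);       // stand-in for the buffer living in VRAM

    using clock = std::chrono::steady_clock;

    // Upload once, then render the next 100 frames without touching it again.
    auto t0 = clock::now();
    std::memcpy(gpu_side.data(), system_copy.data(), asset_bytes);
    double once_ms = std::chrono::duration<double, std::milli>(clock::now() - t0).count();

    // Re-upload the same static data every single frame.
    auto t1 = clock::now();
    for (int f = 0; f < frames; ++f)
        std::memcpy(gpu_side.data(), system_copy.data(), asset_bytes);
    double per_frame_ms = std::chrono::duration<double, std::milli>(clock::now() - t1).count();

    std::printf("upload once: %.1f ms, re-upload x%d frames: %.1f ms\n",
                once_ms, frames, per_frame_ms);
}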
If you're wondering why some of us have retired our hobby more or less permanently this generation, wonder no more. -
you'll face stuttering and slowdowns, which have a big, negative effect on the gaming experience. Previous to the launch of the GeForce 900 generation, only the top models of the GeForce GTX 870M (6 GB) and the GeForce GTX 880M (8 GB) could meet the high memory requirements in the notebook range. This gets especially apparent in the minimum fps. Instead of 26 fps, the predecessor of the GTX 870M only reached 11 fps (GTX 770M @ 3 GB) at ultra settings. Likewise, the GTX 780M with 4 GB VRAM stuttered more than its counterpart of the 800 series. While the performance was comparable otherwise, the minimum fps were more than 50% lower (13 vs. 30 fps). -
Each GPU generation we get more bandwidth to the vRAM... we need somewhere to put all that power to work in real time.
All the stutters and hold-ups at runtime come from instantly killing a large number of objects in code and then loading a new set within a single frame, the moment the triggering condition fires... a lot of work for a fraction of a second. That's why they invented occlusion culling: loading the game objects into VRAM part by part based on player position.
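What that "part by part by player position" loading boils down to is something like the sketch below (bare-bones C++ of my own, not any particular engine's API): the world is split into cells, and each frame you make resident only the cells around the player and drop the rest.

// Minimal sketch of position-based streaming: keep only nearby world cells resident.
#include <cmath>
#include <cstdio>
#include <set>

struct Cell {
    int x, z;
    bool operator<(const Cell& o) const { return x != o.x ? x < o.x : z < o.z; }
};

int main() {
    const double cell_size = 100.0; // world units per streaming cell
    const int radius = 1;           // keep a 3x3 block of cells around the player

    std::set<Cell> resident;        // cells currently "loaded into VRAM"

    // One frame's worth of the update, for a player standing at (250, 40).
    const double player_x = 250.0, player_z = 40.0;
    const int cx = static_cast<int>(std::floor(player_x / cell_size));
    const int cz = static_cast<int>(std::floor(player_z / cell_size));

    // Decide which cells should be resident this frame...
    std::set<Cell> wanted;
    for (int dx = -radius; dx <= radius; ++dx)
        for (int dz = -radius; dz <= radius; ++dz)
            wanted.insert({cx + dx, cz + dz});

    // ...load what is missing, unload what is no longer needed.
    for (const Cell& c : wanted) {
        if (resident.insert(c).second)
            std::printf("load cell (%d,%d)\n", c.x, c.z);
    }
    for (auto it = resident.begin(); it != resident.end(); ) {
        if (!wanted.count(*it)) {
            std::printf("unload cell (%d,%d)\n", it->x, it->z);
            it = resident.erase(it);
        } else {
            ++it;
        }
    }
}

Occlusion culling proper then decides which of the resident objects actually get drawn; the streaming part above only controls what sits in memory at all.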
CryEngine is awesome.
Think of its evolution. :thumbsup:
I use the Unity3D engine, version 4.6... because of its versatility.
Already bought version 5.x in pre-order for less than half the price.
Tons of new features... state of the art, can't wait. -
Sounds great, obviously. So now you just put everything, including textures and resources, for everything that /may/ possibly be seen in the nearest game-world galaxy into RAM. And then quickly fetch the object whenever it's visible. Great efficiency! It all fits! All logical!
Sadly, no. What actually happens - and this is of course something like 99% of the time, unless you for whatever reason decide to create objects that simply aren't visible (by whatever standard "visible" means through clipping scenery, and so on) - is that the object behind the first object is only /partially/ obscured. Now suddenly you realize that the real reason occlusion culling is even mentioned when it comes to the "need" for impossible amounts of RAM is that the actual engine simply has no resource management.
Because unless the object behind the first object is invisible all of the time, what actually happens? This version of occlusion culling has no impact, yet the existence of the algorithm requires all the resources in the nearby galaxy to be placed in VRAM.
What sort of negative impact this has on performance, no one in AAA cares about. That better variants and implementations of partial occlusion detection exist and have been employed previously with massive success, that per-object occlusion is a study on its own, and that infinitely better implementations have been scrapped - even on current architectures - is simply of no concern to anyone.
Because this generation, increased RAM means increased framerate. Because reasons. -
You are right, but...
"The detection", as you put it, is not calculated in real time but pre-calculated. When the scene is done, you tell the algorithm X number of occlusion areas across the scene... the pre-calculation algorithm checks each corner of every occlusion area one by one, doing a 360° check for every visible object from that point (and it takes a LOOOT of time)... dynamic objects in the scene do not "occlude".
In memory at runtime you have a struct of occlusion areas (cubes) and the list of static objects visible in each cube... when you move from cube to cube you load/unload or enable/disable the game objects not relevant to that cube.
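In data-structure terms that's essentially a precomputed visibility table: one entry per occlusion area (cube) listing the static objects visible from it, and at runtime you just swap lists when the player crosses into a new cube. A rough sketch of my own, not Unity's or CryEngine's actual API:

// Rough sketch of a baked visibility table: one list of visible static objects
// per occlusion cube, swapped at runtime as the player changes cube.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

using CubeId = int;
using VisibilityTable = std::map<CubeId, std::vector<std::string>>;

void enter_cube(const VisibilityTable& baked, CubeId cube) {
    // Enable only the static objects pre-baked as visible from this cube;
    // everything else can stay disabled or unloaded.
    std::printf("player entered cube %d, enabling:", cube);
    for (const std::string& obj : baked.at(cube)) std::printf(" %s", obj.c_str());
    std::printf("\n");
}

int main() {
    // This table is the output of the slow offline pre-calculation pass described
    // above; here it is just hard-coded for illustration.
    const VisibilityTable baked = {
        {0, {"terrain_a", "house_01", "bridge"}},
        {1, {"terrain_a", "terrain_b", "tower"}},
        {2, {"terrain_b", "cave_entrance"}},
    };

    enter_cube(baked, 0); // player starts in cube 0
    enter_cube(baked, 1); // crosses into cube 1 -> a different set gets enabled
}
-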
...so it happens at build time anyway. Why not just build a heap or something and create a cache prediction from that?