This test was pretty interesting.
PCGamesHardware tested Titanfall out with 4xMSAA. They ran the "Skyfall" benchmark maxed out at only 1080p.
VRAM usage maxed out at 3513MB...
Titanfall graphics card benchmarks - 2 GiB cards with enormous problems ("Titanfall Grafikkarten-Benchmarks - 2-GiB-Karten mit enormen Problemen")
-
Robbo99999 Notebook Prophet
I'm running Titanfall maxed out with 4xMSAA too. It uses all 3GB of my VRAM, as well as 4GB of system RAM, both of which are more than I have seen with any game before. This is only on a 1600x900 screen too! It runs very smooth, locked at 60fps for most of the time. I enable Ambient Occlusion in the NVidia Control Panel as well, but only at the 'Performance' level; if I set it to 'Quality' then frames dip into the low 50's on some maps.
If I run it with textures reduced to Very High then it uses 2.5GB of VRAM, and if my memory serves me right only about 2GB of system RAM. I just run it on Insane textures though, because it doesn't give me slowdowns. Good game by the way!! -
That's one hungry game...
-
Meh, runs fine at 1080p high with 2GB vRAM on 765m... /shrug/
edit:
Just played a game, max detail 1080p, and here's what I get: 1806MB used. I'm sure it throws whatever it can into vRAM, but it really doesn't matter.
-
How do you know you are getting the most performance out of your card if you are that close to maxing out its VRAM?
Have you tried Insane textures?
What about MSAA? -
There is no built-in benchmark that I've seen; not sure where they found the "Skyfall" benchmark. -
Robbo99999 Notebook Prophet
Ah right, cheers for clearing that up HTWingNut. You might find 'Very High' texture settings work fine anyway - reviews have stated that the textures don't change much with the higher settings. A lot of people have complained about the textures, but the game is just too damn fast to stop and look at the scenery! My overall impression of the graphics is really positive though, even if it is based on an old Source engine!
I've not seen a benchmark either. Notebookcheck benchmarked it by just using one of the opening scenes of a Campaign game - so was somewhat reproducible as there was no multiplayer action at that point.
I've found VRAM & RAM usage to be completely stable throughout the game and throughout the loading screens. It's always at 3GB VRAM, and pretty much always at 4GB system RAM usage. It loads it all into RAM & VRAM right at the start when you first load up the game, and stays like that even in game menus in between the loading of different maps & games. -
I just set it to Insane textures (no AA) and here's what I got:
Although I did find a way to unlock the FPS, kinda weird: set in-game V-Sync to ON but force it OFF in the nVidia Control Panel. It's not ideal because there is some intermittent lag, but nothing horrible. I guess it's capped at the refresh rate of the screen, but I've heard from people with 120Hz and 144Hz monitors that it was still capped at 60.
edit: here's Titanfall on 880m (8GB vRAM) with same settings:
except the trick to turn off the FPS cap didn't work on the 880m, hmm. -
Of course it will utilize more if more is available. It's probably designed to do so, and may help with performance by spreading the load. It definitely doesn't require that much, though.
-
This pretty much confirms that 2GB doesn't cut it for Titanfall. You are using 500MB of system RAM because your 765M only has 2GB. The GTX 880M uses more VRAM than the 765M; faster cards use slightly more than slower ones, but not this much, so one can assume some of it is cached in the VRAM. Which is a good feature, because it helps with reading textures and such without any lag since it's already in the VRAM.
You only tested without AA. If someone wants to run Titanfall with antialiasing, which I use in pretty much all games I play, I assume that VRAM usage will go up by quite a bit.
So this is a good example that one should buy the 3GB and 4GB versions when they're available, instead of the 1.5GB/2GB ones. -
I'm not quite so sure about that. With 2GB vRAM, performance was still more or less pegged at 60FPS on only a 128-bit memory bus, and the drops in FPS were likely due to the performance of the GPU, not the swapping of data from system RAM to vRAM. I'll run the 880m with 8xMSAA and see what happens, but I don't expect it to be much higher than the 3.5GB already shown.
Personally, at native resolution I rarely use AA of any kind, possibly 2xMSAA, but for the most part it doesn't seem to enhance things by much and only costs performance. -
Robbo99999 Notebook Prophet
Yeah, I don't like running Titanfall with V-sync on either, it's too fast a game to live with that lag. I hope they release a patch to unlock the framerate without having to enable V-sync, I'd love to be able to run it at the 78fps that my overclocked panel can support, especially given that it's such a fast paced game. Does anyone know if a patch is in the works to do so?
I agree Cloudfire, I think 3 and 4GB cards are making a lot more sense now. I have a feeling that, especially given the consoles' large amount of accessible VRAM (at least on the PS4), games will continue to want that extra VRAM. -
Many unknown variables here HTWingNut.
Would you notice it if you are running a graphics setting that your GPU can do 60FPS on? What would happen if you ran a setting the GPU had problems holding 60FPS at?
Would MSAA or AA strain the DDR3 more than the GDDR5, hence you would feel the lack of VRAM more here?
Was your session long enough for the DDR3 RAM usage to kick in?
Could there be any other negative aspects of DDR3 instead of GDDR5 like lag and stuttering? -
Robbo99999 Notebook Prophet
I would imagine a GPU having to use system RAM will slow it down a lot, depending on how often it has to access the data there. I guess it's like temporarily reducing the bandwidth of your GPU memory from something like 90 GB/s down to the bandwidth of the system RAM (about 16-21 GB/s), which is a reduction of roughly 80%. I think that would really reduce fps if the GPU has to wait for data from system RAM, and I think it would create a stutter during each access.
I reckon the greater the proportion of data sitting in system RAM compared to the size of your VRAM, the more pronounced the stuttering will be. So if you had a 3GB card and only 200MB of data in shared system RAM, it wouldn't stutter much compared to having, say, 1GB in shared system RAM. Just a theory.
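Here's a rough toy model of that idea in Python - the 90 GB/s and ~20 GB/s figures are just the ballpark numbers above, the 200MB vs 1GB split mirrors my example, and it's a big simplification rather than how the driver actually schedules memory:

```python
# Toy model: effective memory bandwidth when some fraction of the data the GPU
# touches each frame lives in system RAM instead of VRAM. Ballpark numbers only.
VRAM_BW = 90.0     # GB/s, rough mobile GPU figure from above
SYSRAM_BW = 20.0   # GB/s, dual-channel DDR3 ballpark (ignores the PCIe hop)

def effective_bandwidth(frac_in_sysram):
    # Average the *time per byte*, not the bandwidths themselves
    time_per_gb = (1 - frac_in_sysram) / VRAM_BW + frac_in_sysram / SYSRAM_BW
    return 1.0 / time_per_gb

# e.g. 200MB vs 1GB spilled out of a ~3GB working set
for spilled_gb, total_gb in ((0.2, 3.0), (1.0, 3.0)):
    frac = spilled_gb / total_gb
    print(f"{spilled_gb:.1f}GB of {total_gb:.0f}GB in system RAM -> "
          f"~{effective_bandwidth(frac):.0f} GB/s effective")
```
-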
It was the same map, and about the same amount of time on the map (one round is roughly 10 minutes).
Settings were identical (all high, with 'insane' textures).
The AA would just kill bandwidth on the 128-bit vRAM bus, so it wouldn't be a fair comparison. -
I'm frequently 40-60FPS. I'm starting to think there's something wrong with my stock 680m. Or are those results after overclocking? -
Robbo99999 Notebook Prophet
I'm getting a constant 60fps with very occasional dips to the mid 50's, with everything maxed out on Insane textures bar 4xMSAA, along with Ambient Occlusion forced through the NVidia Control Panel. I have a 3GB card though, and it is overclocked by a lot, so it's a good chunk faster than a stock 680M and a little slower than a 780M. Your card is a 2GB card, so you could try reducing your textures to 'High' rather than 'Insane' or 'Very High' ('Very High' uses 2.5GB of VRAM on my system).
Also, it uses a boat load of system RAM too on Insane textures - 4GB on my system, and you've only got 4GB total system RAM, so you might be doing some swapping to the hard drive with the page file even. I'm pretty sure that with your RAM & VRAM limitations that you'll need to turn the textures down, probably down to 'High'.
(My screen native resolution is only 1600x900, so that helps to increase the framerate, by close to 30% when comparing against your 1920x1080 screen) -
It makes no difference how much VRAM you have, because it will use most of it regardless. It's probably caching textures for different maps just to make you think it's doing a lot. There is nothing about this game that should use anything close to 3GB VRAM.
-
This kind of stuff is gonna start an unnecessary VRAM race. By this logic the 780 Ti with 3GB of VRAM is not enough for this game. They simply wouldn't do that; they are not Crytek. It is good practice to use every bit of available VRAM. In fact, I have long argued that not doing so is bad practice, but it is not uncommon for games not to do this, as you well know. This allows the performance of the game to scale better.
If you dump more of the resources into the vram it allows more things to be on-demand, and allows for more consistent performance without constantly having to dump from the main memory to the vram.
Of course, there is a ceiling on how much Titanfall uses, as there are only so many assets to throw into the vram, and there is a floor on how much vram it needs to avoid a serious performance drop. But keep in mind: just because the game uses it doesn't mean it needs it. -
Robbo99999 Notebook Prophet
Titanfall recommends a 3GB card for Insane textures though, so I'm sure the 780 Ti with 3GB VRAM would be fine with it.
-
There doesn't exist any method of monitoring what is being cached in the VRAM versus what is being actively used. Hence there is no way of knowing that 2GB is enough.
The graph presented in the link in the first post is clear and shows that 4GB seems to be required for Titanfall with Insane textures and 4xMSAA.
Like Robbo explains, the 2GB GTX 770 @ 1088MHz gets 52.2FPS while the 4GB GTX 680 @ 1059MHz gets 66.8FPS. At those clocks they should get similar FPS since it's the exact same chip, but the 4GB GTX 680 gets about 28% more FPS.
Sounds a bit extreme, but that's what their test shows.
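Quick sanity check on that percentage (the FPS figures are the ones from the PCGH chart linked in post #1):

```python
# GTX 770 2GB @ 1088MHz vs GTX 680 4GB @ 1059MHz, FPS taken from the PCGH chart
fps_770_2gb = 52.2
fps_680_4gb = 66.8

gain = (fps_680_4gb / fps_770_2gb - 1) * 100
print(f"The 4GB GTX 680 is {gain:.1f}% faster")  # ~28.0%
```
-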
Perhaps the blurred visuals inside the Titan are another factor that balances them against the pilots; if it were crystal clear, I bet the pilots would struggle even more taking on a Titan.
-
Say I set a game to some insane texture setting. The game will likely load all of the needed assets for the current level into main memory. Depending on the available vram, the game will send a chunk of the textures into the vram. Say there is 1GB of vram available on the card, but there is a total of 3GB of loaded textures for the current level ready to go in main memory. The game will send, say, 800MB of textures to the card's vram, with whatever is currently being rendered taking priority over the rest. (I'm being really whimsical with these numbers.) This is not everything within the distance your game renders but what is actually on the frames being displayed, as that is how most well-optimized games work. Say you are facing one way: only the textures that are visible are loaded into the vram, as stuff behind walls or out of your viewing area is not rendered. Most modern games have so many brushes and meshes that if half of one object is visible, chances are you are not actually rendering the whole object's textures.
Off topic, but this is why there is no universal solution to wallhacking in modern games at the rendering layer - it used to be that nearby players were rendered no matter what, but that is no longer the case. The game itself must be forced to render that player's model anyway. With older games, you could intercept the way textures were rendered, making universal undetectable wallhacks for any game that renders the player models even if they are not on screen. This is becoming less common though. If you want to see what I mean and how games render things, mess with a program called 3D Analyze, which is an optimizer that lets you change the way DirectX 8.1/9 and OpenGL games render, allowing legacy hardware to be more compatible.
So back to rendering: only the visible objects are rendered, using that partial texture dump in your vram. The problem is, if the screen drastically changes - you turn your camera around, for example - the set of needed textures potentially changes drastically, given that the newly shown objects don't use the same ones. The game engine dumps some or all of the textures in the vram, then transfers the new textures from main memory. While this is really quick, without wiggle room to have those textures already there, you may experience stuttering. The reason for the fps dip is that this transfer of assets must occur before a single frame with the new assets can be rendered.
This also has to do with large areas causing low fps.
Taking factors like lighting and particle effects out of it, the gpu doesn't actually struggle to render everything. But since so many assets are needed on the screen at once, without adequate vram both on the card and allocated by the game, the transfers must happen several times to render each frame: dump a texture here because it needs a new one there, and so on.
It could even get to the point that it must make several transfers before rendering a single frame if the objects on screen require more memory than is available, causing a partial render - transfer - final render effect, multiplying the time a frame takes to render. This gets really messy with lighting as well.
You are probably wondering what kind of game would need so much space for textures. Remember I'm pulling those memory numbers out of my butt, because I'm not accounting for other things that use the vram. Textures can only use what is left after other functions and effects; lighting effects are a big offender. The partially rendered frame is in the vram too, obviously, along with the meshes, etc. etc. The odd one out is the textures though - if you don't have enough vram for the basics, it will likely just crash. This is why you get a lot of instances where you will get 60+ FPS on a card but annoying stuttering every now and then.
The better cards not only have more buffer room for the textures, possibly allowing the game to load everything it will need for a scene or level at once, but a better card will also be able to transfer to and from the vram much, much faster, so overflowing the vram is less consequential to frame render time, adding to stability. So the amount of vram needed because of the textures is not a hard necessity. This is important.
To achieve 60 frames a second without stuttering, each frame must be rendered in 16 and 2/3 milliseconds or less. If, say, some frames are rendered slower but others are faster, still averaging out to 60fps, it will feel choppy or uneven.
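To put numbers on that - these frame times are made up for illustration, not measured from Titanfall:

```python
# Two runs with roughly the same average FPS: one steady, one with a few hitches.
budget_ms = 1000 / 60                  # 16.67 ms per frame for 60 FPS

steady = [16.7] * 60                   # every frame right on budget
spiky = [14.0] * 55 + [45.0] * 5       # mostly fast frames plus a few big hitches

for name, frames in (("steady", steady), ("spiky", spiky)):
    avg_fps = 1000 / (sum(frames) / len(frames))
    worst = max(frames)
    print(f"{name}: avg {avg_fps:.0f} FPS, worst frame {worst:.0f} ms "
          f"({worst / budget_ms:.1f}x the 60 FPS budget)")
```

Both land at roughly 60 FPS on average, but the second one feels choppy because a handful of frames take nearly three times the budget.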
Some games actually won't use more than a set amount of vram, despite actually needing more with all of the extra effects on and maximum textures. I have no idea why this is, it just is, and it is pretty common as far as I've noticed. So the game pretty much puts a ceiling on its own stability and frame render time.
The only reason a much newer GPU will shred an older game like this, one that doesn't allow full utilization of the vram for the assets it needs, is simply the advances in the transfer rate in and out of the vram. If my memory serves me right, the 780 Ti has a memory clock rate of 7000MHz. The GTX 680 had a memory clock rate of 6008MHz (or 6080MHz?) and so on. Not long ago most mobile cards had a memory clock rate of around or below 1500MHz, but some are much higher now. The 780M for example is clocked at 5000MHz, the 680M at 3600MHz, the 580M at 1500MHz, and so on. The 880M is - wait for it - still clocked at 5000MHz. -.-
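Those clocks are only half the picture, since bandwidth also depends on the memory bus width. Assuming the usual bus widths for these cards (384-bit for the 780 Ti, 256-bit for the rest - the widths are my assumption, not something stated in this thread), the rough math works out like this:

```python
# Peak memory bandwidth ~= effective memory clock (MT/s) * bus width (bits) / 8
# Bus widths below are assumed typical values for these cards.
cards = {
    "GTX 780 Ti": (7000, 384),
    "GTX 680":    (6008, 256),
    "GTX 780M":   (5000, 256),
    "GTX 880M":   (5000, 256),
    "GTX 680M":   (3600, 256),
}

for name, (mem_clock_mts, bus_bits) in cards.items():
    gb_per_s = mem_clock_mts * 1e6 * bus_bits / 8 / 1e9
    print(f"{name}: ~{gb_per_s:.0f} GB/s")
```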
The improvements being made in memory clock rate are huge. Assuming there are no other bottlenecks (there are, and they advance at their own rate), the GPU's memory clock rate allows it to perform well, despite a lack of vram or the game not actually utilizing it, simply because the transfers are so fast, drastically cutting the render time of frames that need multiple swaps to and from the vram.
So to sum up, games sometimes will not actually take up enough vram to accommodate the maximum settings. However, if the game or the card does not have enough vram for everything, it does not necessarily mean the card won't handle it just fine, as long as it can make the transfers and render the frame in under 16 and 2/3 milliseconds. -
-
Robbo99999 Notebook Prophet
syntaxvgm, thanks for all that explanation, it sounds like you know a lot of the technical background of how this works, and I understood some of what you were talking about there. I certainly understood your last paragraph. So you are saying that if the card has a high enough memory bandwidth, then it can make up for not having 'enough' VRAM by being able to quickly swap required textures in from system RAM. Other factors would come into play here as well though, right, like how fast your system RAM is, e.g. 1333MHz vs 2133MHz RAM? Wouldn't the system RAM be the bottleneck in this scenario though - 18GB/s vs say 100GB/s for a GPU - which makes me think that GPU VRAM speed would not help in this transfer process? And I guess a card's ability to swap textures into and out of VRAM quickly is only going to compensate for a certain amount of 'missing' VRAM, which is the "floor" that you mentioned in one of your earlier posts. Do you know if this 'compensation effect' is significant with some cards - how much can they get away with in terms of lacking VRAM, e.g. lacking 30% of the 'required' VRAM yet still running without any loss in fps, or is there always a certain amount of fps loss? (This is probably a difficult question to get an answer to, thinking about it!)
-
Robbo99999 Notebook Prophet
The 880M's 8GB of VRAM is kind of funny! Might be required for "Titanfall: The Revenge!"
-
I think compared to the Infamous games, Titanfall's graphics have nothing to brag about. But I still love the graphics of this FPS, especially when you have to get out of the Titan and jump up into the sky. The scenery looks gorgeous. Probably the most addictive FPS I've ever seen. I
-
While I did just say this is more common than you would think, so now I might be contradicting myself, it is not a common issue to run into, therefore there is really no benchmark for this to speak of, nor a set percentage of missing vram you can get away with. This problem normally occurs if the game was developed improperly, or - the more common cause - a simple lack of vram on the card, i.e. you are stretching to run a game on older hardware.
I'll give some other numbers though.
I don't know how much the main memory bottlenecks it; honestly there is a limit to what I know. What I can attest to is the transfer rate.
As you may know, there are recent high end GPUs, namely the GTX 700 series, that are just barely bottlenecked by PCI Express 2.0. I don't think this was true of the 600 series, with maybe the exception of the 690, which I know was bottlenecked, but that is a dual gpu card. PCI Express 2.0 transfers at 500MB/s per lane, meaning x16 is 8GB/s. So knowing that some cards are now bottlenecked by that, it means that with common high performance gaming hardware and a top of the line GPU, you can achieve at least 8GB/s transfers between the card and main memory, given all the bottlenecks.
1600 MHz non-ECC DDR3 memory transfers at 12.8 GB/s in single channel, 25.6 GB/s in dual channel, meaning the memory is far from holding this transfer rate back.
Now, when I say 8GB/s, I mean the top tier GPUs or multi-gpu cards are bottlenecked by PCI Express 2.0, and only just barely at the moment. Now you see why system memory doesn't bottleneck this currently, and why memory clock rate has such a small effect on game performance overall, with most gains not related to the GPU.
Now at 8 GB/s (keep in mind, that is really high end and really favorable conditions), in addition to rendering the frame, there is probably enough time to bring 100+ MB of new data into the vram. In reality this number would be lower in most circumstances, and it is mixed in with a ton of other traffic, but that is still a lot of new data in enough time to render at 60 FPS. If you are able to pull texture data in that quickly, you can overcome not having enough vram in some cases quite nicely.
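Put another way, using the 8 GB/s PCIe 2.0 x16 figure above and a 60 FPS target (best case, ignoring everything else fighting for the bus):

```python
# How much data could cross PCIe 2.0 x16 during a single 60 FPS frame,
# assuming the whole 8 GB/s were free for texture uploads (it isn't).
pcie2_x16_mb_per_s = 16 * 500      # 16 lanes * 500 MB/s
frame_time_s = 1 / 60

print(f"~{pcie2_x16_mb_per_s * frame_time_s:.0f} MB of uploads per 60 FPS frame")  # ~133 MB
```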
When I speak of this "Compensation effect", I don't really mean it compensates, but rather brute forces it in a way.
I originally discovered this quite a bit ago while I was running an older modded game on a great GPU, but still had random FPS dips - dips that didn't show up in the counter. I felt the dip but the counter didn't show it; it stayed at 150-200 fps or more. (Might have been CoD2, I don't remember.) I had a great 512MB GPU that shredded the game since it was an older title, and I had most of the settings maxed, as I didn't like all of them. On a couple of the HUGE maps, mainly on mirrored sniper-map servers, even if the map was optimized to only render what was shown (that was done manually by the mapmaker in COD, called portaling or something), which it likely wasn't, you pretty much had to render half of the map at once, and since the map was mirrored, you were using almost every texture included. These custom maps got a bit crazy with the amount of textures for a 2005 game. I discovered that if I had small things like soften smoke edges on together with max textures, the dips occurred. One or the other was fine for most maps, so I usually turned something off.
You would think that was normal, but when I upgraded to a 1GB card and it still happened, I decided to look into it. I eventually discovered the game had a hard limit on the amount of vram it would use, since it was not dynamically assigned. With the stock levels, the developers had a certain amount they allowed it to use for some reason. But with community made maps, especially large and poorly optimized ones, it ran out of the allocated vram. With my new 1GB GPU (this was 2010? CoD2 was from 2005, unless it was COD1 this happened in, idk), the dips still happened, but the frame render time only rose enough that, if every frame took that long, it would still average 70fps or so, which means I wouldn't notice it normally. Meaning that since it rendered so fast and was able to transfer stuff to the vram so fast, it overcame the problem and kept every frame under the 16 and 2/3 millisecond frame time needed for 60 FPS. It was still only using less than 300MB of vram though. So I went to a forum where I had previously gotten help with game development on UE3 and Unity and the like, and asked about the fact it only used 300MB and about the frame render time spikes on single frames, and they explained that not all games will actually expand based on their needs, but rather use a preset optimized level, which makes the problem common among games with mods, and that the single frame render spikes happened every time it needed new textures to render that frame.
Obviously the same applies to games that need more memory than your GPU has. Your gpu will need to pull the new textures in before it can render a single frame of the new area, causing each frame that needs new textures to take much longer to render. However, the faster it is able to do this, the smoother it is. The problem is that, commonly, despite getting 60 FPS or more on average, the frames that have to be rendered with the new resources take longer than 16 and 2/3 milliseconds - likely 2 or 3 times that - but only for one single frame. That leads to the unstable feeling no matter what the average fps is, if it can't get under the target time, and likely leads to that "It shows a constant 60+ fps, why does it feel choppy?" question.
So what I'm trying to say is that for a card on the border of the vram required, or not being assigned enough, this effect is so sudden and quick that it does not actually affect your FPS in most cases. The frames where it needs to pull new resources will always be slower to render though, causing that feeling of instability if it is too slow, sometimes without even showing up in your FPS ticker. If there is an extreme lack of vram, it will cause constant swapping, meaning the frames will be consistently uneven in the time they take to render.
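A concrete example of why those swap frames blow the budget - all of these numbers (render time, texture sizes) are invented for illustration:

```python
# Hypothetical frame: normally renders in 12 ms, but sometimes has to wait for
# new textures to come over PCIe 2.0 x16 (~8 MB per millisecond) first.
render_ms = 12.0
budget_ms = 1000 / 60              # 16.67 ms for 60 FPS
pcie_mb_per_ms = 8.0

for new_textures_mb in (0, 30, 150):
    frame_ms = render_ms + new_textures_mb / pcie_mb_per_ms
    verdict = "fine" if frame_ms <= budget_ms else "visible hitch"
    print(f"{new_textures_mb:3d} MB of new textures -> {frame_ms:5.1f} ms frame ({verdict})")
```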
How much vram can the card go without? In most cases not much if you are on max textures, unless the GPU is years ahead of the game, in which case it's likely its vram exceeds the need anyway. However, if you have the textures set to medium or low, even with newer games they will only be so detailed, and cards now are so fast that even without the memory for the sheer amount of textures in the local area of the level, your card can grab new textures before it finishes a frame, as it's likely only a few megabytes at a time. In that case, your card can get away with much more, percentage wise, I guess. Also keep in mind that despite being able to pull such large chunks of data for a single frame, a good chunk of that is the basics - textures aren't the only thing happening, tons of other data is also received at the same time. Generally, textures are one of the few things that are meant to be left in the vram for long periods of time, as everything else is constantly changing. -
Robbo99999 Notebook Prophet
Thanks syntaxvgm, that was very well explained, thanks for taking the time to do that, it made sense. Yep, so with a powerful GPU it's possible to get a high average framerate, but have severe frame time spikes causing stutters as the GPU waits for textures to stream in over the PCIe bus, which is the bottleneck at 8GB/s. Kind of goes hand in hand with what I always thought: not enough VRAM means stuttering. I wasn't so aware of that being possible at high frame rates though, so cheers for that, and I wasn't aware of the fact that some games limit themselves in how much VRAM they can use, so that stuttering can be down to the game engine as well as not having enough VRAM.
-
But in the end this doesn't seem to be the case with Titanfall. It runs without stutter whether with 2GB or 8GB of vRAM at the same settings.
Beamed from my G2 Tricorder -
Robbo99999 Notebook Prophet
EDIT: I re-read the article in Post #1. They don't talk about stutter (don't even mention it), just a severely reduced frame rate with 2GB cards, so much so that a 2GB 770 only performs marginally better than a 2GB 660, where in reality it should perform massively better (also seen when comparing the 4GB 680 vs the 2GB 770). Maybe they've somehow designed the game to spread the textures it requires 'evenly' throughout RAM/VRAM so that it's not place specific on the map. That way you could assume that for 2GB cards, where say 1GB (of a 3GB VRAM requirement) is stored in system RAM, you always have to fetch the same amount of data from RAM per frame, because it's not place specific in the map and not dependent on where you're looking at any given moment. I could imagine that scenario resulting in a decreased frame rate, but without any spikes or stutters as you move through the map. Hmm, finding this hard to explain, might sound like word spaghetti! Still, all just theories!
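A quick toy version of that theory: if every frame has to pull a fixed slice of texture data from system RAM (instead of the occasional big hitch), you'd expect a uniformly lower frame rate rather than stutter. The per-frame amount below is invented, not taken from the PCGH test:

```python
# Toy model of the 'evenly spread textures' theory: a constant per-frame cost
# for fetching overflowed texture data over PCIe, instead of occasional hitches.
base_frame_ms = 1000 / 60        # the GPU alone could hold 60 FPS
per_frame_fetch_mb = 35          # overflow data touched every frame (made up)
pcie_mb_per_ms = 8.0             # ~8 GB/s over PCIe 2.0 x16

frame_ms = base_frame_ms + per_frame_fetch_mb / pcie_mb_per_ms
print(f"~{1000 / frame_ms:.0f} FPS, every frame equally slow: lower FPS but no stutter")
```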
-
Robbo99999 Notebook Prophet
On a related note with Titanfall, but a slight tangent: I've noticed that I had to dial back my overclock one notch for this game - it wasn't stable, crashing every few hours. Strange, because my other games are stable at my max overclock, and Titanfall doesn't stress the GPU at a consistent 100% due to being locked at 60fps - maybe it's the large fluctuation in GPU load, from low to quite high, throughout the game that makes it more sensitive to overclocks. Either way, I just put this comment here in case you're finding that you're getting crashes with Titanfall.
Titanfall - 4GB VRAM required
Discussion in 'Gaming (Software and Graphics Cards)' started by Cloudfire, Mar 24, 2014.