I'm using Win7 64-bit with 8 GB of RAM and game a lot. What are good MB values to enter for the initial and maximum size?
-
-
The OS picks the best virtual memory settings for your system by default. I wouldn't touch it. If your computer is slow, it's the hardware (processor or GPU) that's the cause, not the virtual memory.
-
I use System Managed Size. Assuming you aren't nearly out of hard drive space, that will likely work as well as trying to set it manually.
If you do try setting it manually, you risk setting it too low to handle the odd peak in memory usage. That is, if you set it to something low like 512 MB and one day you run three times as many things as usual, you may run out of virtual memory, and things will grind to a halt as your programs suddenly can't get the memory they need. If you follow the old "two times your physical memory" recommendation and set it to 16 GB, you're just wasting space: by the time you hit 14 GB or so of virtual memory in use (including physical), your computer will be slower than molasses in January because it will be reading virtual memory from the hard drive so often.
System managed tends to be the best option, as it will keep the page file reasonably small most of the time, but if there is the odd day you need more for whatever reason, it will grow it. Things may be a bit slow when you have high virtual memory usage, but at least programs won't crash due to not being able to get memory (which can happen if you don't have enough virtual memory).
Having played around with this some over the years, the only time I'd set it manually nowadays is if I were quite low on disk space and wanted to make sure I'd always have a certain amount free so that, for example, programs could save files properly and not fail to save because all the remaining free space had been dedicated to virtual memory. But at that point, it's really time to be buying a larger hard drive.
JamesBI isn't correct in saying that it's always the processor or GPU that's the cause of slowdowns, however. An inadequate amount of memory is another common cause, and there are others as well. I routinely run into slowdowns on my work computer due to not really having enough memory; the solution to that is more physical, not virtual, memory. I doubt you'll run into problems due to memory amounts with 8 GB if gaming is your primary use of the machine, however. -
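The "grind to a halt" scenario above can be sketched with some rough numbers. This is a toy model of my own, not Windows' actual memory accounting: it just treats the commit limit as roughly physical RAM plus page file size, and fails any allocation that would push total committed memory past it.

```python
# Toy model: an allocation succeeds only while total committed memory
# stays under physical RAM + page file size (a rough stand-in for the
# Windows commit limit; real accounting is more involved).

def can_commit(request_mb, committed_mb, ram_mb, pagefile_mb):
    """Return True if a new request fits under the commit limit."""
    return committed_mb + request_mb <= ram_mb + pagefile_mb

ram = 8192          # 8 GB of physical RAM, as in the original question
small_fixed = 512   # a too-small fixed page file
generous = 16384    # the old "2x your RAM" rule

# A normal day: ~6 GB committed, a program asks for 2 GB more.
print(can_commit(2048, 6144, ram, small_fixed))   # True: it fits
# A busy day: ~9 GB already committed, spilling into the page file.
print(can_commit(2048, 9216, ram, small_fixed))   # False: out of virtual memory
print(can_commit(2048, 9216, ram, generous))      # True, but paging heavily
```

A system-managed page file sidesteps both failure modes by growing the limit on the busy day without permanently reserving 16 GB of disk.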
Leave it at system managed; there is a lot more to virtual memory, memory allocation, etc. than meets the eye in Windows. There's a blog series on TechNet called Pushing the Limits of Windows by Mark Russinovich that covers how memory works in Windows. It's a bit of a long read (for a blog), but it's worth reading.
-
The question I'd ask is whether you have an HDD or an SSD.
If it's the former, follow the advice given above and enjoy.
With an SSD, I'd add more physical RAM and kill the paging file altogether. -
I personally just set it to a specific size, depending on how much physical RAM the system has, and leave it alone: 4 GB gets a 4 GB page file, 8 GB gets a 1 GB page file (or 2 GB), and with more than 12 GB I'll disable it or cap it at 1 GB. On my own desktop and laptop (16 GB and 8 GB respectively) I have it disabled to save space on their 128 GB and 256 GB SSDs, but they have workloads that I know won't exceed the physical RAM limits there.
But from what I've read, Windows will create a paging file anyway in some odd cases, regardless of what you set it to. If that happens to me, so be it. -
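For what it's worth, that rule of thumb can be written down as a small helper. This is just my sketch of the scheme described above; the cutoffs and the 1 GB vs 2 GB choice are one poster's personal preference, not any official Microsoft guideline.

```python
def pagefile_mb(ram_gb, ssd_space_is_tight=False):
    """Fixed page file size in MB, following the forum rule of thumb:
    4 GB RAM -> 4 GB page file, 8 GB -> 1 GB (sometimes 2),
    more than 12 GB -> disabled or capped at 1 GB.
    One poster's personal scheme, not an official guideline."""
    if ram_gb > 12:
        return 0 if ssd_space_is_tight else 1024  # disable, or 1 GB max
    if ram_gb >= 8:
        return 1024        # 1 GB at 8 GB of RAM
    return ram_gb * 1024   # match physical RAM at 4 GB and below

print(pagefile_mb(4))                            # 4096
print(pagefile_mb(8))                            # 1024
print(pagefile_mb(16, ssd_space_is_tight=True))  # 0
```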
I have set it at 6142 MB for both the initial and maximum size, and I'm using an SSD. I've read that this is the most stable setting for me.
-
Better to reduce it to 1-2 GB than to kill it altogether. -
-
So with a 20 GB page file, did the simulation ever end, or could you ever [ESCAPE] from it?
Edit: the debate about having a page file on an SSD has been around for a while. Yes, having one will write to the SSD and thereby cause wear. I've had mine set to none for well over 4 years on 2 different systems. This is not for everyone, though.
My simple rule of thumb is: do not play with it. If you have to, at least set it to 1 GB. If you think your knowledge of any prevailing issues is good enough, then you could disable it.
Even if it seems unrelated, whenever you ask for help on any issue, disclose this and any other system customizations. You would be surprised how often a seemingly unrelated setting can cause a problem. -
I cancelled it as soon as I saw what was happening. Anyway, even with the page file on an SSD, it slowed down to a crawl, so there was no point in letting it run. I have the page file on the SSDs in my M6700, and they are still reporting 100% "Health" after close to two years (it will be two in October). I have about 1 TB and 1.35 TB written on each drive respectively.
Back to the virtual memory question: I just saw a post on the Ars Technica forums, in a thread about memory management in Windows 7, where one person noted that the memory management chapter of the Windows Internals book is 200 pages long, so memory management in Windows is pretty complex. I used to want to tweak everything when I was younger, but I figure the guys at MS have probably already tuned the page file settings to something appropriate. Maybe not the optimal settings for a given workload, but close enough, imo.
-
Without the page file, my 480 GB Mushkin is at 100% good with 20.66 TB read and 11.35 TB written. I hear that when they go, though, it's a rapid descent. This drive has been in use since September 2012, with 16,142 hours on it. Originally I was going to use it in the P79xx from Gateway, but it ended up in this machine.
-
-
Thanks to a member of the Ars Technica forum, I got to read this piece. It's a pretty decent explanation of paging and virtual memory, and much more accessible than Mark Russinovich's blog posts over at TechNet.
The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile? | Azius Blog -
One speedup he does not address is on the HDD side. Defraggers such as Disk Perfect will move the page file to the middle of the platters on a disk. If you tend to fill the HDD, this can either push data writes to the end of the platter, beyond the page file, or cause fragmentation of the written files or free space. You can optimize a disk with partitions to avoid this issue.
With an SSD, it's a matter of free space and/or unneeded writes establishing and maintaining the page file. Since an SSD's access and read/write speeds are consistent across all sectors, page file location is irrelevant. With proper firmware over-provisioning and wear leveling, the stagnant writes to the page file area should not cause uneven NAND wear. -
Sometimes what actually happens is that a program will fill the page file with cache before using RAM. Sony Vegas, for example, does that.
And if you're lucky with the free space you have left on the disk, the page-file will increase, then decrease, then increase again, etc., every time you make any changes to a project. And that takes a very long time.
So generally speaking, since as people mention some programs require a page file, the approach is to set it to some fixed size. That's going to save you issues with massive loading times and random lockups and hangs (never mind the constant fragmentation issues).
But if you have enough RAM and don't use specific programs that have these extremely curious ways of dealing with memory mapping, you could just as well turn it off.
I mean, I haven't read the "second coming" treatises on why page files aren't really all that hot after all. But there is no function in a system with 4 GB+ of RAM that should rely on virtual addressing. -
My SSD is from late 2011, has 11,965 hours on it, 10.44 TB of writes (and 2.01 TB of reads), and about 95% health according to Intel's estimate. That's with a page file. The write:read ratio is a lot higher than on either TANWare's or tijo's drives, quite possibly due to the page file. Since I almost never hit 100% RAM usage anyway, perhaps I'll switch my page file over to my Toshiba HDD for a bit and see how much that affects the ratio in 6 months or so.
Finances allowing, I do agree that having a small (or auto-managed) page file on the HDD and plenty of physical memory really doesn't have any downside.
You probably meant relying on the page file for virtual memory, as opposed to being able to fit everything into physical RAM. And while it's still uncommon for a single one of the average user's programs to use 4 GB+ of RAM (although in industrial uses, that's becoming less true every day), it's not so uncommon for the sum of all the programs an average user is running to require more than 4 GB of RAM. -
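The "sum of all programs" point is easy to see with some rough arithmetic. The per-program figures below are my own illustrative guesses, not measurements from any real system:

```python
# Made-up but plausible memory figures for a typical multitasking
# session, just to show the total passing 4 GB even though no single
# program comes close to it on its own.
apps_mb = {
    "game": 2048,
    "browser with many tabs": 1536,
    "voice chat": 256,
    "antivirus + OS services": 1024,
}

total = sum(apps_mb.values())
print(total)         # 4864 MB in total
print(total > 4096)  # True: over 4 GB, though no single app is
```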
Also, Apollo is spot on about the difference between virtual addressing vs actually using the page file. -
Yeah, bad (incorrect) choice of words. All memory addressing is of course virtual outside of assembly... and not always direct even then.
...What I'm really talking about is "demand paging", I think? The specific Windows implementation is some sort of artful compromise that manages to get the worst disadvantages of both having a page cache and using demand paging.
Anyway, I think that what happens when some programs fail to run without a page file is that the DLL functions have to be copied to a memory area the program has ownership over: the program isn't allowed to duplicate them in RAM directly, only via the virtual memory manager. ...It's a very curious way to do things, imo. -
To follow up on the read:write ratios on my SSDs, given that I have a paging file on them:
Intel 520 (windows 7 drive): 1.80 TB : 983 GB, power on hours: 2013
Mushkin Chronos Deluxe (Win8 drive): 1.88 TB : 1.39 TB, power on hours: 2555
I tend to use Windows 8 more than Windows 7 on that laptop by the way. -
While my R/W totals are much higher, so are my hours: 16,523 right now.
Virtual memory question
Discussion in 'Windows OS and Software' started by KillWonder, Jul 13, 2014.