Someone on these forums once told me that having free space on an SSD is not the same as having dedicated reserved space for over-provisioning (OP).
Can someone please explain the logic behind this? When I tell people about it, they always tell me the opposite; some of them say they don't even OP because they never fill their drives.
I would like to prove them wrong with some hard evidence / logic.
Thanks.
-
Spartan@HIDevolution Company Representative
-
Well, if you have free space, then it is really like a dynamic OP. The SSD will use all available cells, so whether you OP or not, it will have whatever free space is available to work with. If you OP, it just ensures that much space is "clean" for garbage collection, wear-leveling (moving blocks for even wear), and marking bad blocks. But if your drive is typically only 20-30% full (e.g. 50-75GB of a 256GB drive), it really won't make a whole lot of difference unless you do a lot of frequent writes and erases of data.
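To put rough numbers on that "dynamic OP" idea, here is a small Python sketch. The factory-reserve figure is just the usual GiB-vs-GB gap most consumer drives keep back (roughly 7%); treat it as an assumption, not a spec for any particular model:

```python
# Back-of-envelope: spare area the controller can draw on, assuming TRIM
# has run so the OS's free space is actually known-empty to the drive.
advertised_gb = 256.0
factory_op_gb = 256 * (2**30 / 10**9) - 256   # GiB/GB gap, ~18.9 GB (~7%)
user_data_gb = 60.0                           # drive about 23% full
dynamic_op_gb = advertised_gb - user_data_gb  # TRIMmed free space
spare_gb = factory_op_gb + dynamic_op_gb
print(f"Spare area: ~{spare_gb:.0f} GB ({spare_gb / advertised_gb:.0%} of advertised)")
```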
-
Spartan@HIDevolution Company Representative
So in that case my 30% OP is overkill, I guess. -
I am by no means an authority on the matter, but that is my understanding and experience.
If you're only using 30% of the space, then 30% OP won't hurt anything either. -
Tilleroftheearth was one of the guys who looked into this.
From memory, I don't think this distinction is an issue anymore with modern SSDs.
Basically it came down to how the controller abstracts the available free space as scratch space. Some controllers don't differentiate between NAND that has been left unallocated and NAND that holds a filesystem; however, some (like Samsung's) supposedly do care.
It is difficult to test, because the biggest benefit of OP arises with random write workloads that by definition span the whole drive - so it's hard to test only a small portion of a fully partitioned space. -
To the best of my knowledge, there is no functional difference between unused capacity and overprovisioned capacity for an SSD. I would guess that the recommendation to overprovision SSDs (as opposed to simply leaving a certain percentage of the capacity unused) is so the user cannot accidentally "overfill" the drive.
-
Personally I only OP if I know I'm going to fill the drive near capacity or have a task where I'll be doing a lot of writes. Even then, letting the drive idle for an hour or so brings the speed back. -
The free space becomes dynamic OP after a TRIM operation. So, in between TRIM operations, the free space is presumed to hold valid data.
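On Linux you can trigger that handoff yourself instead of waiting for the next scheduled TRIM. A minimal sketch (assumes the standard fstrim utility is installed and the script is run as root):

```python
import subprocess

# Ask every mounted, TRIM-capable filesystem to report its free space to
# the SSD now, so those blocks become "dynamic OP" immediately.
subprocess.run(["fstrim", "--all", "--verbose"], check=True)
```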
Understanding SSD over-provisioning | EDN -
In short:
No OP:
Blocks on the NAND get used - the user deletes content - the blocks stay dirty (1) unless the system cleans up the cells through garbage collection and returns the blocks to a clean state (0). If the system needs to use those unclean blocks later, it has to erase the contents of the blocks first, then write the new content. You get a performance hit.
OP:
Part of the NAND is reserved for OP = those blocks are kept in a completely clean state (0).
No performance hit when they are used.
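A toy model of those two block states, just to make the erase-before-write penalty concrete (the costs are made-up relative units, not real NAND timings):

```python
# Toy flash model: writing a clean block is cheap; writing over a dirty
# block forces an erase first (the "performance hit" described above).
WRITE_COST, ERASE_COST = 1, 10   # invented relative costs

def write_block(state: str) -> int:
    """Cost of writing one block given its current state."""
    return WRITE_COST if state == "clean" else ERASE_COST + WRITE_COST

# No OP and no recent GC: every block the write lands on is dirty.
no_op_total = sum(write_block("dirty") for _ in range(1000))

# With OP: the controller always has a pre-erased block to hand out and
# erases the swapped-out dirty one later, in the background.
with_op_total = sum(write_block("clean") for _ in range(1000))

print(f"1000 writes, all dirty blocks: {no_op_total} units")
print(f"1000 writes, clean OP blocks:  {with_op_total} units")
```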
------------------------------------------------------------------
You only get a performance hit under certain really heavy write scenarios anyway. Under a normal workload you should not feel any OP problems, unless you have a full drive and no OP.
The system's scheduled garbage collection (turning unclean blocks (1) into clean blocks (0)) should have no problem running regularly enough that the drive always has access to clean blocks during a normal workload.
With heavy usage, you will see different drives behave differently because the OEMs use different controllers and different firmware, and SSDs with more aggressive TRIM/garbage collection do better because they are quicker at giving your programs clean blocks. SSDs with less aggressive policies will be slower, because writes land on dirty blocks that must be erased and rewritten, and therefore you get a performance hit. -
Right. But unless garbage collection doesn't take place for days, most users won't see an issue. The SSD has so much idle time for the average user who just surfs, plays games, does MS Office stuff, even some light video/audio editing. Unless you are filling the drive to near capacity and the drive rarely idles, it won't be a problem.
-
Yep, you got it.
That's why you see Anandtech only doing OP/consistency tests at a queue depth (QD) of 32: a massive workload that is meaningless for the average user, but the only way to really test OP.
A normal workload will have no issues, because the TRIM command is issued regularly and garbage collection happens pretty often; plus pretty much all drives ship with 8%+ OP anyway, which should be more than enough to cope with workloads that go a little beyond normal.
It's one of those made-up benchmarks reviewers started running after SATA3 drives got so close in performance - they have to spice up the reviews with this sort of test to keep them interesting. -
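For anyone curious what such a test looks like mechanically, here is a heavily simplified Python sketch: random 4 KiB synchronous writes across a scratch file, with latency percentiles at the end. Real reviews use fio at QD32 against the raw device; this toy version (with a hypothetical testfile.bin) only illustrates the shape of the measurement, not review-grade numbers:

```python
import os, random, time

# Toy "write consistency" probe: random 4 KiB synchronous writes across a
# 1 GiB scratch file, recording per-write latency. POSIX only (O_SYNC).
PATH, SPAN, BLOCK = "testfile.bin", 1 << 30, 4096   # hypothetical scratch file
fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_SYNC)
os.ftruncate(fd, SPAN)
buf = os.urandom(BLOCK)
lat = []
for _ in range(5000):
    off = random.randrange(SPAN // BLOCK) * BLOCK
    t0 = time.perf_counter()
    os.pwrite(fd, buf, off)
    lat.append(time.perf_counter() - t0)
os.close(fd)
lat.sort()
print(f"p50 {lat[len(lat) // 2] * 1e3:.2f} ms, p99 {lat[int(len(lat) * 0.99)] * 1e3:.2f} ms")
```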
I wouldn't say it's "made up," as steady-state performance is important to heavy users or typical users with occasional heavy use scenarios. And reminders to avoid completely filling drives (either by overprovisioning or deliberately leaving free space) can be useful even to typical users who would otherwise wonder why their performance slowed down on an SSD that's nearly filled to capacity. While SSDs and hard drives use different technology, it's true for both of them that a full drive = poor performance.
-
tilleroftheearth Wisdom listens quietly...
OP'ing is not the same as free space, and the reason is simple. With HDDs it didn't matter whether a block had been written to before - if it is free now, it can be written to just as fast as if it had never been used.
With SSDs, the controller is always keeping track of all the NAND cells and their states. With a simple 'free space' mentality, the controller cannot keep fresh, free, ready-to-be-written cells available to the user (us) at all times. With OP'ing it can (because those cells can never be used for data - until they get 'traded' with previously used NAND that has been cleaned and made ready to be written to).
When the need to erase before writing to a previously written NAND cell is removed (either by firmware magic, different NAND topologies or other as-yet-unreleased SSD improvements), then free space will be equal to OP'ing. We're not there yet (by a long shot... have you seen the 15MB/s some SSDs hit when actually used as a premium storage subsystem? sigh).
This is somewhat related to the reason I still defrag my SSD-based systems monthly (after MS Tuesday updates) - SSDs may be the coolest thing since sliced bread with a laser sword, but that doesn't lessen the fact that if they (and the O/S) are doing multiple searches for a fragmented file, many, many times a day, they are not as fast as they can/should be. Even 15ms avg. access times add up for a fragmented file, and with many of my files having 800 or more fragments, the speedup is not as unexpected as it might seem at first.
Everyone here has 'proven' that we have more than enough write endurance on SSDs. That is why defragmenting them 12 times a year is not an issue for me (almost two years now), especially when I OP at least 30% on my notebook setups and 50% on my desktop systems.
See:
http://forum.notebookreview.com/sol...64-smartplacement-defragging-perfectdisk.html
All HDDs slow down because we use the fastest (outer) part of the platter(s) first. When they get full, they're using the slower inner part of the platter, which is still noticeable (even though on mobile HDDs the distance is only ~1/2 an inch, with the corresponding reduction in linear velocity).
SSDs slow down because only the O/S 'knows' which data is good and which is discarded - the SSD doesn't know until a TRIM command is issued and actually executed by the drive. Only at that point can a NAND cell be erased (and therefore be ready to be written to immediately). With free space alone, eventually all the NAND gets used and tracked as 'dirty' until there is enough idle time to execute the TRIM command. With OP'ing the same cleanup happens (at the controller's leisure - on some drives this can take hours or days), but thanks to the OP that was implemented, any write that has to happen can be executed immediately (no slowdown).
I've said it for many years now: an SSD's capacity should be considered to be what is actually left over after allowing free space for the O/S (Windows needs at least 25GB free, and I've lately standardized on 50GB to 100GB with the higher-capacity SSDs we now have available) and after allowing for OP'ing (30% is still the sweet spot for performance vs. capacity loss, ime).
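As a worked example of that budgeting (tiller's own rules of thumb applied to a 1TB drive - the ratios are his preference, not a standard):

```python
# tiller's budgeting on a 1TB drive: 30% left unpartitioned for OP, plus
# 50GB kept free for Windows. Both figures are his preference.
advertised_gb = 1000
op_gb = advertised_gb * 0.30        # unallocated at partition time
os_free_gb = 50                     # free space reserved for the O/S
usable_gb = advertised_gb - op_gb - os_free_gb
print(f"Usable for data: {usable_gb:.0f} of {advertised_gb} GB "
      f"({usable_gb / advertised_gb:.0%})")   # -> 650 of 1000 GB (65%)
```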
Sure, I may have effectively doubled the $/GB cost of my SSDs - but the difference is that for the last couple of years, SSDs have actually become usable and cost-effective, versus the mere novelty they were at 128GB capacities and less, back when I was first able to test/use them.
See:
http://forum.notebookreview.com/sol...707276-unallocated-space-ssd.html#post9068560
See:
http://forum.notebookreview.com/sol...marks-brands-news-advice-214.html#post9317068
Anyone who thinks free space is equivalent to OP'ing does not have a firm grip on how SSDs work - or on how SSDs, HDDs and the O/S interact, either.
Hope the above helps answer the question. -
defrag =/= TRIM or garbage collection. You're just moving data for the sake of moving it, or not even moving it at all. That tells me you don't quite understand how SSDs work. SSDs don't put data where Windows tells them to; SSDs put data where the controller needs it to be. That's the whole point behind controller-level defrag and garbage collection. Even when a manual Windows defrag shows a file at position AA, it may actually be at position ZQ, because that's where the wear-leveling protocol needs it. -
You can change the frequency of the "Optimize" (TRIM) schedule in Windows 8/8.1 to daily, weekly or monthly.
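You can also kick off the same retrim manually instead of waiting for the schedule. A minimal sketch (assumes Windows 8 or later and an elevated prompt; defrag's /L switch performs a retrim):

```python
import subprocess

# "defrag <volume> /L" issues a manual retrim - the same operation the
# scheduled Optimize task runs against an SSD. Requires admin rights.
subprocess.run(["defrag", "C:", "/L"], check=True)
```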
Optimize Drives Schedule - Change in Windows 8 -
Tinderbox (UK) BAKED BEAN KING
Never used OP on my SSDs; the toolbox doesn't even have the option to create one. 20TB of writes later and everything is working fine.
John. -
It seems to me that, based on tiller's post and others, there is a difference between free space and overprovisioning if and only if the SSD does not have the idle time to execute TRIM commands. So for extended constant-write scenarios, such as restoring a system image or heavy audio/video editing, an SSD with a set percentage overprovisioned would perform better than an identical SSD with the same percentage of free space, provided the continuous writing lasted long enough. We can label this a heavy usage scenario.
On the other hand, in lighter use scenarios there is sufficient idle time to run TRIM, and thus no performance difference between overprovisioning and free space. As such, there appears to be little or no difference between the two given a light or mixed workload, but a more noticeable difference favoring overprovisioning under a more constant heavy workload. -
-
tilleroftheearth Wisdom listens quietly...
You and many others are missing the point that TRIM and GC don't necessarily happen the moment we let the system idle - depending on the firmware (as I've stated already in this thread) it can take hours or days, and with really bad drives (controllers and models) it may never happen in our notebook's lifetime (Patriot garbage...).
Maybe even more to the point, at least for me: when I use my notebook, I finish my work and immediately shut it down. No idle time. No GC. No TRIM. (When it's off.)
I know you'll find creative ways to argue this point; all I'm saying is that free space is not equal to an equivalent unallocated capacity on any SSD I've ever used. And the difference is easy for me to 'feel'.
As far as defrag =/= TRIM or GC goes, we agree there.
Where you fail to see the distinction is this: if Windows thinks a file has 800 fragments, it will do 800 lookups for that file (whether it's 2MB or 2TB in size). That is the power of defragmenting an SSD. PerfectDisk (with a very specific defrag pattern selected) tells the drive to rearrange not only the actual fragmented files, but also to defrag the free space (by compacting the files tightly together).
When PD is done and the O/S needs that file that previously had 800+ fragments, it is noticeably quicker to load and/or get to the next step in the process.
It doesn't matter where the file actually is on the SSD (even if it is in exactly the same position as it was before) - as long as the O/S now sees it reported/stored as a single contiguous file, the effect is a faster system. Period.
Btw, when I said 'somewhat related' in my post above, the point was that defragging (currently with PD13, as no other defragger is comparable for these discussions) makes as much of a difference as OP'ing on my systems, at least in terms of regained 'snappiness' each month.
I think I know very well how storage subsystems work, including SSDs.
And what makes them fly. -
Spartan@HIDevolution Company Representative
And what do you think of O&O Defrag Pro? It can send TRIM commands to the SSD. Isn't that better/safer than PD?
-
tilleroftheearth Wisdom listens quietly...
Ferris23,
don't want to compare all the defraggers out there - but I have tried (almost) all of them. None compare to PD, because PD can defrag the complete file system - and the free space too.
What I think is that PerfectDisk is the only way to defrag a drive (HDD or SSD). Anything else is a waste of time. And I've been saying this for a looong time. -
I just checked my 840 Evo 1TB and I have it set for 10% OP.
-
I just learned something. I am on Win7.