The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    SSDs, Spare Area (aka Over-Provisioning), Windows 8 x64 and SmartPlacement Defragging with PerfectDisk.

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by tilleroftheearth, Dec 5, 2012.

  1. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    For over a year now I have been suggesting/recommending over-provisioning an SSD by partitioning it, from first use, to at least 30% less than its nominal capacity (if the maximum sustained performance needs to be obtained from the drive in question).


    Anand has produced an article which shows why this is important (and explains why I have complained in the past that SSDs can feel slower than an HDD):


    See:
    AnandTech - Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs

    The article shows (with numbers) that just like in gaming, the minimum performance (FPS...) is just as important as the maximum or 'up to' performance that SSD manufacturers claim in their marketing.

    In an SSD, performance takes a nosedive when the garbage collection kicks in (or worse: when it doesn't). This is like the minimum FPS 'score' in gaming - it doesn't matter if a system gives us 'up to' 200 FPS if it also hits single-digit FPS during the course of the game (or, for SSDs, the workload).

    I was also wondering whether my insistence (at least on my own systems) on vastly over-provisioning all the SSDs I use might be giving me performance close or even equal to the just-introduced data center SSD from Intel, the S3700.

    In a word: No. :)

    We still need new/better SSDs like the S3700, with a mandate to give much more consistent performance in our workloads - unlike almost every SSD available now (the Samsung 840 PRO just became interesting to me again: with 50% over-provisioning, it gives the Intel S3700 a run for its money).


    I want to propose a new 'wild idea' that is just as far ahead of its time as leaving 50% or more of your SSD's capacity 'wasted' was in late 2011:


    Defrag your SSD.


    Using PerfectDisk 12.5 SP5, I have consistently seen much 'smoother' running systems after doing an offline defrag and a couple of online SmartDefrags.


    Having clean installed Win8 x64, the drivers, the updates and the programs and data I normally use on an SSD that had been over-provisioned by at least 50% (i.e. a 75GB partition on a 160GB nominal drive), I installed PD12.5 and saw that the drive was a mess, fragmentation-wise.

    First, I let PD do an SSD Optimize run - and the system did feel noticeably 'snappier'. Not much - but enough to notice. I then did an offline optimization and, with that finished, an online SmartDefrag based on the 'Performance Aggressive' profile under the SmartPlacement options, with no space allowed between the different categories of files (in this order: directory, rarely, boot, occasional and recently).

    On the systems with Intel SSDs, I also ran the Intel SSD Toolbox's manual TRIM command and let the systems sit idle for an hour or so.
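    (For drives without a vendor toolbox, Windows 8 can send a manual retrim from an elevated command prompt with its built-in defrag tool - a minimal sketch, assuming C: is the volume on the SSD:

        defrag C: /L

    The /L switch only issues TRIM for the free space on the volume; it doesn't move any files.)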

    All the systems feel faster - from the AMD 350 setup to my overclocked quad-core desktops (all with between 8GB and 32GB of RAM installed).


    While all 'common' wisdom states that SSDs don't care about files being fragmented (because they're so fast...), the Anand link above means we should care about this on any current SSD - especially if it is more than 70% filled. I have seen a 14-second average response time in Win8 Task Manager's Disk section (on the Performance tab) - and that is with insane levels of over-provisioning already applied.


    Why does defragging an SSD give noticeably better performance? Because when the O/S needs to track/request a couple of thousand fragments for a few hundred files, the nominal speed of the SSD is decimated by the high maximum response times (sometimes dropping below HDD speeds) that can hit each and every access to those fragments.
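    (If you want to see the fragmentation numbers for yourself before touching anything, the Windows built-in defragmenter can just analyze - a quick sketch, assuming an elevated command prompt and C: as the volume in question:

        defrag C: /A /V

    The /A switch analyzes only and, with /V, prints the fragmentation statistics verbosely; nothing gets written to the drive.)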

    Sure, defragging the SSD burns through NAND write cycles - but after a couple of runs (and roughly once a month thereafter; right after Patch Tuesday's updates...) the files 'behave' and stay pretty well put. Of course, the data on the NAND itself keeps moving around as the drive's firmware sees fit (for optimal wear leveling) - but that doesn't negate the overhead saved by the O/S having to call on only one location per file.

    Am I worried about burning through the NAND by my actions? Not really. Especially when I'm only doing this around 12 times a year AND I'm over-provisioning the drive by 50% or more. (Not to mention the 6PB endurance 'record' going on at the XtremeSystems.org forums with the Samsung 830 256GB SSD.)


    I have been doing this for the last year with no ill effects on any of my SSD setups and can tell when a system needs a 'freshening-up' by how sluggish it feels.

    Do I recommend this method/madness to others? Yes.

    If the goal is the most responsive, highest sustained-performing and worry-free SSD experience we can have now: yes.

    Just make sure to:

    Buy your SSD, ensure it has the latest firmware available, install it in the system, partition it to less than 70% of its nominal capacity (a diskpart sketch follows this list) and do a clean install of Win8 x64 (the highest performing O/S right now).

    Install all your programs, updates and data, including critical drivers like the chipset, sound, video and SATA (AHCI) drivers (Intel RST 11.6.2 is highly recommended on compatible platforms).

    Disable all the power-saving features and leave the system on at least overnight.

    Install a trial of PD12.5 (or higher) and do an offline defrag, then an online SmartDefrag (with the above-mentioned options).

    Enjoy the fastest setup now possible (no matter how/what you use your system for).
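    For anyone who would rather leave the unallocated space from the command line instead of the Windows installer, here is a minimal diskpart sketch. It assumes the SSD shows up as disk 1, holds nothing you want to keep, and is a 256GB (nominal) drive where roughly 70% works out to about a 170,000MB partition - adjust the disk number and size for your own drive:

        diskpart
        list disk
        select disk 1
        clean
        create partition primary size=170000
        format fs=ntfs quick
        assign
        exit

    Everything beyond the 170,000MB partition is simply never allocated - that unallocated space is the over-provisioning. Win7/8's diskpart also aligns new partitions to 1MB by default, so no extra alignment step is needed. For the power-saving step above, 'powercfg /setactive SCHEME_MIN' from the same elevated prompt switches Windows to the High performance plan.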


    I know a lot of people will knock this because I have no numbers to back anything up. Oh well. The world is not just numbers (and I'm too old to get them for the few who will simply tear apart my 'methodology' yet again) and I'm sure Anand will give us an in-depth article shortly. :)


    For the others: where did I 'feel' the snappiness? In boot/rebooting, checking for Windows updates, running a virus scan and general O/S navigation (even opening 'Computer' or 'Network' folders). Is the speedup huge? No, not on an absolute scale. Will I stop this practice?

    Lol.... (never, never, never... whispered into the milky fog as the moon struggles to faintly light the wet, echoing hills....).


    What I believe this (defragging) ultimately does is make TRIM and GC even more effective, and using PD 12.5 also ensures that the free space is 'defragged' (consolidated) thoroughly - giving new writes the biggest chance of happening at the SSD's top rated speeds. Just a theory (for now), but hey - it does work like this on HDDs.


    Does anyone else go against conventional wisdom and find their workflows improved by it? (Seems I keep falling into these situations somehow.)

    I'd love to read about your discoveries; here, or in your own thread (your choice :) ).
     
    Spartan@HIDevolution likes this.
  2. Marksman30k

    Marksman30k Notebook Deity

    Reputations:
    2,080
    Messages:
    1,068
    Likes Received:
    180
    Trophy Points:
    81
    I thought defragging just moves files around so they are closer together on a platter to speed up sequential reads. The advantage seems small at best on an SSD. I think the main issue is that while we can generally control TRIM, we have no obvious user-facing way of controlling the GC routines on the SSD. For example, my 320 is really lax about GC; it delays it for ages before it engages, so while the peak speeds do slow down noticeably, the performance feels very consistent. Anand actually noted how unusually delayed the GC was on the 320. However, this isn't really a problem since my usage patterns are predominantly 4K random read operations with a smattering of sequential for game loading. The timing and conditions which trigger GC lie embedded in the controller, so we are pretty much at the mercy of the company that manufactures it. I read the Anand article and agree free space is probably the most important factor; however, I've noted that some drives like the Plextor M5S have really aggressive GC, to the point that it actually impacts their 4K random write performance.
     
  3. GrofLuigi

    GrofLuigi Notebook Enthusiast

    Reputations:
    0
    Messages:
    27
    Likes Received:
    0
    Trophy Points:
    5
    I think defragging/tidying up the MFT would achieve the same. Is it time to move the MFT onto the controller?

    GL
     
  4. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    tiller,

    Out of curiosity, did you use something like hdparm to run a 'secure erase' on the drives before reinstalling the OS? I've done a couple of re-installs, some without running a secure erase and some with one. While I didn't run any benches, I can say any SSDs that had contained data were definitely "snappier" after a secure erase/re-install than the same drive with just a plain re-install.
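    For anyone curious about the hdparm route, the usual ATA secure-erase sequence from a Linux boot disc looks roughly like this - a sketch only, assuming the SSD is /dev/sdX, is not in a 'frozen' security state, and holds nothing you want to keep:

        hdparm -I /dev/sdX                                      # confirm 'not frozen' and the estimated erase time
        hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary password ('p')
        hdparm --user-master u --security-erase p /dev/sdX      # issue the ATA SECURITY ERASE UNIT command

    A successful erase clears the temporary password again, and the drive comes back with every cell flagged as free.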

    I wonder if running a defrag has the side effect of writing blocks over areas holding dirty data, with the drive then being smart enough to "clear" the freed cells, thus creating the feel of being faster.
     
  5. zippyzap

    zippyzap Notebook Consultant

    Reputations:
    159
    Messages:
    201
    Likes Received:
    1
    Trophy Points:
    30
    So, your version of "worry free SSD experience" is to worry about it?

    My version of a "worry free SSD experience" is to buy a modern not-crappy SSD, physically install it into the system, enable AHCI in BIOS, install a modern OS that supports Trim, then use it and not worry about it.

    Regarding free space and overprovisioning, my belief is that for the "normal" user (whatever that means) a bit of extra space on the drive is more useful than sustained high write performance. For instance, what if I wanted to do a quick video edit and wanted to keep the original multi-GB file intact until I was happy with the results? I would need a bunch of available free space, but that space would be reclaimed after I was done.

    It isn't as if an SSD has zero extra space. All of them reserve a bit extra to preserve performance. What you are advocating is moving to a different part of the performance/capacity curve. For those who can afford to always buy twice the SSD capacity that they actually need (for your 50% overprovisioning), I say go for it. :hi2:

    For the rest of us mere mortals who would otherwise still be using HDDs because that's all we can afford, I say don't worry about this.

    Yes.

    I don't run manual virus scans.

    I don't do extra testing on HDDs before using them.

    I don't run hours-long or overnight stability tests on my systems, even when overclocked.

    I don't "optimize" SSDs.
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    We don't have to control TRIM or GC to enhance them. Over-provisioning allows us to get a lot of control (depending on the SSD in question).





    I think the Intel DC S3700 has effectively done that? With a 'flat' table that maps 1:1 to the data it is managing, it can't really be implemented in a more 'real world' way than that?




    I have SE'd a few SSDs (years ago) but they always fell back to their 'non-performance' modes in my use. With the over-provisioning, I don't feel a need to do an SE (especially as the speed drop would return almost 'instantly' once they're put back to use).

    What I do do with a re-install is a Quick Format of the SSD in a Win7/8 system - this is very close to an SE for my requirements and is not as hard on the NAND as an SE.
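    (A minimal sketch of that quick format, assuming the SSD is already partitioned and mounted as E: in the Win7/8 machine and that you really do want to wipe it:

        format E: /fs:ntfs /q

    On Win7/8 the quick format sends TRIM across the whole volume when the drive supports it, which is why it gets me so close to an SE without the full NAND erase.)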





    lol... I like how you love to twist words... :)


    Having a worry-free 'experience' doesn't mean you don't worry even a little... Nor does it mean sticking your head in a hole and pretending the issues just go away.

    I have found a system that works for my workloads, and third parties have now confirmed that I was on the right track all along.


    Optimizing a system is not a bad thing - especially when the results are consistent and repeatable, and make for far less work down the road... (meanwhile, 'tweaking the he!! out of a system' is a 'bad' thing, I agree...).


    Thanks for everyone's comments.

    In closing, I just want to recommend James D's thread about RAM (yeah, I went a bit off topic there - but that is how I am with new-found POWER/KNOWLEDGE). :D :D :D


    See:
    http://forum.notebookreview.com/har...m-full-speed-help-screenshots-appreciate.html
     
  7. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,878
    Trophy Points:
    931
    I do like what zippyzap has to say, at least the first part of his post. As for the second part, I do tend to thoroughly test my systems and components and run manual virus scans.

    But for 99% of users, throwing the drive in their machine and just using it is perfectly fine. Heck, that's what I do, and am none the wiser, with no negative effects for the work I do. If you think SSDs are slow, then try shifting to a laptop with a 5400RPM HDD that isn't even defragmented. Talk about slow.

    Adding extra space for over-provisioning is great if you do lots of writes on a regular basis, but for everyone else, it doesn't matter. Or just do 5-10% extra free space. Free up an extra 10-20GB on a 256GB drive if you're paranoid about it, but 50% is insane unless you're looking for the absolute best performance 100% of the time.
     
  8. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    HTWingNut, you're right - I am looking for the absolute best performance 100% of the time.

    Especially back when SSDs cost $400+ for ~100GB or less, it seemed like madness to buy one for its performance and have to settle for less-than-HDD results in as little as two hours, two days and/or two weeks of use. It's not the $$$ that matter so much (I've spent many times that and more on cutting-edge HDD tech to fill up all my workstations and notebooks...) - it's the fact that I was spending the money and seeing NOTHING in return (performance-wise) - yeah; it felt like I was being lied to.

    Just over a year ago, I stumbled onto extreme over-provisioning and although I can still 'see the ugly' behind almost all the SSDs I own (especially with Windows 8 and its amazing Task Manager), it still has enough 'makeup' on to not leave me running for the door (and my once-beloved vRaptors). (The 'ugly' I'm seeing is the huge swings in performance that the Intel S3700 SSD specifically targets to eliminate.)

    Having played around over the years with RAM drives, eBoostr, FancyCache, external (eSATA) vRaptor-based 'scratch and temp' disks (in addition to 'extreme' RAID0 setups and so many other 'schemes' that I can't recall them all right now) to minimize the effect of HDDs on my workflow, I have come to realize and appreciate how little the HDD affects actual performance. More/faster/bigger is always better, granted. But SSDs now offer pretty much 90% of what a storage subsystem can contribute to a modern workstation in almost any 'real world' use. Not in all scenarios, I know - but except for synthetic benchmarks/scores, most real-world performance is unaffected by even the lowliest HDDs available today (the exceptions are RAW video/image editing, creating/updating and editing 'smaller' files such as PDFs and/or databases, installing the O/S, updating Windows, programs and drivers, and running 'maintenance' tasks like A/V scans - yeah; even manually, zippyzap :)).

    There is a current thread on RAM drives that has a link to a video - I took a peek and laughed at the 'results' with a RAM drive that is supposedly hundreds of times faster than an SSD (and takes over 10 minutes to shut down/boot up?). Why did I lol?

    Because my SNB based workstation setups start up PS almost as fast with 'only' 32GB RAM and a few strategically placed SSD's.

    (Okay, they take about a second longer to start programs the 'first' time after a cold boot - and if I had the 64GB of RAM that the tester had, I wouldn't be wasting it on a RAM drive that limits the O/S's excellent on-the-fly tuning, nor would I want to intentionally limit the modern programs I use, which already use RAM as efficiently as possible for their purpose.)

    I'll agree that for a 'gamer', load times may be all that matter from a storage subsystem - but if my recommendations are 'overkill', what is this? :D :D :D
     
  9. zippyzap

    zippyzap Notebook Consultant

    Reputations:
    159
    Messages:
    201
    Likes Received:
    1
    Trophy Points:
    30
    Those are the key words.

    My "work"-load at home consists of half web browsing, half gaming (so far work doesn't believe in spending money to save time). I'm pretty confident that my life would not be made any better if I were to change what I do, which is to just install and use.

    Here's something which would bother the heck out of you. :p I have a separate SSD for games, an old 256GB Indilinx Barefoot drive from years ago. I'm down to 6GB free. Sure, game patches take a bit longer to complete (almost as slow as if it were on a HDD) but since reads are not much affected, games still load quickly.

    My new system will have a 512GB Crucial M4 for games. Yay!
     
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    zippyzap, you're right about what would bother me and the 'key words' too.


    Note that SSD read speeds do decrease with time/use... I would be surprised if a good (modern) HDD weren't faster overall than your almost-full Indilinx.


    Now, with the 512GB M4 you're talking!

    Use it and enjoy it (and let's see what conversation we'll be having about this drive and your abuse of it in a few years...).


    but...
    (I'll still partition my drives just as always. :) ).
     
  11. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    FWIW, you do have control over TRIM. You can tell the OS to use it or not. Once it is enabled, it is ALWAYS in play once an LBA is freed up. There's nothing more to it than that. Complete control.
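    A quick way to see (or set) what the OS side is doing, as a sketch from an elevated command prompt on Win 7/8:

        fsutil behavior query DisableDeleteNotify
        fsutil behavior set DisableDeleteNotify 0

    A query result of 0 means Windows is sending TRIM (delete notifications) to the drive; setting the value to 0 turns it back on if something disabled it.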

    No. A quick format is nothing like a secure erase, as the NAND cells will still be marked as containing data. Also, suggesting a secure erase is harder on an SSD than running a disk defragger 14-16 times per year? On average, I would venture that is incorrect.

    Now, I don't believe limited WRITE cycles are as big a problem as they once were, but still, assuming you don't repave a machine every week, on average a secure erase would cause drastically fewer writes. Besides, why would you want to hamper a reload in the first place? Having all cells freed at OS re-load means the drive will be just that much faster, rather than waiting for possible slowdowns due to write-amplification issues.

    On over-provisioning, yes. On a defragger - unless all the public info on SSDs is a bunch of lies - no.
     
  12. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    Just because we can tell our O/S 'use TRIM' doesn't mean the SSD actually uses it. It depends highly on the SSD (the O/S setting is just a 'suggestion' to the drive). This is far from 'complete' control.

    Running the disk defrag only 'costs' us NAND write cycles for the files moved - an SE ZAPS the entire NAND with overvoltage. And as already stated (and Anand has backed up this 'theory' too), a disk format on a Win7/8 machine effectively TRIMs the whole drive to near-SE levels.


    Yes, I was thinking/speaking about the over-provisioning - but is all the public info about defragging results a lie? I don't know (and not everyone uses PD...); what I do know is that using PD 12.5's SmartPlacement as I suggested gives a real improvement in the responsiveness of the system - almost as much 'snap' as going to an SSD in the first place.
     
  13. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    One could also argue that using an SSD on Win XP doesn't give you control either. :) So, just to be clear, I was speaking about most SSDs/OSs since mid-2011. Not too many SSDs sold in the last 12-18 months lack TRIM, and Linux, Mac OS X, and Win 7/8 all support TRIM now.

    The secure erase will cost 1 P/E cycle. The defrag? It most likely depends... It depends on how many times you reload your OS (for example, I'm on my 2nd install in 27 months), on how many times you plan on defragging the drive, as well as (due to the wear-leveling algos in the SSD) on how many fragmented files it finds/moves on each run.

    I still suspect that on average the SE is less destructive, but as stated before, the importance of the number of writes a cell takes is declining with each new SSD advancement. In the end, my guess is that defragging vs. SE will come down to the old "YMMV" cliche.
     
  14. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I too am talking about current OSes and SSDs - TRIM is not 'performed' on the drive at the moment the O/S indicates it - the drive's firmware decides when it will ultimately take that 'suggestion' from the O/S and follow through. This could be immediately, in a few minutes, hours or days - or never - depending on how the drive is being used.

    While every cell taking a one-write hit with an SE seems like 'a little', it is very hard on the NAND (again: overvoltage on every NAND chip). Defragging (even considered over the life of the drive...) will only incur a write hit for the files needing to be defragged (and remember, very little 'extra' WA is incurred if you also over-provision the SSD). I don't want to seem argumentative; I just want to be clear how important this distinction is.

    While an SE technically only reduces the remaining write cycles by 1, the life of the SSD as a whole is compromised each time an SE is performed.


    (From what I've read about the newer controllers (not here yet) that can effectively self-adjust the voltage to each specific NAND chip, this will become less of an issue going forward - something like electronic ignition and/or fuel injection for SSDs. :) )
     
  15. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    Unless you have something to the contrary, from Anand's article ( http://www.anandtech.com/show/2738/10), when a delete with TRIM is sent, the controller queues the request to free the cells and carries it out as it performs the delete.

    No harder than storing anything else. The voltage level IS set by the drive manufacturer. When a drive writes to a NAND cell, it applies a voltage to the cell and checks to see how it responds. You keep increasing the voltage until you get a result. With four voltage levels to check, MLC flash takes around 3x longer to write to than SLC. On the flip side, you get twice the capacity at the same cost. (Again, per the Anand article.)

    In regards to an SE, I've never seen forum posts, etc. say anything about unsafe voltages being used to clear cells. From what I understand, the controller sends a voltage which places all cells in an "empty" state. An SE should have no side effects other than reducing the write count by 1. If you have a white paper or other proof contradicting this, then please share.

    And a drive with lots and lots of file fragmentation will have the same problem over time (12-16 times or more a year): as files are reconstituted, the wear-leveling routine is going to move each entire file to all-new cells on the drive. Again, it is going to come down to what files are on disk, what is fragmented, and what the defragger does at each run - versus something I've done maybe twice over a 27-month period.
     
  16. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Hi,

    Very interesting read but I've rather basic question.

    I just got myself a new 128GB SSD, so what would be the suggestion for the spare space? Something like 30GB? In addition, how do I properly format it? I read about alignment issues; using diskpart should solve that, but I'm not sure how to do it. Can someone put me on the right track? What commands should I use in diskpart? Oh, and I'm planning to encrypt the whole drive, probably with TrueCrypt. Does TrueCrypt have anything to do with spare space?

    Thanks.
     
  17. OtherSongs

    OtherSongs Notebook Evangelist

    Reputations:
    113
    Messages:
    640
    Likes Received:
    1
    Trophy Points:
    31
    Given that performance of the Crucial M4 has been exceeded by a number of others recently, what made you choose an M4???

    And FWIW I just got a 2.5" 512GB Crucial M4 SSD, and my key reason was reliability.

    But I'm an old mainframe guy, so what do I know. :D

    My own take so far is to leave 20 to 30 percent of the SSD unpartitioned.

    There is no way that I'm going to leave as much as 50 percent of the SSD unpartitioned.

    This percentage likely also depends on the specific controller in the SSD, which is an issue that I've yet to see discussed here on NBR to any extensive extent.

    And also on how much that specific SSD has already set aside in terms of non-partitioned area.

    Those are all very good questions.

    The short answer is that I've yet to partition an SSD. :)

    I'll likely start with trying the free gparted partitioning/bootable program at: GParted -- About

    Odds are that others in this thread will say whether gparted works for partitioning an SSD.
     
  18. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    It is really up to you. You could see suggestions anywhere from 10GB to 64GB. Is this going to be used on Windows 7/8? Encryption aside, you know you can change partitions later on Win 7/8 (Disk Manager) and Linux (GParted). Dunno if you can on a Mac (as I've never tried with Disk Doctor or whatever it is called), but would be surprised if you couldn't.

    My suggestion is to try something around 12-15 GB. The M4 should already have some provisioning, so no need to overdo it. If you want to make sure you won't fill it up, you can shoot this up to 20GB, but it is just a loss of space IMHO.

    What OS? Win 7/8, latest linux, and OS X Lion and above should all handle that just fine.

    Don't know much to help out here. Check the details of your encryption software regarding partitions.
     
  19. James D

    James D Notebook Prophet

    Reputations:
    2,314
    Messages:
    4,901
    Likes Received:
    1,132
    Trophy Points:
    231
    Could you give a link? I am wondering whether I should buy a used Samsung 830 128GB SSD.

    Some people still think that defragging an HDD is useless. How many of them do you think will support you in defragging an SSD?
     
  20. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
  21. davidricardo86

    davidricardo86 Notebook Deity

    Reputations:
    2,376
    Messages:
    1,774
    Likes Received:
    109
    Trophy Points:
    81
    While at work, my manager told me not to bother with defragmenting a customer's storage device. I didn't agree with him, but I know he said that because of time constraints. I've noticed an improvement in system response and usability with HDDs.


    Hm, as for defragging an SSD? I haven't done this yet to my Samsungs, but I sure wouldn't mind reproducing results similar to tiller's.
     
  22. JOSEA

    JOSEA NONE

    Reputations:
    4,013
    Messages:
    3,521
    Likes Received:
    170
    Trophy Points:
    131
  23. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Thanks for replies.

    In addition, I was wondering how an SSD works in terms of space. For example, if my SSD is 128GB, then the real space is ~119GB. Now, we know that an SSD does "provisioning", so my question is: does the SSD carve that provisioning out of the 119GB, or is the NAND in the SSD actually bigger than the declared 128GB and thus already has several GBs reserved for "provisioning"?

     
  24. JOSEA

    JOSEA NONE

    Reputations:
    4,013
    Messages:
    3,521
    Likes Received:
    170
    Trophy Points:
    131
    Good question; the way I understand this is that the over-provisioning is "over and above" the marketed capacity. BUT the percentage varies from manufacturer to manufacturer. For instance, a drive designed for data centers might be 30% or more over-provisioned, whereas a consumer drive such as the Intel X25-M is only about 10% OP'ed (so my 80GB drive + OP is ~90GB).
    Of course, once formatted for Win 7 the usable capacity comes to 74.5GB for this drive. Hope this helps.
    This article http://www.storagesearch.com/ssd-jargon.html explains some of the jargon associated with SSDs.
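    For anyone who wants the arithmetic behind those numbers, it is mostly the decimal-vs-binary gigabyte difference - a rough worked example using the 80GB drive above and the 128GB drive asked about earlier:

        80 GB  = 80 x 10^9 bytes  / 2^30 bytes per GiB  ~  74.5 GiB  (what Windows reports as "74.5 GB")
        128 GB = 128 x 10^9 bytes / 2^30 bytes per GiB  ~ 119.2 GiB

    On most consumer SSDs the NAND inside is a full binary capacity, so that 'missing' ~7% stays with the drive as its built-in spare area.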
     
  25. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    The answer depends on how you're planning to use the SSD (specific workflow...).

    If you're clean installing Win7 or Win8 - then there are no alignment issues to worry about (this is why I recommend partitioning/formatting the SSD with a Win7/8 Install disk/usb key - just to take care of alignment - even if you don't ultimately install Windows on the drive).
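    (If you want to double-check the alignment afterwards, Windows can report the partition offsets - a small sketch, assuming an elevated command prompt:

        wmic partition get Name, StartingOffset

    Any StartingOffset that divides evenly by 4096 is aligned for current SSDs; partitions created by the Win7/8 installer or by diskpart normally start at 1,048,576 bytes, i.e. the usual 1MB alignment.)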

    Encrypting a drive is like putting a leash on a racehorse: you won't be getting the performance you're paying for. I understand the security angle - but there are very specific SSDs to look for if security is your top concern (and not performance). The platform you're installing the drive into also comes into play (for how much of a performance hit you'll take).

    For general/basic use - I would still consider 30% (of the formatted capacity) as the 'minimum' I would leave as 'unallocated'. This would be around 83GB available for your use. Considering that Windows 7/8 needs around 25GB 'free' - that is around 60GB for your O/S, Programs and data.

    Yeah: looks rather bleak when looked like that: but this is why I recommend 240/256GB SSD's in the first place...


    So, what is the workload this setup will see day to day?
     
  26. James D

    James D Notebook Prophet

    Reputations:
    2,314
    Messages:
    4,901
    Likes Received:
    1,132
    Trophy Points:
    231
    @Const4nt1n3, the Samsung 830 has an additional chip (or chips, or half a chip) which gives you about 7% extra to use in place of dead blocks.
    But it is not much. And I am not sure if it uses this space for better defragmentation and wear leveling, or just uses it when a bad block needs to be replaced.
     
  27. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    First of all, thanks everyone for answers.

    Now on topic. Very interesting information. I also found this: AnandTech - The SSD Relapse: Understanding and Choosing the Best SSD
    In short, for example, an 80GB SSD actually makes use of those ~5GB as spare area (even though Windows sees ~75GB), while on a regular HDD that difference is just "gone".

    Firstly, let me tell you what kind of SSD I got and my hardware setup.

    SSD: Plextor M5P 128GB. Originally I was looking at the 256GB M5S, but the M5P came along at a good price and it carries a 5-year warranty. I wanted a controller other than SandForce, as the latter does not work well with incompressible data; if I encrypt the whole drive, I'll take a performance hit.

    Laptop: a couple-of-years-old P8700 with 4GB of RAM. Unfortunately, the CPU doesn't have AES-NI, which would speed up real-time encryption. I'm aware of the performance losses, but I at least want to try and see how "bad" it will be. If it's unbearable, I'll probably give up on TrueCrypt. However, this is a laptop, so security is an issue.

    Usage scenario: nothing serious, because the laptop is quite old. On the other hand, it's my main machine, so I'll browse the internet, do office tasks, listen to music and occasionally watch movies. Light photo and movie editing is also possible, but the CPU is probably the biggest limitation here.

    For various data (MP3s, etc.) I'll use portable HDDs, so on the Plextor I'll have the system, applications and the most important files required at the moment.

    One more thing: is it necessary to OP that much? If the SSD itself is already over-provisioned by around 7%, maybe leaving around 20% of the formatted space unallocated will do the rest?
     
  28. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    How much you over-provision is definitely up to you - but I hate to tell you that the M5P is one of the worst SSDs Anand has tested so far with regard to consistency.

    See:
    AnandTech - Plextor Updates The Firmware on M5 Pro: Promises Increased Performance, We Test It


    Even with 25% over-provisioning, you can see that it really takes a plunge in performance vs. almost every other SSD.

    Given your older platform, along with this kind of expected performance from the higher-performing, higher-capacity model Anand tested (from your 128GB SSD I would expect much lower performance...), I would not recommend TrueCrypt or any other extra load (on the system and the storage subsystem).


    If you're really curious, go ahead and try - but I would be really tempted to have 50% over-provisioning on this drive (in the hopes that the performance consistency improves drastically).

    Oh, and I would also upgrade the firmware of the SSD (was the shipping firmware even worse?) before you install the O/S and your programs/data onto it (even if the update says it is non-destructive).

    If you still have it in the original (un-opened) box, you might want to sell it?

    You may also try ThrottleStop to increase the performance of the SSD on your older platform, and also try unparking the CPU cores with the utility linked below.

    See:
    http://forum.notebookreview.com/har...arket-upgrades/531329-throttlestop-guide.html


    See:
    Coder Bag: Disable CPU Core Parking Utility



    Good luck.
     
  29. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Thanks for the "heads up". I'll see what I can do in this situation. :)

    On a side note, do you think the M5S would perform any better? The thing is that the M5P was compared to the current best SSDs on the market, and those are much more expensive than the M5P, so I won't be able to get them. In addition, do you think an average SandForce drive would fare better than the M5P, given that they're priced similarly?

     
  30. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631


    I'm not sure what you're asking - but the best SSD to go for right now (with as little OP'ing as possible) is the Corsair Neutron 240GB SSD (try to forget the 'more affordable smaller capacities' for now).

    See:
    AnandTech - Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs


    But any SSD (Intel SF included...) is going to be better than the M5P.

    Hope this helps?
     
  31. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Also, I'm checking the AnandTech article now and it looks like over-provisioning the M5P doesn't make sense at all, because there's no difference. Basically, I'm thinking that unless I go to something like 50% (as you said) I won't see any difference, so I could skip over-provisioning altogether. What do you think?
     
  32. James D

    James D Notebook Prophet

    Reputations:
    2,314
    Messages:
    4,901
    Likes Received:
    1,132
    Trophy Points:
    231
    Why not ask this and the tons of further questions in a thread specifically made for them - the Plextor M5 thread, for example?
     
  33. Const4nt1n3

    Const4nt1n3 Notebook Enthusiast

    Reputations:
    0
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Man, if you don't like it, just don't participate. What I'm talking about is still related to the main topic: provisioning.

    I'll try to clarify my points:
    1. There are plenty of good reviews of the Plextor M5 series, but apparently no one did the kind of tests AnandTech did.
    2. The M5S uses different NAND and a different controller than the M5P, so I was wondering if the former's performance would be less erratic than the M5P's.
    3. In the AnandTech test the M5P was compared to top-tier SSDs which are more expensive than the M5P, but cheaper SandForce-based SSDs weren't included, so I assume it is unknown how they would perform; thus if I buy an SF disk I may get the same performance as the M5P. However, you mentioned that performance should be better on SF - but again, I bought an SSD other than SF because I intended to use it with encryption.

    My bad, I mixed up the Intel 335 with the S3700. In the AnandTech test the 335 shows incredible results when stacked against much more expensive drives.

     
  34. zippyzap

    zippyzap Notebook Consultant

    Reputations:
    159
    Messages:
    201
    Likes Received:
    1
    Trophy Points:
    30
    Funny you should mention that. Way back before the Indilinx SSD filled up (it was probably just over half full at the time) I noticed that loading times for a certain game (League of Legends) were almost exactly the same as for the same game running off an HDD.

    Granted, that HDD was a Samsung F3 1TB (500GB/platter, pre-Seagate) that was short-stroked to maybe 1/4 the capacity. So, one of the fastest 7200RPM HDDs at the time, versus an SSD that was already a bit long in the tooth. Both drives were dedicated game drives and not OS drives. Both systems were otherwise quite similar (Sandy Bridge quads @4.5GHz with 8GB RAM, same motherboards, Fermi).

    The drives were similar enough in performance that sometimes one would load a tick faster, sometimes the other. Also, even as my SSD filled up, performance between the two remained consistent.

    Less than 2 months ago the other system was upgraded to a Crucial M4 512GB as the dedicated game drive, as part of an entire system upgrade (Ivy Bridge + Kepler). It now consistently beats the Indilinx drive in loading times. It is interesting to note, however, that even though in theory the drive should benchmark around 2x in sequential throughput and who knows how many times more IOPS, the actual game loading time only improved by maybe around 5%.

    Comparing to other people's systems (loading screen shows TEN loading bars, one for each person on two 5-player teams) is usually a joke. Not sure what other people use for storage, but most are super slow in comparison.

    Anyways, this is just an example showing two things.
    1) Yes, the current fastest HDDs can be as fast as or maybe faster in "real world" performance than an old and filled SSD using a now-ancient controller.
    2) Switching to a newer SSD that benchmarks much, much faster may or may not make that much of an impact in "real world" performance, depending on what you use your computer for.

    Price, performance and reputation (reliability). I picked this one up (a separate drive from the one mentioned above - yes, the household has two Crucial M4 512GBs) actually a while back, maybe half a year ago? It was in a different computer initially, and will just be re-purposed in my main rig. I think I paid around $350 for it back then.

    Regarding performance, I _like_ having good performance but I also like having good bang/buck. Sure, brand X may outperform the Crucial M4 by 10%, but if it costs 30% more then it is on the back side of the price/performance curve.

    To give another example of my way of thinking, my new rig with this re-purposed Crucial M4 512GB SSD (already built, haven't deployed because, well, gotta install and copy over a ton of stuff and I've been busy/lazy) will have a GTX 670 Kepler graphics card. Why a 670 and not a 660 Ti or 680? Well, at the time I bought it, the 660 Ti were all selling for $300 and 680 were around $470, while I got this 670 for $370. It hit a sweeter spot on the price-performance curve versus the 680, plus has more memory bandwidth to push my 2560x1600 resolution versus the 660 Ti.

    All this, of course, is a moving target depending on availability, what goes on sale, what's coming soon, budget, if this is an emergency purchase (some essential part died), etc.
     
  35. OtherSongs

    OtherSongs Notebook Evangelist

    Reputations:
    113
    Messages:
    640
    Likes Received:
    1
    Trophy Points:
    31
    First, thanks for adding your comments. For others, I thought it best to transplant HTWingNut's comments to this thread. Second, I gave some thought yesterday to my own SSD usage on my new laptop, and suspect it is likely to be relatively light; perhaps heavy with sequential reads for cloning the SSD, but light on writes. So I'd already revised my plan to leaving just 20% of my SSD unallocated. Or given your comments above, maybe even only 10% unallocated.

    For others, the anandtech article is: AnandTech - Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs

    And thanks for pointing out that it "is absolute worse case scenario."

    What I noticed most was Anand's "Final Words" advice:

    "For drives on the market today that don't already prioritize consistent IO,"

    I hate it when reviewers toss in clinkers like "prioritize consistent IO" because it gives them an out; and AFAIK most current SSDs do not prioritize consistent IO, including current top sellers (i.e., low-priced drives) such as the M4, 830 and 840.

    "it is possible to deliver significant improvements in IO consistency through an increase in spare area. OCZ's Vector and Samsung's SSD 840 Pro both deliver much better IO consistency if you simply set aside 25% of the total NAND capacity as spare area. Consumer SSD prices are finally low enough where we're no longer forced to buy the minimum capacity for our needs. Whatever drive you end up buying, plan on using only about 75% of its capacity if you want a good balance between performance consistency and capacity."

    At least Anand actually names the very new Vector and 840 Pro as being somewhat better at IO consistency, but still goes on with "set aside 25% of the total NAND capacity as spare area" even for them.

    FWIW I got a Crucial M4 512GB SSD two weeks ago. :) And I hope the new laptop will show up by this Friday. And I am starting to wonder if I'm going to stay under my $2000 budget. :)

    In still another recent NBR SSD thread I found a ref to Anand's SSD comparison spot at: AnandTech - Bench - SSD

    It is already set to compare the Samsung 830 to 840.

    What I found most interesting was the summary benchmarks near the end and also the power values.

    Needless to say I plugged in the M4 and others to see how they compared, and feel better about having bought the M4.

    But I would likely buy the 840 in the 512GB size if I were buying right now; the key reasons being that it's $12 less, performs slightly better and is only 7mm thick, though it uses slightly more power while reading/writing than the M4.

    Hey Tiller!

    Odd that you are so quiet in this thread that you started. But what else is new? :D

    I'm figuring 15% unallocated on my 512GB Crucial M4 SSD at this point.

    Given my low write activity, is there any good reason to go with unallocated space on my SSD that is greater than 15%???

    All ears open.
     
  36. James D

    James D Notebook Prophet

    Reputations:
    2,314
    Messages:
    4,901
    Likes Received:
    1,132
    Trophy Points:
    231
    As Anand says, most SSDs already have about 7% of reserved space. So your 15% effectively becomes roughly 20%.
     
  37. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
    I have several SSDs and have not partitioned any. I use all of them except the EP121 (it has no room for a secondary drive) together with an HDD (usually in a caddy) for extra space. I only put the OS and most-used programs on the SSD, have never filled any SSD over maybe 60%, and all still feel and bench like new. The article did use an unrealistic (for most of us) situation in this regard. And tiller will be the first to tell you he uses his computers in ways many or most of us wouldn't.

    The important part to me is to make sure you put what you really need on the SSD and use the secondary drive (larger, and for many an HDD) for data and such. My HDD is also my onboard backup, combined with my external backups.

    I do a lot of photo editing each week, have between 700+GB and over a TB across the different SSDs, all show 100% life so far, all feel snappy and I have never had to SE or anything. If I were just putting one in now, I would most likely make a partition so I don't have to be so disciplined, but I have never had a problem with storage on my drives.

    I cannot say for sure that keeping so much space free on my SSDs has been the reason for the lack of performance decline, but it does make sense to me.
     
  38. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631


    Sorry, not quiet - just busy at this time of year - not enough time even though I'm waking up at 3AM to start some days!!!

    I'll answer as I can. ;)