The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    How do we know that SSDs actually rewrite old data?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Peon, May 22, 2015.

  1. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    This somehow slipped into the realm of common knowledge at some point, but I can't seem to Google up any evidence that SSDs actually do this...

    In fact, there's plenty of evidence to the contrary - the 840 EVO clearly doesn't.
     
  2. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    They do not overwrite data in real time. Eventually they do so through garbage collection and TRIM.
     
  3. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I vaguely remember a link to an Intel paper that stated all nand was used evenly, eventually (yeah; GC).

    The goal was to have each nand chip use its write cycles evenly over time, so that by the time any cells were exhausted, all of the nand's write cycles had actually been used. This gives the most life to the SSD and also protects the data the most...

    The 840 EVO doesn't, and neither does the original TLC 840 Samsung drive. Lazy, bad, inept programmers... sigh.

    There may be other manufacturers that skip this important step too... as it will impact performance substantially (depending on the firmware and Processor + Ram) and won't allow the marketing departments to brag so highly about their (junk) drives. Only time will tell if this is true or affects the drives and the owners' usage patterns though.

    TRIM is not involved in this important step. After all, it is simply clearing data that is not needed anymore vs. moving existing, valid, data.

    And the correct term is re-write (not overwrite). We are not talking about updating a data file. We are talking about the controller/firmware's way of dealing with static data that is not updated or changed, but needs to be moved to gain access to the nand chips' write cycles that remain on that data block.
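
    A rough sketch of that re-write step, in illustrative Python (the field names and the wear-gap threshold are made up for the example; this is not any vendor's actual firmware):

    def relocate_static_data(blocks, wear_gap=500):
        # blocks: dicts with 'erase_count', 'data' and 'free' fields
        max_wear = max(b["erase_count"] for b in blocks)
        for src in blocks:
            if src["free"]:
                continue
            # only bother when this block is far behind the most-worn block
            if max_wear - src["erase_count"] <= wear_gap:
                continue
            worn_free = [b for b in blocks if b["free"]]
            if not worn_free:
                break
            dst = max(worn_free, key=lambda b: b["erase_count"])  # most-worn free block
            dst["data"], dst["free"] = src["data"], False         # copy the valid (static) data
            src["data"], src["free"] = None, True                 # erase the source block
            src["erase_count"] += 1                               # the erase costs one cycle

    The point is just that the static data gets moved so the barely-worn block it was parked on can rejoin the pool of writable blocks.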
     
  4. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    This is called wear leveling. It's common for all modern SSDs, even Samsung.
     
  5. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
  6. ratinox

    ratinox Notebook Deity

    Reputations:
    119
    Messages:
    1,047
    Likes Received:
    516
    Trophy Points:
    131
    As a point, you don't know that specific blocks are overwritten with rotating media, either. All modern disk drives, with a few exceptions, use logical block addressing. The drive firmware then maps logically addressed blocks to physical blocks as required. It's one of the reasons why you never see bad sectors any more. They still happen, quite often in fact, but the logical mapping mechanism maps around them automatically.
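
    Conceptually something like this (purely illustrative Python; real drive firmware is far more involved):

    class BlockMap:
        def __init__(self, num_blocks, num_spares):
            self.l2p = {lba: lba for lba in range(num_blocks)}     # logical -> physical
            self.spares = list(range(num_blocks, num_blocks + num_spares))

        def read(self, lba, media):
            return media[self.l2p[lba]]                            # the host only ever sees the LBA

        def retire(self, lba, media):
            # firmware decides a physical block is going bad: remap to a spare
            if not self.spares:
                raise IOError("reserve exhausted - now the host starts seeing hard errors")
            new_phys = self.spares.pop(0)
            media[new_phys] = media[self.l2p[lba]]                 # salvage whatever is still readable
            self.l2p[lba] = new_phys                               # the old physical block is never used again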
     
  7. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Why would read speeds drop if it was sitting static? It would never have been cycled multiple times so it should still be fresh. It should be the other cells that take the hit because they are deleted and rewritten multiple times, and more frequently because "old" data is sitting static consuming those cells.

    edit: I read through some of that thread and wild stuff from Samsung. WTF are they thinking? It just looks like an inherent issue in the design of the drive/firmware that data after a certain age doesn't make use of all data channels.
     
    Last edited: May 23, 2015
  8. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    The true reason is not known as Samsung doesn't offer an official explanation, only firmware for select models (the original TLC 840 and the mSATA 840 EVO, which suffer the same problems, do not have firmware to address the issue).

    Lots of guessing and wild theories, of which 'doesn't make use of all data channels' was one of the early ones. The latest favorite is that the firmware is not tracking voltage drift in the cells properly. I'm not convinced of anything, even if Samsung at this point supports one view or another.

    In the end, what is reality is that Samsung, TLC nand and their associated firmware are not to be trusted for mission critical (and performance critical) applications. Nor are they to be trusted with, or benefit from, any more of my $$$ for any of their products, based on how they have handled this issue for the last eight months and two firmware fixes later. The first one was a known bust and the latest one is still too early to be proven one way or the other.

    See:
    http://www.overclock.net/t/1507897/...-written-data-in-the-drive/2930#post_23930493

    The above is an example of the latest 'fix' not working for at least one person.


    If the wear leveling was working properly, this old file read issue would not have surfaced. Samsung intentionally left one of the most basic functions of the firmware disabled on their junk products and the issue surfaced much quicker than they expected (i.e. before the warranty ran out).

    As I have reported here myself, I saw huge drops in performance copying data to other systems (including NAS) just after I had clean installed Windows and only a few days past my return period (mere weeks).

    What has made them continually usable to me at a certain level of performance (barely) is the fact that I defrag my systems monthly (with PD13), and this counteracted the issue to some extent.

    But the performance jumps (for a few weeks) if I run the Data Disk Monthly script in MyDefrag and/or run Puran DiskFresh in addition to my usual defrag runs.

    See:
    http://forum.notebookreview.com/threads/1tb-evo-with-samsung-magician-4-6-f-w-update-results.775351/

    Note that in my case too, the minimum read speeds after applying the latest 'D' firmware and also running the Magician Advanced Optimization simply brought my systems to 'samsung' levels of HDD 'performance'. Running MyDefrag afterwards finally showed me what the first week of using that drive felt like so long ago... but, still nowhere close to what it is supposed to be doing (regarding minimum speeds...).


    Samsung SSD's have always run laggy in my experience. I've repeated this many times in the past half dozen years. And it still applies today. What is pathetic is that Samsung cripples their drives further by not following the basic recipe of a fast and stable SSD design that all other manufacturers are adhering to (as far as we know, today).

    This disregard for their users and their users' data is what makes Samsung my #1 enemy today. I'm sure they traded this basic GC task for slightly higher synthetic scores... I shudder to think what else they are doing too with our systems and our data as test mules to save a few pennies worth of programming per drive while also bragging about the highest performance which continues to be unsubstantiated in real world use, with or without their latest fix.

    I know that my $$$ now has a permanent allergy to Samsung's pockets/products. I hope it spreads and Samsung gets the message loud and clear from all corners of the world.
     
  9. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    @tilleroftheearth: Thanks for actually answering the question. The Samsung issue makes me really think twice about whether or not the "basic recipe", which we've all been taking for granted all along, actually exists - it might simply be wishful thinking on our part, which the SSD makers and their marketing departments have been subtly encouraging without ever explicitly confirming. I mean, if it comes down to it, Samsung can always claim that they never said that old data gets rewritten. Given that I'm having a hard time digging up any hard evidence on this issue, they just might be right...

    There's a billion other things like this - SATA/AHCI compliance, ECC sufficiency, power loss protection, hardware encryption, etc. etc. are all things that we've somehow taken for granted without the SSD makers ever making any guarantees on.

    @ratinox: My understanding is that valid, live data is never rewritten with traditional rotating media at the firmware/controller level - it forever sits wherever it was originally dumped.
     
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    You're welcome Peon, and those other things are something to think about too. Hmmm...

    Specifically, this part of the 'basic recipe' for SSD's is something I remember came straight from Intel half a decade or so ago (in the two long SSD threads you may still find links that work...). It may also be repeated in the following article by Anand Lal Shimpi, who is one of the few online journalists I would trust (for depth of knowledge and intentions too):

    See:
    http://www.anandtech.com/show/2738

    (Sorry, I don't have time to re-read that again right now and actually check for you if it does contain anything useful to this topic).


    Wear leveling is so basic and logical that without it built properly into the firmware and executed as needed as part of the GC (background) clean up routines, SSD's would not be possible or realistic today.

    Think about it; except for me, most people buy the smallest SSD's they can and then fill them as much as possible and also use and abuse them with 5 to 20% maximum free space. Without wear leveling there would not be an SSD alive after so many years if all the nand cells that were used over and over were just from the free space nand pool...
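
    Quick back-of-the-envelope numbers (made-up but plausible figures, ignoring write amplification) for a 250GB drive kept ~90% full:

    capacity_gb = 250
    free_gb = 25                 # ~10% free space, how most people run them
    pe_cycles = 3000             # rated program/erase cycles per cell (TLC-ish figure)
    gb_written_per_day = 20

    leveled_tbw = capacity_gb * pe_cycles / 1000     # wear spread over every cell: 750 TB
    unleveled_tbw = free_gb * pe_cycles / 1000       # wear confined to the free pool: 75 TB

    print(leveled_tbw * 1000 / gb_written_per_day / 365)    # ~103 years at 20 GB/day
    print(unleveled_tbw * 1000 / gb_written_per_day / 365)  # ~10 years at 20 GB/day

    And with no leveling at all, the hottest cells inside that small free pool would die far sooner than the average suggests.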

    HDD's do not move data around (internally) unless the block was bad. That is why I still re-write data from one drive to another continually (after formatting the latter) to ensure that no head/track drift affects my ability to read back my stuff from them. (After verifying that the data and drives are in good condition, I format the original HDD and copy the data back from the drive that was 'online' for the last six to nine months - then put the 'copied' drive in storage for a while... Yeah... too many drives and too much data... :) ).
     
  11. pete962

    pete962 Notebook Evangelist

    Reputations:
    126
    Messages:
    500
    Likes Received:
    223
    Trophy Points:
    56
    Actually Samsung did explain why 840 EVO drives had slowdowns: it was due to voltage drift inside cells containing older data and an overly aggressive reading algorithm that would improperly correct for those voltage drifts, causing errors and forcing rereads = slowdowns. In the end the newer algorithm didn't fix the problem completely, as some people still had slowdowns, and Samsung ended up rewriting old data to keep it fresh, with minimal voltage drift; I guess the voltage drift was too big to correct with an algorithm alone.
    My understanding of wear leveling is that there are 2 ways of doing it:
    1. write fresh data into cells with the lowest write count; this way does not rewrite older data and it's faster
    2. move blocks of data with an erase count below a threshold to new locations; this does rewrite older data (if it's below the threshold) and it's slower.
    I don't know which wear leveling Samsung was using, but even with #2 it may not be enough to keep old data fresh on some less used drives, therefore a new algorithm was needed to force rewrites of memory blocks based more on age than on the wear leveling threshold alone (see the rough sketch below).
    At least this is the way I understand this whole issue, but I was wrong before :)
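
    Roughly, in illustrative Python (the field names and numbers are made up, not taken from any actual firmware):

    import time

    def pick_target_dynamic(free_blocks):
        # 1. dynamic wear leveling: new data just goes to the least-worn free
        #    block; old data is never touched
        return min(free_blocks, key=lambda b: b["erase_count"])

    def needs_static_move(block, avg_erase_count, threshold=500):
        # 2. static wear leveling: a block holding valid data that has fallen far
        #    behind the average erase count gets its data copied out so the block
        #    can rejoin the free pool
        return (not block["free"]) and avg_erase_count - block["erase_count"] > threshold

    def needs_age_refresh(block, max_age_days=60):
        # the extra step the new algorithm supposedly adds: rewrite data that has
        # sat untouched too long, regardless of wear, so the cell charge stays fresh
        return (not block["free"]) and time.time() - block["written_at"] > max_age_days * 86400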
     
  12. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    #1 doesn't make sense. That is saying that old data is never rewritten. The whole point of wear leveling is to keep the wear of each cell roughly the same, which means old data would have to eventually be rewritten so the cells holding old data could be equalized to the same write-cycle count. A good GC routine would write the oldest data to the cells with the highest write count, since they would likely remain stale, and use the cells that the old data was sitting in, which still have write cycles to spare, for new or more dynamic data.

    Otherwise an SSD sitting 50% full would only be rewriting the other 50% over and over again and wear out that much sooner.
     
  13. ratinox

    ratinox Notebook Deity

    Reputations:
    119
    Messages:
    1,047
    Likes Received:
    516
    Trophy Points:
    131
    Not entirely true. If the on-board controller detects an impending block failure then it will allocate a block from the reserve (yes, rotating disks have reserves), copy the data to this block, change the block map to point to the new location, and mark the soon-to-fail block as bad. This happens entirely automatically and invisibly to the host computer. The drive will start throwing hard errors when the reserve is exhausted and more blocks start failing.

    Yep. When an SSD goes idle the on-board controller examines the map of allocated blocks and maybe shuffles data around in order to maintain more or less even wear across the entire drive.

    The controllers in thumb drives and SD cards don't do this.
     
  14. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Samsung did not offer any explanation that I have read. But many theories abounded, with the one you state being one of the more popular ones. A reading algorithm cannot be overly aggressive; it either is matched to the nand properties or it's not. A third possibility is that the nand properties cannot properly be contained within an algorithm. But that has since been proven false as of the latest 'D' firmware available.

    They dropped the ball in a way that no corporation should in 2014/2015. They lied to their customers (with the first fix and possibly this one too... time will tell once more). They ignored the issue that affected other similar products (original Samsung TLC based 840 SSD and at least some mSATA versions as well). They may as well have painted 'liars' and 'fools' on their foreheads, imo.

    I would be curious to read any links you provide directly from Samsung on this issue. But I don't think they exist.
     
  15. pete962

    pete962 Notebook Evangelist

    Reputations:
    126
    Messages:
    500
    Likes Received:
    223
    Trophy Points:
    56
    http://www.anandtech.com/show/8617/...e-to-fix-the-ssd-840-evo-read-performance-bug
    Quote from article "Samsung finally disclosed some details of the source of the bug." with the explanation and diagrams.
     
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, I've read that too. Not from Samsung though...

    As I've mentioned, nothing directly from Samsung has been released, afaik. And no, it's not that I don't trust Anandtech. It's that they could be lied to too. ;)

    (With any spin possible in the future as deemed appropriate by the slimy Samsung corp).
     
  17. pete962

    pete962 Notebook Evangelist

    Reputations:
    126
    Messages:
    500
    Likes Received:
    223
    Trophy Points:
    56
    The article specifically says "Samsung released more info"; are you saying the writer of the article lied and made up the whole explanation? And what do you think the problem is? If Anandtech was lied to, they would have been lied to by Samsung? Why? To me it seems very simple: all memory cells slowly lose electrons, and this being one of the smallest and triple-level memory chips, older data was harder to read than expected, and the only reliable fix came to be rewriting old data on a regular basis. Actually the fix, rewriting old data, could have been made part of wear leveling, just in a more aggressive way, and no one would know. Probably we will never know why Samsung didn't catch it before release and took so long to fix it: rushing into production would be one guess, but when you're on the bleeding edge mistakes are bound to happen; no one else had memory like this until now. Now Samsung has increased the size of the memory cells used in the 850 EVO and all is well, so far.
     
  18. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    You're right, thumb drives and SD cards don't do this, mostly. There are USB flash drives that do wear leveling though. But usually flash drives aren't used continuously like SSD's in a laptop, or are more likely to be erased frequently since they're used for temporary storage.
     
  19. ratinox

    ratinox Notebook Deity

    Reputations:
    119
    Messages:
    1,047
    Likes Received:
    516
    Trophy Points:
    131
    Anything behind an ATA controller will get minimal wear leveling. It was discovered pretty quickly that linear flash cards in Cisco routers wore out prematurely because they kept writing and rewriting the same blocks. Flash cells back then (1980s) had a durability of at most a few thousand writes. Designers of ATA to flash bridges took this into account and implemented random, scattered writes behind logical mappings to avoid reusing blocks too much.

    Now, you put an SSD in a USB enclosure and you'll get all of the extra bells and whistles that an SSD controller provides except TRIM (although there are some exceptions). On the other hand you need to be very careful about hot unplug. Sudden power loss can corrupt data on SSDs.
     
  20. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Aggressive GC does not require TRIM. If you want a USB enclosure with an SSD, best to consider one with aggressive garbage collection. I believe Kingston drives have pretty aggressive GC. It would be nice if TRIM was added to the USB 3.1 spec, but so far it seems it's not there.