This somehow slipped into the realm of common knowledge at some point, but I can't seem to Google up any evidence that SSDs actually do this...
In fact, there's plenty of evidence to the contrary - the 840 EVO clearly doesn't.
-
They do not overwrite data in real time. Eventually they do so through garbage collection and TRIM.
-
tilleroftheearth Wisdom listens quietly...
I vaguely remember a link to an Intel paper that stated all nand was used evenly, eventually (yeah; GC).
The goal was to have each nand chip use its write cycles evenly over time, so that when the cycles were exhausted, all of the nand's write cycles had actually been used. This gives the most life to the SSD and also protects the data the most...
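Roughly, the idea looks like this toy sketch (my own made-up structures, not anything from that Intel paper): new data always goes to the least-worn erased block, so erase cycles spread evenly across all the nand:

```python
# Toy model of (dynamic) wear leveling. Hypothetical structures, not
# any vendor's firmware. New writes always land in the free block with
# the fewest erase cycles so that wear spreads across the whole drive.

class Nand:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks        # per-block P/E tally
        self.free_blocks = set(range(num_blocks))   # erased, ready blocks

    def allocate_block(self):
        # Pick the least-worn free block for the next write.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase_block(self, block):
        # Erasing is what consumes a cell's limited program/erase cycles.
        self.erase_counts[block] += 1
        self.free_blocks.add(block)

    def wear_spread(self):
        # A well-leveled drive keeps this difference small.
        return max(self.erase_counts) - min(self.erase_counts)
```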
The 840 EVO doesn't, and neither does Samsung's original TLC 840. Lazy, bad, inept programmers... sigh.
There may be other manufacturers that skip this important step too, as it impacts performance substantially (depending on the firmware, processor and RAM) and would keep the marketing departments from bragging so highly about their (junk) drives. Only time will tell whether this is true, or how it affects the drives and their owners' usage patterns.
TRIM is not involved in this important step. After all, TRIM simply marks data that is no longer needed, vs. moving existing, valid data.
And the correct term is re-write (not overwrite). We are not talking about updating a data file. We are talking about the controller/firmware's way of dealing with static data that is not updated or changed, but needs to be moved to gain access to the write cycles remaining on the nand blocks it occupies.
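To make the distinction concrete, here is a toy sketch (my own hypothetical page structures, not anyone's actual firmware): TRIM only flags pages as invalid so GC can skip copying them, while relocating static data means physically re-writing still-valid pages to a fresh block:

```python
# Toy contrast between TRIM and static-data relocation. The page dicts
# ({'lba': int, 'valid': bool, 'data': bytes}) are hypothetical, not
# any real drive's firmware structures.

def trim(block, deleted_lbas):
    # TRIM: the OS reports these logical addresses as garbage.
    # Nothing is moved or erased; the pages are merely flagged invalid.
    for page in block:
        if page["lba"] in deleted_lbas:
            page["valid"] = False

def relocate_static_data(old_block, new_block):
    # Wear leveling/GC: still-valid (possibly never-rewritten) data must
    # be physically copied out before the old block is erased, freeing
    # that block's remaining write cycles for new writes.
    survivors = [dict(p) for p in old_block if p["valid"]]
    new_block.extend(survivors)
    old_block.clear()            # old block can now be erased and reused
    return len(survivors)        # pages actually re-written
```
-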
This is called wear leveling. It's common for all modern SSDs, even Samsung.
-
tilleroftheearth Wisdom listens quietly...
See:
http://www.overclock.net/t/1507897/samsung-840-evo-read-speed-drops-on-old-written-data-in-the-drive
If that were the case, old (static) data would have been wear-leveled and the issue above would never have surfaced. -
As a point, you don't know that specific blocks are overwritten with rotating media, either. All modern disk drives, with a few exceptions, use logical block addressing. The drive firmware then maps logically addressed blocks to physical blocks as required. It's one of the reasons why you never see bad sectors any more. They still happen, quite often in fact, but the logical mapping mechanism maps around them automatically.
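A toy sketch of that mapping layer (hypothetical layout, not any drive's actual firmware): the host keeps addressing the same LBA, and the firmware quietly redirects it to a spare sector when the original goes bad:

```python
# Toy logical block addressing with bad-sector sparing. The layout is
# hypothetical, not real drive firmware.

class Disk:
    def __init__(self, num_sectors, num_spares):
        # Start with an identity map: logical block N -> physical sector N.
        self.mapping = {lba: lba for lba in range(num_sectors)}
        self.spares = list(range(num_sectors, num_sectors + num_spares))

    def mark_bad(self, physical_sector):
        # Remap any logical block pointing at the bad sector to a spare.
        for lba, phys in self.mapping.items():
            if phys == physical_sector:
                self.mapping[lba] = self.spares.pop(0)

    def physical_for(self, lba):
        # The host never learns that the underlying sector moved.
        return self.mapping[lba]
```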
-
edit: I read through some of that thread and wild stuff from Samsung. WTF are they thinking? It just looks like an inherent issue in the design of the drive/firmware that data after a certain age doesn't make use of all data channels. -
tilleroftheearth Wisdom listens quietly...
The true reason is not known, as Samsung doesn't offer an official explanation, only firmware for select models (the original TLC 840 and the mSATA 840 EVO, which suffer the same problems, have no firmware to address the issue).
Lots of guessing and wild theories, of which the 'doesn't make use of all data channels' idea was one of the early ones too. The latest favorite is that the firmware is not tracking voltage drift in the cells properly. I'm not convinced of anything, even if Samsung at this point supports one view or another.
In the end, what is reality is that Samsung, TLC nand and their associated firmware are not to be trusted for mission critical (and performance critical) applications. Nor do they deserve any more of my $$$ for any of their products, based on how they have handled this issue over the last eight months and two firmware fixes: the first one a known bust, and the latest still too early to be proven one way or the other.
See:
http://www.overclock.net/t/1507897/...-written-data-in-the-drive/2930#post_23930493
The above is an example of the latest 'fix' not working for at least one person.
If the wear leveling was working properly, this old-file read issue would never have surfaced. Samsung intentionally left one of the most basic functions of the firmware disabled on their junk products, and the issue surfaced much quicker than they expected (i.e. before the warranty ran out).
As I have reported here myself, I saw huge drops in performance copying data to other systems (including NAS) just after I had clean installed Windows and only a few days past my return period (mere weeks).
What has kept them usable to me at a certain level of performance (barely) is the fact that I defrag my systems monthly (with PD13), and this has counteracted the issue to some extent.
But the performance jumps (for a few weeks) if I run the Data Disk Monthly script in MyDefrag and/or run Puran DiskFresh in addition to my usual defrag runs.
See:
http://forum.notebookreview.com/threads/1tb-evo-with-samsung-magician-4-6-f-w-update-results.775351/
Note that in my case too, the minimum read speeds after applying the latest 'D' firmware and also running the Magician Advanced Optimization simply brought my systems to 'samsung' levels of HDD 'performance'. Running MyDefrag afterwards finally showed me what the first week of using that drive felt like so long ago... but, still nowhere close to what it is supposed to be doing (regarding minimum speeds...).
Samsung SSD's have always run laggy in my experience. I've repeated this many times in the past half dozen years. And it still applies today. What is pathetic is that Samsung cripples their drives further by not following the basic recipe of a fast and stable SSD design that all other manufacturers are adhering to (as far as we know, today).
This disregard for their users and their users' data is what makes Samsung my #1 enemy today. I'm sure they traded this basic GC task for slightly higher synthetic scores... I shudder to think what else they are doing too with our systems and our data as test mules to save a few pennies worth of programming per drive while also bragging about the highest performance which continues to be unsubstantiated in real world use, with or without their latest fix.
I know that my $$$ now has a permanent allergy to Samsung's pockets/products. I hope it spreads and Samsung gets the message loud and clear from all corners of the world. -
@tilleroftheearth: Thanks for actually answering the question. The Samsung issue makes me really think twice about whether or not the "basic recipe", which we've all been taking for granted all along, actually exists - it might simply be wishful thinking on our part, which the SSD makers and their marketing departments have been subtly encouraging without ever explicitly confirming. I mean, if it comes down to it, Samsung can always claim that they never said that old data gets rewritten. Given that I'm having a hard time digging up any hard evidence on this issue, they just might be right...
There's a billion other things like this - SATA/AHCI compliance, ECC sufficiency, power loss protection, hardware encryption, etc. etc. are all things that we've somehow taken for granted without the SSD makers ever making any guarantees on.
@ratinox: My understanding is that valid, live data is never rewritten with traditional rotating media at the firmware/controller level - it forever sits wherever it was originally dumped. -
tilleroftheearth Wisdom listens quietly...
You're welcome Peon, and those other things are something to think about too. Hmmm...
Specifically, this part of the 'basic recipe' for SSD's is something I remember came straight from Intel half a decade or so ago (in the two long SSD threads you may still find links that work...). It may also be repeated in the following article by Anand Lal Shimpi, one of the few online journalists I would trust (for depth of knowledge and intentions too):
See:
http://www.anandtech.com/show/2738
(Sorry, I don't have time to re-read that again right now and actually check for you if it does contain anything useful to this topic).
Wear leveling is so basic and logical that without it built properly into the firmware and executed as needed as part of the GC (background) clean-up routines, SSD's would not be possible or realistic today.
Think about it; except for me, most people buy the smallest SSD's they can and then fill them as much as possible, using and abusing them with 5 to 20% maximum free space. Without wear leveling there would not be an SSD alive after so many years if all the nand cells being used over and over came just from the free-space nand pool...
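Some quick back-of-envelope numbers (all figures assumed for illustration, nothing measured) show how fast that free-space-only scenario burns through a drive:

```python
# Rough endurance arithmetic with assumed, purely illustrative figures:
# how long the nand lasts if all writes land only in the free-space
# pool vs. being wear-leveled across the whole drive.

capacity_gb   = 256
free_fraction = 0.10       # drive kept 90% full, as described above
pe_cycles     = 3000       # assumed TLC program/erase rating
writes_gb_day = 20         # assumed daily host writes

leveled_days   = capacity_gb * pe_cycles / writes_gb_day
free_pool_days = capacity_gb * free_fraction * pe_cycles / writes_gb_day

print(f"leveled across all nand: {leveled_days / 365:.0f} years")
print(f"free-space pool only:    {free_pool_days / 365:.1f} years")
# With only 10% free space, the unleveled drive wears out ten times sooner.
```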
HDD's do not move data around (internally) unless the block was bad. That is why I still re-write data from one drive to another continually (after formatting the latter) to ensure that no head/track drift affects my ability to read back my stuff from them. (After verifying that the data and drives are in good condition, I format the original HDD and copy the data back from the drive that was 'online' for the last six to nine months - then put the 'copied' drive in storage for a while... Yeah... too many drives and too much data...).
-
Actually, Samsung did explain why 840 EVO drives had slowdowns: it was due to voltage drift inside cells containing older data, plus an overly aggressive reading algorithm that would improperly correct for those voltage drifts, causing errors and forcing re-reads = slowdowns. In the end the newer algorithm didn't fix the problem completely, as some people still had slowdowns, and Samsung ended up rewriting old data to keep it fresh, with minimal voltage drift. I guess the voltage drift was too big to correct with an algorithm alone.
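If I understand the explanation right, the effect is something like this toy model (made-up voltages and step sizes, not Samsung's actual algorithm): as charge leaks, the stock read threshold misreads the cell, and every corrective retry is another slow nand access:

```python
# Toy model of read-retry on a drifting cell. Voltages and step sizes
# are invented for illustration; this is not Samsung's algorithm. It
# only models downward drift on stored '1' bits.

def read_cell(actual_mv, threshold_mv):
    # A cell reads as '1' only if its voltage is above the threshold.
    return 1 if actual_mv >= threshold_mv else 0

def read_with_retries(actual_mv, stored_bit, base_threshold_mv=400):
    retries = 0
    threshold = base_threshold_mv
    while read_cell(actual_mv, threshold) != stored_bit:
        retries += 1        # each retry is another slow nand access
        threshold -= 25     # shift the reference voltage and try again
    return retries

# A fresh '1' written at 420 mV reads on the first try; after months of
# charge leakage it may sit at 330 mV and need several retries (= slow).
print(read_with_retries(420, 1))   # 0 retries
print(read_with_retries(330, 1))   # 3 retries
```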
My understanding of wear leveling is that there are two ways of doing it:
1. Write fresh data into the cells with the lowest write count; this way does not rewrite older data and is faster.
2. Move blocks of data whose erase count is below a threshold to new locations; this does rewrite older data (if it's below the threshold) and is slower.
I don't know which wear leveling Samsung was using, but even with #2 it may not be enough to keep old data fresh on less-used drives, so a new algorithm was needed to force rewrites of memory blocks based more on age than on the wear-leveling threshold alone (see the sketch below).
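Something like this toy policy is how I picture it (all thresholds made up, not Samsung's firmware): strategy #2 relocates cold blocks when the wear gap grows, and the fix adds an age check on top:

```python
# Toy relocation policy combining strategy #2 with an age-based refresh.
# All thresholds are made up; this is not Samsung's actual firmware.
import time

WEAR_GAP_LIMIT = 100           # max allowed erase-count spread (assumed)
MAX_AGE_SECONDS = 90 * 86400   # refresh data older than ~3 months (assumed)

def needs_relocation(block, max_erase_count, now=None):
    # block is a dict like {'erase_count': int, 'written_at': epoch secs}.
    now = time.time() if now is None else now
    # Strategy #2: a cold block lagging far behind the wear leader gets
    # moved so its remaining erase cycles become usable.
    if max_erase_count - block["erase_count"] > WEAR_GAP_LIMIT:
        return True
    # Age-based refresh: rewrite old data before voltage drift makes
    # reads slow or unreliable, regardless of wear statistics.
    return now - block["written_at"] > MAX_AGE_SECONDS
```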
At least this is the way I understand this whole issue, but I've been wrong before -
#1 doesn't make sense. That is saying that old data is never rewritten. The whole point of wear leveling is to keep the wear of each cell roughly the same, which means old data would eventually have to be rewritten so the cells holding it could be equalized to the same number of rewrite cycles. A good GC routine would write the oldest data to the cells with the highest write count, since it would likely remain stale, and use the cells the old data was sitting in for several write cycles of new or more dynamic data.
Otherwise an SSD sitting 50% full would only rewrite the other 50% over and over again and would wear out that much sooner. -
The controllers in thumb drives and SD cards don't do this. -
tilleroftheearth Wisdom listens quietly...
They dropped the ball in a way that no corporation should in 2014/2015. They lied to their customers (with the first fix and possibly this one too... time will tell once more). They ignored the issue that affected other similar products (original Samsung TLC based 840 SSD and at least some mSATA versions as well). They may as well have painted 'liars' and 'fools' on their foreheads, imo.
I would be curious to read any links you provide directly from Samsung on this issue. But I don't think they exist. -
Quoting from the article: "Samsung finally disclosed some details of the source of the bug." It comes with the explanation and diagrams. -
tilleroftheearth Wisdom listens quietly...
As I've mentioned, nothing directly from Samsung has been released, afaik. And no, it's not that I don't trust Anandtech. It's that they could be lied to too.
(With any spin possible in the future as deemed appropriate by the slimy Samsung corp). -
The article specifically says "Samsung released more info". Are you saying the writer of the article lied and made up the whole explanation? And what do you think the problem is? If Anandtech was lied to, it would have been by Samsung. Why? To me it seems very simple: all memory cells slowly lose electrons, and with this being one of the smallest, triple-level memory chips, older data was harder to read than expected, and the only reliable fix turned out to be rewriting old data on a regular basis. Actually the fix, rewriting old data, could have been made part of wear leveling, just in a more aggressive way, and no one would know. Probably we will never know why Samsung didn't catch it before release and took so long to fix it: rushing into production would be one guess, but when you're on the bleeding edge, mistakes are bound to happen; no one else had memory like this until now. Now Samsung has increased the size of the memory cells used in the 850 EVO and all is well, so far.
-
Now, you put an SSD in a USB enclosure and you'll get all of the extra bells and whistles that an SSD controller provides except TRIM (although there are some exceptions). On the other hand you need to be very careful about hot unplug. Sudden power loss can corrupt data on SSDs. -
Aggressive GC does not require TRIM. If you want a USB enclosure with an SSD, best to consider one with aggressive garbage collection. I believe Kingston drives have pretty aggressive GC. It would be nice if TRIM was added to the USB 3.1 spec, but so far it seems it's not there.