The ORIGINAL POST is in SPOILER tags below; it has been replaced with the test data/review following the SPOILER tag.
I managed to snag two Samsung 840 120GB drives from someone (brand new) for $75 each. I should get them in the next few days and am planning to run endurance testing on them using Anvil's Storage Utilities Endurance Test.
They will run one at a time in a Core 2 Duo desktop with a G33/ICH9DH chipset on SATA II. Max sequential write speed for the Samsung 840 120GB is only about 130MB/sec, so SATA II shouldn't restrict write performance. One drive will be left as it comes out of the box with no over-provisioning, and the other will be 20% over-provisioned (approx 90GB usable).
I will be running a read/write performance test every 50TB (about every 5 days I estimate) by removing the drive and putting it in my Intel desktop with SATA III controller.
Hopefully this will provide data for several things:
(1) TLC write endurance
(2) Performance degradation after being hammered with data
(3) Effect of Over-provisioning with read/write performance over time
One thing I'm still trying to figure out is the write amplification (WA) for the drive so I can determine total P/E cycles, and I can't find anything that shows a media wearout indicator (MWI) for the drive either. Maybe I'm crazy, but I thought it'd be fun to see first hand how this works; I just want to make sure I have the right tools in place before I start. Thanks for any suggestions.
TORTURE TESTING THE SAMSUNG 840 120GB SSD
I purchased a Samsung 840 (non-PRO) 120GB SSD to torture test, since I got it at a bargain price. I decided not to simply hammer the SSD with data until it failed, but to test it with real data writes and deletes, much as a normal user would: an accelerated user workload, so to speak. So I set down the path of writing a regular Windows command line script to perform the tasks. This specific SSD was taken through over 200TB of writes and deletes over the course of several months to check the reliability and longevity of the drive.
ABOUT THE SAMSUNG 840:
The Samsung 840 is part of Samsung's latest line of consumer SSDs and utilizes TLC NAND. This is different from past SSDs, which typically use MLC NAND for consumer devices and SLC for enterprise or server-type devices. Without going into detail, the bottom line is that TLC has far fewer write/erase cycles (about 1000) than MLC (about 5000-10000) or SLC (100000+). This is because TLC stores more voltage states per cell, and over time wear and many other factors make it harder to maintain the minute voltage differences that distinguish what's stored in each state. I highly recommend reading this blurb at AnandTech if you want to know more: http://www.anandtech.com/show/6337/samsung-ssd-840-250gb-review/3
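The bits-per-cell arithmetic behind that cycle gap fits in a couple of lines. A quick Python sketch (the cycle counts themselves come from the paragraph above; only the 2^bits relationship is computed here):

```python
# A NAND cell stores data as a charge level; the number of distinct
# voltage states it must distinguish doubles with each extra bit per
# cell, which shrinks the margin between adjacent states.
def voltage_states(bits_per_cell):
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s)/cell -> {voltage_states(bits)} voltage states")
```

So TLC must resolve 8 charge levels in the same cell that SLC resolves 2, which is why its rated endurance is so much lower.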
1000 cycles doesn't seem like much, especially if you look at a 120GB drive with 2:1 write amplification: effectively 500 cycles * 120GB ~ 60TB of host writes before it can't write any more. Granted, these drives are intended for consumer machines, either with a second drive for data and storage or for light use. In any case, even if you write 20GB per day (including background system tasks), 365 days a year, which is a lot, the drive should survive 60000GB / 20GB ~ 3000 days, or over 8 years. To my mind the real issue isn't longevity, but performance. While these drives are a massive improvement over any laptop hard drive, they are slow compared with other SSDs released today. Sequential reads can make use of SATA III speeds, running over 500MB/sec, but everything else stays within SATA II territory (300MB/sec). As you will see in these tests, read performance degraded over time with the number of writes; oddly enough, write speeds, while comparatively slow, remained steady throughout the testing.
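The endurance estimate above can be reproduced with a short calculation (a sketch in Python; the 1000-cycle, 2:1 write amplification, and 20GB/day figures are the assumptions stated in the text):

```python
# Rough SSD endurance estimate using the figures from the paragraph
# above: rated P/E cycles, an assumed write amplification factor, and
# an assumed daily host write volume.
def endurance_estimate(capacity_gb, pe_cycles, write_amp, gb_per_day):
    host_writes_gb = capacity_gb * pe_cycles / write_amp  # total writes the host can issue
    return host_writes_gb, host_writes_gb / gb_per_day    # (GB, days)

total_gb, days = endurance_estimate(120, 1000, 2, 20)
print(f"~{total_gb / 1000:.0f}TB of host writes, ~{days:.0f} days (~{days / 365:.1f} years)")
```

With these inputs it lands on ~60TB of host writes and roughly 3000 days, matching the "over 8 years" figure in the text.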
TEST SYSTEM:
Shuttle SH67H3 with Intel H67
Intel i5-4600 quad core CPU
2x4GB DDR3 1600
Intel X25-M G2 80GB SATA II SSD
Samsung 840 120GB (the drive to be tortured)
SOFTWARE:
The software used to manage the torture tasks and measure performance:
(1) My own personal command line script for writes and deletes to the drive to be tortured, each action timed and logged
(2) CrystalDiskMark - to measure performance of the drive at regular intervals
(3) CrystalDiskInfo - to check status of the drive
(4) Samsung Magician 4.0 - to check status of the drive including SMART, manual trim, and secure erase
(5) MS Paint - to save images as needed
INITIAL PERFORMANCE:
The Intel X25-M 80GB probably isn't the best choice as a source drive, but considering the meager performance of the Samsung 840, it fit the performance criteria just fine. The Intel SSD held the system OS and also stored the files to be written to the SSD under test. It has been around for a while, but it has been a solid performer through many machines and test runs. Below is the performance of the Intel 80GB X25-M G2 SSD at the start of the tests.
Fresh out of the box, the 120GB Samsung 840 SSD to be torture tested performed as follows.
METHODOLOGY:
Instead of hammering the drive continuously with random data, I decided on a more "accelerated consumer use" approach. It may not be entirely representative of user habits, but I figured it would be more meaningful to use actual files and folders with Windows write/delete commands. I used a mix of personal and public domain files containing images, videos, music, and text documents in five different folders of varying file sizes, file counts, and folder depths. I wrote a command line script, which was much more of a challenge than I expected, but in the end I feel it turned out the way I wanted, and I learned a bit more about batch file programming along the way.
I decided not to over-provision the drive, to test it just as a customer would receive it. Despite the possible benefits of over-provisioning, my guess is that the large majority of users don't know or care what over-provisioning is.
The folders utilized contained the following contents:
folder0 = documents, 409MB, 656 files, 6 folders
folder1 = game (FlightGear), 1.18GB, 11673 files, 1432 folders
folder2 = music MP3, 1.00GB, 182 files, 0 folders
folder3 = video, 7.53GB, 6 files, 0 folders
folder4 = images, 538MB, 242 files, 2 folders
The command line routine randomly chose one of these folders from the source drive (the Intel SSD) and wrote it to the torture drive (the Samsung 840) under a different folder name. Folders were added and randomly deleted from the torture drive. The script was written so that a delete was more likely than a write when the drive was over 80% full, and a write more likely than a delete below 80% full, so the drive typically hovered between 75% and 95% full throughout the torture testing. There was a random delay of up to 30 seconds between each write or delete action, and every action was timed and logged.
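A minimal Python sketch of that loop logic (the batch script itself wasn't posted, so the paths, the exact write/delete bias, and the logging format here are my assumptions):

```python
import random
import shutil
import time
from pathlib import Path

SOURCE = Path("D:/source")    # hypothetical source location (Intel SSD)
TARGET = Path("E:/torture")   # hypothetical torture-drive mount (Samsung 840)
FOLDERS = ["folder0", "folder1", "folder2", "folder3", "folder4"]

def choose_action(fill_fraction, rng):
    """Favor deletes above 80% full, favor writes below it (bias values are guesses)."""
    p_write = 0.3 if fill_fraction > 0.80 else 0.7
    return "write" if rng.random() < p_write else "delete"

def torture_step(counter, fill_fraction, rng):
    """Perform one randomly chosen, timed write or delete action."""
    action = choose_action(fill_fraction, rng)
    start = time.time()
    if action == "write":
        src = SOURCE / rng.choice(FOLDERS)
        shutil.copytree(src, TARGET / f"copy{counter:05d}")  # written under a new name
    else:
        existing = [d for d in TARGET.iterdir() if d.is_dir()]
        if existing:
            shutil.rmtree(rng.choice(existing))
    print(f"{action} took {time.time() - start:.1f}s")       # every action timed and logged
    time.sleep(rng.uniform(0, 30))                           # random 0-30s delay

if __name__ == "__main__":
    rng = random.Random()
    counter = 0
    while True:
        usage = shutil.disk_usage(TARGET)
        torture_step(counter, 1 - usage.free / usage.total, rng)
        counter += 1
```

The flip in write/delete bias around the 80% mark is what keeps the drive hovering in the 75-95% full range described above.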
Approximately every 20TB, the performance of the drive was measured with CrystalDiskMark in five states:
(1) immediately after torture
(2) after 1 hr idle (to test for garbage collection)
(3) after 8 hr idle (to test for garbage collection)
(4) after quick format, manual trim, 1 hr idle
(5) after secure erase
SMART attributes were also recorded, although they didn't seem to provide much useful information.
Below you can see the general trend of read and write performance for each of the five states listed above.
READ performance immediately after each torture session
WRITE performance immediately after each torture session
READ performance after 1hr idle
WRITE performance after 1hr idle
READ performance after minimum 8hr idle
WRITE performance after minimum 8hr idle
READ performance after quick format, manual TRIM, 1hr idle
WRITE performance after quick format, manual TRIM, 1hr idle
READ performance after secure erase
WRITE performance after secure erase
You can also see performance after each of the above cycles broken down by sequential, 512K, 4K, and 4K QD32 transfers. You will notice some anomalies or gaps in the data; that is because I did not collect every data point for everything. This was somewhat of a work in progress, given it is the first time I have done this, but the trend is still apparent.
SEQUENTIAL READ performance
SEQUENTIAL WRITE performance
512K READ performance
512K WRITE performance
4K READ performance
4K WRITE performance
4K QD32 READ performance
4K QD32 WRITE performance
We can also evaluate the percentage change after a secure erase compared with performance immediately after a torture session. This is shown because a secure erase is where SSDs typically recover the most performance. Other than one write anomaly, you can see that the more wear, the larger the percentage gain in read performance. For writes, aside from the single -14% anomaly, the change stays within a few percent. Writes on this drive remain rock solid no matter what.
READ performance change after secure erase
WRITE performance change after secure erase
As far as general SMART info goes, here is the SMART information at 3TB (duh, I forgot to record it fresh) and after 200TB of writes.
SMART at 3TB
SMART at 200TB
I have also recorded the time it takes to process each write and erase operation, and I am in the process of trudging through thousands of log entries to determine the best way to present that data. That is coming soon.
-
tilleroftheearth Wisdom listens quietly...
1) I wouldn't have too much faith in TLC write endurance.
2) Call this the 'steady state' performance for the drive.
3) This should not change with an 'endurance' type (synthetic) work load. Curious to see if it does.
As far as SATA2 vs. SATA3 - yeah there are differences (think max latency... for example...) but 'endurance' testing doesn't account for that as far as I know.
I think this is a waste of two drives :^) but good luck and curious to see your ongoing results. -
Well what would you rather see done with them as far as testing?
-
You can also test the effect of sudden power loss on one of the 840 drives. This would definitely be pioneering work. -
-
I'm considering doing my own torture test, and basically checking for performance with and without over-provisioning, with and without trim, and then hammer it in the end til death.
I've already got an idea for a script to write a set of data of varying file sizes, number of folders, etc at random intervals, with random wait times between folder writes and deletes. Something more akin to an "accelerated" regular work load. There will be 8-10 folders, some with large video files, another a game folder with hundreds/thousands of varying file sizes from 1KB to hundreds of MB, another with a bunch of 2-4MB images. Folders will be selected randomly and written to SSD, sometimes folders deleted. I think this may be more realistic than a static write to the SSD constantly until death. -
I'm actually very interested to see, on the 840 pro, any differences seen between a drive with no OP specifically set aside and one with space set aside for OP.
-
MyDigitalSSD Company Representative
That will give some valuable, quantifiable info. Cannot wait to see how it goes.
-
(1) Game Folder: 1.2GB, 11673 files, 1432 folders (Freeware/Open Source game FlightGear)
(2) Documents: 410MB, 656 files, 6 folders (DOC files with random characters)
(3) Music: 1.0GB, 181 files, 0 folders (MP3)
(4) Videos: 7.53GB, 5 files, 0 folders (MP4, MPEG, AVI)
(5) Images: still working on it, but probably about 700MB, 400 files, 5 folders
The script will randomly choose one of these folders and write it to disk as a renamed folder (like folder001, folder002, etc), with occasionally random deletion of a folder or folders, and occasional quick format. It will also pause a random amount of time from 0 minutes to 20 minutes before writing again.
I figured this would be an "accelerated" but closer-to-real-world wear pattern than just shoving random data bits at the SSD over and over and over again. Maybe it shouldn't matter.
I was planning on checking read/write performance occasionally (every couple days) under a few conditions as well:
- immediately after pausing the script (real-time TRIM effect)
- after 60 minutes idle time (GC effectiveness)
- after secure erase (resetting bits after x% wear)
I may just run the script, check performance every day for about 4-5 days, then just hammer it with Anvil endurance test until dead. Of course this would be done both on a drive with no OP and with 20% OP. -
saturnotaku Notebook Nobel Laureate
-
Yeah, I think I am. I just wanted it to be useful information before killing two SSDs is all. I'm writing a regular command line batch script and haven't messed with that in a while. It'll be ugly code, but at least it will work (I hope, lol). I'm using a RAM disk to test it out before I run it on the SSD.
-
I am very interested in following this, both to compare added OP with the manufacturers' standard OP and to see the effects it has over time. Thanks up front for doing this.
-
If I can ever find time to get it going, lol. Life has become just too dang busy lately.
-
-
It's crazy, besides Chicago, there's only a few points separating the rest of the teams in the Western conference. It could be anyone's game. Would this be the year the Wings don't make the playoffs? I sure hope not. -
-
MyDigitalSSD Company Representative
I think seeing how many times the drive can be cycled full write, read, and erase would be interesting.
MyDigitalSSD -
Even nonstop it should take at least a month of continuous I/O to kill the drive. You're going to be running an awful lot of tests. Would be nice if instead your tests produced useful results (e.g., a distributed computing work unit, I dunno) because you're going to spend a lot of CPU/energy/time to get there from here.
-
Torture testing has already begun with the non-OP drive.
It took me a lot longer than expected to write a command line batch file that executes the way I wanted, but it's under way anyhow.
And it won't take a month of continuous I/O to kill it if you do the math. Say you average 200MB/sec with continuous writes (a mix of large files and small random ones); that's 720GB/hr. At ~1000 write cycles with write amplification = 1 and perfect wear leveling (in reality it will be worse), that's about 120000GB on the 120GB drive, so 120000GB / 720GB/hr ~ 167 hours ~ 7 days.
If you hammer it with continuous IO it will be about half that. -
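That back-of-the-envelope math can be checked with a quick Python sketch (using the post's own assumptions of 200MB/sec sustained, 1000 P/E cycles, and WA = 1):

```python
# Time to wear out a drive under continuous writes, per the assumptions
# above: sustained write speed, rated P/E cycles, write amplification
# of 1, and perfect wear leveling.
def hours_to_wear_out(capacity_gb, pe_cycles, mb_per_sec):
    total_writes_gb = capacity_gb * pe_cycles     # 120 * 1000 = 120,000GB
    gb_per_hour = mb_per_sec * 3600 / 1000        # 200MB/s -> 720GB/hr
    return total_writes_gb / gb_per_hour

hours = hours_to_wear_out(120, 1000, 200)
print(f"~{hours:.0f} hours, or ~{hours / 24:.1f} days")
```

This reproduces the ~167 hours (~7 days) figure quoted above.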
Surprisingly, with 3TB of writes, taking it to 98% full, performance hasn't dropped much at all. I will provide more data later, but so far I'm impressed. The strange thing is that after idling for a while to let GC do its thing, read performance suffered slightly but write performance improved. Scratching my head on that one.
Technically 3TB of data on a 120GB drive is about 25 write cycles, or about 2.5% of the cells' rated life. I'm going to increase write frequency from a random 0-60 second delay to 0-20 seconds and see how that affects it. Then I'll let it run to about 10TB and check again. -
Any results yet, HT?
-
Working on it. I updated my batch file to include some more reporting and to improve/streamline the code a bit. It got more complicated than I cared for, but it seems to be working well so far. I'm going to report the data in several ways. I don't think I'll have to run a test with 20% OP, because with GC and secure erase the drive pretty much recovers to near-peak performance. I'm at about 13TB of writes now. I had about a week of downtime while I updated my batch code. I should average about 3TB a day and will provide all sorts of data once I reach 20TB, and then every 10TB thereafter.
I'm checking read performance, write performance, and any write or read delay times: immediately after write torture (at 80%+ full), after 1 hr of GC, after 12 hrs of GC, and after a secure erase; then I torture for another 10TB and report again. I'm also checking SMART data through Samsung Magician. Technically it should last 100TB+ no problem, but we'll see. -
Thanks and good job.
-
I'll be waiting for results on this one, since I'm planning to buy the same drive as an OS drive.
-
Hopefully first set of data in the next day or so at 20TB.
-
Prostar Computer Company Representative
Considering the 840 uses TLC I'm interested as well.
-
Still churning away; almost to 40TB. CrystalDiskInfo and Samsung Magician still say 100% and "Good" status. Trying to figure out how to slice and dice all the data I've collected now, lol. The odd thing is that read speeds show a decrease in performance, while writes are consistent no matter what. I expected the opposite.
-
Was the decrease in read performance very noticeable? I mean, around how much of a decrease in read performance did you notice?
-
Just to show a snapshot in time, here's a comparison of CrystalDiskMark with a fresh drive (no writes) vs. 40TB of writes after a secure erase:
Samsung 840 120GB No OP FRESH
Samsung 840 120GB No OP after 40TB Writes then Secure Erase
So far I've collected data:
Fresh Drive
3TB Torture
10TB Torture
20TB Torture
40TB Torture
During the torture session I have a log file to collect data:
- Folder contents and size written
- Time to write/erase each data folder during torture to test for any noticeable lag times and performance during torture testing
- SSD free space before each data folder written
- Total bytes written/erased for each torture session
After each torture session I have collected performance data (primarily CrystalDiskMark):
- Immediately after Torture
- After 1 Hr Idle (to check GC routine)
- After 8 to 12 Hrs idle (to check GC routine)
- After Secure Erase
- Also collected SMART attributes immediately after a torture session
Then I start up the next torture session
Keep in mind that this is not a constant write. The script writes one of five folders I've created, each with various file sizes and subfolders. There is a random delay of 0-30 seconds between each write or erase (the action is chosen randomly), and it writes up to about 80% full; after that the drive more or less remains 80-90% filled, occasionally dropping to 65-70% and then rising to 100%. This is to mimic accelerated "real world" use and not just hammer the drive nonstop with random data.
I plan on testing every 20TB until 100TB then I will check every 10TB or so because it will likely be running near failure. -
It's going on its run to 100TB now, should be there in a few days. No failures or bad blocks yet. I'll have more data than you'll probably want to see once it's all done. Read performance is quite dismal though, 200-240MB/sec. Write performance hasn't wavered. Similar performance both read and write whether 90%+ full or 0% full. Secure erase seems to recover some of the performance, but still only about 250MB/sec read.
-
I exceeded 100TB. Performance is starting to tank though, even write speeds now. Only a matter of time...
-
Nice to see some real-life testing of degrading SSD drives.
+1 rep -
Nice work testing out the wear/life of a TLC, pls continue the great work.
-
Great work, thanks!
-
Yes, thank you for this; it is important work in terms of the new TLC technology, but it also is adding data to the currently hot topic of choosing between the pro & non-pro versions of the 840.
And testing the 120GB non-pro, being the lowest performer of both 840 series and often painted in reviews as somewhat of an undeserved step-child, is no bad thing either.
+rep for you, my friend.
Thank you, again. -
Still going, almost 140TB.
-
great work
-
Almost to 180TB, lol. Still going strong. Performance has pretty much stabilized; slow, but stabilized. I'll be sure to publish all the data once I reach 200TB regardless. After that it will only be a matter of time before it goes kaput completely. Quite impressive that a 120GB SSD has reached 180TB of writes, though. That's a minimum of about 1500 write/erase cycles on NAND that was only supposed to handle 1000.
-
Nice testing. Did you try flashing the new firmware on it?
www.samsung.com/samsungssd
Sent from my GT-I8190 using Tapatalk 2 -
No flashing. Sticking with original firmware to keep test consistent.
-
Right, fair enough.
-
Wow, I never would have expected the TLC in the 840 to last 180TB+. It's pretty disappointing to see that performance has tanked into the 200MB/s range, but at least that's still better than a mechanical HDD.
Thanks a lot for the testing, HTWingNut. I'll have an eye open looking for the final results. -
Thanks for the test. Very useful from my perspective, as I have one 120GB 840 series SSD in my laptop. +rep of course.
-
-
This is TLC NAND though, which with eight voltage states (3 bits per cell) has much lower P/E cycles.
-
Good luck! I also have the 120GB version and it works well. The only problem is that when I tried to use Samsung Magician 4.0 to enable OP, it crashed my partition and turned it into RAW. I had to use 3.0 to set the OP and then upgrade.
-
I used Magician 4.0 to set the Magician default OP (10%, in addition to the standard factory default 7%) on my sig machine, and it worked like a charm. Magician was a nifty bonus, IMHO, and it's been rock-stable, faultless, and efficient in use; I use it fairly regularly, several times a week. So my experience has been very positive.
I did d/l my copy directly from the Samsung website.
Sorry to hear about your experience. -
200TB achieved! I should have data up in the next few days.
-
Can you still fully use the drive (read and write)? And has performance tanked again, or is it relatively stable now?
-
Yes, no errors on the drive; it's 100% usable. Overall performance has dropped considerably from new, but I'll share that data later. The odd thing is that read performance suffered greatly, whereas writes remained fairly stable throughout.
Samsung 840 120GB Endurance Testing
Discussion in 'Hardware Components and Aftermarket Upgrades' started by HTWingNut, Mar 2, 2013.