The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Samsung 840 120GB Endurance Testing

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by HTWingNut, Mar 2, 2013.

  1. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    ORIGINAL POST is in SPOILER tags below; it has been replaced with the test data/review that follows the SPOILER tag.

    I managed to snag two Samsung 840 120GB drives from someone (brand new) for $75 each. I should get them in the next few days and am planning to run endurance testing on them using Anvil's Storage Utilities Endurance Test.

    They will be run in a Core 2 Duo desktop with a G33/ICH9DH chipset (SATA II), one at a time. Max sequential write speed for the Samsung 840 120GB is only about 130MB/sec, so SATA II shouldn't restrict write performance. One will be left just as it comes out of the box with no over-provisioning, and the other will be 20% over-provisioned (so approx 90GB usable).

    I will run a read/write performance test every 50TB (about every 5 days, I estimate) by removing the drive and putting it in my Intel desktop with a SATA III controller.

    Hopefully this will provide data for several things:

    (1) TLC write endurance
    (2) Performance degradation after being hammered with data
    (3) Effect of over-provisioning on read/write performance over time

    One thing I'm trying to figure out is what the write amplification (WA) is for the drive, so I can determine total P/E cycles, and I can't find anything that shows a media wear indicator (MWI) for the drive either. Maybe I'm crazy, but I thought it'd be fun to see first hand how this works; I just want to make sure I have the right tools in place before I start. Thanks for any suggestions.

    TORTURE TESTING THE SAMSUNG 840 120GB SSD

    I purchased a 120GB Samsung 840 (non-Pro) SSD to torture test, since I got it at a bargain price. Rather than just hammer the SSD with data until it failed, I decided to test it with real data writes and deletes, roughly as a normal user would: an accelerated user workload, so to speak. So I set out to write a regular Windows command line script to perform the tasks. This specific SSD was taken through over 200TB of writes and deletes over the course of several months to check the reliability and longevity of the drive.

    ABOUT THE SAMSUNG 840:

    The Samsung 840 is part of Samsung's latest line of consumer SSDs and utilizes TLC NAND. This is different from past SSDs, which typically use MLC NAND for consumer devices and SLC for enterprise or server devices. Without going into detail, the bottom line is that TLC has far fewer overall write/erase cycles (about 1000) than MLC (about 5000-10000) or SLC (100000+). This is because TLC stores more voltage states per cell (3 bits per cell means 2^3 = 8 states), and over time, wear and many other factors make it harder to maintain the minute voltage differences that distinguish those states. I highly recommend reading this blurb at anandtech if you want to know more: http://www.anandtech.com/show/6337/samsung-ssd-840-250gb-review/3

    1000 cycles doesn't seem like much, especially if you look at a 120GB drive with 2:1 write amplification: effectively 500 cycles * 120GB ~ 60TB of host writes before it can't write any more. Granted, these drives are intended for consumer machines, either light-use systems or ones with a second drive for data and storage. In any case, even if you write 20GB per day (including background system tasks), 365 days a year, which is a lot, it will survive 60,000GB / 20GB ~ 3000 days, or over 8 years. In my mind the issue isn't really longevity, however, but performance. While these drives are a massive improvement over any laptop hard drive, they are also slow compared with other SSDs released today. Sequential read rates can make use of SATA III speeds, running over 500MB/sec, but all other cases stay within SATA II territory (300MB/sec). As you will see in these tests, read performance degrades over time with the number of writes; oddly enough, write speeds, while comparatively slow, remained steady throughout the testing.
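
    Here's that lifespan math as a quick Python sketch, in case you want to fiddle with the numbers; the 1000-cycle rating, 2:1 write amplification, and 20GB/day figures are the assumptions from the paragraph above, not measured values:

    [CODE]
    # Back-of-envelope lifespan estimate using the assumptions above:
    # ~1000 P/E cycles (TLC), 2:1 write amplification, 20GB/day of host writes.
    rated_cycles = 1000      # assumed TLC program/erase rating
    write_amp = 2.0          # assumed write amplification
    capacity_gb = 120

    host_writes_gb = rated_cycles / write_amp * capacity_gb  # 60,000 GB ~ 60 TB
    days = host_writes_gb / 20                               # at 20 GB/day
    print(f"~{host_writes_gb:,.0f}GB of host writes, ~{days:,.0f} days (~{days / 365:.1f} years)")
    [/CODE]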


    TEST SYSTEM:

    Shuttle SH67H3 with Intel H67
    Intel i5-4600 quad core CPU
    2x4GB DDR3 1600
    Intel X25-M G2 80GB SATA II SSD
    Samsung 840 120GB (the drive to be tortured)


    SOFTWARE:

    The software used to manage the torture tasks and measure performance was:

    (1) My own personal command line script for writes and deletes to the drive to be tortured, each action timed and logged
    (2) CrystalDiskMark - to measure performance of the drive at regular intervals
    (3) CrystalDiskInfo - to check status of the drive
    (4) Samsung Magician 4.0 - to check status of the drive including SMART, manual trim, and secure erase
    (5) MS Paint - to save images as needed


    INITIAL PERFORMANCE:

    The Intel X25-M 80GB probably isn't the best choice as a source drive, but considering the meager performance of the Samsung 840 it fit the performance criteria just fine. The Intel SSD was used for the system OS and also stored the files to be written to the SSD being tested/tortured. It has been around for a while, but it has been a solid performer through many machines and as a test drive. Following is the performance of the Intel 80GB X25-M G2 SSD at the start of the tests:

    [IMG]

    Fresh out of the box, below is the performance of the 120GB Samsung 840 SSD to be torture tested.

    [IMG]


    METHODOLOGY:

    Instead of hammering the drive continuously with random data, I decided on a more "accelerated consumer use" approach. It may not be entirely representative of real user habits, but I figured it would be more meaningful to use actual files and folders with Windows write/delete commands. I used a mix of personal and public domain files containing images, videos, music, and text documents in five different folders of varying file sizes, number of files, and folder depths. I wrote a command line script, which was much more of a challenge than I expected, but in the end I feel it turned out the way I wanted, and I learned a bit more about batch file programming along the way.

    I decided not to over-provision the drive, to test it just as a customer would receive it. Despite the possible benefits of over-provisioning, my guess is that a large majority of users don't know or care what over-provisioning is.

    The folders utilized contained the following contents:

    folder0 = documents, 409MB, 656 files, 6 folders
    folder1 = game (FlightGear), 1.18GB, 11673 files, 1432 folders
    folder2 = music MP3, 1.00GB, 182 files, 0 folders
    folder3 = video, 7.53GB, 6 files, 0 folders
    folder4 = images, 538MB, 242 files, 2 folders

    The command line routine would randomly choose one of these folders from the source drive (Intel SSD) and write it to the torture drive (Samsung 840) under a different folder name. Folders were added and randomly deleted off the torture drive. The script was written so that a delete was more likely than a write when the drive was over 80% filled, and a write more likely than a delete below 80% filled, so the drive would typically hover between 75% and 95% full throughout the torture testing. There was a random delay of up to 30 seconds between each write or delete action. Every action was timed and logged.
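
    For illustration, here is a rough sketch of that loop in Python (the actual tool is a Windows batch script, which I'm not reproducing here; the drive letters, folder names, and probabilities are placeholders that mirror the description above):

    [CODE]
    # Rough sketch of the torture loop described above. Python stands in
    # for the actual batch script; paths and names are placeholders.
    import os
    import random
    import shutil
    import time

    SOURCE = "S:/source"   # drive holding folder0..folder4 (placeholder)
    TARGET = "T:/"         # the Samsung 840 under test (placeholder)
    FOLDERS = [f"folder{i}" for i in range(5)]

    def percent_full(path):
        usage = shutil.disk_usage(path)
        return 100.0 * usage.used / usage.total

    count = 0
    while True:  # run until stopped (Ctrl+C) or the drive fails
        # Over 80% full, a delete is more likely than a write; under 80%,
        # a write is more likely, so the drive hovers around 75-95% full.
        delete = random.random() < (0.8 if percent_full(TARGET) > 80 else 0.2)
        copies = [d for d in os.listdir(TARGET) if d.startswith("copy")]
        start = time.time()
        if delete and copies:
            target = os.path.join(TARGET, random.choice(copies))
            shutil.rmtree(target)                 # random delete
            verb = "deleted"
        else:
            src = os.path.join(SOURCE, random.choice(FOLDERS))
            target = os.path.join(TARGET, f"copy{count:05d}")
            shutil.copytree(src, target)          # write under a new name
            count += 1
            verb = "wrote"
        print(f"{verb} {target} in {time.time() - start:.1f}s")  # timed and logged
        time.sleep(random.uniform(0, 30))         # random 0-30s delay
    [/CODE]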

    Approximately every 20TB, the performance of the drive was measured with CrystalDiskMark in five states:

    (1) immediately after torture
    (2) after 1 hr idle (to test for garbage collection)
    (3) after 8 hr idle (to test for garbage collection)
    (4) after quick format, manual trim, 1 hr idle
    (5) after secure erase

    SMART attributes were also recorded, although it didn't seem to provide much useful information.

    Below you can see the general trend of read and write performance for each of the five states listed above.

    READ performance immediately after each torture session:
    [IMG]

    WRITE performance immediately after each torture session:
    [IMG]


    READ performance 1hr idle
    [IMG]

    WRITE performance 1hr idle
    [IMG]


    READ performance minimum 8hr idle
    [IMG]

    WRITE performance minimum 8hr idle
    [IMG]


    READ performance after quick format, manual TRIM, 1hr idle
    [IMG]

    WRITE performance after quick format, manual TRIM, 1hr idle
    [IMG]


    READ performance after secure erase
    [IMG]

    WRITE performance after secure erase
    [IMG]


    You can also see performance after each of the above cycles broken out by sequential, 512K, 4K, and 4K QD32 results. You will notice some anomalies or gaps in the data; that is because I did not collect every data point for everything. This was somewhat of a work in progress, given it is the first time I have done this, but the trend is still apparent.

    SEQUENTIAL READ performance
    [IMG]

    SEQUENTIAL WRITE performance
    [IMG]


    512k READ performance
    [IMG]

    512k WRITE performance
    [IMG]


    4k READ performance
    [IMG]

    4k WRITE performance
    [IMG]


    4k QD32 READ performance
    [IMG]

    4k QD32 WRITE performance
    [IMG]


    We can also evaluate the % change after a secure erase compared with the performance immediately after a torture session. This is shown because a secure erase is where SSDs typically improve their performance the most. Other than one write anomaly, you can see that the more wear the drive has, the more significant the percentage increase. For writes, other than the one -14% anomaly, the change is more or less within a few percent. Writes on this drive remain rock solid no matter what.
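
    To be explicit, the percentage plotted below is just the relative change of each CrystalDiskMark score (a sketch of the formula, not my actual spreadsheet; the example values are made up):

    [CODE]
    # Percent change of a benchmark score after a secure erase, relative
    # to the score measured immediately after a torture session.
    def pct_change(after_secure_erase, after_torture):
        return 100.0 * (after_secure_erase - after_torture) / after_torture

    print(pct_change(300.0, 250.0))  # hypothetical read scores -> 20.0 (%)
    [/CODE]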

    READ performance change after secure erase
    [IMG]

    WRITE performance change after secure erase
    [IMG]


    As far as general SMART info goes, here is the SMART information at 3TB (duh, I forgot to record it fresh) and after 200TB of writes.

    SMART at 3TB


    SMART at 200TB



    I have also recorded the time it takes to process each write and erase cycle, and am in the process of trudging through thousands of log entries to determine the best way to present that data. That is coming soon.
     
  2. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    1) I wouldn't have too much faith in TLC write endurance.

    2) Call this the 'steady state' performance for the drive.

    3) This should not change with an 'endurance' type (synthetic) work load. Curious to see if it does.


    As far as SATA2 vs. SATA3 - yeah there are differences (think max latency... for example...) but 'endurance' testing doesn't account for that as far as I know.


    I think this is a waste of two drives :^) but good luck and curious to see your ongoing results.
     
  3. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Well what would you rather see done with them as far as testing?
     
  4. Marksman30k

    Marksman30k Notebook Deity

    Reputations:
    2,080
    Messages:
    1,068
    Likes Received:
    180
    Trophy Points:
    81
    Actually, endurance testing would be invaluable. The guys at Xtremesystems.org would be very interested in your results since they found the MLC drives tend to last about twice what their P/E endurance would suggest. We already know the 840 has crap steady state performance due to the high latency to reprogram TLC flash.
    You can also test the effect of sudden power loss on one of the 840 drives. This would definitely be pioneering work.
     
  5. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
    Short of catastrophic failure, it would be very hard to accurately measure the damage done to the drive without a SATA debugger and a custom-written program, though. Even if you image the drive and do a bit-by-bit comparison of the drive contents to the image, data corruption could have hit free/unused space or the spare area, for example, and you'd never be able to tell.
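
    To illustrate the point, the naive check would be a chunked compare along these lines (a rough Python sketch; the paths are illustrative and raw device reads need admin rights), and it only ever sees the user-addressable LBAs, never the spare area:

    [CODE]
    # Sketch of a naive bit-by-bit compare of a drive against a saved image.
    # Paths are illustrative; raw device access needs administrator rights.
    # Note: this only covers user-addressable LBAs, so corruption sitting
    # in free space or the spare area can still slip by unnoticed.
    CHUNK = 1 << 20  # compare 1 MiB at a time

    def drives_match(device, image):
        with open(device, "rb") as dev, open(image, "rb") as img:
            offset = 0
            while True:
                a, b = dev.read(CHUNK), img.read(CHUNK)
                if a != b:
                    print(f"mismatch near byte offset {offset}")
                    return False
                if not a:        # both streams exhausted
                    return True
                offset += len(a)

    # e.g. drives_match(r"\\.\PhysicalDrive1", "840_image.bin")
    [/CODE]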
     
  6. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    I'm considering doing my own torture test, and basically checking for performance with and without over-provisioning, with and without trim, and then hammer it in the end til death.

    I've already got an idea for a script to write sets of data of varying file sizes, numbers of folders, etc. at random intervals, with random wait times between folder writes and deletes. Something more akin to an "accelerated" regular workload. There will be 8-10 folders: some with large video files, another a game folder with hundreds or thousands of files of varying sizes from 1KB to hundreds of MB, another with a bunch of 2-4MB images. Folders will be selected randomly and written to the SSD, with folders sometimes deleted. I think this may be more realistic than a constant static write to the SSD until death.
     
  7. jefflackey

    jefflackey Notebook Evangelist

    Reputations:
    96
    Messages:
    352
    Likes Received:
    38
    Trophy Points:
    41
    I'm actually very interested to see, on the 840 Pro, any differences between a drive with no OP specifically set aside and one with space set aside for OP.
     
  8. MyDigitalSSD

    MyDigitalSSD Company Representative

    Reputations:
    68
    Messages:
    294
    Likes Received:
    18
    Trophy Points:
    31
    That will give some valuable, quantifiable info. Can't wait to see how it goes.
     
  9. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    So what would you rather see: just a constant data dump to the SSD, or actual writing and deletion of files with random pauses in between? I'm working on a script and have collected data to do just that. I have a few folders using all public domain/freeware material, in case someone else wants to use it:

    (1) Game Folder: 1.2GB, 11673 files, 1432 folders (Freeware/Open Source game FlightGear)
    (2) Documents: 410MB, 656 files, 6 folders (DOC files with random characters)
    (3) Music: 1.0GB, 181 files, 0 folders (MP3)
    (4) Videos: 7.53GB, 5 files, 0 folders (MP4, MPEG, AVI)
    (5) Images: still working on it, but probably about 700MB, 400 files, 5 folders

    The script will randomly choose one of these folders and write it to disk as a renamed folder (like folder001, folder002, etc.), with occasional random deletion of a folder or folders and an occasional quick format. It will also pause a random amount of time, from 0 to 20 minutes, before writing again.

    I figured this would be more of an "accelerated" approximation of real-world wear than just shoving random data bits at the SSD over and over and over again. Maybe it shouldn't matter.

    I was planning on checking read/write performance occasionally (every couple days) under a few conditions as well:

    - immediately after pausing the script (real-time TRIM effect)
    - after 60 minutes idle time (GC effectiveness)
    - after secure erase (resetting bits after x% wear)

    I may just run the script, check performance every day for about 4-5 days, then just hammer it with the Anvil endurance test until it's dead. Of course this would be done both on a drive with no OP and one with 20% OP.
     
  10. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    Just do what you had planned. Tiller is the last person who should be giving you advice on this type of stuff.
     
  11. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Yeah, I think I am. I just wanted it to be useful information before killing two SSDs, is all. I'm writing a regular command line batch script and haven't messed with that in a while. It'll be ugly code, but at least it will work (I hope, lol). I'm using a RAM disk to test it out before I run it on the SSD.
     
  12. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
    I am very interested in following this, both to see user-added OP vs. the manufacturer's standard OP and the effects it has over time. Thanks up front for doing this.
     
  13. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    If I can ever find time to get it going, lol. Life has become just too dang busy lately.
     
  14. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
    Because you're worrying about the Habs? Lol, j/k.
     
  15. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    More like the Habs not. :p

    It's crazy, besides Chicago, there's only a few points separating the rest of the teams in the Western conference. It could be anyone's game. Would this be the year the Wings don't make the playoffs? I sure hope not.
     
  16. jclausius

    jclausius Notebook Virtuoso

    Reputations:
    6,160
    Messages:
    3,265
    Likes Received:
    2,573
    Trophy Points:
    231
    <Maybe I'm jinxing things, but I'll risk it.> a) The newest point streak has started, which leads to b) Does Lord Stanley have a cup calling in 2013 for Kaner, Toews, Craw, and the rest?
     
  17. MyDigitalSSD

    MyDigitalSSD Company Representative

    Reputations:
    68
    Messages:
    294
    Likes Received:
    18
    Trophy Points:
    31
    I think it would be interesting to see how many times the drive can be cycled through a full write, read, and erase.

    MyDigitalSSD
     
  18. highlandsun

    highlandsun Notebook Evangelist

    Reputations:
    66
    Messages:
    615
    Likes Received:
    6
    Trophy Points:
    31
    Even nonstop, it should take at least a month of continuous I/O to kill the drive. You're going to be running an awful lot of tests. It would be nice if your tests instead produced useful results (e.g., a distributed computing work unit, I dunno), because you're going to spend a lot of CPU/energy/time to get there from here.
     
  19. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Torture testing has already begun with the non-OP drive.

    It took me a lot longer than expected to write a command line batch file that executes the way I wanted, but it's under way anyhow.

    And it won't take a month of continuous IO to kill it, if you do the math. Let's say you average 200MB/sec (with a mix of continuous large and random small files); continuous writes at that rate come to 720GB/hr. At ~1000 write cycles, with write amplification = 1 and perfect wear leveling (in reality it will be worse), that's about 120,000GB of writes for the 120GB drive, so 120,000GB / 720GB/hr ~ 167 hours ~ 7 days.

    If you hammer it with continuous IO it will be about half that.
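
    Here's that math as a quick Python sketch (same assumed figures as above: 200MB/sec sustained, 1000 cycles, WA = 1):

    [CODE]
    # Time to burn through the rated endurance at a continuous write rate
    # (assumed: 200 MB/s sustained, 1000 P/E cycles, write amplification = 1).
    gb_per_hour = 200 * 3600 / 1000          # 200 MB/s -> 720 GB/hr
    total_writes_gb = 1000 * 120             # 1000 cycles x 120GB = 120,000 GB
    hours = total_writes_gb / gb_per_hour
    print(f"{gb_per_hour:.0f} GB/hr -> {hours:.0f} hours (~{hours / 24:.0f} days)")
    [/CODE]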
     
  20. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Surprisingly, with 3TB of writes, taking it to 98% full, performance hasn't dropped much at all. I will provide more data later, but so far I'm impressed. The strange thing is that after idling for a while to let GC do its thing, read performance suffered slightly but write performance improved. Scratching my head on that one.

    Technically, 3TB of data on a 120GB drive is about 25 write cycles, or about 2.5% of the cells' rated life. I'm going to increase write frequency from a random 0-60 seconds to 0-20 seconds and see how that affects it. Plus let it run to about 10TB and then check again.
     
  21. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
    Any results yet, HT?
     
  22. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Working on it. I updated my batch file to include some more reporting and to improve/streamline the code a bit more. It got a lot more complicated than I cared for, but it seems to be working well so far. I'm going to report the data in many ways. I don't think I'll have to run a test with 20% OP, because with GC and secure erase the drive pretty much recovers to near peak performance. I'm at about 13TB of writes now; I had about a week of downtime while I updated my batch code. I should average about 3TB a day, and will provide all sorts of data once I reach 20TB, and then every 10TB thereafter.

    I'm checking read performance, write performance, and any write or read delay times: performance immediately after write torture (at 80%+ filled), after 1 hr of GC, after 12 hrs of GC, and after a secure erase; then I torture for another 10TB and report out again. I'm also checking SMART data through Samsung Magician. Technically it should last 100TB+ no problem, but we'll see. :)
     
  23. Abidderman

    Abidderman Notebook Deity

    Reputations:
    376
    Messages:
    734
    Likes Received:
    0
    Trophy Points:
    30
    Thanks and good job.
     
  24. Captmario

    Captmario Notebook Consultant

    Reputations:
    50
    Messages:
    200
    Likes Received:
    9
    Trophy Points:
    31
    I'll be waiting for results on this one :p since I'm planning to buy the same drive as an OS drive.
     
  25. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Hopefully first set of data in the next day or so at 20TB.
     
  26. Prostar Computer

    Prostar Computer Company Representative

    Reputations:
    1,257
    Messages:
    7,426
    Likes Received:
    1,016
    Trophy Points:
    331
    Considering the 840 uses TLC I'm interested as well. :)
     
  27. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Still churning away; almost to 40TB. CrystalDiskInfo and Samsung Magician are still saying 100% and "Good" status. I'm trying to figure out how to slice and dice all the data I've collected now, lol. The odd thing is that read speeds show a decrease in performance, while writes, no matter what, are consistent. I expected the opposite.
     
  28. Captmario

    Captmario Notebook Consultant

    Reputations:
    50
    Messages:
    200
    Likes Received:
    9
    Trophy Points:
    31
    Was the decrease in read performance very noticeable? I mean, around how much of a decrease in read performance did you notice?
     
  29. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    It's measured performance; I'm not using it as my daily drive, just for a torture test.

    Just to show a snapshot in time, here's a CrystalDiskMark comparison of the fresh drive (no writes) vs. after 40TB of writes and a secure erase:

    Samsung 840 120GB No OP FRESH
    [IMG]

    Samsung 840 120GB No OP after 40TB Writes then Secure Erase
    [IMG]


    So far I've collected data:
    Fresh Drive
    3TB Torture
    10TB Torture
    20TB Torture
    40TB Torture

    During each torture session, a log file collects:
    - Folder contents and size written
    - Time to write/erase each data folder, to check for any noticeable lag or performance drop during torture
    - SSD free space before each data folder is written
    - Total bytes written/erased for the torture session

    After each torture session I have collected performance data (primarily CrystalDiskMark):
    - Immediately after Torture
    - After 1 Hr Idle (to check GC routine)
    - After 8 to 12 Hrs idle (to check GC routine)
    - After Secure Erase
    - Also collected SMART attributes immediately after a torture session

    Then I start up the next torture session

    Keep in mind that this is not a constant write. It writes one of five folders I've created, each with various file sizes and subfolders. There is a random delay of 0-30 seconds between each write or erase (the choice of which is also random). It writes up to about 80% full, then more or less stays at 80-90% filled, occasionally dropping to 65-70% or climbing to 100%. This is to mimic accelerated "real world" use rather than just hammering it with random data without stopping.

    I plan on testing every 20TB until 100TB; then I will check every 10TB or so, because it will likely be running near failure.
     
  30. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    It's on its run to 100TB now and should be there in a few days. No failures or bad blocks yet. I'll have more data than you'll probably want to see once it's all done. Read performance is quite dismal though, 200-240MB/sec. Write performance hasn't wavered. Performance is similar for both reads and writes whether the drive is 90%+ full or 0% full. Secure erase seems to recover some of the performance, but reads are still only about 250MB/sec.
     
  31. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    I exceeded 100TB. Performance is starting to tank now, though, even write speeds. Only a matter of time...
     
  32. senshin

    senshin Notebook Evangelist

    Reputations:
    124
    Messages:
    311
    Likes Received:
    11
    Trophy Points:
    31
    Nice to see a real-life test of degrading SSDs :).

    + 1 rep
     
  33. eawtan

    eawtan Notebook Enthusiast

    Reputations:
    2
    Messages:
    12
    Likes Received:
    0
    Trophy Points:
    5
    Nice work testing out the wear/life of TLC; please continue the great work.
     
  34. SAiLO

    SAiLO Notebook Evangelist

    Reputations:
    35
    Messages:
    516
    Likes Received:
    243
    Trophy Points:
    56
    Great work, thanks!
     
  35. StratCat

    StratCat Notebook Evangelist

    Reputations:
    28
    Messages:
    301
    Likes Received:
    0
    Trophy Points:
    30
    Yes, thank you for this; it is important work in terms of the new TLC technology, and it also adds data to the currently hot topic of choosing between the Pro and non-Pro versions of the 840.

    And testing the 120GB non-Pro, the lowest performer of both 840 series and one often unfairly painted in reviews as somewhat of a step-child, is no bad thing either.

    +rep for you, my friend.

    Thank you, again.
     
  36. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Still going, almost 140TB.
     
  37. energydream2007

    energydream2007 Notebook Enthusiast

    Reputations:
    0
    Messages:
    24
    Likes Received:
    0
    Trophy Points:
    5
  38. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Almost to 180TB, lol. Still going strong. Performance has pretty much stabilized; slow, but stabilized. I'll be sure to publish all the data once I reach 200TB regardless. Then it will only be a matter of time before it goes kaput completely. But it's quite impressive that a 120GB SSD has reached 180TB of writes. That's a minimum of 1500 w/e cycles on something that was supposed to have only 1000.
     
  39. Encrypted11

    Encrypted11 Notebook Evangelist

    Reputations:
    137
    Messages:
    317
    Likes Received:
    0
    Trophy Points:
    30
  40. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    No flashing. I'm sticking with the original firmware to keep the test consistent.
     
  41. Encrypted11

    Encrypted11 Notebook Evangelist

    Reputations:
    137
    Messages:
    317
    Likes Received:
    0
    Trophy Points:
    30
    Right, fair enough. :)

    Sent from my GT-I8190 using Tapatalk 2
     
  42. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Wow, I never would have expected the TLC in the 840 to last 180TB+. It's pretty disappointing to see that performance has tanked into the 200MB/s range, but at least that's still better than a mechanical HDD.

    Thanks a lot for the testing, HTWingNut. I'll have an eye open looking for the final results.
     
  43. Marecki_clf

    Marecki_clf Homo laptopicus

    Reputations:
    464
    Messages:
    1,507
    Likes Received:
    170
    Trophy Points:
    81
    Thanks for the test. Very useful from my perspective, as I have one 120GB 840 series SSD in my laptop. +rep of course.
     
  44. Marksman30k

    Marksman30k Notebook Deity

    Reputations:
    2,080
    Messages:
    1,068
    Likes Received:
    180
    Trophy Points:
    81
    Most high-quality, reputable consumer NAND tends to be overspecced; the rated P/E cycle count is very conservative, usually with at least a 2x engineering factor for MLC flash. It's not unheard of for 25nm and 34nm flash (especially Intel's) to exceed the rated P/E cycles by a factor of 3 or 4 (especially the 34nm).
     
  45. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    This is TLC NAND though, which has eight voltage states (2^3, for 3 bits per cell) and so much lower P/E cycles.
     
  46. Leiser

    Leiser Newbie

    Reputations:
    0
    Messages:
    3
    Likes Received:
    0
    Trophy Points:
    5
    Good luck! I also have the 120GB version and it works well. The only problem is that when I tried to use Samsung Magician 4.0 to enable OP, it crashed my partition and turned it into RAW. I had to use 3.0 to set the OP and then upgrade.
     
  47. StratCat

    StratCat Notebook Evangelist

    Reputations:
    28
    Messages:
    301
    Likes Received:
    0
    Trophy Points:
    30
    Interesting...

    I used Magician 4.0 to set the Magician default OP (10%, in addition to the standard factory default 7%) on my sig machine, and it worked like a charm. Magician was a nifty bonus, IMHO. It's been rock-stable, faultless, and efficient in use, and I use it reasonably regularly, several times a week. So my experience has been very positive.

    I did d/l my copy directly from the Samsung website.

    Sorry to hear about your experience.
     
  48. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    200TB achieved! I should have data up in the next few days.
     
  49. Jarhead

    Jarhead 恋の♡アカサタナ

    Reputations:
    5,036
    Messages:
    12,168
    Likes Received:
    3,133
    Trophy Points:
    681
    Can you still fully use the drive (read and write)? And has performance tanked again, or is it relatively stable now?
     
  50. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    Yes, no errors on the drive; it's 100% usable. Overall performance has dropped considerably from new, but I'll share that in the data later. The odd thing is that read performance suffered greatly, whereas writes remained fairly stable throughout.
     