The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Sandisk Dashboard. Is it accurate?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by HopelesslyFaithful, Sep 16, 2014.

  1. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    So I bought the SanDisk Extreme Pro 480GB recently because I wanted the fastest SSD, and I noticed that CrystalDiskInfo can't read the wear. The Dashboard only showed 99%-100% of drive life left (it just changed to 100% after a SMART scan). After doing an in-depth SMART reading, I just noticed that it shows wear amounts in there. It also shows reads. So far my drive has been powered on for ~800 hours, and I have read 10,200GB and written 2,875GB! Is that possible? I bought this not that long ago, and before that I was averaging 30GB a day, but that was over a year. Lately I have been averaging A LOT more, but I doubt 86+GB a day o_O The reads I wouldn't be surprised about (though I thought they would be higher, considering I do daily AV scans and load lots of data), but the writes? Blarg?!?
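    For reference, here's roughly how I pulled those per-day averages out of the SMART data with smartmontools (just a sketch - I'm assuming attribute 9 is power-on hours and 241/242 are GiB written/read on this drive; the IDs and raw-value units are vendor-specific):

        import subprocess

        # Sketch only: assumes smartmontools is installed and that this drive
        # reports attribute 9 as power-on hours and 241/242 as GiB written/read.
        out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                             capture_output=True, text=True).stdout

        raw = {}
        for line in out.splitlines():
            parts = line.split()
            if parts and parts[0].isdigit():
                raw[int(parts[0])] = int(parts[-1])  # attribute ID -> raw value

        days = raw[9] / 24                               # ~800 hours -> ~33 days
        print(f"{raw[241] / days:.0f} GiB written/day")  # 2875 / 33.3 -> ~86
        print(f"{raw[242] / days:.0f} GiB read/day")     # 10200 / 33.3 -> ~306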
     
  2. alexhawker

    alexhawker Spent Gladiator

    Reputations:
    500
    Messages:
    2,540
    Likes Received:
    792
    Trophy Points:
    131
    Could it be due to WA?
     
  3. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    WA=Write Amplification


    Is 2875GB possible in 800 hours? Sure it is; but only you know the workflow you put on the drive. And no matter what a user initiates in writes, depending on the state of the nand, WA could increase it by 10x or even 20x easily.
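    Just to put rough numbers on that (the host/WA split below is assumed, not something the drive reports):

        # Illustrative only - we can't see the real host/WA split from the drive.
        power_on_days = 800 / 24        # ~33 days
        host_gb_per_day = 30            # the ~30GB/day average mentioned above
        wa_factor = 2.9                 # an assumed WA for this workload
        nand_gb = power_on_days * host_gb_per_day * wa_factor
        print(f"~{nand_gb:.0f} GB of nand writes")  # ~2900 GB, close to the 2875 reported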
     
    Ferris23 likes this.
  5. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    Yeah, but does that actually count WA? I don't think CrystalDiskInfo counted WA.
     
  6. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    A program can only report what was written in total; it doesn't know the WA factor.

    The WA factor depends on the health of the nand, the quality of the firmware with regard to TRIM and GC, and the algorithms used to ensure that each nand chip is used equally.

    See here for much more info on the WA factor and most everything SSD related:

    AnandTech | The SSD Relapse: Understanding and Choosing the Best SSD
     
    HopelesslyFaithful and Ferris23 like this.
  7. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    Thanks... I have read several articles on WA and other aspects, but this is by far the easiest and most detailed read. Truly broken down Barney-style :D

    Why couldn't SanDisk's own program know WA? Couldn't they add the ability to track the firmware's garbage collection?
     
  8. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Nothing can track WA on its own.

    The way to determine WA is to write a known quantity of data to a drive and then read the registers for total GBs written. These will never match 100%.

    The total GBs written divided by the known quantity of data written to the disk gives the WA factor (normally some percentage over and above the actual data).
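    In code form the calculation is trivial (both figures below are made up for illustration):

        # WA factor = drive-reported total writes / known quantity we wrote.
        known_data_gb = 100.0        # data we deliberately wrote to the drive
        reported_writes_gb = 128.0   # increase in the drive's total-writes register
        wa_factor = reported_writes_gb / known_data_gb
        print(f"WA factor: {wa_factor:.2f}")  # 1.28 -> 28% over and above the data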

    The OCZ drives with the SandForce controllers, with their method of compressing written data, were actually writing less data than they were storing, but they had to compress/decompress each file on the fly, and that made them slower when working with incompressible data.

    The Intel X25-Ms (I think...) were among the first to tackle WA head-on without data compression techniques. The WA factor for those drives was around 1.1, if I remember correctly, when other drives at the time were closer to a WA factor of 10. Yeah; the Intel drives would write 1.1GB to the nand chips for each 1GB of real data. Other drives would write 10GB for the same GB of data.

    We've progressed a lot since then. But I'm sure that WA factors could still be 2 or higher in specific circumstances with specific workloads (and with no OP'ing (over-provisioning) and/or no free space at all).


    The firmware and nand don't know what is 'just' data - all they know is that x write cycles were used up to save it. And then they can only report back that xxxxGBs have been written. Of that, only a smaller portion was actual data - but only the user could know that. To the nand, every write is 'data' written.
     
  9. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    Oh, at first I thought you were saying the GB written was actual data and the WA GB was unknown by the SSD rofl. I gotcha. It still doesn't make sense, though, that the SanDisk Dashboard wouldn't know the amount of actual data it wrote vs. the total it used. Both values have to be known by the system, because if it doesn't know how much actual data it just wrote... how can it write the data? :) See what I am saying? Both values have to be known. I can't write a sentence without knowing how many words I wrote. That just isn't possible :)

    The OS says write 10GB... the SSD has to know it just wrote 10GB because it was just told to do it :)
    Now the SSD also has to know the WA, because it just used some method of garbage collection and wrote 12GB of data.

    Basically, it has to know those numbers no matter what, because if it doesn't, how can it perform the operation? :) Now, whether it records and tracks that info is a different story. I know I just typed all these characters, but did I count them and save that info? Nope :)

    Characters with spaces: 2565
    Now I've added the feature of actually recording that info I already knew :) What you said doesn't add up.

    :/ 2936GB now, 61GB in a day :/
     
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    No, you're still confused.

    Let me try like this...

    If we write a folder of files that is 10GB in size to an SSD with used nand (which is any SSD used for more than a few minutes), the SSD will need to use up much more than 10GB worth of writes to actually store that data.

    If it needs to use 12GB of write cycles to store those 10GB of data, the WA factor is 1.2... and the GB written is 12GB and is reported as such. The controller doesn't really know or care how much data you are actually storing - all it knows is how many write cycles were used.

    If the controller needs to do massive GC to store that 10GB of data, the WA factor increases dramatically. A close-to-full drive that is used constantly, with a less-than-great GC algorithm, can have a WA factor of 10 or more.

    The WA factor is something we calculate knowing the 'real' data we need saved vs. the actual write cycles consumed to commit that data to the nand chips.

    No SSD controller is going to know that by itself, because to an SSD, everything written is 'data' - and every operation that consumes write cycles truly counts as GBs written.
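    Sticking with that example (all numbers assumed):

        # The same 10GB of real data consumes very different write cycles
        # depending on the WA factor the drive's state imposes.
        host_data_gb = 10
        for wa in (1.2, 2.0, 10.0):   # healthy drive ... near-full drive with poor GC
            print(f"WA {wa:>4}: {host_data_gb * wa:.0f} GB of write cycles consumed")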
     
  11. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    My point is that there is no reason the controller or the SanDisk Dashboard can't record that. It is 100% possible; the industry has just deemed it not necessary. That is what I was saying.
     
  12. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I don't see how it's possible, but you're welcome to write the firmware that does it. :)

    An SSD controller does not see 'data' as a whole; it sees bits, bytes and blocks. The GC routines and TRIM create their own 'data' for the nand chips to write (whether that write is actual data or simply clearing the nand - it is still a write and consumes a write cycle).

    The O/S and the programs we use also initiate their own writes to the nand - which may be temporary or not - in addition to the save-'data' commands a user initiates.

    So, how do you think a controller would differentiate between those scenarios? It can't.

    It can record total writes, and does, but what is data and what is not data is pretty elusive to an SSD controller.


    And your example of writing a sentence is off... I can write many sentences without knowing (in advance) how many words I'll write.

    Sure, after it is written I can count them easily. But not knowing the number of words does not stop me from writing a sentence in the first place.



    Same with the SSD - it may know what it has to write, but it will only know after the fact how many write cycles it took to accomplish that. And even then, it still doesn't know what is 'data' and what isn't. ;)
     
  13. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    I'll write an example and draw a picture for you.

    The point is that the controller knows it received a 128KB file to write but had to write 192KB to actually store it. So it could be designed to track how much user data it wrote and how much "actual" SSD data (WA data) it wrote. I'll draw a diagram later, but just look at the AnandTech article you posted... the controller gets the 128KB file, so it knows the original amount and the final amount - or it can. According to you, they just never made it track the original amount, only the final amount.

    Does that make sense? I'll draw a diagram if needed.


    You just stated my point... it does know both variables... it just doesn't track both. I am surprised they never set it up to track it.
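    Something like this hypothetical bookkeeping is all I mean (names made up - no real firmware is written in Python, and no shipping controller is known to expose these two counters):

        # Hypothetical controller counters - the feature being argued for,
        # not something any real SSD is known to implement.
        class Counters:
            def __init__(self):
                self.host_bytes = 0   # what the OS asked to be written
                self.nand_bytes = 0   # write cycles actually consumed

        def handle_write(c, host_payload, total_nand_written):
            c.host_bytes += host_payload
            c.nand_bytes += total_nand_written

        c = Counters()
        handle_write(c, 128 * 1024, 192 * 1024)  # 128KB file costs 192KB of writes
        print(f"WA factor: {c.nand_bytes / c.host_bytes}")  # 1.5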
     
  14. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    It only reports what the SSD decides to report. So it depends on the drive.
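    For example, you can dump whatever attribute table your drive chooses to expose and look (sketch, needs smartmontools; attribute IDs and names vary per vendor and firmware):

        import subprocess

        # Print every vendor attribute the drive reports: ID, name, raw value.
        # Some drives expose both a host-writes and a nand-writes style counter;
        # many expose only one, or neither.
        out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            parts = line.split()
            if parts and parts[0].isdigit():
                print(parts[0], parts[1], parts[-1])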
     
  15. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Trying to explain this is obviously above my pay grade. :)

    It is not as simple as you may think. If the 10GB of DATA we're using as an example is not a single file (simple) but hundreds of small and large files, the acrobatics a controller has to do can quickly boggle the mind.

    First, it is juggling the read/erase/write penalty of needing to write a file but having to erase a whole block first.

    Second, it is juggling the fragmentation and the write-combining algorithms to try to ensure that data is spread across controller channels and nand chips too - this parallelism is what gives us higher performance than a single nand chip can offer, after all.

    Third, it is doing all of the above, or delaying all of the above (if the state of the nand chips allows it), to try to give us the performance we are asking of it 'now'.


    When all that is taken into account - the juggling of not only the new DATA we want saved, but also the DATA already on the disk being read, rewritten and erased over and over again - which DATA does the controller count as 'new' and which does it attribute to WA?

    Specifically: imagine updating a database so that two bytes change in a 2MB file - that will incur at least one block erase to add those two bytes (assuming this is a well-used SSD). So, what part do we attribute as DATA and what part as WA? (In this case, WA would be through the roof...)
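    The arithmetic for that worst case (the 2MB block size is assumed just for illustration):

        # Two bytes of 'real' data force a whole-block read/erase/rewrite.
        changed_bytes = 2
        block_bytes = 2 * 1024 * 1024   # assumed erase-block size
        wa_factor = block_bytes / changed_bytes
        print(f"WA factor ~ {wa_factor:,.0f}")  # ~1,048,576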


    Anyway, I can see a glimpse of what you're saying, and you do have a point. But I can also see the pitfalls of trying to implement this (even if the manufacturers wanted to show us this data), and the cries of 'foul!' that would be heard around the world depending on which algorithm is used to define DATA and which part of the used write cycles is described as actually wasted.

    If the cheating in benchmarks is rampant now with CPUs, GPUs and such - imagine the questions you would have about SanDisk Dashboard if it showed its supported drives as all pretty and cheerful, or all wasteful and dreaded. And no, there is no middle ground. :) :) :)
     
  16. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Here is a post relevant to this topic, even though it was written about the slow-read EVO/TLC drives.

    See:
    Samsung 840 EVO read speed drops on old-written data in the drive - Page 64


    The interesting part to me is that in one year, two files were written to the brand-new EVO - a 90GB file and a 42MB file - yet Magician is reporting that 250GB of nand cycles were used. That's a WA factor of > 2.7; compare that to the 1.1 WA factor Intel had achieved years ago on an O/S, program and data drive...

    Sure, as the author in the link states, Windows may have been writing to the drive too - but my guess? It is the pseudo SLC nand that is causing such a huge WA factor here (the data is basically written twice, at least).


    The topic of that link is sad... ~18MB/s READ SPEED after a year of being powered up but not touched as a storage drive...

    If you look at the graph: 5030 seconds to read two files totaling ~90GB; yeah, over 83 MINUTES to do what should be done at 400 or 500 MB/s (~3 minutes if the drive worked as it should).
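    Checking the math on those figures:

        # Rough figures from the linked thread.
        data_gb, seconds = 90, 5030
        print(f"{data_gb * 1000 / seconds:.0f} MB/s")              # ~18 MB/s
        print(f"{seconds / 60:.0f} minutes")                       # ~84 minutes
        print(f"{data_gb * 1000 / 500 / 60:.0f} min at 500 MB/s")  # ~3 minutes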

    Samsung has really screwed consumers with their TLC nand.

    I hope that a firmware upgrade can fix this issue (it looks like a problem with decaying nand and/or ECC routines); otherwise, I'm expecting refunds or 850 Pros to be given to everyone affected. And Samsung, I don't want the 850 Pro either.
     
    HopelesslyFaithful likes this.
  17. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0

    I am guessing he is noticing the endurance issue, not heat... Did he run the test over and over once the drive had cooled? If not, then his test is irrelevant until he proves that the cold runs don't trigger the endurance issue.
     
  18. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Huh?

    10chars
     
  19. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    If you do a lot of reads or writes with the Samsung drives, they get slower over time. They lack "endurance" - TweakTown has covered this extensively. He might be noticing endurance issues, not heat issues.
     
  20. djembe

    djembe drum while you work

    Reputations:
    1,064
    Messages:
    1,455
    Likes Received:
    203
    Trophy Points:
    81
    If you're talking about what I think you're talking about, the issue in Samsung EVO drives is a slowdown in access speed for data that's over a couple months old. It has nothing to do with the total amount of data written or the usable lifetime of the drive.
     
  21. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    No. AnandTech has written extensively about continuous usage and how it affects speeds. After that type of usage, drives need time to recover. So if you are doing a lot of random reads and writes, the drive will be slower and will continue to get slower until you give it time to recover. I don't know whether the slowdown under continuous usage is a sign of overheating or of endurance, but the general consensus is that it is an endurance issue, not heat. My point is that if he didn't run the cold test for as long as he ran the hot test, then he didn't show anything meaningful.

    So I googled the TweakTown review: they test speeds after heavy use, and again after the drive has had time to recover from the operation. He may just be seeing recovery-time issues, not some unknown heat issue. I don't know his whole testing process, but that could be the problem if he didn't design that variable out of the testing.