So I bought the SanDisk Extreme Pro 480GB recently because I wanted the fastest SSD, and I noticed that CrystalDiskInfo can't read the wear. The Dashboard only showed 99%-100% of drive life left (it just changed to 100% after a SMART scan). After doing an in-depth SMART reading, I noticed that it does show wear amounts in there. It also shows reads. So far my drive has been powered on for ~800 hours, and I have read 10,200GB and written 2,875GB! Is that possible? I bought this not that long ago, and before I was averaging 30GB a day, but that was over a year. Lately I have been averaging A LOT more, but I doubt 86+GB a day. The reads I wouldn't be surprised about (though I thought it would be more, considering I do daily AV scans and load lots of data), but the writes? Blarg?!?
-
HopelesslyFaithful Notebook Virtuoso
-
Could it be due to WA?
-
HopelesslyFaithful Notebook Virtuoso
WA?
-
tilleroftheearth Wisdom listens quietly...
WA=Write Amplification
Is 2875GB possible in 800 hours? Sure it is, but only you know the workload you put on the drive. And no matter what a user initiates in writes, depending on the state of the nand, the WA could easily increase it by 10x or 20x. -
HopelesslyFaithful Notebook Virtuoso
Yeah, but does that actually count WA? I don't think CrystalDiskInfo counted WA.
-
tilleroftheearth Wisdom listens quietly...
A program can only report what was written in total; it doesn't know the WA factor.
The WA factor depends on the health of the nand, the quality of the firmware with regard to TRIM and GC, and the wear-leveling algorithms used to ensure that each nand chip is used equally.
See here for much more info on the WA factor and most everything SSD related:
AnandTech | The SSD Relapse: Understanding and Choosing the Best SSD -
HopelesslyFaithful Notebook Virtuoso
Why couldn't SanDisk's own program know WA? Couldn't they add the ability to track the firmware's garbage collection? -
tilleroftheearth Wisdom listens quietly...
Nothing can track WA on its own.
The way to determine WA is to write a known quantity of data to a drive and then read the registers for total GB's written. These will never match 100%.
The total GB's written divided by the known quantity of data written to the disk gives the WA factor (it is always a ratio of 1 or more; the write cycles consumed are always at or above the actual data).
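[Editor's note: a minimal sketch of that arithmetic, with made-up register values; real drives expose lifetime NAND writes through vendor-specific SMART attributes, which is where these numbers would actually come from.]

```python
# Hypothetical SMART register values, before and after a controlled test write.
register_before_gb = 4180   # drive's lifetime NAND GB written (made-up)
register_after_gb  = 4192   # same register after the test write (made-up)
known_data_gb      = 10     # the known quantity of data we deliberately wrote

# WA factor = total NAND write cycles consumed / real data written
wa_factor = (register_after_gb - register_before_gb) / known_data_gb
print(wa_factor)  # 1.2 -> the drive used 12 GB of writes for 10 GB of data
```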
The OCZ drives with the SandForce controllers and their methods of compressing written data were actually writing less data than what they were storing, but they had to compress/un-compress each file on the fly, and that made them slower when working with incompressible data.
The Intel X25-Ms (I think...) were one of the first to tackle WA head on without data compression techniques. The WA factor for those drives was around 1.1, if I remember correctly, when other drives at the time were closer to a WA factor of 10. Yeah: the Intel drives would write 1.1GB to the nand chips for each 1GB that was actually real data. Other drives would write 10GB for the same GB of data.
We've progressed a lot since then. But I'm sure that WA factors could be 2 or higher in specific circumstances with specific workloads (and no over-provisioning and/or free space at all).
The firmware and nand don't know what is 'just' data - all they know is that x write cycles were used up to save it. And then the drive can only report back that xxxxGB's have been written. Of that, only a smaller portion was actually data - but only the user could know that. To the nand, every write is 'data' written. -
HopelesslyFaithful Notebook Virtuoso
See what I am saying? Both values have to be known. I can't write a sentence without knowing how many words I wrote. That just isn't possible.
The OS says write 10GB... it has to know it just wrote 10GB, because it was just told to do it.
Now the SSD also has to know the WA, because it just used some method of garbage collection and wrote 12GB of data.
Basically, it has to know those numbers no matter what, because if it doesn't, how can it perform the operation? Now, whether it records and tracks that info is a different story. I know I just typed all these characters, but did I count them and save that info? Nope.
Characters with spaces: 2565
Now I've added the feature of actually recording that info that I already knew. What you said doesn't add up.
:/ 2936GB now 61GB in a day :/ -
tilleroftheearth Wisdom listens quietly...
No, you're still confused.
Let me try like this...
If we write a folder of files that are 10GB in size to an SSD with used nand (which is any SSD used more than a few minutes), the SSD will need to use up much more than 10GB worth of writes to actually store that data.
If it needs to use 12GB's of write cycles to store those 10GB's worth of data, the WA factor is 1.2... and the GB written is 12GB and is reported as such. The controller doesn't really know or care how much data you actually are storing - all it knows is how many write cycles are used.
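[Editor's note: a toy illustration of that 10GB-in, 12GB-written case; the page size and the amount of GC relocation are invented numbers, and a real FTL is far more involved.]

```python
# Toy model: the controller increments one counter for every page it
# programs, whether the page holds new user data or still-valid pages
# relocated by garbage collection. That single total is all it "knows".
PAGE_KB = 4                      # hypothetical 4 KiB NAND page

nand_pages_programmed = 0        # the only total the drive really tracks

def host_write(user_pages, gc_relocated_pages):
    """Store user data; GC first copies still-valid pages out of the
    blocks it wants to erase, and those copies are writes too."""
    global nand_pages_programmed
    nand_pages_programmed += gc_relocated_pages  # GC overhead
    nand_pages_programmed += user_pages          # the actual data

user_pages = 10 * 1024 * 1024 // PAGE_KB   # 10 GiB of user data
gc_pages   = 2 * 1024 * 1024 // PAGE_KB    # 2 GiB of GC copies (assumed)
host_write(user_pages, gc_pages)

print(nand_pages_programmed / user_pages)  # 1.2 -> reported as 12 GiB written
```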
If the controller needs to do massive GC to store that 10GB's worth of data, the WA factor increases dramatically. A close to full drive that was used constantly with a less than great GC algorithm can have a WA factor of 10 or more.
The WA factor is something we calculate knowing the 'real' data we need saved vs. the actual write cycles consumed to commit that data to the nand chips.
No SSD controller is going to know that by itself, because to an SSD, everything written is 'data' - and every operation that consumes write cycles is actually and truly GB's Written. -
HopelesslyFaithful Notebook Virtuoso
-
tilleroftheearth Wisdom listens quietly...
I don't see how it's possible, but you're welcome to write the firmware that does it.
An SSD controller does not see 'data' as a whole; it sees it in bits, bytes and blocks. The GC routines and TRIM create their own 'data' for the nand chips to write (whether that write is actual data or simply clearing the nand - it is still a write and consumes a write cycle).
The O/S and the programs we use also initiate their own writes to the nand - which may be temporary or not - vs. the 'save data' command a user initiates.
So, how do you think a controller would differentiate between those scenarios? It can't.
It can record total writes, and does, but what is data and what is not data is pretty elusive to an SSD controller.
And your example of writing a sentence is off... I can write many sentences without knowing (in advance) how many words I'll write.
Sure, after it is written I can count them easily. But not knowing the number of words does not stop me from writing a sentence in the first place.
Same with the SSD - it may know what it has to write, but it will only know after the fact how many write cycles it took to accomplish that. And even then, it still doesn't know what is 'data' and what isn't. -
HopelesslyFaithful Notebook Virtuoso
I'll write an example and draw a picture for you.
The point is the controller knows it received a 128KB file to write but had to do 192KB of writes, GC included, to actually store it. So it can be designed to know how much user data it wrote and how much 'actual' SSD data (WA data) it wrote. I'll draw a diagram later, but just look at the AnandTech article you posted... it gets the 128KB file, so it knows the original amount and the final amount... or it can. According to you, they just never made it track the prior amount, only the latter amount.
Does that make sense? I'll draw a diagram if needed.
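[Editor's note: what he's describing amounts to keeping two counters instead of one. A minimal sketch of the idea; the 128KB/64KB split is an assumption for illustration, though some real drives do expose separate host-writes and NAND-writes SMART attributes from which WA can be derived.]

```python
class ControllerCounters:
    """Hypothetical firmware counters: one for bytes the host asked us
    to write, one for bytes actually programmed to NAND (data + GC)."""

    def __init__(self):
        self.host_bytes = 0
        self.nand_bytes = 0

    def write(self, user_bytes, overhead_bytes):
        self.host_bytes += user_bytes                   # the 128KB file
        self.nand_bytes += user_bytes + overhead_bytes  # 192KB actually programmed

    def wa_factor(self):
        return self.nand_bytes / self.host_bytes if self.host_bytes else 1.0

c = ControllerCounters()
c.write(128 * 1024, 64 * 1024)  # 128KB of user data, 64KB of GC overhead (assumed)
print(c.wa_factor())            # 1.5
```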
-
It only reports what the SSD decides to report. So it depends on the drive.
-
tilleroftheearth Wisdom listens quietly...
Trying to explain this is obviously above my pay grade.
It is not as simple as you may think. If the 10GB of DATA we're using as an example is not a single file (simple) but hundreds of small and large files, the acrobatics that a controller has to do can quickly boggle the mind.
First, it is juggling the read/erase/write penalty: needing to write a file, but having to erase a whole block first.
Second, it is juggling the fragmentation and also the write-combining algorithms to try to ensure that data is spread across controller channels and nand chips too - this parallelism is what gives us higher performance than a single nand chip can offer, after all.
Third, it is doing all of the above, or delaying all of the above (if the state of the nand chips allows it), to try to give us the performance we are asking from it 'now'.
When all that is taken into account, and not only the new DATA we want saved is being juggled but the DATA that is already on the disk is read/rewritten and erased over and over again - which DATA does the controller count as 'new' and which does it attribute to WA?
Specifically: imagine updating a database that changes two bytes in a 2MB file - that will incur at least one block erase to add those two bytes (assuming this is a well-used SSD). So, what do we attribute as DATA and what part as WA? (In this case WA would be through the roof...)
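[Editor's note: back-of-envelope numbers for that database example. The 2MB size comes from the post; real erase-block geometries vary, and this is the worst case where the whole file region is rewritten.]

```python
BLOCK_BYTES   = 2 * 1024 * 1024  # the 2MB file, rewritten whole in the worst case
CHANGED_BYTES = 2                # the real new data

wa_factor = BLOCK_BYTES / CHANGED_BYTES
print(f"{wa_factor:,.0f}")       # 1,048,576 -> 'through the roof' indeed
```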
Anyway, I can see a glimpse of what you're saying and you do have a point. But I can also see the pitfalls of trying to implement this (even if the manufacturers wanted to show us this data) and the cries of 'foul!' that would be heard around the world depending on which algorithm is used to define DATA and which part of the used write cycles would be described as actually wasted.
If the cheating in benchmarks is rampant now with cpu's, gpu's and such - imagine the questions you would have of SanDisk Dashboard if it showed its supported drives as all pretty and cheerful, or all wasteful and dreaded. And no, there is no middle ground. -
tilleroftheearth Wisdom listens quietly...
Here is a post that seems relevant to this topic, even though it was written about the slow-read EVO/TLC drives.
See:
Samsung 840 EVO read speed drops on old-written data in the drive - Page 64
The interesting part to me is that in one year, two files were written to the brand new EVO - a 90GB file and a 42MB file - yet Magician is reporting that 250GB of nand cycles were used. That's a WA factor of > 2.7 - compare that to the 1.1 WA factor Intel had achieved years ago on an O/S, program and data drive...
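[Editor's note: checking that arithmetic, with the file sizes taken from the linked post.]

```python
host_gb = 90 + 42 / 1024   # the 90GB file plus the 42MB file
nand_gb = 250              # NAND write cycles Magician reportedly showed
print(round(nand_gb / host_gb, 2))  # 2.78 -> a WA factor > 2.7
```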
Sure, as the author in the link states, Windows may have been writing to the drive too - but my guess? It is the pseudo SLC nand that is causing such a huge WA factor here (the data is basically written twice, at least).
The topic of that link is sad... ~18MB/s READ SPEED after a year of being powered up but not touched as a storage drive...
If you look at the graph: 5030 seconds to read two files totalling ~90GB. Yeah, over 83 MINUTES to do what should be done at over 400 or 500 MB/s (~3 minutes if the drive worked as it should).
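[Editor's note: the throughput math from the graph checks out; figures taken from the post.]

```python
data_mb = 90 * 1024           # ~90GB of files, in MB
seconds = 5030                # time the graph shows for the read

print(round(data_mb / seconds, 1))   # ~18.3 MB/s observed
print(round(data_mb / 500 / 60, 1))  # ~3.1 minutes at a healthy 500 MB/s
```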
Samsung has really screwed the consumers with their TLC nand.
I hope that a firmware upgrade can fix this issue (it looks like a problem with decaying nand and/or ECC routines); otherwise, I'm expecting refunds or 850 Pros to be given to everyone affected. And Samsung, I don't want the 850 Pro either. -
HopelesslyFaithful Notebook Virtuoso
I'm guessing he is noticing the endurance issue, not heat... Did he run the test over and over when the drive was cooled? If not, then his test is irrelevant and invalid until he proves that the cold test doesn't run into the endurance issue. -
tilleroftheearth Wisdom listens quietly...
-
HopelesslyFaithful Notebook Virtuoso
If you do a lot of reads or writes with the Samsung drives, they get slower over time. They lack "endurance" - TweakTown has covered this extensively. He might be noticing endurance issues, not heat issues.
-
HopelesslyFaithful Notebook Virtuoso
So I googled the TweakTown review, and they test speeds after a lot of use, and how well the drive recovers after a period of time. He may just be seeing recovery-time issues, not some unknown heat issue. I don't know his whole testing process, but that could be the issue if he didn't design that variable out of the testing.
Sandisk Dashboard. Is it accurate?
Discussion in 'Hardware Components and Aftermarket Upgrades' started by HopelesslyFaithful, Sep 16, 2014.