Hi
Your results don't really tell us anything other than that the secure erase didn't make a difference, i.e. whatever process is keeping the disks fast is good enough to make a secure erase unnecessary. Or were you able to degrade your disks before erasing them?
-
Actually... a significant difference in the 4k random write results between write cache enabled and disabled... Interesting.
-
I wonder what kind of files are needed to degrade the drive. Large or small (4k) files? -
I tend to use both. It's slightly more realistic, although I'm not sure it makes a huge difference.
My iTunes files generally work for me. I copy the album art directory that has several thousand files and then copy some of the TV episodes. From there, I copy the copies.
If you haven't already, make a post with your steps for the secure erase. I'm sure this will be used by many. I'm assuming you had to enable a BIOS feature to get this done, so it will be for experts only.
Nice work -
Thanks ZoinksS2k.
Actually the method to secure erase the SSDs is rather crude. It's based on the following:
1. Set the SATA controller to AHCI mode and enable "HotPlug" on each drive.
2. Disassemble the notebook and remove the cables connecting the SSDs.
3. Put a long strip of plastic covering the data (not power) section of the SATA connector on the SSD to simulate disconnected state.
4. Boot the GParted Live CD. As soon as the booting/loading process is completed, remove the strip of plastic and refresh GParted. It should recognise the SSD.
5. Run hdparm and check to make sure the drive is "unfrozen".
6. Follow the instructions posted here https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase to secure erase the drives (a rough sketch of the commands is included at the end of this post).
The above method might damage the SSDs and/or brick your lovely Z. Be warned.
I haven't tried it with HDDErase but it might work (set SATA to IDE mode). However, you can erase only 3 of the 4 SSDs (if you have such a setup) as one of the top 4 IDE devices is the optical drive.
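For reference, the hdparm commands behind step 6 boil down to something like the Python sketch below, run as root from the live environment. This is only a sketch based on the wiki: /dev/sdX and the throwaway password are placeholders, and the frozen-state check is simplified, so follow the wiki page itself if in doubt.

```python
import subprocess

DEV = "/dev/sdX"        # placeholder: the SSD to erase -- triple-check this!
PASSWORD = "Eins"       # throwaway password; a successful erase clears it again

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# The security section of `hdparm -I` must report "not frozen", otherwise the
# drive will reject the erase (hence the hot-plug trick described above).
info = " ".join(run(["hdparm", "-I", DEV]).split())   # normalise whitespace
if "not frozen" not in info:
    raise SystemExit("Drive is still frozen; the secure erase would be rejected.")

# Set a temporary user password (this enables the ATA security feature set).
run(["hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEV])

# Issue the erase. It can take several minutes and must not be interrupted.
run(["hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEV])

# Afterwards the drive should report security "not enabled" again.
print(run(["hdparm", "-I", DEV]))
```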
-
I filled the SSD RAID completely with files, then deleted and wrote a number of files (total = 20GB) 10 times. Here are the CDM results (64GBx4, RST 9.6, write-back cache enabled)
I deleted the dummy files and the CDM results are as follows:
It appears that there were some variations in write speed but nothing major.
I don't know why the 4k write speed (CDM 2.2) after file deletion was so high. -
Here's a summary of the results. Dark Green = Highest, Red = Lowest.
Benchmarks were run after the stated events (3 full passes, write-back cache enabled, RST 9.6)
Note:
PD/AS-Cleaner O/N = PerfectDisk drive space consolidation, followed by filling the free space with FF using AS-Cleaner. The notebook was left idle (not asleep) overnight. -
Hi, ozbimmer,
I read a similar post about bypassing the BIOS to successfully send an ATA secure erase command, although I didn't try...
Do you or others know what the difference is between
an ATA secure erase
&
filling the whole disk with "FF"s and then re-creating the RAID (so that all partitions are deleted)?
The latter would be much easier to do... -
Kevinhk: I think the main diff is that writing FF to all sectors doesn't really tell the SSD controller that you have done so. Of course, a smart controller could in theory check if a sector is written with all ones, and then automagically mark it as free, but I doubt this is done.
Ozbimmer: How full was your drive after filling it up with the last 20GB? From where did you read those 20GB? How fast was the SSD able to write it? And how fast after the copy was done did you start CDM?
I think most of those questions are important for the end result. And of course the write cache. It may be a more real-world test with that enabled, but it doesn't really test the *drive*.
Nice chart! I got *pretty* similar results from CDM 2.2 and 3.0, but I used 100MB also on 3.0 where the default is 1000MB. The biggest diff for me was mostly the Seq read value, 3.0 being higher. The other values were not identical, but within the normal span of CDM regardless of version. -
The 20GB was split into two halves: one half was written before I filled the drive with a mega dummy file (around 180GB) and the other half afterwards.
I noticed there was some slowing down when I continuously deleted/wrote the 20GB of files, but if I waited (say 5-10 seconds) and did the delete/write again the speed picked up again.
I started CDM as soon as the copy was completed, possibly 1-2 seconds.
I agree with you re your comment on write-back cache and actual drive performance. -
According to this post ( http://www.ocztechnologyforum.com/f...TURBO-owners-with-FW-1.5-you-do-not-need-this.) by OCZ Tony, "AS-Clean writes logical 1's (FF) across all free blocks on the drive, writing 1's actually erases the blocks so this is much like a TRIM."
I think secure erase cleans the drive so it's like new, whilst the FF method prepares the drive to be cleaned by whatever "garbage collection" method is used in the SSD controller.
The latter method is indeed easier to do, but it might take some time for GC to complete the whole cleaning process.
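In case it helps anyone experiment, here is a rough Python equivalent of that "fill the free space with FF" idea. It is only a sketch: the directory is a made-up placeholder, and whether the controller actually treats all-1s pages as erasable is firmware-dependent, as discussed in this thread.

```python
import os

TARGET_DIR = r"D:\ff_filler"            # hypothetical scratch dir on the SSD volume
CHUNK = b"\xFF" * (4 * 1024 * 1024)     # 4 MiB of all-ones per write

def fill_free_space_with_ff(target_dir):
    os.makedirs(target_dir, exist_ok=True)
    path = os.path.join(target_dir, "filler.bin")
    written = 0
    try:
        with open(path, "wb") as f:
            while True:                 # keep writing until the volume is full
                f.write(CHUNK)
                f.flush()
                os.fsync(f.fileno())    # push the data past the OS cache
                written += len(CHUNK)
    except OSError:                     # "disk full" ends the loop
        pass
    finally:
        if os.path.exists(path):
            os.remove(path)             # give the space back afterwards
    print(f"Wrote and deleted {written / 2**30:.1f} GiB of 0xFF")

fill_free_space_with_ff(TARGET_DIR)
```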
-
Ok, that might be the reason you don't see the degradation I do right after copying my last 10GB. When my last copy is done, I make sure CDM has enough space to work. I also set it to run only once, and I only start the 4k random tests (without write cache). Only then can I make it drop below 5MB/s, even under 1MB/s once.
-
@McMagnus,
Can you tell me your exact methodology of degradation testing? Just want to make sure my result is comparable to yours.
Thanks. -
What we want is an empty block (marked as empty by the controller, as happens with the TRIM command) and not a looks-like-empty block (which just has all 1s) - so writing 0xFF instead of issuing a TRIM command is pretty useless imho. -
The main parts are:
Write cache off
Make sure you have a pretty large dir, I used 10GB.
Then make sure you have those 10GB + ~100MB of free space, just enough so that CDM will be able to run a 100MB test when the copy is done.
Make a copy of the large dir.
During the copy, watch the copy speed. It will probably settle on a sustainable copy rate and will differ depending on the number of striped drives and unused space. More info in my previous post.
Make sure CDM is started before the copy is done. Select 1 pass using 100MB.
Exactly when the copy is completed (within 1 second), click the 4k button so that only the 4k random test is run, and only once; otherwise the SSD will have time to recover performance while the test is running. (A rough scripted version of the whole procedure is sketched below.)
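Not CDM itself, but as a scripted stand-in for the manual steps above, something like this Python sketch would do: copy a large directory, then immediately time a short burst of 4 KiB writes at random offsets (roughly what a single CDM "4K" pass does). The paths, the 100 MB test size and the 5-second burst are placeholder choices.

```python
import os, random, shutil, time

SRC = r"D:\big_dir"                  # hypothetical ~10 GB source directory
DST = r"D:\big_dir_copy"
TEST_FILE = r"D:\random4k_probe.bin"
TEST_SIZE = 100 * 2**20              # 100 MB test area
BLOCK = 4096

def random_4k_write_mb_s(path, size, duration=5.0):
    """4 KiB writes at random aligned offsets for `duration` seconds; returns MB/s."""
    with open(path, "wb") as f:
        f.truncate(size)                           # pre-allocate the test area
        done, start = 0, time.perf_counter()
        while (elapsed := time.perf_counter() - start) < duration:
            f.seek(random.randrange(size // BLOCK) * BLOCK)
            f.write(os.urandom(BLOCK))
            f.flush()
            os.fsync(f.fileno())                   # keep the OS write cache out of it
            done += BLOCK
    os.remove(path)
    return done / elapsed / 1e6

shutil.copytree(SRC, DST)                          # the big copy that loads up the array
print("4K random write right after the copy: "
      f"{random_4k_write_mb_s(TEST_FILE, TEST_SIZE):.1f} MB/s")
```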
Let me know if you can see some really bad numbers. How low can you go? -
I followed your method and got 11.8MB/s (read) and 8.7MB/s (write) immediately after the copy was completed. I ran the 4k Random test again and the speed had already recovered to the pre-test level.
I also ran some tests based on the following article ("Test Data" section)
http://www.oczenterprise.com/whitepapers/ssds-write-amplification-trim-and-gc.pdf
My figures were:
READ
Pre-test: 369.53MB/s
Post-test (immediately after): 225.86MB/s
Post-test (after 5mins): 381.85MB/s
Post-test (after 10mins): 387.08MB/s
Post-test (after 15mins): 378.92MB/s
WRITE
Pre-test: 208.48MB/s
Post-test (immediately after): 63.00MB/s
Post-test (after 5mins): 179.80MB/s
Post-test (after 10mins): 197.30MB/s
Post-test (after 15mins): 208.30MB/s
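The "measure immediately, then re-measure after idle" pattern behind these figures can be scripted roughly as in the sketch below; the file path, probe size and idle intervals are arbitrary placeholders, and it is a plain sequential-write probe rather than the whitepaper's exact test.

```python
import os, time

PATH = r"D:\seq_write_probe.bin"
SIZE = 512 * 2**20                    # 512 MiB probe per measurement
INTERVALS_MIN = [0, 5, 10, 15]        # minutes of idle before each re-measure

def seq_write_mb_s(path, size, chunk=8 * 2**20):
    """Time one large sequential write and return MB/s."""
    buf = os.urandom(chunk)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size // chunk):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size / elapsed / 1e6

for minutes in INTERVALS_MIN:
    time.sleep(minutes * 60)          # leave the drive idle so GC can run
    print(f"after {minutes:2d} min idle: {seq_write_mb_s(PATH, SIZE):6.1f} MB/s")
```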
Please note that I am using RST 9.6 with write-back cache and TRIM both disabled. -
I was going to buy this laptop (Z11 2x64GB) and luckily I found this topic. Does the drive hang up the system when it starts writing files? Is the slowdown there all the time, or only for a while? Is it very annoying? Is it possible to disable RAID and enable AHCI?
Is there a model like the Z11 but with a normal 2.5" SSD?
Sorry for my bad English -
I wrote this up a while back to explain the differences between secure erases and SECURE_ERASE.
-
TofuTurkey Married a Champagne Mango
ziele said: ↑I was going to buy this laptop (Z11 2x64GB) and luckily I found this topic. Does the drive hang up the system when it starts writing files? Is the slowdown there all the time, or only for a while? Is it very annoying? Is it possible to disable RAID and enable AHCI?
Is there a model like the Z11 but with a normal 2.5" SSD?
Sorry for my bad English
I don't think anyone has encountered second-long hangs with the drive (yet?). It's not clear whether that can occur in the future; so far nobody has been able to make it happen. It looks like there's something in the drives that's cleaning them up, possibly garbage collection. The drive doesn't slow down to the level of whole seconds, and it recovers relatively quickly. I don't think such sub-second slowdowns are that noticeable (compared to a second or longer). The drives can be configured as JBOD.
There are versions of the Z with a normal HDD in place of the optical drive (i.e. no DVD or Blu-ray); that version also doesn't have the Sony SSDs. Some people (including me) replace that HDD with an SSD. Sony doesn't sell a Z with a 2.5" SSD. -
In simpler terms, what does "the drive hanging up the system" mean? Is this something very worrisome?
-
TofuTurkey Married a Champagne Mango
DeathDealer said: ↑In simpler terms, what does "the drive hanging up the system" mean? Is this something very worrisome? -
@TofuTurkey Thank you. I started to worry, but I see that it's not so bad compared to my laptop.
@DeathDealer I have a Lifebook P7120 with a cheap 64GB SSD. After a few days of use it has terrible write lag. Sometimes the system (Windows XP) stops responding for 10s or even 20s!! Installing apps with a lot of small files also isn't OK.
-
TofuTurkey Married a Champagne Mango
ziele said: ↑ Sometimes the system (Windows XP) stops responding for 10s or even 20s!! ...
-
This is called stuttering and it was a huge problem with early SSDs.
SSD manufacturers, for the most part, solved this problem by adding more onboard cache. Controllers have been changed to be more efficient as well.
Stuttering should not be a problem. Writes may slow down, but no jerkiness. -
ozbimmer said: ↑I followed your method and got 11.8MB/s (read) and 8.7MB/s (write) immediately after the copy was completed. I ran the 4k Random test again and the speed had already recovered to the pre-test level.
Interesting numbers with the other test you found, much slower performance recovery, but as long as it still recovers within 15 minutes (and without having to do anything manually), I don't worry at all.
Based on the terminology used in that article, it seems the Z drives use some sort of "idle time GC", but it seems faster than the one used for the OCZ drives. -
ZoinksS2k said: ↑This is called stuttering and it was a huge problem with early SSDs. ... Stuttering should not be a problem. Writes may slow down, but no jerkiness.
Definitely no stuttering on my Z.
.dj -
Stuttering was a global problem on early SSDs.
EDIT: JMicron controllers were the worst offenders, but early Samsungs did it too.
I ran some benchmarks on the SSD RAID 0 after the secure erase (Windows 7 installed on another SSD in the optical bay; RST 9.6, TRIM disabled).
CDM3, write-back cache DISABLED
CDM3, write-back cache ENABLED
ATTO, write-back cache DISABLED
ATTO, write-back cache ENABLED
HD Tune, write-back cache DISABLED
HD Tune, write-back cache ENABLED
AS-SSD, write-back cache DISABLED
AS-SSD, write-back cache ENABLED
I then filled up the SSD RAID 3 times (all files deleted after each pass). On the third pass I left about 1200MB unoccupied. CDM3 was run afterwards.
CDM3, write-back cache DISABLED
-
ZoinksS2k said: ↑I wrote this up a while back to explain the differences between secure erases and SECURE_ERASE.
-
I just ran another test. It shows that we currently have a garbage collector:
1. Ran SSD Benchmark on an empty partition. Got a score of 345 (read 137/write 138).
2. Ran AS Cleaner with FF on that partition and then ran the benchmark. Got a score of 269 (read 139/write 66).
3. I tried rebooting and benchmarking again; it didn't help. Score 263 (read 138/write 64).
4. After leaving the notebook idle for 5.30 hrs and running the benchmark again, the score is back to 340!!! (read 138/write 133)
Tested on a clean install of Windows 7 x64 Professional with the Intel RST 9.6 driver, 4x64GB RAID 0 model. -
b_ambee said: ↑...
2. Ran AS Cleaner with FF on that partition and then ran the benchmark. Got a score of 269 (read 139/write 66).
...
I have been doing some testing of the quad SSD RAID (256GB, 4x64GB) over the weekend based on the methodology described in the article here ( http://www.anandtech.com/show/2865/2). It's supposed to test TRIM but should be relevant for GC/ITGC testing.
The test involved a 4KB random write, using Iometer, across the entire disk space for five minutes. RST 9.6 (write-back cache and TRIM disabled) was used. The computer was left to idle (but not sleep) between runs.
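As a very rough, purely illustrative Python stand-in for that Iometer run (it goes through the filesystem rather than raw LBAs, and the path and span are placeholders): do 4 KiB writes at random aligned offsets inside one large file for five minutes, reporting throughput every 30 seconds so a "plateau then drop" pattern would be visible.

```python
import os, random, time

PATH = r"D:\iometer_like.bin"
SPAN = 100 * 2**30            # placeholder: a span close to the free space
BLOCK = 4096
DURATION = 300                # five minutes
REPORT_EVERY = 30             # seconds

# Note: on NTFS it is worth pre-filling the file once, so the measurement is
# not dominated by lazy zero-filling of the sparse regions.
with open(PATH, "wb") as f:
    f.truncate(SPAN)
    start = last = time.perf_counter()
    written = 0
    while (now := time.perf_counter()) - start < DURATION:
        f.seek(random.randrange(SPAN // BLOCK) * BLOCK)
        f.write(os.urandom(BLOCK))
        f.flush()
        os.fsync(f.fileno())                   # keep the OS cache out of it
        written += BLOCK
        if now - last >= REPORT_EVERY:
            print(f"{now - start:5.0f}s: {written / (now - last) / 1e6:.2f} MB/s")
            written, last = 0, now
os.remove(PATH)
```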
Here's a graph showing the write speed of the runs.
Legend:
Run 1 = Pre-test Run
Run 2 = 15 seconds after the end of Run 1
Run 3 = 7 mins after the end of Run 2
Run 4 = 5 mins after the end of Run 3
Run 5 = 10 mins after the end of Run 4
Run 6 = 15 mins after the end of Run 5
Run 7 = 10 secs after the end of Run 6
Based on the above observations, some sort of "performance restoring" mechanism was automatically triggered as soon as the computer was left idle (likely Idle Time Garbage Collection - ITGC). 4K random write performance was fully restored 7-10 mins after the test. Further idling did not significantly improve random write performance.
I noticed an interesting phenomenon whilst running the test during Runs 3, 4, 5 and 6. After the test was initiated the 4K random write speed reached around 12MB/s. It then slowly decreased and plateaued at 11MB/s. At the 90-second (Run 4) and 120-second (Runs 3, 5, 6) marks the write speed dropped rapidly and eventually came down to the reported figures at the 5-minute mark. I wonder if some sort of "performance sustaining" mechanism helps the random write speed for a certain period but is unable to do so afterwards, once that mechanism is fully exhausted.
Lastly, I would like to draw your attention to the section here ( http://www.anandtech.com/show/2829/14). Hopefully TRIM is supported in RAID soon.
Update
Below is a graph showing the average write time of the runs. As expected, it's the mirror image of the write speed.
-
So what's the verdict? Is there noticeable, unrecoverable performance degradation?
-
shurcooL said: ↑So what's the verdict? Is there noticeable, unrecoverable performance degradation?
However, in the long run, I don't know if the reads/writes during ITGC would significantly impact the quad SSD's performance. As Anand suggested, TRIM is a better method to restore performance.
I appreciate any comments on the matter. -
I've decided to just use it as a standard machine for the most part. We have a pretty large statistical sample and can run the same tests against a freshly installed machine in a few months to see if there is any true long-term degradation.
-
Fresh format, JBOD 384GB, Core i5-540, 4GB RAM...
Is this okay for the JBOD format?
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 214.960 MB/s
Sequential Write : 153.660 MB/s
Random Read 512KB : 166.496 MB/s
Random Write 512KB : 113.378 MB/s
Random Read 4KB (QD=1) : 14.847 MB/s [ 3624.9 IOPS]
Random Write 4KB (QD=1) : 9.276 MB/s [ 2264.6 IOPS]
Random Read 4KB (QD=32) : 23.845 MB/s [ 5821.4 IOPS]
Random Write 4KB (QD=32) : 15.018 MB/s [ 3666.5 IOPS]
Test : 1000 MB [C: 17.7% (21.1/119.1 GB)] (x5)
Date : 2010/04/18 18:55:37
OS : Windows 7 Ultimate Edition [6.1 Build 7600] (x64) -
This is a great thread. Everyone's efforts to try and identify what’s going on are commendable. No wonder it showed up on Engadget.
The way I'm seeing it, whatever the mechanism, there is some kind of GC in place that's cleaning up the drive and fixing performance during idle time, whether the drives are set up as JBOD or RAID.
So now my question: Doesn't that by itself show long term degradation is not a concern?
The only practical worry with SSD's is random write slowdown as the drives fill up, unless some kind of mechanism is in place to rectify the issue (i.e. TRIM for non-RAID-configured drives). In this case, whatever spooky voodoo it is Sony/Samsung have put into the Z series, it seems to maintain the performance. Yes, it's not as transparent as TRIM or other implementations, but the fact is it works. To some, the several minutes of delay may be an issue, but for most, they probably wouldn't even notice.
So what are we worried about here? Isn't the issue of long term degradation pretty much addressed?
Someone posted that maybe these drives are prone to degradation when most of the data is contained in small files rather than large ones. This may be the case, but that's not really representative of real-life drive usage, so regardless of the potential for irreversible performance degradation in such a situation, it is not relevant to the vast majority, if not all, of the notebook's users. I believe the poster pointed this out in his/her post.
As Anand writes in his articles, the "used" state performance of the drive is what represents long-term performance. Most of the tests in this thread are performed on the drives when they are in such a "used" state and we see the performance jumps right back up after some idle time (usually 15 minutes). Is there any difference between imitating the long-term/"used" state performance as compared to performance when the drive has actually been used for a longer period of time? I'm failing to see why usage over a long time is any different than filling the drive in a short period of time (as the tests in this thread have done). If there is something I'm missing here, then I'd love to learn what it is. If not, can we reasonably conclude that empirical testing indicates performance degradation is a non-issue for the long-term assuming average to above average usage of the drive?
Sorry for the above sounding like a rant, but I really want to buy this machine. I am holding off because of this SSD degradation matter. This thread has addressed most of my concerns, but there still seems to be some cautionary words from the very people whose tests and results show this is a non-issue. I am very grateful to you guys for running these tests and am in no way speaking against you. You guys obviously know a lot more than I do and I'm just trying to figure out what it is I may be missing.
Lastly, just to clarify, I know about MLC cells and that they can sustain around 10k writes over their lives. This in turn affects the life of the SSD. I am not concerned about this issue. Intel states its MLC drives can survive up to 100GB of writes per day for 5 years before dying. It should be easy to understand why I am not worried. Even if the Samsung drives have 1/10th that capacity (which is being super-conservative), it is still more than enough for me and for most people I presume. Someone else did some math in this thread showing how MLC life is an absolute non-issue for any sane computer user.
My concern is with the long-term performance, which I think is also a non-issue after reading this thread but what am I missing that I should be concerned about? If I don't mind giving the computer some idle time when performance slows down a bit (after most of drive has been written to), then is there anything else I should be on the lookout for wrt SSD performance degradation? Thanks in advance. -
First, I have zero understanding how the RAID controller works or how the Garbage Collection function works, and I apologize if this question is idiotic.
When I get my Z, I plan on remapping the SSDs as Zoinks or someone suggested a number of posts back and leave ~15% unallocated -- as a GC work area, as I understand it.
My question is as follows: Over time, will the location of this GC work area wander through the full capacity of the RAID 0 SSDs or will it always reside at a fixed location(s)? If the latter, then wouldn't the GC process over time use up the 10,000 cycles (for those portions of the SSDs) fairly quickly?
Perhaps a different way to ask this question is -- regardless of the logical mappings of the SSD array (let's say, "C:", "D:", & "Unallocated"), does the RAID controller eventually step through 100% of the SSD array, such that, over time, all locations are used more or less equally regardless if the access is for program/data storage/retrieval, garbage collection, or any other "hidden" process?
I hope this question makes sense to someone. TIA for any help in understanding how this works. -
TofuTurkey Married a Champagne Mango
kollector44 said: ↑...
My question is as follows: Over time, will the location of this GC work area wander through the full capacity of the RAID 0 SSDs or will it always reside at a fixed location(s)? If the latter, then wouldn't the GC process over time use up the 10,000 cycles (for those portions of the SSDs) fairly quickly?
Perhaps a different way to ask this question is -- regardless of the logical mappings of the SSD array (let's say, "C:", "D:", & "Unallocated"), does the RAID controller eventually step through 100% of the SSD array, such that, over time, all locations are used more or less equally regardless if the access is for program/data storage/retrieval, garbage collection, or any other "hidden" process?
...
That said, your question still stands. What happens to the first few gigs of disk where the OS resides and will probably never move? Won't those bits be used less than all others? -
kollector44 said: ↑... does the RAID controller eventually step through 100% of the SSD array, such that, over time, all locations are used more or less equally regardless if the access is for program/data storage/retrieval, garbage collection, or any other "hidden" process? ...
The issue is handled by Wear Levelling
http://www.anandtech.com/show/2829/6
http://en.wikipedia.org/wiki/Wear_levelling
If you wish to know more about SSDs, the following articles from Anandtech are very useful:
http://www.anandtech.com/show/2738
http://www.anandtech.com/show/2829
http://www.anandtech.com/show/2865 -
knightofdarkness said: ↑... whatever spooky voodoo it is Sony/Samsung have put into the Z series, it seems to maintain the performance.
A 64 GB SSD has an actual size of, for example, 72 GB - 64 GB for user data and 8 GB of scratch memory. If the user has filled up the entire disk (64 GB of data) and requests a block write, the controller can take a block from the scratch area. This is why you won't see any performance degradation even on full drives, as long as the controller has some free scratch memory. If there are continuous block requests the controller will eventually run out of scratch memory - this is why you still see some kind of performance degradation sometimes. As soon as there is time, the controller can recycle (read 'delete') unused blocks - and there are always unused blocks, as the actual capacity is bigger than the usable capacity - and build up a new scratch area. This process is what you call 'garbage collection' and is responsible for the fact that, even if you have suffered from performance degradation, you will regain full performance after a few minutes on these drives.
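To put numbers on that example (and on the ~15% unpartitioned-space suggestion from earlier in the thread), here is a quick back-of-the-envelope sketch; the figures are purely illustrative, and whether a given controller actually uses unpartitioned space this way is firmware-specific.

```python
USER_GB = 64            # capacity visible to the OS
HIDDEN_SPARE_GB = 8     # extra flash the controller keeps for itself

def spare_share(unpartitioned_gb=0.0):
    """Spare flash as a share of the total flash on the drive."""
    return (HIDDEN_SPARE_GB + unpartitioned_gb) / (USER_GB + HIDDEN_SPARE_GB)

print(f"factory spare area only:      {spare_share():.1%}")                 # ~11.1%
print(f"plus 15% left unpartitioned:  {spare_share(0.15 * USER_GB):.1%}")   # ~24.4%
```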
With these facts in mind, imho there is no need for an additional scratch area in the form of non-partitioned space. I'm not even sure whether such space would be used by the controller (which would mean that the controller has knowledge of the disk's partitions, which could be tricky in RAID environments).
As ozbimmer stated, wear levelling is one of the controller's duties. If it does the job well it will do it for the scratch area and 'static' areas too; if not, you are screwed anyway. -
I would like to thank everyone for putting in so much effort. I am new to this, so I just want to reconfirm what I read.
The SSDs on the Z11 will not see tremendous degradation, because of its wear leveling and garbage collector. Is that correct?
And also, TRIM is a better method to restore performance but it is not yet available for RAID SSDs. Is this correct?
If TRIM does come out for RAID SSDs, would the ones on Z11 support it?
Thanks for your help -
Kuane said: ↑The SSDs on the Z11 will not see tremendous degradation, because of its wear leveling and garbage collector. Is that correct?
Wear levelling ensures a long life cycle.
Kuane said: ↑And also, TRIM is a better method to restore performance but it is not yet available for RAID SSDs. Is this correct?
Kuane said: ↑If TRIM does come out for RAID SSDs, would the ones on Z11 support it? -
To be more precise - it is the Intel SATA controller found in all Calpella-generation Centrino notebooks (with mobile Core i3/i5/i7).
It basically completely depends on Intel, not Sony.
Since RAID in this case is actually a software RAID (or "fakeraid") disguised as "hardware" it should be just a matter of improving the AHCI/RAID driver itself.
But Intel is known to be notoriously slow when it comes to their SATA drivers (it took them months and months to add TRIM support even for non-RAID setups) so I wouldn't put my bet on this actually happening anytime soon. -
Ok, I understand
Thank you guys
-
Garbage collection, over-provisioning and wear levelling ( http://www.storagesearch.com/ssd-jargon.html) work together to maximise performance and enhance the longevity of an SSD.
The disadvantage of GC is the added writes, hence the necessity of over-provisioning. TRIM alleviates the issue.
I am not sure about the remark "SSD RAID do support TRIM". Do you know of any RAID adapter that supports TRIM natively? -
ozbimmer said: ↑I am not sure about the remark "SSD RAID do support TRIM". Do you know of any RAID adapter that supports TRIM natively?
The fact that it is not supported yet is a flaw of the RAID controllers / drivers, not of the combination SSD and RAID, isn't it? -
I agree with you. Nevertheless, it is interesting to note Adaptec's response re TRIM support.
http://ask.adaptec.com/scripts/adap...nMuc2VhcmNoX25sJnBfcGFnZT0x&p_li=&p_topview=1
BTW, I understand Areca and LSI are going to release firmware with TRIM support soon. -