Initially I had the abstracts from the text but I removed them because I needed the written permission from the author... If someone needs them I can PM him
-
Lol it was a joke obv.. I still read a lot of books
Thanks for your help btw! -
I know
You're welcome -
big thanks from me for doing all the finger work for me, greatly appreciated
-
My XPS16 arrived today from the Dell Outlet and I'm not too impressed with the SSD performance. I wiped the drive and reinstalled Win7 Ultimate. I made sure my partition alignment is correct (offset divisible by 4096). I did most of the tweaks from the beginning of this thread. I installed Windows updates and some drivers that were missing (including the Intel drivers). However, my speeds seem pretty miserable. Any idea what's going on?
I'm also getting a 5.9 in WEI.
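In case anyone wants to double-check their own alignment: the starting offset of the Windows partition should divide evenly by 4096, and you can read it straight from wmic (just a quick sketch, run from a normal command prompt):
Code:
wmic partition get Index, Name, StartingOffset
rem e.g. StartingOffset = 1048576 -> 1048576 / 4096 = 256, so that partition is 4K-aligned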
-
Did you do a clean install? -
-
Did you change the BIOS setting to AHCI?
-
LOUSYGREATWALLGM Notebook Deity
Which drive controller driver are you using?
Should be the MS driver to get TRIM working.
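If you want to double-check that TRIM is actually being issued once you're on the MS driver, this is the standard Win7 query (run it from an elevated command prompt); it should come back with 0:
Code:
fsutil behavior query DisableDeleteNotify
rem DisableDeleteNotify = 0 means TRIM is enabled
rem DisableDeleteNotify = 1 means it is off
If it reads 1, "fsutil behavior set DisableDeleteNotify 0" turns it back on. -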
- Intel(R) 5 Series 6 Port SATA AHCI Controller
- Intel(R) 5 Series/3400 Series Chipset Family 6 Port SATA AHCI Controller - 3B2F
- Standard AHCI 1.0 Serial ATA Controller
I've tried them all, benchmarks seem to be about the same -
https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase -
LOUSYGREATWALLGM Notebook Deity
Your TRIM and GC should start working
EDIT: Took a SS for you
-
Read / Write (MB/s):
Seq:      207.1 / 140.6
512K:     163.8 / 36.14
4K:       15.35 / 6.08
4K QD32:  21.66 / 2.18 -
LOUSYGREATWALLGM Notebook Deity
It won't help if you just format the drive using the OS installer. You should use a secure erase tool, like HDDErase, GParted, DBAN, etc.
-
@Zippo, try the Intel Rapid Storage driver for your chipset.
I got better results with it than with the Windows driver. Maybe I missed it, but what kind of SSD is installed in your machine? -
-
Is this really what happens to SSD performance after a few weeks of use, or is my drive defective? According to DiskInfo it has been powered on about 1100 hours. I think I'm going to call Dell this morning and request a new drive. Thanks for all the help guys!
Read / Write (MB/s):
Seq:    190.6 / 140.6
512K:   160.9 / 58.01
4K:     15.35 / 2.76
4K QD:  29.21 / 3.41

Seq:    215.3 / 139.9
512K:   169.7 / 49.53
4K:     15.43 / 4.27
4K QD:  23.32 / 3.88 -
Have you tried installing the Rapid Storage driver? It might improve the results a little.
Here's a benchmark of another user with the same SSD:
http://forum.notebookreview.com/showpost.php?p=5259121&postcount=2
His results seem better than yours, although not crazy different, especially in the 4K department. Having said that, I'd definitely call Dell and complain that the SSD has 1100 hours on it. -
What you need is to utilize the ATA SECURE ERASE instruction in order to release all of the user LBA locations internally in the drive, which results in all of the NAND locations being reset to an erased state. This is equivalent to resetting the drive to the factory shipped condition, and will provide the optimum performance.
I have already provided you with one method that worked for me and restored my performance to factory performance levels here:
http://forum.notebookreview.com/showpost.php?p=6167318&postcount=64
and here:
http://forum.notebookreview.com/showpost.php?p=6167420&postcount=70
this procedure shouldn't take more than a minute -
-
Don't worry zippoman, when there are many posts, it's hard to keep track of all of them
it also worked in this case using Parted Magic
http://forum.notebookreview.com/showpost.php?p=6152357&postcount=360 -
-
I was able to bring the boot time down to 22 seconds while loading all my startup programs and services.
I have the following services disabled (command-line equivalents are sketched below):
Disk Defragmenter
FAService (Face recognition)
Internet Connection Sharing
iPod Service (i only have an iphone and it connects to itunes beautifully without it)
Media Center Extender service (I don't use media center)
Windows Media Player Network Sharing Service (sharing your library on the LAN - I do not use Media Center)
Windows Search
Superfetch
I also removed the boot time animation as some forums suggest (see here for example); it adds as much as 4-7 seconds to your boot time.
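For anyone who would rather flip these from a command prompt instead of services.msc, something like this does it (elevated prompt; the service names below are what I believe they are called on Win7, so double-check them in services.msc before disabling anything):
Code:
rem Superfetch
sc config SysMain start= disabled
rem Windows Search
sc config WSearch start= disabled
rem Disk Defragmenter
sc config defragsvc start= disabled
rem Windows Media Player Network Sharing Service
sc config WMPNetworkSvc start= disabled
Use "start= demand" instead of "disabled" for anything you want to keep available on request. -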
Interesting article about when the price of SSDs will come down
http://www.pcworld.com/article/194576/why_arent_ssds_getting_cheaper.html -
BEFORE (Read / Write, MB/s)
Seq:    190.6 / 140.6
512K:   160.9 / 58.01
4K:     15.35 / 2.76
4K QD:  29.21 / 3.41
AFTER (Read / Write, MB/s)
Seq:    209.6 / 180.6
512K:   168.1 / 116.6
4K:     15.52 / 5.56
4K QD:  28.84 / 6.50
The 4k marks are still a bit low but the write speed in general is significantly faster!
FYI to anyone using Ubuntu and hdparm (this might be XPS16 specific):
1. Before attempting this, change your SATA setting in the BIOS from AHCI to ATA. With SATA on AHCI, hdparm said my drive was locked and I couldn't erase it.
2. After that, I was getting "permission denied" when running "hdparm -I /dev/sda". I had to type "sudo passwd" to reset the root password, then "su - root".
3. With root access and SATA set to ATA in the BIOS, I was able to run the necessary commands to secure erase the drive.
4. Before reinstalling your OS, change the BIOS setting back to AHCI.
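The necessary commands are basically the ones on the kernel.org ATA Secure Erase wiki linked earlier in the thread; roughly like this, with /dev/sda and the throwaway password "p" as placeholders (run as root):
Code:
# confirm the drive reports "not frozen" (a sleep/resume cycle usually unfreezes it)
hdparm -I /dev/sda
# set a temporary user password so the erase command is allowed
hdparm --user-master u --security-set-pass p /dev/sda
# issue the secure erase - on an SSD this normally finishes in well under a minute
hdparm --user-master u --security-erase p /dev/sda
# check the Security section again; it should be back to "not enabled"
hdparm -I /dev/sda
If the erase step errors out partway, the drive can be left with that password set, so make sure it actually reports success before rebooting. -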
LOUSYGREATWALLGM Notebook Deity
Glad you finally got it sorted.
Just a reminder: System Restore - off, Superfetch - manual, Prefetch folder - clear. -
That partition alignment you said you did is a mistake. You should use as large an allocation unit size as possible. You can greatly increase throughput by using larger allocation unit sizes like 64KB. Read Mattisdada's post: http://blogs.msdn.com/e7/archive/20...ering-the-windows-7-improvements.aspx#9390673
-
The mistake is the allocation unit size: you should select it yourself, and the larger it is, the greater your throughput. I've observed this with devices as small as an 8GB FAT32 USB drive. As for alignment, I have no comment on that, since I haven't researched it; I've just heard it's important. I'm only commenting on some of the steps you took to align the drive, not all of them.
-
It's just fine no need to change it
-
stamatisx it may seem fine, but it improves performance to make it as big as possible. A limit, if you absolutely want one, would be your average file size. The Windows installer defaults to 4KB, but that's been the default since Windows 2000, maybe even earlier, and drive sizes along with average download file sizes have grown a lot since then. NTFS has a 64KB allocation size limit; try it out and see how much space is wasted vs how much more performance you get. One user told me he uses 64KB on all his setups with TBs of data and lots of SSDs & HDDs, and he only loses a few hundred MB of space by going up to 64KB allocation while throughput is significantly better. Think about it: with 4KB clusters you have to do up to 16x more read operations on a 64KB file, not counting seek times if you're using a spinning HDD.
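If you want to see what you're on right now before committing to a reformat, fsutil shows the current cluster size, and format can lay down a 64K one on a data volume; just an example, E: is a placeholder for whatever non-system volume you test with:
Code:
rem "Bytes Per Cluster" is the allocation unit size (4096 = the 4K default)
fsutil fsinfo ntfsinfo c:
rem quick-format a data volume with a 64K allocation unit
format e: /FS:NTFS /A:64K /Q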
-
Ghetto_Child, I won't disagree with the fact that a bigger allocation size will improve sequential reads and writes. It won't benefit the 4K random reads and writes, though. Most of the writes the OS is performing are 4K.
From Anandtech:
"it’s the random write performance that you’re most likely to notice and that’s where a good SSD can really shine; you write 4KB files far more often than you do 2MB files while using your machine."
http://www.anandtech.com/show/2738/3
"Today 4KB pages are standard on SSDs.
Pages are grouped together into blocks; today it’s common to have 128 pages in a block (512KB in a block). A block is the smallest structure that can be erased in a NAND-flash device. So while you can read from and write to a page, you can only erase a block (128 pages at a time)."
http://www.anandtech.com/show/2738/5
Considering all this, I would sacrifice an increase in the sequential reads and writes for the sake of the 4K random reads and writes that make SSDs so much better than HDDs.
Again, it's a matter of personal taste, since I don't use my SSD for storage and I don't transfer big files. -
stamatisx I think you're mistaken again. The 4KB pages and 512KB blocks are physical limitations of SSDs and not related to partition allocation units. Yes, an increase in random performance at a cost to sequential performance is preferred, but larger allocation units have the benefit of needing fewer actual units in the file system table. It improves erasing, as fewer allocation units are involved in a 512KB erasure. It reduces file system fragmentation, and I bet the whole reason the OS does 4KB random writes is that 4KB is the allocation unit size of the partition the OS is running on to begin with.
I have a feeling you'll get 64KB random writes if you choose a 64KB allocation unit, because the OS knows that the file system cannot write a block smaller than the partition's allocation unit. Even if a file gets padded with 60KB of zeros, when the file grows and needs to be updated there's no need to allocate more units; the same unit will be used until the file grows beyond 64KB, which also improves performance.
Then there are the 4KB reads; I'd wager no program doing random reads only needs 4KB of data. I bet any program doing a random read will read several 4KB samples before all the necessary data is in RAM. Reading files in 4KB chunks means 16x more read operations than reading in 64KB chunks. When was the last time a single webpage took up just 4KB of data total? If a page loads from your temp internet cache, I bet there's a lot more than 4KB that needs to be read across different files (pics, objects, archives, cookies), let alone if you have multiple tabs to load.
Check your average file size. Mine is 257KB, which means with 4KB reads my system has to perform, on average, 65 reads per file.
I'm convinced file system fragmentation and physical page/block fragmentation are two different things. I haven't tested this next point yet, but I'll find out later whether the Intel Toolbox clears my fragmentation stats or whether I need to run both defrag & the Toolbox. I've been running my fresh install for a week now and defrag says:
Code:
C:\Windows\SYSTEM32>defrag -b /v c: /a

Analysis report for volume C: ACER

    Volume size                   = 74.58 GB
    Cluster size                  = 4 KB
    Used space                    = 19.84 GB
    Free space                    = 54.74 GB
    Percent free space            = 73 %

    File fragmentation
    Percent file fragmentation    = 5 %
    Total movable files           = 105,081
    Average file size             = 257 KB
    Total fragmented files        = 1,178
    Total excess fragments        = 8,446
    Average fragments per file    = 1.09
    Total unmovable files         = 36

    Free space fragmentation
    Free space                    = 54.74 GB
    Total free space extent       = 248
    Average free space per extent = 226 MB
    Largest free space extent     = 48.66 GB

    Folder fragmentation
    Total folders                 = 17,828
    Fragmented folders            = 66
    Excess folder fragments       = 157

    Master File Table (MFT) fragmentation
    Total MFT size                = 110 MB
    MFT record count              = 105,340
    Percent MFT in use            = 93
    Total MFT fragments           = 3

    Note: On NTFS volumes, file fragments larger than 64MB are not included in the fragmentation statistics
You guys should just try it out and see how that works. At the very least you can just copy your SSD over to another SSD with 64KB unit NTFS right? There is software that can clone the data without altering the file system structure on the 2nd SSD? -
This, in itself, is the bread and butter of the visible performance difference between a typical SSD and the Intel, and it's also the reason the Intel shines in most suite benchmarks even though its large sequential writes are only a fraction of a typical SSD's.
Have you seen this from jlbrightbill?
Top 5 Most Frequent Drive Accesses by Type and Percentage
-8K Write (56.35%)
-8K Read (7.60%)
-1K Write (6.10%)
-16K Write (5.79%)
-64K Read (2.49%)
Top 5 account for: 78.33% of total drive access over test period
Largest access size in top 50: 256K Read (0.44% of total)
-
Ghetto Child, I have to try and see what the results will be.
*EDIT*
It took me all night and half my day, and after many benchmarks these are the results:
With 4K cluster size
http://img682.imageshack.us/i/4kcluster.png/
http://img714.imageshack.us/i/bench3.png/
with 64K cluster size
http://img704.imageshack.us/i/64kcluster.png/
http://img697.imageshack.us/i/64clusterwinsat.png/
Those 64K cluster size results are not very accurate because the performance of the disk was degraded after 7 formats and Windows installations and countless benchmarks... -
After secure erasing the disk, the performance was restored. I performed a clean Windows install with 64K cluster size and these are the results
http://img696.imageshack.us/i/info64k.png/
http://img696.imageshack.us/i/64ksecure.png/
http://img718.imageshack.us/i/64ksecurecrystal.png/
There is not a significant difference that would justify the extra space taken by the bigger cluster size. For now I will stick with the 64K because I am tired of those formats... (WinSAT shows that 64K gives better results) -
stamatisx thank you immensely for taking the time to bench my theories. I myself would not have gotten around to benching it for another few months because I'm still learning OS tweaks and all. It's interesting how CDM showed such a balanced insignificant change vs WinSAT showing 64K has faster response time/latencies and increased throughput albeit small.
My only concern about your tests is that your CDM runs seem to have been done on different versions: the 4K on a full release of v3 and the 64K on a beta 2 of v3. -
This is a 4K bench run with the same beta version of CDM
Imageshack - 4kbench.png
The CDM doesn't show any real difference but WinSAT does, especially on latency (huge difference). It's hard to decide right now what will definitely be the best cluster size for an SSD (considering all the parameters).
*EDIT*
For the sake of performance and every extra MB/s I can get, I will format it with the 64K (I don't really trust CDM) -
I don't mean to be a pest, stamatisx, but perhaps the "winsat disk" command is not running a random write test the way you're using it? Try this command: WINSAT DISK /ran /write /ransize 4096 /v /drive c. Other parameters are listed on the MSDN blog/library, I think. Your existing WinSAT tests did random read and sequential write tests, it seems. Still, your timing/latency differences between cluster sizes are interesting, especially the gap between maximum latencies.
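If you do re-run it, I'd do the random write and the random read case back to back so the two cluster sizes can be compared directly; same switch style as above (some setups want the dash form instead, e.g. -ran -write):
Code:
rem 4 KB random writes (the case I was talking about)
winsat disk /ran /write /ransize 4096 /v /drive c
rem 4 KB random reads, for comparison
winsat disk /ran /read /ransize 4096 /v /drive c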
Interestingly, my Vista HP SP2 32-bit has these settings as default. Anyone have an idea what SecondLevelDataCache is for? It sounds like it uses the CPU L2 to cache program data? Unless that key is connected to ReadyBoost. If it's CPU L2 then it should be useful for me with the 3MB L2 on this Core 2 Solo SU3500.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"Win31FileSystem"=dword:00000000
"Win95TruncatedExtensions"=dword:00000001
"SymlinkLocalToLocalEvaluation"=dword:00000001
"SymlinkLocalToRemoteEvaluation"=dword:00000001
"SymlinkRemoteToRemoteEvaluation"=dword:00000000
"SymlinkRemoteToLocalEvaluation"=dword:00000000
"NtfsDisable8dot3NameCreation"=dword:00000000
"NtfsDisableCompression"=dword:00000000
"NtfsDisableEncryption"=dword:00000000
"NtfsDisableLastAccessUpdate"=dword:00000001
"NtfsEncryptPagingFile"=dword:00000000
"NtfsMemoryUsage"=dword:00000000
"NtfsMftZoneReservation"=dword:00000000
"NtfsQuotaNotifyRate"=dword:00000e10
"UdfsCloseSessionOnEject"=dword:00000001
"UdfsSoftwareDefectManagement"=dword:00000000
"NtfsAllowExtendedCharacter8dot3Rename"=dword:00000000
"NtfsBugcheckOnCorrupt"=dword:00000000
"NtfsDisableVolsnapHints"=dword:00000002
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"WriteWatch"=dword:00000001
"ClearPageFileAtShutdown"=dword:00000000
"DisablePagingExecutive"=dword:00000000
"LargeSystemCache"=dword:00000000
"NonPagedPoolQuota"=dword:00000000
"NonPagedPoolSize"=dword:00000000
"PagedPoolQuota"=dword:00000000
"PagedPoolSize"=dword:00000000
"PhysicalAddressExtension"=dword:00000001
"SecondLevelDataCache"=dword:00000000
"SessionPoolSize"=dword:00000004
"SessionViewSize"=dword:00000030
"SystemPages"=dword:00183000
"PagingFiles"=hex(7):64,00,3a,00,5c,00,70,00,61,00,67,00,65,00,66,00,69,00,6c,\
00,65,00,2e,00,73,00,79,00,73,00,20,00,34,00,35,00,30,00,33,00,20,00,34,00,\
35,00,30,00,33,00,00,00,00,00
"ExistingPageFiles"=hex(7):5c,00,3f,00,3f,00,5c,00,44,00,3a,00,5c,00,70,00,61,\
00,67,00,65,00,66,00,69,00,6c,00,65,00,2e,00,73,00,79,00,73,00,00,00,00,00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters]
"BootId"=dword:00000054
"BaseTime"=dword:111b44c6
"VideoInitTime"=dword:0000000f
"EnableSuperfetch"=dword:00000003
"EnablePrefetcher"=dword:00000003
"EnableBootTrace"=dword:00000000 -
I wish I had seen your post literally 2 minutes ago. I just formatted the disk (again!!! LOL)
I am going to use 64K cluster size.
I am also preparing a step-by-step guide on how to install Windows with a 64K cluster size, with all the registry tweaks and the services I change in order to achieve this performance.
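Until the guide is ready, the rough idea for the 64K cluster size is to format the system partition yourself from the installer's command prompt (Shift+F10) with diskpart before selecting it in setup; just a sketch, with disk 0 and the label as placeholders:
Code:
diskpart
select disk 0
clean
create partition primary align=1024
format fs=ntfs unit=64K quick label=Windows
active
exit
As far as I know, setup will then install onto that partition without touching the cluster size, as long as you don't hit Format again in the GUI.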
About the L2 let me do some research first after I finish with Windows installation -
"SecondLevelDataCache records the size of the processor cache, also known as the secondary or L2 cache. If the value of this entry is 0, the system attempts to retrieve the L2 cache size from the Hardware Abstraction Layer (HAL) for the platform. If it fails, it uses a default L2 cache size of 256 KB. If the value of this entry is not 0, it uses this value as the L2 cache size. This entry is designed as a secondary source of cache size information for computers on which the HAL cannot detect the L2 cache.
This is not related to the hardware; it is only useful for computers with direct-mapped L2 caches. Pentium II and later processors do not have direct-mapped L2 caches. SecondLevelDataCache can increase performance by approximately 2 percent in certain cases for older computers with ample memory (more than 64 MB) by scattering physical pages better in the address space so there are not so many L2 cache collisions. Setting SecondLevelDataCache to 256 KB rather than 2 MB (when the computer has a 2 MB L2 cache) would probably have about a 0.4 percent performance penalty."
Source : Detailed Explanation of SecondLevelDataCache -
Imageshack - 64knewsettings.png
The only thing is that I didn't run this command with the 4K cluster size...
It was too late when I saw the post...