I know an SSD is supposed to be faster than a regular old SATA HDD, even a 7,200 RPM one. But since there is no rotation speed on an SSD, how do you judge the speed on them? For example, how would you compare the speed of an SSD to a 15,000 RPM SATA drive?
-
Read and write speed, in MB/s.
7,200 RPM drive: read ~90 MB/s, write ~85 MB/s.
SSD (SATA III): read ~550 MB/s, write ~500 MB/s. -
There's more to SSDs than just raw read and write speeds; you also have access times, which are virtually nonexistent compared to a normal HDD, where it could take a tenth of a second (or more) to get to a file and read it. That's the main reason why SSDs appear faster to most people.
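To put rough numbers on that, here's a back-of-envelope sketch in Python. All figures are illustrative assumptions (typical-ish specs in line with the numbers quoted above), not measurements of any particular drive:

```python
# Back-of-envelope numbers for why access time matters more than raw MB/s.
# All figures are illustrative assumptions, not measurements.

HDD_SEEK_S = 0.012     # ~12 ms average seek + rotational latency, 7,200 RPM
SSD_SEEK_S = 0.0001    # ~0.1 ms flash access time
HDD_MBPS   = 90        # sequential read, MB/s
SSD_MBPS   = 550       # sequential read, MB/s (SATA III)

def read_time(n_files, file_mb, seek_s, mbps):
    """One seek plus one transfer per file, summed over n_files."""
    return n_files * (seek_s + file_mb / mbps)

small = 16 / 1024  # a 16 KB file, in MB
# 10,000 small files: seek-dominated, the SSD wins by roughly 100x
print(f"HDD, 10k small files: {read_time(10_000, small, HDD_SEEK_S, HDD_MBPS):6.1f} s")
print(f"SSD, 10k small files: {read_time(10_000, small, SSD_SEEK_S, SSD_MBPS):6.1f} s")
# One 10 GB file: transfer-dominated, the SSD wins only ~6x
print(f"HDD, one 10 GB file:  {read_time(1, 10_240, HDD_SEEK_S, HDD_MBPS):6.1f} s")
print(f"SSD, one 10 GB file:  {read_time(1, 10_240, SSD_SEEK_S, SSD_MBPS):6.1f} s")
```

The small-file case is seek-dominated, so the SSD's advantage is enormous; the big sequential copy is transfer-dominated, so the gap shrinks to the raw MB/s ratio.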
-
Also, I noticed that a lot of ultrabooks have a 500GB HDD and a 32GB SSD. Are there two individual drives in there?
-
It technically counts as two drives, but the hardware makes them work together as one. The 32GB SSD works like a cache for the HDD (much like RAM does for the CPU): it stores frequently used and accessed files based on certain usage patterns, with exceptions as to what does and doesn't get stored in it.
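If you want a feel for the idea, here's a minimal toy sketch in Python. It's a plain LRU cache of whole files, which is a deliberate simplification; the real caching layers in those ultrabooks (Intel Smart Response and the like) work on disk blocks with their own promotion heuristics, not filenames:

```python
from collections import OrderedDict

# Toy model of a hybrid HDD+SSD setup: a small fast cache in front of a
# big slow store. Simplified to an LRU of whole files for illustration.

class HybridDrive:
    def __init__(self, cache_slots):
        self.cache = OrderedDict()            # stands in for the small SSD
        self.cache_slots = cache_slots

    def read(self, filename):
        if filename in self.cache:
            self.cache.move_to_end(filename)  # recently used: keep it hot
            return f"{filename}: from SSD cache (fast)"
        if len(self.cache) >= self.cache_slots:
            self.cache.popitem(last=False)    # evict least recently used
        self.cache[filename] = True           # promote after the slow read
        return f"{filename}: from HDD (slow), now cached"

drive = HybridDrive(cache_slots=2)
print(drive.read("boot.dll"))   # first access: HDD speed
print(drive.read("boot.dll"))   # repeat access: SSD speed
```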
-
tilleroftheearth Wisdom listens quietly...
You don't judge the speed on specs, benchmarks, marketing, or some fan on a forum who was used to a fully unoptimized 5,400 RPM HDD, got all excited over a mediocre 64GB example of what an SSD should be, and has no basis to back up his/her 'wow, this is fast' remarks.
What an SSD should be (basics):
All channels filled, each channel optimally interleaved, and all of it managed by an intelligent controller to get the most real-world work done with the least power consumption, with stability and reliability at the top of the list. (Or just get an Intel SSD with at least 240GB capacity.)
How do you judge the speed? Simple: take your current working setup and duplicate it exactly, except for swapping in the SSD in question for the HDD (including all software, drivers, and data needed to make your programs work), then use it exactly as you did your HDD-based setup (or as close to that as possible). This is how you will see what is faster, by how much, and whether the $$$ for that speed are worth it (to you) for the performance/productivity gained.
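For anyone who wants to script that comparison rather than eyeball it, here's a minimal timing-harness sketch in Python. The paths and the copy task are hypothetical placeholders; substitute whatever actually represents your day-to-day work (a LR4 catalog export, a batch PDF job, etc.):

```python
import shutil, time
from pathlib import Path

# Minimal sketch of the "duplicate your setup and time your real work" idea:
# script the same task, run it on the HDD system and on the SSD system, and
# compare wall-clock times.

def time_task(label, task):
    start = time.perf_counter()
    task()
    print(f"{label}: {time.perf_counter() - start:.1f} s")

SRC = Path("C:/Users/me/Documents/project")   # hypothetical working set
DST = Path("D:/scratch/project_copy")

time_task("copy working set",
          lambda: shutil.copytree(SRC, DST, dirs_exist_ok=True))
```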
Even compared to a 15K RPM SATA or SAS drive, any (good) SSD will impress with faster boot times, faster program launches, faster shutdowns, and faster copies OFF the drive (to another drive, SSD or HDD). It will also continue to impress with any program that runs off a database (LR4, for example) and/or reads/creates/modifies small files (such as PDFs).
Depending on the SSD in question, it may even impress in write speeds. But HDDs (e.g. the 1TB VelociRaptor) have now hit 200 MB/s, dangerously close to SSD territory; close enough that it may not matter (much) if the SSD can do another few tens of MB/s more on sustained, incompressible files.
Another 'myth' you may want to prove or dispel during your testing is how much more power efficient (and how much less heat producing) the SSD in question is vs. the HDD it may be replacing when operating at its extremes. This matters especially in a poorly designed notebook with little or no ventilation for the storage subsystem: there are SSDs that draw MORE power than the mechanical HDDs they replace.
Also consider that to keep an SSD operating at its peak performance on a sustained/continuous basis, you need to keep a lot of free space on it, ideally by partitioning it (at time of purchase, before any other use) to 50% or less of its nominal capacity. This is only needed if you intend to punish the SSD (as a PS scratch disk, for example) for 8+ hours every work day and need it to perform at least close to its specs; but depending on the actual workload the SSD is subjected to, this over-provisioning can be a matter of life and death (i.e. weeks versus months) for the SSD.
Finally, if you need to RAID (0/1/5/10/etc.) SSDs (for example, because your data set won't fit on a single drive), note that the SSDs in these arrays can degrade in a matter of days. The worst part is that you then have to break the RAID, Secure Erase each drive, and rejoin them in the RAID array you require (hope you make daily image backups...).
Having a solid baseline (of productivity) with your current system setup, setting up an identical SSD test system, and seeing how that baseline moves (it won't be all positive, in my experience) is the key to knowing how an SSD compares to a 15K HDD (or at least how I would compare them).
If you don't want to do all that - there's always the truthful marketing to help you out... -
@chomper: Generally speaking, flash storage, because there are no mechanical parts, has near-instant access times and near-instant time to max transfer speed. But there are extra logical operations involved when writing. It's really a large EEPROM, and as transistor size and cost have gone down, that's really all there is to it.
And what that means is that small and large transfers will hit the same instant peak transfer speed, which is useful. Depending on how the disk is set up, though, the throughput can differ, and you still get SSDs with relatively low write speeds. This has to do with the architecture and the internal controller on the disk, before you even reach the motherboard's IO controller interface (the SATA port).
Write operations also used to be a real pain. Windows, for example, doesn't actually wipe a sector when you delete a file; the sector is just marked as writable, and a later write operation simply overwrites it. On an SSD, that overwrite gets sent to the controller, and you would get very erratic write performance depending on whether the drive was prepared or not, because of the extra logical overhead of how flash storage works: a block must be erased before its pages can be rewritten. But that's not really a problem at this point.
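For the curious, here's a toy Python model of why an in-place overwrite was so painful. It's a deliberate simplification (real controllers remap pages and garbage-collect in the background), but it shows the read-modify-erase-rewrite cost:

```python
# Toy model of why overwrites are expensive on flash. The key constraint is
# real: flash is programmed per page but erased only per block (many pages
# at once), so changing one page in a full block forces the controller to
# salvage everything else in that block too.

PAGES_PER_BLOCK = 64

def overwrite_page(block, page_index, new_data):
    """Rewrite one page; returns the number of page programs it cost."""
    if block[page_index] is None:                 # empty page: cheap program
        block[page_index] = new_data
        return 1
    # Page holds stale data: salvage live pages, erase the block, rewrite all.
    survivors = [p for i, p in enumerate(block)
                 if p is not None and i != page_index]
    block[:] = [None] * PAGES_PER_BLOCK           # block erase
    block[:len(survivors)] = survivors
    block[len(survivors)] = new_data
    return len(survivors) + 1                     # the write amplification

full_block = [f"data{i}" for i in range(PAGES_PER_BLOCK)]
print(overwrite_page(full_block, 0, "new"))       # 64 page programs for 1 page
```

This is also roughly what TRIM fixes: by telling the controller which pages hold deleted data, it doesn't have to salvage stale pages when it erases a block.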
But it's a good question: how do you rate the most expensive SSD against a cheap one? Or whether improvements were made lately that would make one purchase better than another?
Honestly, I think there's not much to say about that, because even the worst SSD, at the worst of times, even one without automatic maintenance, still works faster than an HDD. So for a normal consumer, it's not really a question of performance, but of storage size and practical convenience. A mechanical drive in an EeePC, for example, is counter-intuitive; so an SSD was a very good option in those, even with the first terribly slow and terribly small SSDs.
But then again, you could imagine striping two 500 MB/s drives and, ideally, maxing out the controller speed (a single SATA III port tops out around 600 MB/s anyway). Practically, though, you would want an SSD with a self-maintaining controller, so you can disable TRIM from the OS, and one with high write speeds (SATA III, towards 500 MB/s). Those are really the only interesting specs for us as users: that the disk can maintain a high transfer speed, and secondarily, that it can deliver write speeds up towards the best HDDs. And most disks on the market can do that now.
If you're interested in how this has developed, though, you should take a look at the earliest EeePC reviews. That's where SSDs made sense at the beginning.
jkkmobile: Asus Eee PC SSD read and write tests. 901 vs 900 vs 900 16G
..sounds like you had a great experience setting up one of the first Intel SSDs for a server. Would love to hear the whole story.
-
tilleroftheearth Wisdom listens quietly...
nipsen,
examples of SSDs with higher power consumption than HDDs (during sustained use) are the Samsung 830 series and the OCZ Vertex 4 series drives. The Intel 510 Series drives are also very power hungry, though not as much as those two.
The only SSD that showed a real-world reduction in power use (demonstrated by measurably longer battery life) in my systems and usage workflows is the Intel 320 Series 160GB model.
With regard to the 'it's not a problem for average controllers' comment: I can see that you're not pushing SSDs like I do, or at least you are not noticing the performance degradation that sets in even while Windows, programs, and data are merely installed on any current SSD, let alone when they're used at the edge of their performance envelopes.
I'll concede that SSDs offer benefits that HDDs simply cannot match (mostly, no moving parts, and now, the small/tiny mSATA form factors), but that does not mean they are better, performance-wise (overall), than HDDs, especially at the smaller capacities.
You also have a very optimistic view of how far SSDs have advanced. The issues you think are no longer a problem are still here and very real for workflows that stress the storage subsystem substantially and continuously on a daily basis. For casual users, sure, SSDs make today's systems feel like something from the future, especially compared to the slow, unoptimized HDDs that are shipped by default by all manufacturers.
But for users accustomed to the fastest HDDs in optimal setups (via partitioning and maintenance), some SSDs are only now starting to give a tangible performance/productivity difference, and only with the right kind of TLC (again: partitioning and maintenance) in a workstation-type real-world workflow.
(Please note that 'productivity' here is not startup, shutdown, or program launches; that is simply the 'snappiness' of the OS, not productivity. It is used to indicate, in minutes/hours per day/week/month/year, how much more work an SSD achieves over an HDD in the same setup, once inside the productivity program/suite of choice.) -
davidricardo86 Notebook Deity
Since most of us here are average users or consumers, we use the consumer line of SSDs (basically the "cheap" stuff) and it works great! Some better than others, but for the most part our needs are satisfied. If you're so concerned about these (IMO little) issues with consumer SSDs and how they don't keep up with your "workloads & workflows", then why don't you step up to more expensive heavy-duty enterprise/business SSDs? Apparently your usage patterns are so much more "extreme" than anyone else's here that it would warrant the best SSD money could buy (if money were no object), yet you're not using one (that we know of).
What exactly are you doing and using your SSD(s) for? Can you elaborate on your high-performance usage and how you're pushing them to the "edge" on a daily basis? What exactly is a "workstation type real world workflow", and how are you substantially and continuously pushing your consumer SSD within a productivity program or suite?
Again, I'd like to know where to get the software to test the power consumption of my consumer SSDs, to see if they operate in accordance with what these reviewers are saying. I'd like to verify, not just for myself but across as many users as possible, what the actual statistics and data are. For me, one review/one website is not enough data, and it's not the end-all. I believe there will be (slight) variations even between two of the same SSD, just like how not all CPUs operate exactly the same, even if they are the exact same model.
-
Ouch, those stats don't seem very good. I have a Force GT drive; it maxes out at 2.5W or so (idles at 0.3-0.5W). I didn't think many drives went higher than that.
Difficulty getting sustained throughput, and the fact that it's variable, is a good point, though. And I know some drives' performance just crashes really badly when the disk starts to become full and you hit the rewrites more consistently. Even so, the Intel disk I tested a year ago still didn't go below 80 MB/s or so, which is pretty fast for a worst-case scenario. When that also doesn't affect the read speed or access times, I wasn't exactly complaining.
Besides, once you exhaust the queue depth on an HDD, or create workloads with small, spread-out reads and writes, you get a similar effect (with lower transfer speeds on average; and you can produce those extremes easily, if you create the right loads). You get that even on the most ridiculously expensive drive, and you certainly get it quickly on a normally priced one. And placing data at particular spots on the drive to keep different programs from writing things back and forth, splitting onto partitions, setting up a stripe, that kind of thing isn't done in a hurry or very easily. Whereas self-maintenance on an SSD, or letting it run TRIM once in a while, arguably gives you better performance than the most carefully kept HDD stripe, and it's literally no effort at all.
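If you want to see the queue-depth effect yourself, here's a rough Python sketch of random 4KB reads at different depths, with one thread per outstanding request. Assumptions: "testfile.bin" is a large file you created beforehand, os.pread is POSIX-only, and the OS cache isn't bypassed, so look at the relative scaling rather than the absolute numbers:

```python
import os, random, threading, time

# Rough random-read benchmark at different queue depths. Assumes a large
# pre-created "testfile.bin"; os.pread is POSIX-only; the OS page cache is
# NOT bypassed, so absolute numbers will be optimistic.

FILE, READ_SIZE, READS = "testfile.bin", 4096, 2000

def worker(n_reads):
    fd = os.open(FILE, os.O_RDONLY)
    size = os.fstat(fd).st_size
    for _ in range(n_reads):
        os.pread(fd, READ_SIZE, random.randrange(0, size - READ_SIZE))
    os.close(fd)

for depth in (1, 2, 8):   # crude stand-in for I/O queue depth
    start = time.perf_counter()
    threads = [threading.Thread(target=worker, args=(READS // depth,))
               for _ in range(depth)]
    for t in threads: t.start()
    for t in threads: t.join()
    mb = READS * READ_SIZE / 1e6
    print(f"QD{depth}: {mb / (time.perf_counter() - start):.1f} MB/s")
```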
So since I'm not dependent on write speeds being consistent, and don't need massive storage space, I notice an improvement in the things I do: compiling Java projects, running things with some disk access while other tasks are running, and so on. There the difference is huge, because those things have always maxed out the drive queue on a normal disk, even one with a reasonably large cache. Now the actual operation is faster, and anything else I'm doing that needs light read/write on the disk stays snappy.
Definitely agree that it's not magic, though; that there are problems with this, and that they've been serious. Some of the OCZ drives were notoriously bad, and the firmware on their drives is still being patched. Not disputing that. Still, I'm not going to start saying: "well, you could just buy this drive for €100 and plug it in -- or you could buy these two disks and stripe them, use five times the power, get more noise, spend some time setting it up -- and get about the same performance. So therefore it's just a matter of preference!". Can't do that.
Can't bring myself to pull some Anandtech quote either, and go: "The throughput degradation goes from 500 MB/s to barely 180 MB/s! Which is merely 36% of the advertised speed! A far cry from the much smaller 180 MB/s to 150 MB/s reduction that the solid good old HDDs show! Oh yes, this is a load of false advertisement, and SSDs are fraud! Very questionable, and not reliable at all! Here's a list of HDDs that have better best-case scenarios than this SSD's absolute worst case! Make your informed choice!". You know, there's a tendency for people to write things like that, or actually invent new stress tests for SSDs that would instantly croak an HDD.
.. But it is a good question: how do you tell the difference between the best and the worst SSDs? I think sustained throughput is the important part: avoiding the situations you've seen where write performance drops extremely far for no apparent reason, and keeping the conditions for normal full-speed reads as the disk starts to fill up. Do manufacturers even have anything suggesting that in their specs, by the way? I went by Sandra tests when I bought a drive. -
I have to agree somewhat here. For consumer-level use, 99% of the time these newer MLC SSD drives are more than enough. Where high amounts of uninterrupted disk writes are required, SLC drives may be the ticket. You have to look at your usage pattern before deciding.
-
tilleroftheearth Wisdom listens quietly...
Unless the workload is a very high percentage of small random r/w's, SLC drives are not 'the ticket' at all. (They are best used in servers, after all.)
MLC drives are great for workstation type workloads (where most accesses are sequential, btw) as long as you know their limits.
The Vertex 4 drives are a great example of this:
See:
OCZ Vertex 4 128 GB: Revisiting Write Performance With Firmware 1.5 : OCZ Vertex 4 Write Performance, Revisited
See:
AnandTech - OCZ Vertex 4 Review (128GB), Firmware 1.4/1.5 Tested
As long as you're willing to sacrifice capacity, you gain performance and reliability (via lower WA factors and, in the case of the V4, a specific/deliberate 'performance' mode at 50% filled or less).
I have been using this technique for almost a year now on every SSD I have (or have set up for clients): i.e., I ensure I use less than 50% of the drive to get the performance I paid for, sustained, over time.
In my case, I don't simply make sure not to fill it past 50%; I enforce it by partitioning my SSDs to 50% or less of their nominal capacity. I also make sure to start with a 240/256GB SSD, so that the physical/logical interconnects between the controller and the NAND chips offer the optimal (and fastest) performance possible.
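The arithmetic behind that partitioning scheme, as a trivial Python sketch (the drive sizes are just examples, not recommendations for specific models):

```python
# Unallocated LBAs are never written by the OS, so the controller can treat
# them as spare area. This just computes how much spare the partition leaves.

def overprovisioning_pct(nominal_gb, partition_gb):
    """Share of the drive's nominal capacity left unallocated as spare."""
    return 100 * (nominal_gb - partition_gb) / nominal_gb

for nominal, part in [(256, 128), (250, 100), (128, 100)]:
    print(f"{nominal} GB drive, {part} GB partition -> "
          f"{overprovisioning_pct(nominal, part):.0f}% spare")
# 256@128 -> 50% spare; 250@100 -> 60% spare; 128@100 -> only 22% spare
```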
Although I can't claim to have known this, I did see the effect as early as mid/late 2009, and in all the SSDs I've used since then: in the V4, when the drive is 50% filled or less, it operates in a type of 'SLC' mode.
See:
Our Theory About How v.1.4 Changed Write Performance : OCZ Vertex 4 128 GB: Revisiting Write Performance With Firmware 1.5
What I saw in my use is that as soon as installing Windows, updates, and programs was finished, the performance of the SSD was at 'garbage' levels for me; in the case of smaller capacities especially, below what I would expect from my HDDs (which were/are optimized, set up, and maintained for maximum sustained performance over time).
I quickly found out that the % filled was the problem. I also very quickly found out that the actual capacity had an impact on the performance an SSD offered: not only were the smaller capacities limiting for my ~100GB 'C:' drive 'default' installs, they also obviously limited the performance the drive could deliver right out of the box.
I finally bought an Intel 510 250GB SSD for testing, simply as my C: drive. I partitioned it to 100GB and left the remainder 'unallocated'. The performance of the system did not drop significantly (only ~5%) with this extreme example of 'over-provisioning'! I was getting somewhere with SSDs!
The test with the 510 series was my way of verifying/confirming all the information I had read about how free space (especially via partitioning) gives a drive lower WA, sustains its performance over time closer to its 'like-new' state, and increases the reliability/dependability of the drive's NAND, along with the (especially important in my workflow) greatly increased lifetime TBs-written 'metric'.
Building a new system for a client allowed me to try this with a non-Intel drive (an M4 256GB model). Same results. Same results also for the Intel 320 160GB drives I've deployed.
Many may say that using half (or less) of the capacity of an SSD is foolishness, considering what prices were/are, when SSDs provide so much more performance than HDDs anyway.
But that would just be buying into the manufacturers' marketing BS, not comparing to real-world results, and certainly not comparing to the best HDDs available (including proper setup, optimization, and maintenance of those HDDs).
The real foolish move, in my case, would be to pay the 'SSD tax' premium that SSDs still command only to get less than HDD performance from them, with that poor performance further degrading with each day of use as an extra 'bonus'. And that is exactly what I was getting by using small-capacity SSDs, and even the higher-capacity versions when using over 50-60% of their capacity.
I've stated it many times on this board: having the SSD operate at peak possible performance over time (hopefully the life of the system it is installed in) is more important (productivity-wise) than worrying about a small(ish) one-time cost or comparing on a $$/GB ratio. This is especially important if/when the SSD can be whipped so hard in my 'normal' use that it backs into a corner it can't recover from (Patriot Inferno, I'm looking at you!!!).
This practice of using less than the total capacity is not new to me. I have been doing the same thing for years/decades with HDDs, and have reaped similar benefits there too: a minimum sustained (over time) performance from my storage subsystems and, I feel, greater reliability, because the regular maintenance (using PerfectDisk) not only kept that level of performance but also decreased the work the heads/motors had to do over their lifetime.
So, the question remains: why not simply use SLC drives, when I'm effectively doubling the cost/GB ratio of the SSDs I purchase anyway?
Simple:
SLC drives are not offered at the capacities that make them interesting to me (yes, I would still over-provision them), nor do they offer significantly higher performance (actually, most would be worse). And while I am paying 'double' for the way I use MLC drives, 'real' SLC drives can still cost 10x more; at that point, multiple HDDs with 'real' RAID controllers would be more cost-effective while giving the same effective performance in my workloads. So SLC SSDs are not even in the running.
While the above scenarios are best suited to desktops, with the current crop of notebooks offering two drive bays (plus more and more with mSATA connections), this same setup is viable for notebooks too.
Every piece of tech has its place (and purpose); I'm simply making the best of what we have available today vs. the storage subsystems of 'yesterday' (like my 7K750s, my 750GB Hybrids, and my 600GB VelociRaptors).
But I'll be sticking to my guns on this: if you buy a small-capacity drive (128GB or smaller) and use/fill it over ~70%, you're fooling yourself that your storage subsystem is 'vastly' above a properly set up and maintained 'modern' HDD in anything other than 'snap' (shutdown/startup/launching programs).
To truly get above the 'snap' level (in other words, to improve on the best HDDs in EVERY way), you need to consider the points I've brought up again here, in this post. -
I thought this one was funny
Btw, here's a quick demo of how it looks on my system: a Corsair Force GT, not the cheapest drive, and so on. It's been filled up a few times, and I've run a few heavy file-transfer tests to try to disrupt things, and failed. What I get is this:
This test transfers a file in differently sized blocks, as fast as possible, at a queue depth I can choose, so we should be able to read something from it. It begins with 0.5KB block sizes and increases up to 256MB. The first transfers top out at 14-15 MB/s, before the drive stabilizes at peak (the controller's max speed) at block sizes of 256KB/512KB and over.
When I increase the queue depth to 8, the lower limit doubles to 30 MB/s, before stabilizing at peak at 128KB and over. When I reduce the queue depth to 2, the lower limit ends up at 7-8 MB/s. Pretty consistent.
I.e., on this drive, masses of tiny transfers each carry the same per-operation overhead as one large transfer, reducing the potential speed when sequential transfers have to complete before the next can begin. The interesting thing is that this happens on an untrimmed drive that is completely full, so we should be seeing something reasonably close to a worst-case scenario for this drive.
I don't know how to determine when IO operations start throttling the controller. It seems to me that since the controller is so ridiculously fast, we're not actually getting thread interrupts. The seek times (nada) also help, of course.
Anyway. To begin with: can we imagine a scenario where we have masses of sub-16KB files that need to be written in sequence (so the queue depth would never go higher than 1)?
And would an HDD perform better?
(edit: here's the test: ATTO Disk Benchmark 2.46, downloadable from Guru3D.com. Tiny file to download; takes a couple of minutes to run.)
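(For reference, here's a stripped-down imitation of what such a block-size sweep does, sketched in Python. Queue depth, read passes, and cache control are all omitted, and the scratch-file name is made up, so expect the shape of ATTO's curve rather than its numbers:)

```python
import os, time

# Write the same total amount of data with different block sizes and report
# MB/s each time -- a crude imitation of an ATTO-style block-size sweep.

TOTAL = 64 * 1024 * 1024                  # 64 MB per pass

for block_kb in (0.5, 4, 64, 256, 1024):
    block = os.urandom(int(block_kb * 1024))
    start = time.perf_counter()
    with open("sweep_test.tmp", "wb", buffering=0) as f:
        for _ in range(TOTAL // len(block)):
            f.write(block)
        os.fsync(f.fileno())              # make sure it actually hits the disk
    rate = TOTAL / (time.perf_counter() - start) / 1e6
    print(f"{block_kb:>6} KB blocks: {rate:6.1f} MB/s")

os.remove("sweep_test.tmp")
```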