JJB,
I'm not speaking for OCZ, but all manufacturers have had failures in the first days after releasing their SSDs, and I wouldn't be surprised if they still have failures after 3 years with them too.
Your usage pattern seems right in line with what Intel is expecting from their users (7*30*20GB/day=4.2TB) so you're not 'abusing' your drive.
What ocztony is saying is that there is a difference between normal use and simply benchmarking (with certain tools) an SSD over and over and expecting it to perform the same.
A good example is the many stories I read of people initially buying the Intel G1s and killing them in a few short weeks in a server setup.
With the proper SSD in the servers (an Intel X25-E SSD), the 'issues' were resolved.
So, what this indicates to me is that today's SSDs are targeted very narrowly at either consumers or enterprise users - know what your usage model is and buy appropriately.
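To put the arithmetic above into perspective, here's a rough back-of-the-envelope sketch in Python. The capacity, P/E-cycle and write-amplification figures are illustrative assumptions (not specifications of any particular drive), but they show why ~20GB/day is nowhere near 'abuse' for a consumer SSD:

```python
# Rough SSD write-endurance estimate. The capacity, P/E-cycle and
# write-amplification numbers below are illustrative assumptions,
# not specifications of any particular drive.

def endurance_years(capacity_gb, pe_cycles, write_amp, gb_per_day):
    """Estimate drive lifetime in years from daily host writes."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_host_writes_gb / (gb_per_day * 365)

# 20GB/day over 7 * 30 days is the 4.2TB figure quoted above:
assert 7 * 30 * 20 == 4200  # GB, i.e. 4.2TB

# A hypothetical 160GB MLC drive rated for ~5000 P/E cycles:
years = endurance_years(160, 5000, write_amp=1.5, gb_per_day=20)
print(round(years, 1))  # decades of life at this write rate
```

Swapping in SLC-class cycle counts (tens of thousands) also shows why X25-E class drives survive server workloads that kill consumer MLC drives.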
-
tilleroftheearth Wisdom listens quietly...
-
First of all, Ocztony, thank you very much for the heads up concerning the extended use of benchmarking tools and their impact on the faster wear and tear of SSDs. I am sure MLC owners will really appreciate it and will think twice before they start benching.
Secondly, I really appreciated the links you provided and especially the one with the batch file.
The thing is, though, that this thread is dedicated to the low performance of SSDs (of any brand, not a specific one), and specifically to the low 4K random reads and writes that owners of PM55/HM55 chipset based laptops experience. For this reason I would like to ask you: what is your opinion on this matter? Is this behavior normal, or is something wrong here?
I am pretty sure that members of this forum, and of this particular thread, shouldn't have to perform 200+ benchmark runs of any program and try various drivers, BIOS settings, registry tweaks, etc., just to find out what gives them the best results and performance.
Why would someone pay a premium and buy a Vertex 2 instead of a Vertex or in my case an Intel X25-E instead of X25-V if he gets the same capped 4K performance? (I know there are many other reasons but you get my point)
Isn't it more important for a consumer to get the performance that he/she paid for, instead of worrying about the wear level of the NAND while trying to figure out what's wrong with the drive? (In the end, that's what the 3-year warranty is for.) -
I just ran CDM while on battery; on the Envy 15 the CPU is then throttled to a x9 multiplier (1.2GHz). Surprisingly, with the 'idle disabled' tweak the results were almost as good as when plugged in (see below). The interesting thing is that my CPU power draw on battery is ~8.5W vs. 19.5W when plugged in, and the idle temps are 46C on core 0 and 39C on core 1. This indicates to me that even though the CPU is throttled, the C1E power state is still disabled, giving (almost) full SSD performance without causing excessive idle temps and power draw.
After looking at all the registry options for CPU idle settings, core parking and other power settings, there must be a way to limit the 'idle disabled' state to just 1 core (or possibly just 1 thread) and get reasonable temps while keeping full SSD speeds. Anyone who knows how to further adjust these registry settings, please see if there is a way to make this work as a permanent fix while giving reasonable CPU temps....
On battery with 'idle disabled' and the HP-throttled CPU at x9 multiplier (1.2GHz), CPU temps 46C / 39C.
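For anyone wanting to experiment, the 'idle disabled' tweak can usually be applied from an elevated command prompt with powercfg instead of editing the registry directly. This is a sketch, not a verified permanent fix; the GUID below is the standard Windows "Processor idle disable" power-setting identifier, so verify it on your own system (e.g. with `powercfg -query`) before relying on it:

```bat
:: Sketch: toggling processor idle (C-states) via powercfg on Windows 7.
:: Run from an elevated command prompt. The GUID is the standard
:: "Processor idle disable" power setting identifier.

:: Unhide the setting so it shows up under advanced power options:
powercfg -attributes SUB_PROCESSOR 5d76a2ca-e8c0-402f-a133-2158492d58ad -ATTRIB_HIDE

:: 1 = idle disabled (full SSD speed, higher temps), 0 = idle enabled:
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR 5d76a2ca-e8c0-402f-a133-2158492d58ad 1
powercfg -setactive SCHEME_CURRENT
```

Setting the same value with `-setdcvalueindex` would apply it on battery as well; per-core limiting of the idle state doesn't appear to be exposed through this setting, which is why the registry hunt above may still be needed.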
-
My point is, for my company's business usage (we have 5 Envy 15s so far with SSDs), life expectancy is a non-issue at our current (and estimated future) usage levels. The numbers you quoted as appearing 'right in line with what Intel is expecting...' are just that: well within the range of the expected long-term capabilities of the SSDs we spec'd. Actually, since my machine is the 'test bed' for the company, it has significantly higher R/W volume than our other machines. Considering this, and looking very conservatively at the wear level, these drives will far outlast the next 2 or 3 notebook upgrades we will most likely go through over the next 4 to 6 years. And by that time I truly expect that the current drives will be basically obsolete and relegated to external portable / backup duty...
So again, no worries at all about premature wear of these drives in our application. I would think that most people attempting to use SSDs in a server (or other high-usage) application would be smart enough to work out the numbers and select the appropriate enterprise-class drive..... -
Now if only my Crucial C300 256GB would arrive I could join in the search for a solution, I'm not scared of a little creative registry editing, that's what we have backups for
Cheers,
Sander Sassen - Hardware Analysis -
-
If someone can come up with a solution which doesn't cause overheating, I wouldn't mind disabling C-states. For now, the best way is to bombard Intel with requests for new chipset software.
-
Tinderbox (UK) BAKED BEAN KING
Can somebody write a message/alert that can be posted on other forums so that we get as much publicity as possible? Just post the alert here and I will post it on all my other forums.
Basically the problem in a nutshell; we can also link to this thread for anybody who might be interested. -
Has anyone tried running Crystal with a PM55 and normal hard disk drive?
And Seagate XT?
I wonder how those will be affected, if they're affected at all.
-
Tinderbox (UK) BAKED BEAN KING
Well, that's five other forums made aware of the problem. If everybody does this we should have a fix soon.
-
I am still waiting for a reply from Intel concerning this matter after the email I sent. Has anybody received anything in the meantime?
-
I sent emails to Techreport and Tomshardware. Would like to email Anand too but can't find his contact details.
-
-
NotEnoughMinerals Notebook Deity
+1 to stamatisx and Phil, keep up the good work guys
I can't contribute so much because I'm still debating and bargain hunting for SSDs but really appreciate the investigation -
Emailed Anand too.
-
-
LOUSYGREATWALLGM Notebook Deity
-
With the tweak you posted: 39MB/sec random read. I don't think any SandForce drive can do that at default settings with any chipset.
Try running CDM at the default 1000MB size; in your results it was set to 100MB.
You might want to only run the 4K benchmark, maybe one or two runs, to reduce wear. -
tilleroftheearth Wisdom listens quietly...
I'm pretty sure that he is getting such good results because he is using a 240GB SandForce drive @ only 55% full.
I'm now getting around 19MB/s with no load and without using the idle tweak @ 70% full on my 100GB drive. -
LOUSYGREATWALLGM Notebook Deity
tilleroftheearth said: ↑I'm pretty sure that he is getting such good results because he is using a 240GB SandForce drive @ only 55% full.
I'm now getting around 19MB/s with no load and without using the idle tweak @ 70% full on my 100GB drive.
Just a little confused about why 1000MB is to be used.
One of our members (sgilmore, detlev or daveperman?) mentioned on the other SSD thread that 100MB is better for checking your 4K read/write. -
Cause we're all running at default settings.....
Otherwise we end up in discussions like these.
tilleroftheearth said: ↑I'm pretty sure that he is getting such good results because he is using a 240GB SandForce drive @ only 55% full. -
LOUSYGREATWALLGM Notebook Deity
Phil said: ↑Cause we're all running at default settings.....
Otherwise we end up in discussions like these.
Here it is
-
tilleroftheearth Wisdom listens quietly...
Still not at the defaults...
You also need to run it 5 times with Random data. -
It's good enough for me. Looks close to normal capped results. A bit better but that may have to do with it being the 240GB.
-
LOUSYGREATWALLGM Notebook Deity
tilleroftheearth said: ↑Still not at the defaults...
You also need to run it 5 times with Random data.
oh well, brb with the result
slightly better
-
tilleroftheearth Wisdom listens quietly...
LOUSYGREATWALLGM said: ↑hmm, I'm avoiding that much of write on my SSD.
oh well, brb with the result
slightly better
Thanks for re-running those tests for us!
Hmmm... here's mine on battery power, no load and no idle tweak applied.
You're 20% faster on reads and almost 30% faster on writes. Now, is this because of the bigger SSD or the lower % capacity used?
Or, both?
-
-
What I would like to mention is that running the program with random data instead of 1s or 0s will probably give different results on SSDs with the SandForce controller (for this reason I would advise everybody to use random data).
For the 4K tests, I also don't think there is any need for a size bigger than 50MB or more than 2-3 runs (the results will be indicative enough without wearing the NAND too much). -
LOUSYGREATWALLGM Notebook Deity
tilleroftheearth said: ↑..You're 20% faster on reads and almost 30% faster on writes. Now, is this because of the bigger SSD or the lower % capacity used?
Or, both?
And for the capacity used, yes you will get slightly slower scores as the SSD gets filled. But 20%?
EDIT: Is yours Vertex 2? -
stamatisx said: ↑What I would like to mention is that by running the program with random data instead of 1s or 0s will probably give different results to SSDs with the Sandforce controller (for this reason I would advice everybody to use the random data).
For the 4K, I also don't think there is any need for a size bigger than 50MB and no more than 2-3 runs (the results will be indicative enough without wearing the NANDs too much).
Therefore, we may as well limit the sample size to 50MB and 3 runs for the sake of averaging the results. I have performed these tests with various sample sizes, and the difference is generally in the range of ±0.5MB/s for 4K and ±2MB/s for Sequential/512K, which often falls within the inherent margin of error of these benchmarks. If you notice, those are the settings I used in all of my benchmarks, since a little common sense told me that anything further adds little value to the results and simply adds unnecessary wear to the NAND.
Suffice it to say, let's stop adding FUD to the discussion by claiming that consistent benchmark settings are essential across all of these results. There are too many inconsistent variables for the deviation to bear any real significance. Let's leave the test methodology to those with a true test platform, such as Anand or Tom's Hardware.
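To make the margin-of-error point concrete, here's a small Python sketch that averages a handful of runs and reports their spread. The run values are made-up examples, not real measurements:

```python
# Average a few benchmark runs and report their spread.
# The run values below are made-up examples, not real measurements.
from statistics import mean, stdev

def summarize(runs_mb_s):
    """Return (mean, sample standard deviation) of benchmark runs."""
    return mean(runs_mb_s), stdev(runs_mb_s)

# Three hypothetical 4K random-read runs at the 50MB test size:
avg, spread = summarize([19.2, 18.8, 19.5])
print(f"{avg:.2f} MB/s +/- {spread:.2f}")  # 19.17 MB/s +/- 0.35
```

If the run-to-run spread is already well under the ~0.5MB/s differences being compared, extra runs or a bigger test size mostly just add NAND wear without changing the conclusion.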
-
-
tilleroftheearth Wisdom listens quietly...
LOUSYGREATWALLGM said: ↑EDIT: Is yours Vertex 2?
eight35pm said: ↑Anyone know what "Disable Large System Cache" does? That was the only box that was checked when I opened it. Doing "Auto Tweak' unchecked it. Should I leave it unchecked? Should I change anything that "Auto Tweak" does? Thanks.
I would say leave that box unchecked if you have more than 2GB RAM.
Do you notice your system faster/smoother/snappier after rebooting?
I did. -
stamatisx said: ↑What I would like to mention is that by running the program with random data instead of 1s or 0s will probably give different results to SSDs with the Sandforce controller (for this reason I would advice everybody to use the random data).
For the 4K, I also don't think there is any need for a size bigger than 50MB and no more than 2-3 runs (the results will be indicative enough without wearing the NANDs too much). -
My laptop has the PM55 and I have an external HDD that should transfer at 143MB/s (writes about 130MB/s), but it's capped at 90-95MB/s. This is going through eSATA, and my internal drives don't show it because they are just below that threshold. I found in one of these screenshots that it is being set to UDMA mode 6, which is Ultra ATA/133; you can see this in the screenshots. I assume my issue also has to do with the PM55 chipset. I am hoping maybe one of you guys can help, and this info can help you guys too, showing another side of the issue. Also notice that the cache runs at a consistent 95MB/s in those tests... shouldn't the cache be super fast?
http://forum.notebookreview.com/har...ts-can-not-take-full-advantage-fast-ssds.html
Link to the forum where I have been posting about my external.
Will post links to photos here as well, give me 10 mins.
EDIT:
HD Tune showing that it's set to UDMA mode 6:
http://img690.imageshack.us/img690/1839/ultraata133.png
http://img685.imageshack.us/img685/6979/secondinternalhardrive.png
External enclosure i got
http://www.newegg.com/Product/Produ..._-na&AID=10446076&PID=3640576&SID=skim525X832
Gallery of HD Tune pro tests
http://img842.imageshack.us/gal.php?g=14july20100236.png
Internal drive
http://img821.imageshack.us/img821/2812/internallol.png
Now these photos show it at its real speed for like one second (not sure why), but then it's back to capped. From running it 100s of times I got like 3-5 tests at regular speeds... very weird.
http://img837.imageshack.us/img837/6659/wellthisshowsbetteer.png
http://img828.imageshack.us/img828/804/59010033.png
http://img163.imageshack.us/img163/8436/againp.png
http://img841.imageshack.us/img841/7758/weirdhuh.png
Weird, this time I got several tests at full speed / close to it, but here is the test showing it running slow; that happened several times in a row. -
tilleroftheearth Wisdom listens quietly...
DCMAKER,
You forgot to mention that you are using a Rosewill external enclosure that limits your bandwidth to your notebook - even through eSATA.
Try another 'quality' enclosure and your speeds should be 'normal'.
This has nothing to do with your chipset - what you're measuring is Sequential transfer rates - what we're talking about here is 4K Random R/W performance drops.
Good luck. -
tilleroftheearth said: ↑DCMAKER,
You forgot to mention that you are using a Rosewill external enclosure that limits your bandwidth to your notebook - even through eSATA.
Try another 'quality' enclosure and your speeds should be 'normal'.
This has nothing to do with your chipset - what you're measuring is Sequential transfer rates - what we're talking about here is 4K Random R/W performance drops.
Good luck.
Again, I posted the link and said I am using an enclosure, brainiac. And again, my internals show the same issue on the burst; it won't break 100MB/s. HD Tune also shows that the internal and external drives are both running at PATA speeds... this could be another problem with the chipset, not sure. Thought this might help you guys out. So maybe you should read and not jump to conclusions. -
JJB said: ↑What part of comparing apples and apples don't you understand? What, you want us all to rerun all tests at 50MB now? Also why do you have any concern about wear on an Enterprise series intel drive? it will probably last until flash memory is an obsolete antique technology only seen in museums....
After all, if it is decided that all the runs are to be performed with a size of 100MB, I am not the one to object (SLC inside).
-
tilleroftheearth said: ↑What you should have done is test/notice the performance before you applied the tweaks and see if it did any improvements for you.
I would say leave that box unchecked if you have more than 2GB RAM.
Do you notice your system faster/smoother/snappier after rebooting?
I did.
What's weird is that (after restarting my computer) I changed from the "High performance" setting to "HP Recommended" and my 4K random read/write scores jumped from 15/23 to 18/30. I restarted my computer again, still on HP Recommended, and now they are back down to 15/23. I don't know what I did.
I have an Intel 160GB G2 SSD, and am on a dv6tse. -
tilleroftheearth Wisdom listens quietly...
eight35pm said: ↑Yeah, I know, I screwed up. After messing around with it, it doesn't appear to affect my speeds.
What's weird, is that (after restarting my computer) I changed from "High performance" setting to "HP Recommended and my 4k random read/write scored jumped from 15/23 to 18/30. I restarted my computer again, still on HP Recommended, and now they are back down to 15/23. I don't know what I did.
I have an Intel 160GB G2 SSD, and am on a dv6tse.
Hmmm... I was surprised that I noticed it to be honest, but I did.
With the variable scores; did you maybe run the test while the machine was still loading (and therefore pushing the chipset/cpu idle states into 'off' mode)?
I know that they can vary, but that is the only explanation I can think of right now. -
You say I got one; you posted an assumption without reading what I wrote, so of course I am going to react to blatant ignorance of what I just wrote and said. So let's get back to the point, please. I have a very valid point here, even if your ignorance is bliss and disagrees. It may not have to do with the PM55, so maybe it's drivers/BIOS. But it still could have to do with the PM55, so it may help you guys understand the PM55 more fully. Maybe there are other issues too, so please focus.
-
JJB said: ↑What part of comparing apples and apples don't you understand? What, you want us all to rerun all tests at 50MB now? Also why do you have any concern about wear on an Enterprise series intel drive? it will probably last until flash memory is an obsolete antique technology only seen in museums....
I have already done tests on my computer, and at least in my anecdotal experience, I have found that there is little deviation among the benchmark results, regardless of the test size. Does this necessarily apply to all systems? Perhaps not, which is why my selection of words included the term "anecdotal". Nevertheless, while accounting for the occasional outlier, the results should remain fairly consistent, with little dependence on the test size for the benchmark. The only thing we should expect to change is some variation in the average transfer rate, for which a larger test size would reduce the margin of error.
Most of us are using MLC SSDs, so why bother generating additional wear for the sake of some sort of improvement on the margin of error? These tests are hardly a controlled environment, and we are using them merely to observe the trend in SSD performance and behavior under different power policies. No one here (as far as I am aware) is conducting true data acquisition and analysis. That would go beyond the scope of the tools and controls that we have available within this forum. -
tilleroftheearth said: ↑Hmmm... I was surprised that I noticed it to be honest, but I did.
With the variable scores; did you maybe run the test while the machine was still loading (and therefore pushing the chipset/cpu idle states into 'off' mode)?
I know that they can vary, but that is the only explanation I can think of right now.
Hmmm, I'm pretty sure that it wasn't still loading. What I found now is that the first run after changing the power settings is faster, and the second run is slow again. I got faster results for High Performance now, so I guess that's not what's slowing it down. My runs are all over the place now. For example, I got one run of 20/23, and even as fast as 20/37, but also 14/22. -
For anyone who cares whether a standard HDD is affected by the processor idle fix, here are my results:
This test is on a Hitachi 7200RPM 500GB HDD with idle enabled.
This is the run with idle disabled.
The results show pretty much no meaningful change in performance.
Tiller: I have been thinking about our two posts, where I notice no change and you see a noticeable difference in speed in everyday Windows tasks with idle disabled. I wonder how much of this is from the SSD and how much is from the processor not throttling back. I have not performed any meaningful testing to see how much of a difference I get in things like virus scans, file open times, etc., but the fact that I have a quad core has factored into my thoughts on why I may not perceive a difference.
I thought about starting a new thread with people performing CPU-intensive benchmarks; then we'd have data on drives from this thread and could see if there is some way to gauge how much of the improvement is CPU alone. Unfortunately I am getting ready to go on vacation and I need to concentrate on that, so I don't want to start a thread.
Anyway what are your thoughts on this? -
@stamatisx & Jakeworld
Maybe I wasn't being clear. Almost everyone from page 1 on has already run numerous CDM 'default' runs at the 1000MB setting for comparisons. Also, Phil has requested several times to run it at 1000MB. So why keep bringing up other settings? We have a fairly good 'baseline' already and I see absolutely no reason to change things midstream.
And Jakeworld, regarding your comment "Apples to apples is a meaningless analogy, when there are already so many uncertain and uncontrolled variables in the mix." Wouldn't changing the standard run now just add another variable? I know for a fact that I get much higher 4K R/W speeds when I use the 50MB size (especially write speeds). I also know that rerunning the same test at the same settings has variables from run to run but the results are much closer to each other than when I run different sizes.
Sorry for my 'candor' but if you read back it appears that this issue has been discussed and decided upon by our moderator (several times). So why are we even discussing it again? -
I completely agree that different benchmark settings add another variable of uncertainty, but the fact of the matter is, we are looking for general trends. Perhaps a lack of insight prompted the recommendation for default settings, but that doesn't change the prospect that such guidelines are ill-informed. We are all prone to err, and because of that, we should always be open to revising our existing methods.
I called you out because I felt your post carried a tone of condescension. Perhaps your words conveyed a tone you did not intend, and I am willing to reconsider my choice of words. My point is that we should retain our sense of logic and speak sensibly to one another, rather than provokingly question one another. I can empathize with your frustration, but I feel it is reasonable to at least consider opposing viewpoints on the benchmarking method. In this case, I respectfully disagree and stand by stamatisx. I believe that a 50MB test size with 3 runs is not significantly less valid than a 1000MB test size with 5 runs.
If we are considering the margin of error, then that certainly adds another element for consideration. However, since we seem to be treating that aspect as negligible, it's logical to subsequently treat the test size as insignificant, provided the sample size is sufficient. In my own observations, I have found this to be true, so any further increase is wasted productivity.
If consensus leads to delving into this subject matter, I am well acquainted with data acquisition and error analysis, and would be more than happy to provide some insight into that discussion should it materialize. -
The fact that we can compare across results is a nice bonus of all running at the same settings.
I think it's a good idea to change the test size to 50MB with 3 runs to reduce wear. So let's do that: from this point on, test size 50MB with 3 runs. To reduce wear further we may as well only do the 4K random runs.
Phil said: ↑The fact that we can compare across results is a nice bonus of all running at the same settings.
I think it's good idea to change the test size to 50MB with 3 runs to reduce wear. So let's do that, from this point on test size 50MB with 3 runs. To reduce further wear we may as well only do the 4K random runs.
I also agree with Phil that we should concentrate on the 4K random reads and writes because those are the ones mostly affected. -
For comparison with other C300 owners, this is Crucial C300 64GB on GS45 chipset.
On the left without CPU load, on the right with 100% CPU load through HyperPI. -
Here are my numbers at 50MB x 3, note that all 3 of the runs have significantly higher write speeds when using the 50MB test.
No load idle enabled
Full load all threads (Everest stability test) idle enabled
No load idle disabled
Disregard the last image, it's the 1000MB x 5 run I uploaded by mistake. For some reason I can't seem to delete it from the edit page.....
-
'Laptops w. Intel Series 5 chipset can not take full advantage of fast SSDs'
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Phil, Aug 27, 2010.