The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.
← Previous page | Next page →

    'Laptops w. Intel Series 5 chipset can not take full advantage of fast SSDs'

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Phil, Aug 27, 2010.

  1. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    JJB,

    I'm not speaking for OCZ, but all manufacturers have had failures in the first days of releasing their SSDs, and I wouldn't be surprised if they still have failures three years on, too.

    Your usage pattern seems right in line with what Intel is expecting from their users (7*30*20GB/day=4.2TB) so you're not 'abusing' your drive.

    What ocztony is saying is that there is a difference between normal use and simply benchmarking (with certain tools) an SSD over and over and expecting it to perform the same.

    A good example is the many stories I read of people initially buying the Intel G1's and killing them in a few short weeks in a server setup.

    With the proper SSD in the servers (an Intel X25-E SSD), the 'issues' were resolved.

    So, what this is indicating to me is that today's SSD's are targeted very narrowly to consumers or enterprise users - know what your usage model is and buy appropriately.
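    As a quick sanity check on the figure quoted above, here is a minimal sketch of the arithmetic. The 7 × 30 days come straight from the post (not from any official Intel endurance spec), and TB is counted the decimal way drive vendors use.

```python
# Reproducing the endurance arithmetic quoted above; the 7 * 30 day window is
# taken as-is from the post, and 1 TB is counted as 1000 GB.
gb_per_day = 20          # Intel's often-quoted 20 GB/day of host writes
days = 7 * 30
total_tb = gb_per_day * days / 1000
print(f"{total_tb} TB of host writes")  # -> 4.2 TB of host writes
```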
     
  2. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55

    First of all, Ocztony, thank you very much for the heads up concerning the extended use of benchmarking tools and their impact on the faster wear of SSDs. I am sure MLC owners will really appreciate it and should think twice before they start benching.

    Secondly, I really appreciated the links you provided and especially the one with the batch file.

    The thing is, though, that this thread is dedicated to the low performance of SSDs (of any brand, not a specific one), and specifically to the low 4K random reads and writes that PM55/HM55 chipset based laptop owners experience. For this reason I would like to ask your opinion on the matter: is this behavior normal, or is something wrong here?

    I am pretty sure that members of this forum, and of this particular thread, shouldn't have to perform 200+ benchmark runs and try various drivers, BIOS settings, registry tweaks, etc. in order to find out what gives them the best results and performance.
    Why would someone pay a premium and buy a Vertex 2 instead of a Vertex, or in my case an Intel X25-E instead of an X25-V, if he gets the same capped 4K performance? (I know there are many other reasons, but you get my point.)
    Isn't it more important for a consumer to get the performance that he/she paid for, instead of worrying about the wear level of the NAND while trying to figure out what's wrong with the drive? (In the end, that's what the 3-year warranty is for.)
     
  3. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    I just ran CDM while on battery, where the Envy 15 throttles the CPU to a 9x multiplier (1.2GHz). Surprisingly, with the 'idle disabled' tweak the results were almost as good as when plugged in (see below). The interesting thing is that my CPU power draw on battery is ~8.5W vs. 19.5W when plugged in, and the idle temps are 46C on core 0 and 39C on core 1. This indicates to me that even though the CPU is throttled, the C1E power state is still disabled, giving (almost) full SSD performance without causing excessive idle temps and power draw :).

    After looking at all the registry options for CPU idle settings, core parking and other power settings, there must be a way to limit the 'idle disabled' state to just one core (or possibly just one thread) and keep reasonable temps while still getting full SSD speeds. Anyone who knows how to adjust these registry settings further, please see if there is a way to make this work as a permanent fix with reasonable CPU temps....

    IDLE DISABLED ON BATTERY X9 MULTI (THROTTLED).PNG On battery with 'idle disabled' and HP throttled CPU x9 multiplier (1.2Ghz), CPU temp 46C / 39C.
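    For anyone who would rather script the 'idle disabled' tweak than edit the registry by hand, here is a minimal sketch. It assumes the tweak maps to the hidden "Processor idle disable" power setting (GUID 5d76a2ca-e8c0-402f-a133-2158492d58ad, as commonly reported) and uses plain powercfg calls from an elevated prompt; treat the GUID and the per-plan split as assumptions to verify on your own machine.

```python
# Sketch only: toggles the hidden "Processor idle disable" power setting via powercfg.
# Assumption: the 'idle disabled' tweak discussed in this thread corresponds to this GUID.
# Run from an elevated (administrator) prompt.
import subprocess

IDLE_DISABLE = "5d76a2ca-e8c0-402f-a133-2158492d58ad"  # assumed GUID of "Processor idle disable"

def powercfg(*args):
    cmd = ["powercfg"] + list(args)
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Unhide the setting so it also shows up in the Power Options GUI.
powercfg("-attributes", "SUB_PROCESSOR", IDLE_DISABLE, "-ATTRIB_HIDE")
# Disable idle (value 1) when plugged in; set the DC line too if you want the same on battery.
powercfg("-setacvalueindex", "SCHEME_CURRENT", "SUB_PROCESSOR", IDLE_DISABLE, "1")
# powercfg("-setdcvalueindex", "SCHEME_CURRENT", "SUB_PROCESSOR", IDLE_DISABLE, "1")
powercfg("-setactive", "SCHEME_CURRENT")
```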
     
  4. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    I agree. Although the 'first day failures' you mention are, I would think, a completely different issue than the wear levels we're talking about here.

    My point is, for my company's business usage (we have 5 Envy 15's so far w/ SSDs), life expectancy is a non-issue at our current (and estimated future) usage levels. My numbers that you quoted as appearing 'right in line with what Intel is expecting...' are just that: well within the range of the expected long-term capabilities of the SSDs we spec'd. Actually, since my machine is the 'test bed' for the company, it has significantly higher R/W volume than our other machines. Considering this and looking very conservatively at the wear level, these drives will far outlast the next 2 or 3 notebook upgrades we will most likely make over the next 4 to 6 years. And by that time I truly expect that the current drives will be basically obsolete and relegated to external portable / backup drive duty...

    So again, no worries at all about premature wear of these drives in our application. I would think that most people attempting to use SSDs in a server (or other high-usage) application would be smart enough to work out the numbers and select the appropriate enterprise-type drive.....
     
  5. ssassen

    ssassen Newbie

    Reputations:
    32
    Messages:
    2
    Likes Received:
    0
    Trophy Points:
    5
    I've done some digging as well, but can't find any such registry settings either (yet). For what it's worth, I wholeheartedly agree with you on the lifespan of these SSDs, a few runs of CDM won't suddenly turn them into expensive paperweights.

    Now if only my Crucial C300 256GB would arrive, I could join in the search for a solution. I'm not scared of a little creative registry editing; that's what we have backups for :D

    Cheers,

    Sander Sassen - Hardware Analysis
     
  6. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    I hope you can find something that works in the registry. I played with a few settings with no luck (yet). FYI, there is a 'control set 001' and a 'control set 002' that at first appeared to contain the same power options, but after comparing them, some are actually different. Any idea what the set '1' vs. set '2' differences are?
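    For what it's worth, ControlSet001/002 are normally just numbered copies of the machine configuration; CurrentControlSet is a link to whichever one the registry's SYSTEM\Select key marks as Current, and the other is typically the LastKnownGood set, which is why some values differ between them. A minimal read-only sketch to check which set is live (Windows only, standard-library winreg):

```python
# Read-only sketch: report which ControlSetNNN the system is actually using.
import winreg

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\Select") as key:
    current, _ = winreg.QueryValueEx(key, "Current")          # e.g. 1 -> ControlSet001
    last_good, _ = winreg.QueryValueEx(key, "LastKnownGood")  # usually the other set

print(f"CurrentControlSet -> ControlSet{current:03d}")
print(f"LastKnownGood     -> ControlSet{last_good:03d}")
```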
     
  7. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
    If someone can come up with a solution which doesn't cause overheating, I wouldn't mind disabling C-states. For now, the best way is to bombard Intel with requests for new chipset software.
     
  8. Tinderbox (UK)

    Tinderbox (UK) BAKED BEAN KING

    Reputations:
    4,740
    Messages:
    8,513
    Likes Received:
    3,823
    Trophy Points:
    431
    Can somebody write up a message/alert that can be posted on other forums so that we can get as much publicity as possible? Just post the alert here and I will post it on all my other forums.

    Basically the problem in a nutshell; we can also link to this thread for anybody who might be interested.
     
  9. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    Has anyone tried running Crystal with a PM55 and normal hard disk drive?

    And Seagate XT?

    I wonder how those will be affected, if they're affected at all.

    Here's what I ~ posted on Macrumors:
     
  10. Tinderbox (UK)

    Tinderbox (UK) BAKED BEAN KING

    Reputations:
    4,740
    Messages:
    8,513
    Likes Received:
    3,823
    Trophy Points:
    431
    Well, that's five other forums made aware of the problem. If everybody does this we should have a fix soon :)
     
  11. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55
    I am still waiting for a reply from Intel concerning this matter after the email I sent. Did anybody receive anything in the meantime?
     
  12. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    I sent emails to Techreport and Tomshardware. Would like to email Anand too but can't find his contact details.
     
  13. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55
    +1 to that. I had been considering emailing Anand for a long time now, but I wanted more results first. I guess now is the time.
     
  14. NotEnoughMinerals

    NotEnoughMinerals Notebook Deity

    Reputations:
    772
    Messages:
    1,802
    Likes Received:
    3
    Trophy Points:
    56
    +1 to stamatisx and Phil, keep up the good work guys

    I can't contribute so much because I'm still debating and bargain hunting for SSDs but really appreciate the investigation
     
  15. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    Emailed Anand too.
     
  16. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    Well, maybe with the added exposure from more forums it will have an impact this time. The Envy 15 owners' lounge forums (on #3 now and 2500+ pages) made a concerted effort to get any info / fix out of Intel and HP as far back as last December with no results whatsoever, and that was with several hundred requests to both parties.....
     
  17. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    Just an FYI: I get 22 MB/s 4K random read without sacrificing battery life or using this Proc idle tweak.
     
  18. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    Didn't you change the default settings? I think you did.
     
  19. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    You mean the Proc Idle? from default (enable to disable)? No.
    The left photo is the fresh install with some OS tweaks/optimization like we often do after OS install but no Processor idle tweaking. Link
     
  20. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    With the tweak you posted 39MB/sec random read. I don't think any Sandforce drive can do that at default settings with any chipset.

    Try running CDM at the default 1000MB size. In your results it was set to 100MB.

    You might want to only run the 4K benchmark, maybe one or two runs, to reduce wear.
     
  21. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    I'm pretty sure that he is getting such good results because he is using a 240GB SandForce drive @ only 55% full.

    I'm now getting around 19MB/s with no load and without using the idle tweak @ 70% full on my 100GB drive.
     
  22. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    Yep, 100MB

    Just a little confused why 1000MB is to be used :confused:
    One of our members (sgilmore, detlev or daveperman?) here mentioned on the other SSD thread that 100MB is better to check your 4K read/write
     
  23. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    Cause we're all running at default settings.....

    Otherwise we end up in discussions like these.

    Maybe so, let's see what his results are with default settings.
     
  24. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    Agreed. Should be apples to apples. Was only wondering which is more accurate for checking 4K read/write :)

    Here it is
    [IMG]
     
  25. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Still not at the defaults...

    You also need to run it 5 times with Random data.
     
  26. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    It's good enough for me. Looks close to normal capped results. A bit better but that may have to do with it being the 240GB.
     
  27. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    Hmm, I'm avoiding that many writes on my SSD.
    Oh well, brb with the result :)

    slightly better
    [IMG]
     
  28. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631


    Thanks for re-running those tests for us! :)

    Hmmm... here's mine on battery power, no load and no idle tweak applied.

    You're 20% faster on reads and almost 30% faster on writes. Now, is this because of the bigger SSD or the lower % capacity used?

    Or, both? :confused:
     

    Attached Files:

  29. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55
    What I would like to mention is that running the program with random data instead of 1s or 0s will probably give different results on SSDs with the SandForce controller (for this reason I would advise everybody to use the random data option).
    For the 4K, I also don't think there is any need for a size bigger than 50MB and no more than 2-3 runs (the results will be indicative enough without wearing the NANDs too much).
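    To illustrate why the random-data option matters on SandForce drives, here is a rough sketch. zlib is only a stand-in for whatever compression the controller actually applies, and the 5MB buffer size is arbitrary; the point is the contrast between an all-zero fill and random data.

```python
# Rough illustration: an all-zero test pattern compresses to almost nothing,
# random data does not, so a compressing controller (SandForce) writes far less
# NAND for the former and benchmarks it unrealistically fast.
import os
import zlib

size = 5 * 1024 * 1024  # 5 MB sample, kept small on purpose
zero_fill = bytes(size)
random_fill = os.urandom(size)

for name, buf in [("zero fill", zero_fill), ("random data", random_fill)]:
    ratio = len(zlib.compress(buf)) / size * 100
    print(f"{name:11s} compresses to {ratio:5.1f}% of its original size")
```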
     
  30. LOUSYGREATWALLGM

    LOUSYGREATWALLGM Notebook Deity

    Reputations:
    172
    Messages:
    1,053
    Likes Received:
    10
    Trophy Points:
    56
    I saw people saying bigger SSDs are faster (40GB, 80GB, 120GB), but not sure about 120GB vs 240GB :eek:

    And for the capacity used, yes you will get slightly slower scores as the SSD gets filled. But 20%? :confused:

    EDIT: Is yours Vertex 2?
     
  31. Jakeworld

    Jakeworld Notebook Consultant

    Reputations:
    116
    Messages:
    115
    Likes Received:
    0
    Trophy Points:
    30
    Agreed. While consistency is certainly important in data acquisition, the difference between 3 runs at 50MB and 5 runs at 1000MB is splitting hairs when it comes to an SSD. If we were concerned about reducing the margin of error, the latter option presents some merit, but due to the vast deviation among systems and their respective configuration, I would hardly refer to this investigation as controlled analysis.

    Therefore, we may as well limit the sample size to 50MB and 3 runs for the sake of averaging the results. I have performed these tests with various sample sizes, and the difference is generally in the range of ±0.5 MB/s for 4K and ±2MB/s for Sequential/512K, which represents a range that often falls within the latent margin of error associated with these benchmarks. If you notice, those are the settings I had used in all of my benchmarks, since a little common sense told me that anything further adds little value to the results, and simply adds unnecessary wear to the NAND.

    Suffice it to say, let's stop adding FUD to the discussion by claiming that identical benchmark settings are essential across all of these results :) There are too many inconsistent variables for the deviation to bear any real significance. Let's leave the test methodology to those with a true test platform, such as Anand or Tom's Hardware.
     
  32. eight35pm

    eight35pm Notebook Evangelist

    Reputations:
    20
    Messages:
    383
    Likes Received:
    0
    Trophy Points:
    30
    Anyone know what "Disable Large System Cache" does? That was the only box that was checked when I opened it. Doing "Auto Tweak' unchecked it. Should I leave it unchecked? Should I change anything that "Auto Tweak" does? Thanks.
     
  33. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    No, I'm running an Inferno 100GB model.



    What you should have done is test/notice the performance before you applied the tweaks and see if it did any improvements for you. ;)

    I would say leave that box unchecked if you have more than 2GB RAM.

    Do you notice your system faster/smoother/snappier after rebooting?

    I did. :)
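    If you want to see what that checkbox actually changed, it most likely maps to the standard LargeSystemCache registry value (that it's what this particular tweak tool toggles is an assumption, though the value itself is a well-known one). A read-only sketch:

```python
# Read-only sketch: check the LargeSystemCache value the tweak tool likely toggles.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "LargeSystemCache")
    # 0 favours program working sets, 1 favours the system file cache.
    print("LargeSystemCache =", value)
except FileNotFoundError:
    print("LargeSystemCache value not present")
```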
     
  34. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    What part of comparing apples to apples don't you understand? What, you want us all to rerun all tests at 50MB now? Also, why do you have any concern about wear on an enterprise-series Intel drive? It will probably last until flash memory is an obsolete antique technology only seen in museums....
     
  35. DCMAKER

    DCMAKER Notebook Deity

    Reputations:
    116
    Messages:
    934
    Likes Received:
    0
    Trophy Points:
    0
    My laptop has the PM55 and I have an external HDD that should transfer at 143MBps with writes around 130MBps, but it's capped at 90-95MBps. This is going through eSATA, and my internals don't show it because they are just below that threshold. I found in one of these screenshots that it is being set to UDMA mode 6, which is Ultra ATA/133; you can see this in the screenshots. I assume my issue also has to do with the PM55 chipset. I am hoping maybe one of you guys can help / this info can help you guys too by showing another side of the issue. Also notice that the cache runs at a consistent 95MBps in those tests.... shouldn't the cache be super fast?


    http://forum.notebookreview.com/har...ts-can-not-take-full-advantage-fast-ssds.html


    Link to the forum where I have been posting about my external.


    Will post links to the photos here as well, give me 10 mins.

    EDIT:
    HD Tune showing that it's set to UDMA mode 6?
    http://img690.imageshack.us/img690/1839/ultraata133.png

    http://img685.imageshack.us/img685/6979/secondinternalhardrive.png

    External enclosure i got
    http://www.newegg.com/Product/Produ..._-na&AID=10446076&PID=3640576&SID=skim525X832

    Gallery of HD Tune pro tests
    http://img842.imageshack.us/gal.php?g=14july20100236.png

    Internal drive
    http://img821.imageshack.us/img821/2812/internallol.png

    Now these photos show it at its real speed for like one sec... not sure why, but then it goes back to capped.... from running it 100s of times I got like 3-5 tests at regular speeds... very weird

    http://img837.imageshack.us/img837/6659/wellthisshowsbetteer.png

    http://img828.imageshack.us/img828/804/59010033.png

    http://img163.imageshack.us/img163/8436/againp.png

    http://img841.imageshack.us/img841/7758/weirdhuh.png

    Weird, this time I got several tests at full speed / close to it, but here is a test showing it running slow; this happened several times in a row.
     
  36. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    DCMAKER,

    You forgot to mention that you are using a Rosewill external enclosure that limits your bandwidth to your notebook - even through eSATA.

    Try another 'quality' enclosure and your speeds should be 'normal'.

    This has nothing to do with your chipset - what you're measuring is Sequential transfer rates - what we're talking about here is 4K Random R/W performance drops.

    Good luck.
     
  37. DCMAKER

    DCMAKER Notebook Deity

    Reputations:
    116
    Messages:
    934
    Likes Received:
    0
    Trophy Points:
    0

    Again, I posted the link and said I am using an enclosure, brainiac. And again, my internals show the same issue on the burst... they won't break 100MBps. HD Tune also shows that the internal and external are both running at PATA speeds.... this could be another problem with the chipset.... not sure. Thought this might help you guys out. So maybe you should read and not jump to conclusions.
     
  38. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55
    Personally, I am not concerned about the wear of my drive. I am concerned about people who have MLCs and are willing to run endless benchmarks at their SSDs' expense in order to help the community. That's the reason I am pointing this out, so we can get the results we want with the minimum damage possible. You can try it out: run one benchmark with a size of 50MB and one of 100MB and check the differences, then it's up to you to decide how you want to proceed.
    After all, if it is decided that all runs should be performed at a size of 100MB, I am not the one to object ( SLC inside ;) ).
     
  39. eight35pm

    eight35pm Notebook Evangelist

    Reputations:
    20
    Messages:
    383
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, I know, I screwed up. After messing around with it, it doesn't appear to affect my speeds.

    What's weird is that (after restarting my computer) I changed from the "High performance" setting to "HP Recommended" and my 4K random read/write scores jumped from 15/23 to 18/30. I restarted my computer again, still on HP Recommended, and now they are back down to 15/23. I don't know what I did.

    I have an Intel 160GB G2 SSD, and am on a dv6tse.
     
  40. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631

    Hmmm... I was surprised that I noticed it to be honest, but I did.

    With the variable scores; did you maybe run the test while the machine was still loading (and therefore pushing the chipset/cpu idle states into 'off' mode)?

    I know that they can vary, but that is the only explanation I can think of right now.
     
  41. DCMAKER

    DCMAKER Notebook Deity

    Reputations:
    116
    Messages:
    934
    Likes Received:
    0
    Trophy Points:
    0
    You say I got one. You posted an assumption without reading what I wrote... so of course I am going to react to blatant ignorance that ignores what I just wrote and said. So let's get back to the point please. I have a very valid point here, even if your ignorance is bliss and disagrees. It may not have to do with the PM55, so maybe it's drivers/BIOS. But it still could have to do with the PM55, so it may help you guys understand the PM55 more fully. Maybe there are other issues too. So please focus.
     
  42. Jakeworld

    Jakeworld Notebook Consultant

    Reputations:
    116
    Messages:
    115
    Likes Received:
    0
    Trophy Points:
    30
    Even though this quote is not directed at me, I *really* do not like your candor. Did you read my post at all? Apples to apples is a meaningless analogy when there are already so many uncertain and uncontrolled variables in the mix. There is no reason to attack other users for using sensible logic. Ad hominem attacks are hardly respectable in any debate. Try viewing the opposing perspective to better construct your own argument, rather than attacking another's intelligence.

    I have already done tests on my computer, and at least in my anecdotal experience, I have found that there is little deviation among the benchmark results, regardless of the test size. Does this necessarily apply to all systems? Perhaps not, which is why my selection of words included the term "anecdotal". Nevertheless, while accounting for the occasional outlier, the results should remain fairly consistent, with little dependence on the test size for the benchmark. The only thing we should expect to change is some variation in the average transfer rate, for which a larger test size would reduce the margin of error.

    Most of us are using MLC SSDs, so why bother generating additional wear for the sake of some sort of improvement on the margin of error? These tests are hardly a controlled environment, and we are using them merely to observe the trend in SSD performance and behavior under different power policies. No one here (as far as I am aware) is conducting true data acquisition and analysis. That would go beyond the scope of the tools and controls that we have available within this forum.
     
  43. eight35pm

    eight35pm Notebook Evangelist

    Reputations:
    20
    Messages:
    383
    Likes Received:
    0
    Trophy Points:
    30

    Hmmm, I'm pretty sure that it wasn't still loading. What I found now is that the first run after changing the power settings is faster, and the second run is slow again. I got faster results for High Performance now, so I guess that's not what's slowing it down. My runs are all over the place now. For example, I got one run of 20/23, even as fast as 20/37, but also 14/22.
     
  44. othonda

    othonda Notebook Deity

    Reputations:
    717
    Messages:
    798
    Likes Received:
    15
    Trophy Points:
    31
    For anyone who cares whether a standard HDD is affected by the processor idle fix, here are my results:

    This test is on a Hitachi 7200RPM 500Gig HD with the idle enabled

    [IMG]

    This is the run with idle disabled

    [IMG]

    The results are pretty much no meaningful change in performance.

    Tiller: I have been thinking about our two posts, where I notice no change and you see a noticeable difference in speed on everyday Windows tasks with idle disabled. I wonder how much of this is from the SSD and how much is from the processor not throttling back. I haven't performed any meaningful testing to see how much of a difference I get in things like virus scans, file open times, etc., but the fact that I have a quad-core has factored into my thoughts as to why I may not perceive a difference.

    I thought about starting a new thread where people run CPU-intensive benchmarks; combined with the drive data from this thread, we could then try to gauge how much of the improvement is CPU alone. Unfortunately I am getting ready to go on vacation and I need to concentrate on that, so I don't want to start a thread.

    Anyway what are your thoughts on this?
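    In the spirit of that suggestion, a trivial CPU-only timer could help separate the two effects: run it with idle enabled and again with idle disabled, and if the elapsed time barely moves, the snappier everyday feel is more likely coming from the storage side. This is only a sketch, not the benchmark the thread would settle on.

```python
# Minimal CPU-bound timer: compare elapsed time with idle enabled vs. disabled.
import hashlib
import time

def cpu_bound(iterations=2_000_000):
    h = hashlib.sha1()
    for i in range(iterations):
        h.update(i.to_bytes(4, "little"))
    return h.hexdigest()

start = time.perf_counter()
cpu_bound()
print(f"elapsed: {time.perf_counter() - start:.2f} s")
```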
     
  45. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    @stamatisx & Jakeworld

    Maybe I wasn't being clear. Almost everyone from page 1 on has already run numerous CDM "default" runs at the 1000MB setting for comparisons. Also, Phil has requested several times to run it at 1000MB. So why keep bringing up other settings? We have a fairly good 'baseline' already, and I see absolutely no reason to change things midstream.

    And Jakeworld, regarding your comment "Apples to apples is a meaningless analogy, when there are already so many uncertain and uncontrolled variables in the mix." Wouldn't changing the standard run now just add another variable? I know for a fact that I get much higher 4K R/W speeds when I use the 50MB size (especially write speeds). I also know that rerunning the same test at the same settings has variables from run to run but the results are much closer to each other than when I run different sizes.

    Sorry for my 'candor' but if you read back it appears that this issue has been discussed and decided upon by our moderator (several times). So why are we even discussing it again?
     
  46. Jakeworld

    Jakeworld Notebook Consultant

    Reputations:
    116
    Messages:
    115
    Likes Received:
    0
    Trophy Points:
    30
    I completely agree that different benchmark settings add another variable of uncertainty, but the fact of the matter is, we are looking for general trends. Perhaps a lack of insight prompted the recommendation for default settings, but that doesn't change the prospect that such guidelines are ill-informed. We are all prone to err, and because of that, we should always be open to revising our existing methods.

    I called you out because I felt your post carried a tone of condescension. Perhaps your words conveyed a tone you did not intend, and I am willing to reconsider my choice of words. My point is that we should retain our sense of logic and speak sensibly to one another, rather than provokingly question one another. I can empathize with your frustration, but I feel it is reasonable to at least consider opposing viewpoints with respect to the benchmarking method. In this case, I respectfully disagree and stand by stamatisx. I believe that a test size of 50MB with 3 runs is not significantly less valid than a 1000MB test size with 5 runs.

    If we are considering the margin of error, then that certainly adds another element for consideration. However, since we seem to be treating that aspect as negligible, it's logical to subsequently treat the test size as insignificant, provided the sample size is sufficient. In my own observations, I have found this to be true, so any further increase is wasted productivity.

    If consensus leads to delving into this subject matter, I am well acquainted with data acquisition and error analysis, and would be more than happy to provide some insight into that discussion should it materialize.
     
  47. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    The fact that we can compare across results is a nice bonus of all running at the same settings.

    I think it's a good idea to change the test size to 50MB with 3 runs to reduce wear. So let's do that: from this point on, test size 50MB with 3 runs. To reduce wear further we may as well only do the 4K random runs.
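    For a rough feel of how much NAND wear this saves per benchmark session, here is a back-of-envelope sketch. It simply assumes each enabled write test writes (test size × run count), which CrystalDiskMark may not match exactly, so treat the numbers as orders of magnitude only.

```python
# Back-of-envelope estimate of host writes per CrystalDiskMark session.
# Assumption: each enabled write test writes roughly (test size x number of runs).
def written_gb(size_mb, runs, write_tests):
    return size_mb * runs * write_tests / 1024

old = written_gb(1000, 5, 4)  # 1000 MB, 5 runs, all four write tests
new = written_gb(50, 3, 1)    # 50 MB, 3 runs, 4K random write only
print(f"old settings: ~{old:.1f} GB written, new settings: ~{new:.2f} GB written")
```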
     
  48. stamatisx

    stamatisx T|I

    Reputations:
    2,224
    Messages:
    1,726
    Likes Received:
    0
    Trophy Points:
    55
    I posted these two runs just to confirm what Phil and Jakeworld are saying.

    [IMG]

    I also agree with Phil that we should concentrate on the 4K random reads and writes because those are the ones mostly affected.
     
  49. Phil

    Phil Retired

    Reputations:
    4,415
    Messages:
    17,036
    Likes Received:
    0
    Trophy Points:
    455
    For comparison with other C300 owners, this is Crucial C300 64GB on GS45 chipset.

    [IMG] [IMG]

    On the left without CPU load, on the right with 100% CPU load through HyperPI.
     
  50. JJB

    JJB Notebook Virtuoso

    Reputations:
    1,063
    Messages:
    2,358
    Likes Received:
    3
    Trophy Points:
    56
    Here are my numbers at 50MB x 3, note that all 3 of the runs have significantly higher write speeds when using the 50MB test.

    NO LOAD IDLE ENABLED.PNG No load idle enabled
    FULL LOAD ALL THREADS IDLE ENABLED (USING EVEREST STABILITY TEST).PNG Full load all threads (Everest stability test) idle enabled
    no load disabled.PNG No load idle disabled

    Disregard last image, it's the 1000MB x 5 I uploaded by mistake. For some reason I can't seem to delete it from the edit page.....
     

    Attached Files:

← Previous page | Next page →