CrystalDiskMark 2.2/3.0
Has anyone honestly investigated this benchmark? Some people are complaining about poor SSD 4K read/write results on the Alienware M17x as measured by CrystalDiskMark 2.2. I never took it seriously from the get-go, but after seeing the benchmark bandied about ad nauseam my curiosity was piqued, so here is what I found:
“The program uses FILE_FLAG_NO_BUFFERING when creating the file handle that it reads to or writes from, but does not provide an aligned buffer to the ReadFile() or WriteFile() calls subsequently used against the handle. This means that the driver must still do some buffering, which enables it to do caching, which will alter the results.”
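The alignment complaint quoted above can be illustrated portably. Here is a minimal sketch (Python, assuming a 4 KiB sector size for illustration): direct I/O such as Windows' FILE_FLAG_NO_BUFFERING requires the buffer's base address to be sector-aligned, which an anonymous mmap guarantees but a plain bytearray does not.

```python
import ctypes
import mmap

SECTOR = 4096  # assumed sector size for illustration

def is_aligned(buf, alignment=SECTOR):
    """Return True if buf's base address is a multiple of `alignment`."""
    addr = ctypes.addressof((ctypes.c_char * len(buf)).from_buffer(buf))
    return addr % alignment == 0

# mmap hands back page-aligned memory, the kind direct I/O requires
aligned_buf = mmap.mmap(-1, SECTOR)
print(is_aligned(aligned_buf))  # True: page-aligned by construction

# a plain bytearray carries no such guarantee; handing one to an
# unbuffered handle means the driver must still do some buffering,
# which is exactly the caching effect the quote describes
plain_buf = bytearray(SECTOR)
print(is_aligned(plain_buf))
```

The `is_aligned` helper is a name introduced here for illustration, not anything from the tool itself.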
“I'm also a bit concerned with the way that the tool initializes its test file, as it seems to create it very quickly. Since most people (me, too) run the tests as administrator, it's possible that the file system is creating the file and being extended with no fill, then the first writes to the file actually cause that fill. This leads to another concern with the tests--that the order of the tests matters because nothing is done to flush cache or reset the file between the tests.”
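The initialization concern is straightforward to guard against in one's own tests: write real data through the whole file and flush it to the device before timing anything. A minimal sketch (Python; the file size and temp path are illustrative, not the tool's actual behavior):

```python
import os
import tempfile

SIZE = 8 * (1 << 20)  # 8 MiB test file, kept small for illustration

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    # write real (incompressible) data so the file is fully allocated,
    # not a sparse extent that gets zero-filled by the first timed write
    f.write(os.urandom(SIZE))
    f.flush()
    os.fsync(f.fileno())  # push dirty pages to the device between test phases

size_on_disk = os.path.getsize(path)
os.remove(path)
```

The `os.fsync` call addresses the second half of the quote: without some flush between tests, earlier writes can still be sitting in cache when the next test starts.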
“Actually, I'm investigating the tool, first. How can I measure anything meaningful without first validating the tool that I'm using to measure? Indeed, there's more caching on the server's controller. But they're also running completely different operating systems. The sequential test is calling WriteFile() in a loop to do blocking I/O against the file. It writes the same buffer again and again. The buffer is a megabyte in size. The docs say that the API doesn't do partial writes, but not checking this in the code seems a bit iffy to me.”
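Checking for short writes is cheap. A hedged sketch of what a careful version of that WriteFile() loop might look like (Python stand-in; `write_all` is a helper name introduced here for illustration):

```python
import os
import tempfile

MiB = 1 << 20

def write_all(fd, data):
    """Write `data` fully, looping on short writes instead of assuming
    every call transfers the whole buffer."""
    view = memoryview(data)
    written = 0
    while view:
        n = os.write(fd, view)  # may legally transfer fewer bytes
        written += n
        view = view[n:]
    return written

fd, path = tempfile.mkstemp()
buf = os.urandom(MiB)  # fresh random data, not one constant reused buffer
total = sum(write_all(fd, buf) for _ in range(4))
os.close(fd)
size = os.path.getsize(path)
os.remove(path)
```

Generating the buffer once from `os.urandom` rather than reusing a constant also sidesteps the duplicate-write caching effect discussed later in the thread.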
“Crystal Mark Disk seems fundamentally flawed in that it always issues a synchronous I/O to the device under test. As a result, it never queues up more than one operation, which means that larger-scale devices (such as enterprise drives or advanced RAID controllers) are never going to get a chance to reach their potential, particularly in the random-access tests. Even commodity-class devices will perform at least a bit better with a deeper queue”
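The queue-depth point can be illustrated with threads: one outstanding request at a time (QD=1, the behavior criticized here) versus many requests in flight (QD=32) that NCQ or a RAID controller is free to reorder. A rough sketch (Python; POSIX-only because of `os.pread`, and the sizes are illustrative):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK, BLOCKS = 4096, 256

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * BLOCKS))

def read_block(i):
    # os.pread lets many threads read one fd at independent offsets
    return len(os.pread(fd, BLOCK, i * BLOCK))

# QD=1: strictly one outstanding request, so the device never sees a queue
qd1 = sum(read_block(i) for i in range(BLOCKS))

# QD=32: up to 32 requests in flight, which the controller can reorder
with ThreadPoolExecutor(max_workers=32) as pool:
    qd32 = sum(pool.map(read_block, range(BLOCKS)))

os.close(fd)
os.remove(path)
```

Both paths read the same data; only the number of simultaneously outstanding requests differs, which is the variable the quote says CrystalDiskMark 2.2 never exercises.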
“That's just the height of incompetence right there. It's sitting in a reused OS buffer so you get an insane CPU cache hit rate, which then gets passed onto the disk cache, which if the algorithm is smart - like Hitachi - will discard the additional writes and instead mark down to write a duplicate of what's already in cache, since it's the same write request.”
“But why was the cache involved in the test? A good benchmark wouldn't fit in cache (unless that's what your final workload will look like, I guess) so the way the program was written has an effect on the outcome of the tests. The point is that the benchmark is poorly written, so arguing about the results is irrelevant.”
“They're completely bogus and have absolutely no legitimacy whatsoever. If you're foolish enough to buy into those numbers, don't come here whining about how you're not getting them.”
“I think they're neither accurate nor reliable indicators of in situ performance.”
Investigating CrystalDiskMark - [H]ard|Forum
-
CrystalDiskMark
History
3.0.x
3.0.0 [2010/03/21]
◦Added NCQ Test (Queue Depth = 32)
◦Added Test Data Selection (Random, 0Fill, 1Fill)
◦Added Zoom Feature (100%~400%; requires IE8 or later)
◦Added Digital Signature
◦Updated Icon (Added 256x256x32bit)
◦Updated About Dialog
1.x - 2.x
2.2.0 [2008/09/15]
◦Fixed Bugs
2.1.0 [2008/02/02]
◦Fixed Bugs
2.0.0 [2008/01/13]
◦Fixed Bugs
1.0.0 [2007/03/31]
◦Initial Release -
Why do the results differ from other benchmark software?
◦The results depend on test file size, test file position, fragmentation, the IDE(PATA)/SATA/RAID/SCSI controller, CPU speed, etc.
◦For some SSDs, the result depends on the test data (Random, 0Fill, 1Fill). Other software may use 0Fill as its test data. -
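That test-data note matters because some controllers (SandForce-based drives, for instance) compress data on the fly: an all-zero pattern shrinks to almost nothing, while random data does not. A quick illustration, using zlib as a stand-in for a drive's compressor:

```python
import os
import zlib

BLOCK = 1 << 20  # 1 MiB of each test pattern

patterns = {
    "0Fill":  bytes(BLOCK),       # all zero bytes
    "1Fill":  b"\xff" * BLOCK,    # all one bits
    "Random": os.urandom(BLOCK),  # effectively incompressible
}

ratios = {}
for name, data in patterns.items():
    ratios[name] = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratios[name]:.1%} of original")
```

A compressing controller writes far less physical flash for the fill patterns, so a benchmark defaulting to 0Fill can report inflated throughput on such drives.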
TurbodTalon Notebook Virtuoso
I use this program for my SSD. What program gives the most accurate benchmark if this one is bogus?
-
HD Tune Pro and CDM produce more or less the same results. Write performance was disabled in HD Tune Pro for the moment, but if I can figure it out I'll post it up.
If we were comparing two different tools' results from the R1 and R2 it would be different, but it's the same tools and the same testing methodology, so there *is* an issue.
-
I have been one of, if not 'the', strongest advocates for SSDs on this site. I told everyone last week that I was purchasing another M17x and would not even consider an HDD. I purchased a single 256 GB SSD (my third computer with a Samsung SSD as the primary drive).
I have noticed that when people have really weak arguments they throw a tantrum and ask for threads to be shut down. That is disrespectful to everyone, and especially to the moderators who have been so gracious to create this forum in the first place. Please stop grossly misquoting me. Thanks. -
-
We'll see how long it takes for Batboy to delete it, although it's just as relevant to the thread as the rest of the posts, IMHO.
-
The last thread was closed because cleverpsuedonym had a hissy fit, and Joe barchuch grossly misquoted me.
Pseud's exact words were: "It's pointless to continue the thread. Falcon keeps answering like we're saying SSDs are bad. OBVIOUSLY they are not. But the question is akin to this: if you bought a car that was advertised to have 300 hp, but you discovered it was really only putting out 200, which puts it far ahead of the average vehicle but significantly lower than advertised for that specific model, would you just let it slide because it's still more than the other cars? Nobody with common sense would do this. In this instance it's the same issue. The SSDs are faster than spindle drives in most areas; on the M17x-R2, and I'm not referring to any other computer, it's hindered. But I'm sure the truth will fall on deaf ears yet again, as majority opinion rules over facts."
Now yet another thread is disrupted by cleverpsuedonym and Lozz. Too funny; always the same people with weak arguments making lots of noise to be heard. -
Some vendors have been accused of "cheating" at benchmarks: doing things that give much higher benchmark numbers but make things worse on the actual intended workload. Users can have very different perceptions of performance than benchmarks suggest, which is why there are so many unhappy Alienware owners. Benchmarks serve a purpose, but any meaningful discussion about them needs to be balanced.
-
I will agree that barchuck misquoted you; he should have directed his post at me & stamatisx. He obviously didn't read the thread thoroughly.
http://forum.notebookreview.com/alienware-m17x/508736-ssd-recommended-m17xr2-5.html#post6590014
Read this again. -
Thanks. Anyway, let's get back on topic: does anyone have any further data that could explain some anomalies with CrystalDiskMark? The R1 did come with Vista plus an upgrade certificate, and the R2 does come with 7. I had suggested a while back that firmware in conjunction with the change in OS may be a factor. I don't recall anyone answering that question.
-
Lozz, why don't you explain the "anomaly" of why the X25-E drops in performance when moved from the R1 to the R2? Obviously no one at Dell or Alienware knows, so please share.
-
Give each thread its due, please. I opened this thread, so the question I asked of you I deem relevant. Too many people are unhappy with their Alienware M17x because of benchmarks. The X25-E and the Core i7 are both Intel products, so it seemed strange to me that this is simply an "Alienware" problem.
-
That is why, unless Dell can provide some meaningful data to suggest otherwise, the rightful assumption is that this is in fact an Alienware problem with the R2 hardware (since it has been covered here ad nauseam that the exact same drive performs significantly differently when simply lifted from an R1 and placed into an R2).
I think you just like to argue with and troll others on this forum who don't like to see opinion masquerading as fact. -
-
Much logic on this forum is like that of a Salem witch trial. Someone makes an observation, then a hypothesis, then it becomes fact, and anyone questioning that fact gets labelled a troll or heretic.
As per the original discussion, someone accused the "Alienware M17x" of ruining their SSD based on a CrystalDiskMark test. Now, if Lozz is correct and every PM55 or HM55 chipset whose performance has been recorded has experienced the 4K problem, then my concerns were 100% correct and this is not an Alienware problem.
Benchmarks are not a holy grail, and although interesting they often don't correlate to real-world applications. OCZ, for example, "recommends ATTO, IOMeter, and PC Mark Vantage for benchmarking SSDs" for the Vertex. Notice OCZ does not mention the benchmark currently under discussion. Rated SSD speeds may vary depending on the benchmark used, drivers, Windows version, BIOS version, and file size. You would have to isolate all of these variables before coming to any meaningful conclusion.
For those of you who think Intel walks on water, may I remind you that in October 2009 Intel pulled a firmware upgrade it had released for the X25-M consumer solid state drive after users complained it destroyed their Windows 7 installations. The package was an SSD Toolbox that included software called SSD Optimizer, with diagnostic tools meant to keep an Intel SSD running at high performance. The firmware update caused driver problems for some users, crashed SSDs, and rendered some Windows 7 systems unusable. I think the current 4K issue is more likely an Intel issue given their past firmware updates, or possibly a chipset and OS issue. So: is the 4K issue caused by the benchmark, Intel firmware, drivers, Windows version, BIOS version, file size, the chipset, or some other variable?
My next thought is that if the chipset is looking like the culprit, is the same true for that chipset on both Vista 64 and Windows 7 64? My understanding is that Vista was optimized exclusively for HDDs, while Windows 7 has better optimization for SSDs. The X25-E, being a year old, was originally designed for Vista, and its original firmware was written accordingly. Has anyone tried comparing that chipset on both Vista and Windows 7 to see what impact the operating system has? As per the Windows Experience Index, there is a vast difference between the scores various users are getting and their practical experience with SSDs.
Edit: here is why I hate benchmarks.
When I first bought my M17x I spent $40 on two popular benchmarks. The benchmarks stuttered, hopped along, and looked terrible. The graphics were so appalling and so pitiful they looked like something I used to run on Windows 3.1 a decade ago. Considering I purchased the fastest laptop GPU in the world, what does that tell me other than that the 5870s are so new the benchmark needs to be rewritten? I threw the benchmarks in the garbage where they belonged, as they were nonsensical. Next year, like clockwork, they will update the benchmark and make the 5870 look like heaven while everything else struggles. The main purpose of benchmarks is to sell equipment. The same goes for SSD benchmarks. Sure, CrystalDiskMark is interesting, but I wouldn't lose any sleep over it, and I certainly wouldn't advise buying a new SSD over the issue. Your Alienware M17x runs better with any brand of SSD.
If it truly is the mobile chipset, then sadly this is yet another mark against mobile gaming. Perhaps Intel has trimmed performance to save battery power; wow, what a novel idea.
Alienware M17x & CrystalMarkDisk 2.2
Discussion in 'Alienware 17 and M17x' started by FalconMachV, Aug 13, 2010.