I have recently concluded that RAID on a laptop is a bad idea and causes a lot of lag, since it's software RAID that uses the CPU. Unless you have a dedicated RAID controller, RAID on a laptop is just for benchmark show-offs; in the real world you get a huge performance hit, which is what I experienced on my Alienware 18 laptop.
Now, on the page for the SAGER laptop here: http://www.sagernotebook.com/Gaming-Notebook-NP9772-S.html
it says at the bottom that it has hardware RAID. Is that true, or is it another marketing lie? Does it have a dedicated RAID card, or how else can they claim it's hardware RAID?
-
Spartan@HIDevolution Company Representative
-
My vote goes for "another marketing lie". Hardware RAID is usually based on SAS; even the Eurocom Panther 5SE laptop, which the company advertises as a mobile server and which is based on the Intel C600/X79 Express chipset, supports only SATA 6Gb/s and only software RAID.
-
Meaker@Sager Company Representative
The overheads involved, in RAID 0 especially, are negligible.
-
Spartan@HIDevolution Company Representative
What do you mean? Can you elaborate, please? I didn't understand your message. What is overhead? I don't know what that word means. -
Meaker@Sager Company Representative
http://en.m.wikipedia.org/wiki/Overhead_(computing)
That's a pretty good definition. -
Well... sort of. It's Intel RST, which isn't what I'd call a proper RAID controller (like, say, a 3ware controller if you're looking for something cheap), but at the same time it's not a purely software-based RAID system like ZFS or Btrfs either.
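For what it's worth, on Linux an RST volume is assembled by the ordinary md (software RAID) driver using Intel's IMSM metadata, which is part of why it sits in that grey zone. Here's a rough sketch of spotting one, assuming a typical /proc/mdstat layout; the rst_containers helper is purely illustrative, not a standard tool:

```python
# Rough sketch: IMSM (Intel RST) containers show up in /proc/mdstat as md
# devices whose metadata line says "super external:imsm", while native Linux
# software RAID arrays carry their own (internal) metadata. The output format
# is assumed from typical mdadm setups and may vary.

def rst_containers(mdstat_path: str = "/proc/mdstat") -> list:
    """Return names of md devices that look like Intel RST/IMSM containers."""
    names = []
    current = None
    with open(mdstat_path) as fh:
        for line in fh:
            if line.startswith("md"):
                current = line.split(":", 1)[0].strip()   # e.g. "md127"
            elif current and "external:imsm" in line:
                names.append(current)
                current = None
    return names

if __name__ == "__main__":
    found = rst_containers()
    if found:
        print("Firmware (RST/IMSM) containers:", ", ".join(found))
    else:
        print("No IMSM containers; any md arrays here are plain Linux software RAID.")
```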
The overhead comes from parity calculations (RAID 5, 6, and the like). For every block written to a physical device, parity information is calculated and written to at least one other device (two for RAID 6). Those parity calculations are the overhead. RAID 0 and RAID 1 have no overhead because there are no parity calculations involved at those levels. Reads have no overhead unless a device in a RAID 5/6 volume fails and the data being read has to be reconstructed from the parity information.
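To make "overhead" concrete, here's a toy sketch of the parity math, assuming a simplified three-member RAID 5 stripe with XOR parity (real controllers rotate parity across members and work on chunk-sized blocks):

```python
# Toy RAID 5 parity sketch: two data blocks plus one XOR parity block.
# Real arrays rotate parity across members and use full chunk-sized stripes;
# this only shows where the extra work on writes comes from.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0 = bytes([0x11] * 8)          # data block on disk 0
d1 = bytes([0x2C] * 8)          # data block on disk 1
parity = xor_blocks(d0, d1)     # extra calculation + extra write = the overhead

# Healthy reads just fetch d0/d1 directly, so there is no read overhead.
# If disk 1 dies, its block is rebuilt from the surviving data and the parity:
rebuilt_d1 = xor_blocks(d0, parity)
assert rebuilt_d1 == d1
print("d1 recovered from parity:", rebuilt_d1 == d1)
```
-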
Spartan@HIDevolution Company Representative
So are you saying that RAID on laptops isn't that bad, even though it's not pure hardware RAID, as long as it's RAID 0? -
I have the 15" version of that notebook (Clevo P770ZM) with three SSDs: one 250GB MX200 and two 500GB BX100s in RAID 0. Running Crystal Disk Mark shows essentially identical CPU usage for both volumes. As for actual performance, the RAID 0 set is faster overall, but it's a little slower in the Random 4KiB torture tests because 4KiB blocks are much smaller than my stripe size.
I can't say for sure about RAID 5. You'd want three identical SATA or M.2 ports to keep the I/O balanced, and that's not an option with the P7 series. -
Spartan@HIDevolution Company Representative
I don't do RAID 5; I'm only interested in RAID 0. -
Spartan@HIDevolution Company Representative
And by the way, what stripe size do you use? -
Spartan@HIDevolution Company Representative
I always used a 16K stripe size on my RAID 0 setups for maximum performance with small files; I don't care about sequential.
Here is a good comparison of stripe sizes:
http://www.hardwaresecrets.com/printpage/Some-thoughts-on-the-performance-of-SSD-RAID-0-arrays/1876 -
RAID 0 with RST is fine. I'm currently using 128KB because that's the default that mdadm gave me (mdadm is one of the Linux software RAID configuration tools). CDM tells me it's good enough for me, so I'll stick with it for the foreseeable future.
-
Spartan@HIDevolution Company Representative
That is the recommended stripe size for HDDs, bro, not for SSDs. Intel recommends a 16K stripe size for RAID 0 on SSDs. -
Meaker@Sager Company Representative
Anyway, no overheads. Performance issues might appear if you want to run a corporate database on it while the CPU is maxed out on other tasks, but then a laptop is perhaps not the best idea for such a setup anyway.
-
Well, maybe I will rebuild it. I got time.
-
Spartan@HIDevolution Company Representative
You don't need to.
Back up your current image using Macrium Reflect, then destroy your RAID array, rebuild it with a 16K stripe size, and restore the image.
Benchmarks will be a bit slower, since a larger stripe size does better on sequential data, but in terms of OS and program snappiness, 16K easily beats a larger stripe size and you will notice it immediately.
Think about it: OS files are mostly small files, so with a 128K stripe size each file effectively ends up on a single disk. That defeats the magic of RAID 0, where ideally a file is split across both disks for better performance, so with that large stripe size you're still effectively running a single-SSD setup for Windows and your programs.
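To put some numbers behind that, here's a toy sketch (assuming a plain two-disk RAID 0 with simple round-robin striping and a file that starts on a chunk boundary; real volumes add metadata and alignment) of which drives a small file actually touches at each stripe size:

```python
# Simplified 2-disk RAID 0 layout. This ignores metadata and alignment; it only
# shows how the chunk (stripe) size decides whether a small file's blocks land
# on one drive or on both.

def disks_touched(file_size: int, chunk_size: int, disks: int = 2) -> set:
    """Which member disks hold at least one chunk of a file starting at offset 0."""
    touched = set()
    for offset in range(0, file_size, chunk_size):
        chunk_index = offset // chunk_size
        touched.add(chunk_index % disks)      # round-robin striping
    return touched

small_file = 64 * 1024                        # a 64 KiB OS/program file
print("16K stripe :", sorted(disks_touched(small_file, 16 * 1024)))    # -> [0, 1]
print("128K stripe:", sorted(disks_touched(small_file, 128 * 1024)))   # -> [0]
```

At 16K the 64 KiB file is split across both SSDs; at 128K it sits entirely on one of them, which is the single-drive effect described above.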
-
I'm well aware of the performance differences that come from different stripe sizes. I've been working with real RAID controllers for years, and more recently with ZFS and Btrfs. My little home server is 12TB across four spindles of Btrfs raid1, and last year I got to deploy a slightly less little 80TB, 24-spindle ZFS raidz2 pool for one of the researchers I support. Good stuff. Anyway, I keep my OS and data on separate volumes, and in my case the RAID 0 set is just data. The OS is on the single MX200.
I will at some point rebuild the array just to see for myself how much difference the different stripe sizes really make with the BX100s. I have a lot of experience with rotating disks but very little with solid state, and none prior to this with solid state in RAID configurations. -
Meaker@Sager Company Representative
Still, nothing Wintel-based that I have seen has managed to match what an IBM iSeries does as standard with its disks.
-
My favorites of all time are Data General's AViiON and CLARiiON systems. I had one of each a few jobs ago, small units but still very good stuff. Then EMC bought out DG and gutted the company for CLARiiON because their own technology was a dead end. I still haven't forgiven them for that.
IBM's storage division, now HGST, had some neat stuff around that time but I never got to play... ahem, work with any of it.
So, I went ahead and rebuilt the RAID set with 16K stripes. The results from Crystal Disk Mark are surprising: 16K and 128K write performance are identical; 16K random read performance is identical to 128K, while 16K sequential read performance is about 1% slower. I was expecting significant differences, but what I'm seeing from CDM is close enough to even that it's not worth rebuilding again.
Random I/O is sometimes a little worse, sometimes a little better than the single MX200, but not by more than about 1-1.5% either way. That's not surprising: SSDs don't benefit much, if at all, from the latency reductions you get from scaling across many rotating spindles. RAID 0 sequential I/O is significantly better, but that's probably down to SATA bottlenecks; my MX200 is an M.2 card, but it uses a SATA controller, so it's not running at full PCIe speed.
Now that I've written that, I think my CDM results aren't surprising after all. Rotating-platter performance is limited by the physically rotating platters, while the limiting factor for SSD performance is the I/O bus, in this case SATA 3's 6Gb/s. All three drives are close to saturating their respective bus channels, so I figure the small hit I'm seeing with 16K stripes is the extra overhead of issuing roughly eight times as many I/O requests.
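Back-of-the-envelope numbers for that last guess (assuming, for simplicity, one request per chunk across a 1 GiB sequential transfer; real queuing and request merging are more involved):

```python
# How many chunk-sized requests does 1 GiB of sequential I/O turn into at each
# stripe size? One-request-per-chunk is a simplification of what CDM and the
# RAID driver actually issue, but it shows the 8x ratio.

transfer = 1024 * 1024 * 1024                  # 1 GiB sequential test
for chunk_kib in (16, 128):
    requests = transfer // (chunk_kib * 1024)
    print(f"{chunk_kib:>3}K chunks -> {requests:>6} requests")

# 128K / 16K = 8, so 16K stripes issue ~8x as many requests for the same
# transfer -- a plausible source of the ~1% sequential-read penalty once the
# SSDs are already near their SATA 3 (6 Gb/s) limits.
```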
Neat. -
Spartan@HIDevolution Company Representative
But how does the overall system snappiness feel? -
No difference? The OS isn't on the RAID set.
Edit: As a note, this system is substantially snappier than my old notebook, an MSI GP60 with a 700GB 5200RPM data drive, so I'm not sensitive enough to game data load times on the new rig to tell whether there are differences between the stripe sizes. -
Meaker@Sager Company Representative
You get some truly massive RAID sets working on them.
But of course it's now going VIOS with flash-based buffering and bit-for-bit replication.