The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    The new SSD Thread (Benchmarks, Brands, News and Advice)

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by Les, Jan 14, 2008.

  1. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    great if you have a bluescreen and the last half gb worth of data you've written on disk gets lost. or a powerout, for that matter. have fun.
     
  2. Kamin_Majere

    Kamin_Majere =][= Ordo Hereticus

    Reputations:
    1,522
    Messages:
    2,680
    Likes Received:
    0
    Trophy Points:
    55
    You're the guy that keeps my company killing trees, aren't you?
    Gotta have paper backups for everything in case the computer crashes... the server crashes... and the company-housed backup server crashes... all at the same time :eek:

    j/k :p :)
     
  3. Cape Consultant

    Cape Consultant SSD User

    Reputations:
    153
    Messages:
    1,149
    Likes Received:
    1
    Trophy Points:
    55
    Is that why caches are so small on hard drives? I guess so :) Just never thought of it that way. I always thought it was because they were cheap :) Let's get the scoop on the new Samsung? Where is it? What is it? Is it busy off playing somewhere with the SanDisk G3 ???
     
  4. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    the problem with any form of write caching is that, at any point in time, the cache may fail to be synced back while flushing. and if you think that doesn't happen, or can't be something big: i've read about one company where they had caching enabled on the raid controller (a raid5 afaik) for their database server. the system performed much better thanks to the cache.

    the problem was, a commit to the database got reported back as successful the moment it was written to cache. then a disk failed while the cache was being written to disk, even though the write had already been reported successful to the database. afaik this held stable for a while, until the cache filled up and started reporting errors by itself. but by that point, the data in the cache was already hopelessly lost.

    now imagine it had been an important database, with half a gig of cache (as suggested above). let's just hope this didn't happen at a hospital with important patient information they gathered to save your life (i know, it's always the same example.. but it _could_).

    and no kamin, i hate printing to paper, i think it's completely useless. i use a home server at home for all my data backup, and will make it sync to my parents' home server as soon as i leave their house (3 months to go). then the two home servers will sync with each other (only the important data, as it goes over the web), so even a burned-down house can't make me lose my data.

    we're building up the same in our company. our city is known to have a big earthquake roughly every 500 years, and it's been over 500 years since the last one. so we're actively working on a 100% fallback system in another city 100 km away to let the company stay alive in case of a huge catastrophe.

    the other thing i don't like about a cache: imagine an ocz that in the worst case can write 4 times a second to disk (that's 16kb/s then). now imagine your cache is full, 512gb filled up with data to write. and imagine (as said, worst case) the commit goes at that 16kb/s because the drive has problems remapping free space. it would take over a year to commit.

    now while that exact case can't happen, a cache can "overflow" and then have to wait for committing to disk. this can result in a stutter so big that you think your system froze and you just turn it off (and then hope your cache has a battery large enough to allow writing down that one-year commit :))


    edit: that, my dear friends, was a much too big post :)
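
    To make the write-back risk concrete, here is a minimal sketch (POSIX C; the file name and payload are made up for illustration): a plain write() is typically acknowledged as soon as the data sits in a volatile cache, and only after a successful fsync() is it reasonable to report the commit as saved.

        /* Minimal illustrative sketch, assuming POSIX semantics.
         * write() returns once the data is buffered (page cache / drive cache),
         * which is exactly the "reported successful the moment it was written
         * to cache" situation; fsync() asks for the data to reach stable
         * storage before success is reported further up the chain. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            const char msg[] = "commit #42\n";   /* made-up payload */
            int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) { perror("open"); return 1; }

            /* Acknowledged as soon as it is buffered -- not yet durable. */
            if (write(fd, msg, sizeof msg - 1) != (ssize_t)(sizeof msg - 1)) {
                perror("write"); return 1;
            }

            /* Only now is it reasonable to report "saved" to the caller. */
            if (fsync(fd) != 0) { perror("fsync"); return 1; }

            close(fd);
            puts("commit durable");
            return 0;
        }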
     
  5. Mormegil83

    Mormegil83 I Love Lamp.

    Reputations:
    109
    Messages:
    1,237
    Likes Received:
    0
    Trophy Points:
    55
    hahaha this review makes me want the intel drive even more than i did, and now the price is dropping like a rock. Maybe in a month or so...
     
  6. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    just for info: anandtech will provide a new big benchmark roundup in february:

    sounds interesting.
     
  7. Nyceis

    Nyceis Notebook Deity

    Reputations:
    290
    Messages:
    956
    Likes Received:
    115
    Trophy Points:
    56
    You know, that's the exact same response I had :D If I was going to spend big bucks for an SSD, it had better blow everything and I mean everything out of the water. Intel does that for reads, but not so much writes.

    N
     
  8. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    and exactly that is why i bought mtron, which are now in about the same price class as the intel disk, but slc. not as high in read/write maximums, but still, i prefer slc :)
     
  9. Cape Consultant

    Cape Consultant SSD User

    Reputations:
    153
    Messages:
    1,149
    Likes Received:
    1
    Trophy Points:
    55
    SLC rocks. I would like to see more activity in that arena. Nice price drop on the Intel! Means they are probably on Ebay for less as they have been close to $400 there for a few weeks now.
     
  10. Cape Consultant

    Cape Consultant SSD User

    Reputations:
    153
    Messages:
    1,149
    Likes Received:
    1
    Trophy Points:
    55
    Dave, you mentioned in your above post a 512 GB (yikes) cache. I must agree, THAT could cause a whole big bunch of problems :)
     
  11. ProfessorShred

    ProfessorShred Notebook Evangelist

    Reputations:
    187
    Messages:
    336
    Likes Received:
    0
    Trophy Points:
    30
  12. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    Usually, SLC drives are viewed more favorably because of one single fact: PRICE.

    Their high price, and the market segment they land in because of that price, means they get better controllers than the MLC versions.

    On the Intel drives, both the MLC and SLC have the EXACT same controller and use the same DRAM buffer. The only difference is the actual flash chips.

    The performance loss that occurs after writing to its full capacity happens on both drives.

    (Actually, you can't say it's a performance loss. It's just reaching steady-state performance levels, which are real-world values. What Samsung and Mtron are doing is making full writes on the drive before shipping so the user gets the steady-state performance levels.)
     
  13. TidalWaveOne

    TidalWaveOne Notebook Evangelist

    Reputations:
    14
    Messages:
    307
    Likes Received:
    0
    Trophy Points:
    30
    Chances of that happening are pretty slim... the drive would write the data as soon as it is ready, so it wouldn't leave the last half gig of data sitting in the cache all the time.

    I also wonder why the caches are so small when RAM is so cheap.
     
  14. sitecharts.com

    sitecharts.com Notebook Consultant

    Reputations:
    0
    Messages:
    156
    Likes Received:
    0
    Trophy Points:
    30
    I am disappointed that SLC prices (e.g. Mtron) are not dropping.
    Although, I would probably stay away from them due to the bad power consumption.
     
  15. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    DRAM consumes much more power than flash. One of the things SSDs are marketed on against HDDs is power consumption.

    Ultimately you need a good controller. Caches help, but if you have a crap controller, what's going to happen after you run out of space??

    True, you are right in saying it won't have 512MB full all the time, but with things like random writes, which the drive is slow at, you will lose data.
     
  16. Nyceis

    Nyceis Notebook Deity

    Reputations:
    290
    Messages:
    956
    Likes Received:
    115
    Trophy Points:
    56
    That's weird - is that like a review of all the no-name drives? They're all tiny. Almost seems like it was written in 2008.

    N
     
  17. Mormegil83

    Mormegil83 I Love Lamp.

    Reputations:
    109
    Messages:
    1,237
    Likes Received:
    0
    Trophy Points:
    55
    Honestly i have no problem with the write speeds taking a hit once the whole drive has been written on. Hell, i don't even mind traditional HDDs having better write speeds. I'm in it for the lightning reads!!!
     
  18. sitecharts.com

    sitecharts.com Notebook Consultant

    Reputations:
    0
    Messages:
    156
    Likes Received:
    0
    Trophy Points:
    30
    Haha!
    I guess we have the first benchmark for an MLC SSD with cache!
    Probably a rebranded Vertex?


    http://www.tomshardware.com/reviews/ssd-hdd-flash,2127-6.html


     
  19. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    i took your numbers.. :) probably you meant 512MB? :) but it doesn't matter.

    writing should not _ever_ be cached. everything else can be: reads, data for better managing the wear leveling for the next writes, etc. but a write has to go through, down to the place where it stays. only that way can you trust the device.

    the problem is that, at least with the typical JMicron controller, the real write would be so much slower than the write to cache that it could become impossible to flush the data, resulting in stuttering or data loss. a cache would only shift the problem (it may smooth it out (good), or delay it and let it come back in one huge stutter (bad)).

    uhm, no, because for years SLC _has been_ better everywhere (usb sticks and all the other places where flash gets used). SLC is much easier to handle well, else mtron would have used MLC too. that issue we have now would simply not exist. MLC has much worse write issues than SLC. every pricey usb stick is faster and has less storage at the same time. if you read about them, it's exactly SLC vs. MLC.

    if you were right, all those issues we're discussing now would not exist.
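
    As a back-of-the-envelope illustration of the stutter scenario above (all numbers made up; the real cache size and speeds depend on the drive): a write cache only masks a slow drive for as long as it takes to fill, and flushing it back out takes far longer.

        /* Illustrative arithmetic only -- hypothetical cache size and speeds. */
        #include <stdio.h>

        int main(void) {
            double cache_mb = 16.0;   /* assumed on-drive write cache, in MB    */
            double incoming = 80.0;   /* MB/s the host is pushing               */
            double drain    = 4.0;    /* MB/s the flash can absorb, worst case  */

            /* The cache fills at (incoming - drain) MB/s; once full, the host
             * is throttled to the drain rate and the system appears to hang. */
            double fill_secs  = cache_mb / (incoming - drain);
            double flush_secs = cache_mb / drain;

            printf("cache masks the burst for %.2f s\n", fill_secs);
            printf("flushing a full cache takes %.2f s\n", flush_secs);
            return 0;
        }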
     
  20. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    If we're talking laptops, if you lose power the battery takes over. If you wear the battery down to 0 then, well... 'nuff said.
     
  21. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    batteries can fail (and in my case they did, on several distinct occasions).
     
  22. Spare Tire

    Spare Tire Notebook Evangelist

    Reputations:
    18
    Messages:
    459
    Likes Received:
    0
    Trophy Points:
    30
    Terrible power consumption is a no-go for me. Seems like for SATA, the Samsung SLC is still the most efficient. The price has not dropped at all on those, and probably never will, as the retailer bought it at that price.
     
  23. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    I probably worded it wrong, but I tried to explain it. SLC is more expensive to manufacture, so they took it to the high end. MLC was developed later for cost-sensitive markets. What do you do with one that's made to be cheap?? You bundle it with cheap components to make it even cheaper.

    The Intel drive isn't the first consumer SSD with a DRAM buffer. Most SLC products have them, which makes them even more expensive in comparison to MLCs. Who knows what difference there is in the controller itself?

    So the real reason is targeting different market segments. SLC is targeted at the high end, so they make it better and it costs even more.

    See, OCZ is shipping their Core series drives in a "fresh" state, while Samsung and Mtron are shipping theirs after a full write cycle, i.e. in the so-called steady state. That's why the Samsung and Mtron have fewer fluctuations.

    Intel ships their drive in the fresh state, but the quoted IOPS values are for steady-state performance.
     
  24. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    I would be careful about Tom's Hardware's power consumption alert. They've been very, very wrong in the recent past and had to do a retraction.

    I think you were around on this thread when it was being discussed.
     
  25. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    Sure they can. But at some point, with a large enough number of failures you put your hands up in the air.

    So, power is out and the battery fails simultaneously? Very low chance.

    The OS should be able to recover from the inconsistency.
     
  26. highlandsun

    highlandsun Notebook Evangelist

    Reputations:
    66
    Messages:
    615
    Likes Received:
    6
    Trophy Points:
    31
    Just installed my 256GB Titan into my dv5z. Right now I'm copying the partitions off my 128GB CoreV2 to it. Fwiw, I see exactly 200MB/sec read speed on the Titan, exactly as advertised. The CoreV2 only gave me 130-140MB/sec reads, well below its advertised speed. I have also seen bursts of up to 165MB/sec writes while copying partitions. The Titan is definitely living up to its specs so far, which the CoreV2 never did.
     
  27. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    highland:

    Thanks for the update. I'll be interested to see how it performs after installation.

    I'm thinking about going to a 128GB model for my laptop.
     
  28. jedisolo

    jedisolo Notebook Deity

    Reputations:
    155
    Messages:
    933
    Likes Received:
    8
    Trophy Points:
    31
    I have the same drive that highland has in my T400 and it performs as advertised, haven't noticed any stuttering since I've been using it.
     
  29. tempdave

    tempdave Newbie

    Reputations:
    0
    Messages:
    2
    Likes Received:
    0
    Trophy Points:
    5
    @jedisolo - i'm wondering two things.

    a) is it possible you haven't written through the disk once? following inteluser's suggestion - no stuttering until you have to do an erase. maybe you just haven't gotten to it yet?

    b) how full is your disk? if your disk is pretty empty, then when you do get to an erase cycle, maybe there is only a small amount of data in the block that needs to be copied to new pages? in which case the stutter might be small.

    thanks
     
  30. sitecharts.com

    sitecharts.com Notebook Consultant

    Reputations:
    0
    Messages:
    156
    Likes Received:
    0
    Trophy Points:
    30
    Great points, tempdave.

    I also want to add another

    c) what are you doing? have you tried replicating some of the tests that have resulted in stuttering? (e.g. zipping 1GB of tiny files while at the same time unzipping small files and browsing the internet with firefox)
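
    For anyone who wants to try it, here is a rough sketch (file names, counts and sizes are arbitrary) of one way to generate a burst of tiny writes, the kind of workload that tends to expose stuttering on weak controllers:

        /* Rough, illustrative test harness -- not a standard benchmark.
         * Creates a burst of small files and reports the elapsed wall time. */
        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        int main(void) {
            enum { FILES = 2000, SIZE = 4096 };          /* ~8 MB of 4 KB files */
            static char buf[SIZE];
            memset(buf, 'x', sizeof buf);

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < FILES; i++) {
                char name[64];
                snprintf(name, sizeof name, "stutter_%04d.bin", i);
                FILE *f = fopen(name, "wb");
                if (!f) { perror("fopen"); return 1; }
                fwrite(buf, 1, sizeof buf, f);
                fclose(f);                               /* small write + metadata update */
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%d x %d-byte files in %.2f s\n", FILES, SIZE, secs);
            return 0;
        }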
     
  31. Spare Tire

    Spare Tire Notebook Evangelist

    Reputations:
    18
    Messages:
    459
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, i was around. But i'm looking at only the quoted idle and load consumption, not their interpretation.
     
  32. highlandsun

    highlandsun Notebook Evangelist

    Reputations:
    66
    Messages:
    615
    Likes Received:
    6
    Trophy Points:
    31
    tempdave: those are good questions, but answering them would require filling up the drive. After the testing, you'd need to do a Secure Erase to reinitialize the drive, before you could start using it again. At the moment that's more trouble than I'm willing to spend to find out the answer. Since I only copied a 120GB drive (that still had a lot of free space) to my 256GB drive, you can understand that the new drive is much less than 50% full right now...
     
  33. tempdave

    tempdave Newbie

    Reputations:
    0
    Messages:
    2
    Likes Received:
    0
    Trophy Points:
    5
    hi - what does "secure erase" do? - go through the disk, erase block by erase block, copying the referenced pages and getting rid of the garbage?

    to the main point - why does it require filling up the drive? wouldn't copy and delete get you to the point where the drive had to erase blocks?

    for the full disk question, couldn't you copy a bunch of stuff until you had a pretty full disk, and then see if the stuttering got worse? when you were done testing you could delete the bogus copies and be back where you are today? of course, the disk would be in, or approaching, the equilibrium state where block erases were needed. but eventually you are going to get to that point anyway. might be better to know now, while you still have rma privileges!
     
  34. jedisolo

    jedisolo Notebook Deity

    Reputations:
    155
    Messages:
    933
    Likes Received:
    8
    Trophy Points:
    31
    I compressed 6 GB worth of images and didn't notice any stuttering, then I uncompressed it to 2 different folders. I opened Firefox and it did take a few seconds longer to open with 20 tabs open.
     
  35. Humper

    Humper Notebook Enthusiast

    Reputations:
    0
    Messages:
    26
    Likes Received:
    0
    Trophy Points:
    5
    I installed WoW while using Firefox with 2 windows and 6 tabs each, and ran Windows Defender scan simultaneously. No stuttering. The WoW install took a mere 20 mins compared to the hour and a half it took on the 7k200.
     
  36. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    Yeah, that's basically it.

    You are right, it doesn't require filling the drive. But that also depends on the quality of the wear-leveling mechanism. If the drive sucks and it's writing all over the place, it'll have a hard time reaching the true steady-state point.

    tempdave: Is 256GB the regular hard drive way of saying 256GB, or is it real powers-of-2 256GB?? What's the real capacity?
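
    For reference, the arithmetic behind the question: a drive sold as "256GB" in decimal (powers-of-10) gigabytes holds noticeably less than 256 binary gibibytes.

        /* Decimal "marketing" gigabytes vs. binary gibibytes for a 256GB drive. */
        #include <stdio.h>

        int main(void) {
            unsigned long long decimal = 256ULL * 1000 * 1000 * 1000;  /* powers of 10 */
            unsigned long long binary  = 256ULL * 1024 * 1024 * 1024;  /* powers of 2  */

            printf("256 GB  (decimal) = %llu bytes = %.1f GiB\n",
                   decimal, decimal / (1024.0 * 1024.0 * 1024.0));
            printf("256 GiB (binary)  = %llu bytes\n", binary);
            printf("difference        = %.1f GiB\n",
                   (binary - decimal) / (1024.0 * 1024.0 * 1024.0));
            return 0;
        }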
     
  37. Cape Consultant

    Cape Consultant SSD User

    Reputations:
    153
    Messages:
    1,149
    Likes Received:
    1
    Trophy Points:
    55
    I am seeing some darn good reports on the TITAN! Makes me very happy. Heck, if I do enough junk at the same time I can bring even my spinning disk to its virtual knees! Dave
     
  38. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    I actually used the HDAT2 program to set the powers-of-10 size to approximately 64GB (so effectively I have a 64GB Intel). I didn't get to do a Secure Erase, as Intel doesn't have their own program for it and it didn't seem easy to do. It seems much more responsive after that.
     
  39. sitecharts.com

    sitecharts.com Notebook Consultant

    Reputations:
    0
    Messages:
    156
    Likes Received:
    0
    Trophy Points:
    30
    1. Deleting something does not show the erase/write problem, since most SSDs don't actually delete the data but simply mark it deleted ... when that cell gets written to later, that's when the erase/write happens.

    2. Shouldn't secure erase eliminate the problem of erase/write? My understanding is that it goes through and forces the drive to do an erase/write (if that is possible).

    ------

    So we have people reporting problems with the Titan.
    And somebody without problems only filled his drive 50% ... sounds very much like drive performance goes down when (i) the drive is (more) full or (ii) the 'steady state' is reached and every write requires an erase.
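
    A toy model of point 1 (deliberately simplified, not any specific drive's firmware; a real drive would also have to copy still-valid pages out of a block before erasing it): pages can only be written once, a "delete" just marks the old page invalid, and the expensive block erase only happens once the free pages run out.

        /* Toy page/block model, for illustration only. */
        #include <stdio.h>

        #define PAGES_PER_BLOCK 4
        #define BLOCKS          2
        #define PAGES (PAGES_PER_BLOCK * BLOCKS)

        enum state { FREE, VALID, INVALID };
        static enum state page[PAGES];
        static int erase_count = 0;

        /* Find a free page; if none is left, erase a fully-invalid block.
         * (A real drive would also relocate still-valid pages first.) */
        static int get_free_page(void) {
            for (int p = 0; p < PAGES; p++)
                if (page[p] == FREE) return p;

            for (int b = 0; b < BLOCKS; b++) {
                int all_invalid = 1;
                for (int p = 0; p < PAGES_PER_BLOCK; p++)
                    if (page[b * PAGES_PER_BLOCK + p] != INVALID) all_invalid = 0;
                if (all_invalid) {
                    for (int p = 0; p < PAGES_PER_BLOCK; p++)
                        page[b * PAGES_PER_BLOCK + p] = FREE;
                    erase_count++;               /* the slow block-erase step */
                    return get_free_page();
                }
            }
            return -1;                           /* nothing reclaimable: the drive stalls */
        }

        /* Overwriting a logical sector: the old page is merely marked invalid,
         * the new data goes to a fresh page somewhere else. */
        static int overwrite(int old_page) {
            if (old_page >= 0) page[old_page] = INVALID;
            int p = get_free_page();
            if (p >= 0) page[p] = VALID;
            return p;
        }

        int main(void) {
            int loc = -1;
            for (int i = 0; i < 12; i++)         /* rewrite the same logical sector */
                loc = overwrite(loc);
            printf("12 overwrites of one sector -> %d block erase(s)\n", erase_count);
            return 0;
        }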
     
  40. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    happened 3 times to me. and i use ssds in pcs as well; i don't want to need a ups just for that.

    what i mean is, a device should NOT BY DESIGN have a failure chance. it should work 100% without a chance of ever reporting a false success back that can result in data loss. and it should perform well at that. everything else is dangerous. additional caching to enhance performance can easily be enabled in windows, and the checkbox has a huge warning because of the danger.
     
  41. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    now i can agree with you :)

    oh, and about the caching thing which some don't take seriously: go to the intel ssd page and read about how they use the cache. they explain in detail how they work around the problem, because you are never allowed to cache writes. you have to write down to disk and _then_ report to the os "i've saved the data". you can cache everything else, but not writes.
     
  42. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    Honestly, that's fairly naive as you can lose quite a bit of data with a power failure on a desktop PC.

    There is no device that has no failure chance in the real world. It's a beautiful fantasy and it is never going to be any more than a fantasy.

    You can get quite close to 0% but for each sigma you add in (that's a 9) it's going to get ever more costly... Here, I'm talking about:
    90 to 99 to 99.9 to 99.99 to 99.999 to 99.9999.
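
    As a quick worked example of why each extra 9 matters (pure arithmetic, nothing drive-specific): even at six nines, one operation in a million is still expected to fail.

        /* Expected failures per million operations at each reliability level. */
        #include <stdio.h>

        int main(void) {
            double levels[] = { 90.0, 99.0, 99.9, 99.99, 99.999, 99.9999 };
            for (int i = 0; i < 6; i++) {
                double fail_rate = 1.0 - levels[i] / 100.0;
                printf("%9.4f%% reliable -> ~%.0f failures per 1,000,000 ops\n",
                       levels[i], fail_rate * 1e6);
            }
            return 0;
        }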
     
  43. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    there should not, BY DESIGN, IN SPEC, be any case where the device reports a write as done without the data actually being written.

    and believe me, that IS important.

    every hard drive in the world works that way: it reports success on a write ONLY when it has successfully written the data. ssds have to follow that, else the os can't trust the device anymore, and won't even notice the failure.

    this is the way it's spec'd. of course, anything can go wrong at any time. but only if everything goes right is it allowed to report success.

    this is basic logic. really basic. there are examples of e.g. raid controllers (i've cited one) that don't follow that logic when using the cache. the result is more than dangerous.

    i can't lose data on power failure on my desktop pc, just as i can't on my notebook. _except_ if i enable caching, which states "you can have data loss on power loss". which means i have the choice and know what to do. i personally have caching enabled, as i'm covered thanks to my home server.

    but a device is not allowed to lie to me. if it has a problem, it has to report it as such. if a device caches writes and reports success, it is lying.

    none of my devices lie. that doesn't mean they work 100%.

    you know what happens when ram lies (arbitrary bluescreens).
     
  44. jketzetera

    jketzetera Notebook Evangelist

    Reputations:
    143
    Messages:
    328
    Likes Received:
    0
    Trophy Points:
    30
    Very interesting! Where did you learn that Samsung and Mtron "pre-write" their SSDs before they are shipped to customers? Do you have any link?
     
  45. IntelUser

    IntelUser Notebook Deity

    Reputations:
    364
    Messages:
    1,642
    Likes Received:
    75
    Trophy Points:
    66
    There isn't, but it's a very reasonable assumption. There's always a big difference between when every cell on the drive has nothing written to it and after you have written to full capacity on an SSD. That's because after it's written to full capacity, it has to deal with the store-delete-write procedure. That is very slow on SSDs; it's a fundamental problem.

    Review sites that actually stress IOMeter tests on SSD drives and do full-capacity writes will often report "performance losses". Of course, if the drive was shipped with the writes already done, the performance loss won't be drastic.
     
  46. John Kotches

    John Kotches Notebook Evangelist

    Reputations:
    133
    Messages:
    381
    Likes Received:
    0
    Trophy Points:
    30
    So when the data writes back to a defective block that the drive doesn't catch until after the fact, what then? I've seen it on both *nix and Windoze. Granted, it happens correctly the vast majority of the time, but these types of failures do occur. There are complications to the whole mess with journaling file systems as well.

    Playing devil's advocate one could say that when the write fails unnoticed, the device is now out of spec.

    See above. It's right in the second instance BTW.

    Depends on the overall architecture. I don't dismiss all implementations; I look at them individually.

    If this were actually true, there would never be a need for a file system check or journaling file systems. Why is the journal there? To correct an error in the chain.

    Thankfully, file systems with journaling are the norm these days in most operating systems, which makes the operation of checking the file system much faster.


    Complete fantasy. There are device driver bugs and hardware failure modes that totally invalidate your statement.

    Which is why server grade memory has more than basic error checking. In my experience, memory failures tend to manifest as MEM_MGMT errors or other similar types of messages in Windoze. That means they're no longer arbitrary.

    It is possible that your context is actually indicating arbitrary points in time which is another matter entirely.
     
  47. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    you are wrong, but it doesn't matter. it's just a detail of what i'm talking about.

    i have never seen a hdd report a write as completed when it couldn't complete it. it may be that it couldn't read the data back afterwards due to a failure. but no device ever lied. this is not about reliability, and it's not about 100% stability.

    it's about the os being able to at least know when something went wrong.

    the chain is simple: user tells os to save. os tells disk to save. disk saves. disk reports success. os reports success. user is happy.

    if the disk reports success while it hasn't yet written the data, this is a lie. no manufacturer should ever create a disk that has this property by default.

    there can be bugs anywhere. but we're not talking about a bug. if you plug in a write cache, this lie is a FEATURE of the device.

    and that's where i make the distinction. bugs can happen, but lying should not be a feature of the device. every good raid controller has the option to enable a cache for higher performance, but it should not be enabled by default, and it is at least disable-able.

    an ocz core with a cache would not be usable without the cache => it could not be used in a trustworthy way (except with the stuttering we all love).

    it would be the world's first disk i know of that behaves like that. intel explicitly documents that theirs doesn't.

    i do agree with all your points btw, besides your belief that i'm wrong, which, of course, i can't agree with :)

    the important thing is that we're not talking about failures or errors.

    every program i code can have bugs. but it should not have them by design.
     
  48. TidalWaveOne

    TidalWaveOne Notebook Evangelist

    Reputations:
    14
    Messages:
    307
    Likes Received:
    0
    Trophy Points:
    30
  49. davepermen

    davepermen Notebook Nobel Laureate

    Reputations:
    2,972
    Messages:
    7,788
    Likes Received:
    0
    Trophy Points:
    205
    i'd like to see more 48 and 96gb sizes instead of 32 and 64gb. they would be quite handy. 48 would be enough for any system that doesn't really require much data (office systems come to mind), and 96 for the ones that want a bit more.

    of course, they could just as well make them 50 and 100gb, but let's stay with powers of 2 :)
     
  50. sitecharts.com

    sitecharts.com Notebook Consultant

    Reputations:
    0
    Messages:
    156
    Likes Received:
    0
    Trophy Points:
    30

    I believe they use the old JMicron controller. The Cavalry Pelican has been out for at least 4-6 months; it was even mentioned in this thread. Their SLC drive used to be the Cavalry Eagle. I have not heard of the Cavalry Elite before ...
     