The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    SSD for multi-GB data base files

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by GlennT, Nov 2, 2011.

  1. GlennT

    GlennT Notebook Geek

    Reputations:
    13
    Messages:
    81
    Likes Received:
    0
    Trophy Points:
    15
    My newsreader/compiler generates multi-GB files (some as large as 16 GB) that make it extremely painful to query or sort using magnetic media. I frequently compress the files to keep them down to a few GB, which tends to work the drive hard, and it's killed more than one magnetic drive.

    My priorities (not necessarily ranked) are:
    • Speed
    • Size
    • Reliability
    • Price

    I blew through my 64GB SSD from Kingston, so I know I'll need at least 128 GB. Given the large file sizes, I think this may be an application that actually benefits from SATA-3. Many of the SATA-3 controllers seem to be "sub-optimal" for reliability, though.

    I would like to spend ~ $200 but will go higher if there's a compelling reason/benefit for spending the extra money.

    What do you recommend?
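(Aside from drive choice, the query/sort pain on files this size can be eased in software regardless of the media underneath. Below is a minimal Python sketch of an external merge sort: sort chunks that fit in RAM, spill them to temp files, then stream a k-way merge. The function name, chunk size, and line-oriented format are illustrative assumptions, not anything from this thread.)

```python
import heapq
import itertools
import os
import tempfile

def external_sort(in_path, out_path, lines_per_chunk=100_000):
    """Sort a line-oriented file larger than RAM via external merge sort."""
    chunk_paths = []
    # Phase 1: sort manageable chunks in memory, spill each to a temp file.
    with open(in_path) as src:
        while True:
            lines = list(itertools.islice(src, lines_per_chunk))
            if not lines:
                break
            lines.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as tmp:
                tmp.writelines(lines)
            chunk_paths.append(path)
    # Phase 2: k-way merge of the sorted chunk files; heapq.merge streams
    # the inputs, so memory use stays flat no matter how big the file is.
    chunks = [open(p) for p in chunk_paths]
    try:
        with open(out_path, "w") as dst:
            dst.writelines(heapq.merge(*chunks))
    finally:
        for f in chunks:
            f.close()
        for p in chunk_paths:
            os.remove(p)
```

The win on any drive, magnetic or SSD, is that both phases read and write sequentially instead of seeking all over a 16 GB file.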
     
  2. Derz

    Derz Notebook Enthusiast

    Reputations:
    0
    Messages:
    21
    Likes Received:
    0
    Trophy Points:
    5
    I have read good things about the Crucial M4, which is SATA-3, and Newegg currently has the 128GB version for $205.
     
  3. AMATX

    AMATX Notebook Consultant

    Reputations:
    48
    Messages:
    203
    Likes Received:
    0
    Trophy Points:
    30
    I have an app that writes, then later reads ~60Gig of files per day...real I/O killer. What I did was set it up for use with a ramdrive. Super fast, no wear and tear on anything.

    Read up on ramdrives, if you're not familiar w/them, then take the plunge. You'll never go back to anything else.

    Note, however, that ramdrives go POOF when you shut down the old 'puter, so you'll want to have the end-result data stashed on a HD or SSD when you're all done.

    If 16Gig is your upper limit on work file size, get a laptop w/32Gig or a desktop/server with gobs of memory and go to town...
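(To make the workflow concrete: on Linux a tmpfs mount such as /mnt/ramdisk acts as a ramdrive; on Windows you'd use a third-party ramdrive driver. The sketch below uses throwaway temp directories as stand-ins for the ramdrive and the persistent drive so it runs anywhere; the paths and file names are assumptions.)

```python
import os
import shutil
import tempfile

RAMDISK = tempfile.mkdtemp()   # stand-in for e.g. /mnt/ramdisk (RAM-backed)
ARCHIVE = tempfile.mkdtemp()   # stand-in for the HD/SSD landing zone

# Hot phase: all the heavy intermediate I/O hits RAM only -- no drive wear.
work_file = os.path.join(RAMDISK, "work.dat")
with open(work_file, "w") as f:
    for i in range(1000):
        f.write(f"record {i}\n")

# Before shutdown: persist the end result, since the ramdrive is volatile
# and its contents vanish on power-off.
shutil.copy2(work_file, os.path.join(ARCHIVE, "work.dat"))
```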
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    AMATX,

    You have a 60GB+ RAM enabled system? To be able to cache your data set...
     
  5. AMATX

    AMATX Notebook Consultant

    Reputations:
    48
    Messages:
    203
    Likes Received:
    0
    Trophy Points:
    30
    No, I'm currently running on a W700, which is limited to 8G. The -total- amount of data I process on a Monday-Friday day is ~60G. What I do is a little crude, but it works:

    * run a few minutes' worth of data to the ramdrive (the ramdrive is 5G). Performance here is critical; therefore the ramdrive for this initial write I/O.

    * offload that data to an SSD for staging purposes, via a background job (low priority, so the write I/O performance of the SSD doesn't matter on this step).

    * at the top of the hour, process the SSD data via another W700 networked in that has read access to the SSD. Since the data is on an SSD, reading it to the 2nd W700 is very fast, even over my networked router between the two.

    * processing of the data from the 2nd W700 is done via a ramdrive on it (also 5G in size).

    * data is processed in memory and very highly compressed, using the 2nd ramdrive as a work disk, and the final compressed hourly data file is then offloaded from the 2nd ramdrive to a conventional HD, where it's archived.

    My goal on all of this was to totally avoid using a conventional HD for I/O (other than for final archiving), as I didn't wanna burn out one of those. Plus, at the rates I'm writing/reading, an el cheapo HD would bog down too much.

    The only write activity to the SSD is the background staging step, which doesn't require high performance, so slower SSD write times are acceptable.

    The read activity from the SSD is very fast, as we all know SSDs work great for that :)

    And yes, I know that eventually, my SSD will crap out, but that's a minor issue, as I can get a replacement on the cheap.

    Since the W700 is so limited on memory, the question was how to arrange the I/O and related processing, using Ramdrive + SSD + HD. The above approach is the best I could come up with, especially considering I'm also maxed out on cpu horsepower on the first W700. Offloading some of the data post-processing to a 2nd W700 really helped with reducing the cpu load.
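(The staged layout above can be compressed into a few lines. In this sketch the three directories are temp-dir stand-ins for the ramdrive, the SSD staging area, and the archive HD; the file names and record format are illustrative assumptions, and the cross-machine network hop is collapsed into a local move.)

```python
import gzip
import os
import shutil
import tempfile

ramdrive = tempfile.mkdtemp()   # fast, volatile: the initial capture
ssd_stage = tempfile.mkdtemp()  # staging: background, low-priority writes
archive = tempfile.mkdtemp()    # conventional HD: compressed hourly files

# Step 1: capture a burst of data to the ramdrive (performance-critical write).
capture = os.path.join(ramdrive, "burst.dat")
with open(capture, "w") as f:
    f.writelines(f"tick {i}\n" for i in range(10_000))

# Step 2: offload to the SSD staging area. In the real setup this is a
# low-priority background job, so slow SSD write speed is acceptable.
staged = os.path.join(ssd_stage, "burst.dat")
shutil.move(capture, staged)

# Steps 3-5: at the top of the hour, read the staged data back (SSD reads
# are fast), compress it, and land the hourly file on the archive HD.
hourly = os.path.join(archive, "hour-01.dat.gz")
with open(staged, "rb") as src, gzip.open(hourly, "wb") as dst:
    shutil.copyfileobj(src, dst)
os.remove(staged)
```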

    I'm scrounging $$$ to get a top-of-the-line W520, with a 2960xm chip and 32G of memory, at which time I'll be able to drop this kludgy setup and do all of the processing on the one W520, via a large ramdrive (probably 24-26G in size), and just write the final hourly compressed files out to a conventional HD for archiving. Very slick; just waitin' on the $$$ :)