The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    How did solid state drives overcome the flash memory write restriction?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by wearetheborg, Jul 29, 2007.

  1. wearetheborg

    wearetheborg Notebook Virtuoso

    Reputations:
    1,282
    Messages:
    3,122
    Likes Received:
    0
    Trophy Points:
    105
    I was under the impression that flash memory could only write around a million times to a location. How did SSDs overcome this?
     
  2. Badjer

    Badjer Notebook Guru

    Reputations:
    0
    Messages:
    52
    Likes Received:
    0
    Trophy Points:
    15
    Hey, what do you mean by that? Are you saying the longevity of any given flash memory is one million writes?
     
  3. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    The longevity of flash blocks is indeed counted in millions of read/write cycles. However, it's not 1 million; it's more like hundreds of millions. The chip uses a complex algorithm that distributes the read/write cycles across all the blocks as evenly as possible. Also, when a block gets corrupted, the chip stops using it. According to tests, NAND flash memory (SSD) would last longer than an average hard disk drive.
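    In rough terms, the "distribute the writes evenly" part can be pictured as the controller steering each new write toward the least-worn usable block and retiring blocks that fail. Below is a minimal sketch in C, assuming a flash translation layer that tracks an erase count and a bad-block flag per physical block; the names and data structures are illustrative, not any vendor's actual firmware.

        /* Toy model of dynamic wear leveling plus bad-block retirement. */
        #include <stdint.h>

        #define NUM_BLOCKS 4096

        struct block_info {
            uint32_t erase_count;   /* how many times this block has been erased */
            uint8_t  bad;           /* set once the block fails a program/erase  */
            uint8_t  free;          /* 1 if the block holds no valid data        */
        };

        static struct block_info blocks[NUM_BLOCKS];

        /* Pick the free, non-bad block with the lowest erase count, so new
           writes land on the least-worn part of the flash. */
        static int pick_write_block(void)
        {
            int best = -1;
            uint32_t best_count = UINT32_MAX;

            for (int i = 0; i < NUM_BLOCKS; i++) {
                if (blocks[i].bad || !blocks[i].free)
                    continue;
                if (blocks[i].erase_count < best_count) {
                    best_count = blocks[i].erase_count;
                    best = i;
                }
            }
            return best;   /* -1 means no free block: time to garbage-collect */
        }

        /* "The chip stops using it": retire a block that failed verification. */
        static void mark_block_bad(int i)
        {
            blocks[i].bad = 1;
            blocks[i].free = 0;
        }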
     
  4. Ethyriel

    Ethyriel Notebook Deity

    Reputations:
    207
    Messages:
    1,531
    Likes Received:
    0
    Trophy Points:
    55
    Well, it used to be that the best industrial flash was tested to about 4 million cycles. One million and less was far more common. That was very recent; in fact, I'd argue we're still in that era, since this new crop of SSDs hasn't reached market saturation even for solid state. I think we won't see just how far NAND has advanced in this area for a few years, if and when we start seeing failures on these new drives.
     
  5. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    I really doubt the algorithm is that complex. :rolleyes:
     
  6. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    Try coding one!
     
  7. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    Have you seen the algorithm or what? It probably just puts the data in some pseudo-random place.
     
  8. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    I doubt it's that simple.
    Some bytes are accessed more often than others, so they have to be moved to different blocks. There is a lot of data swapping to be done; it's not that simple.
     
  9. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    Data swapping? Wouldn't that just be a waste of write cycles?
     
  10. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    That's what I'm saying, it's not a simple algorithm.
     
  11. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    What? No, I'm saying what you suggested (data swapping) would just be a waste of write cycles. Placing data blocks randomly would not have that problem. I see no reason why you would have to do "data swapping".
     
  12. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    If you simply place data randomly, files that are accessed often (small OS files, for instance) will stay in the same blocks and cause those blocks to die faster. That's why you have to find a way to swap data.
    For example, say you place a large 10 GB file that you never touch. Those blocks won't ever be rewritten, so all the wear falls on the remaining blocks and your SSD will die faster.
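    The swap ldiamond is describing is usually called static wear leveling: when data that never changes sits on the least-worn blocks, the controller occasionally moves it onto a worn block so the fresher block can absorb future hot writes. Here is a rough, self-contained sketch of that idea in C; it is a toy model with made-up names and thresholds, not real controller firmware.

        #include <stdint.h>
        #include <string.h>

        #define NUM_BLOCKS          8
        #define BLOCK_SIZE          4096
        #define WEAR_GAP_THRESHOLD  4    /* illustrative, not a real spec */

        static uint8_t  flash[NUM_BLOCKS][BLOCK_SIZE];    /* stand-in for the NAND array */
        static uint32_t erase_count[NUM_BLOCKS];
        static int      holds_valid_data[NUM_BLOCKS];
        static int      logical_to_physical[NUM_BLOCKS];  /* the mapping table           */

        static void swap_cold_block(void)
        {
            int coldest = -1, spare = -1;

            /* coldest: least-erased block still holding valid (static) data;
               spare:   most-erased block that is currently free.            */
            for (int i = 0; i < NUM_BLOCKS; i++) {
                if (holds_valid_data[i] &&
                    (coldest < 0 || erase_count[i] < erase_count[coldest]))
                    coldest = i;
                if (!holds_valid_data[i] &&
                    (spare < 0 || erase_count[i] > erase_count[spare]))
                    spare = i;
            }
            if (coldest < 0 || spare < 0)
                return;
            if (erase_count[spare] - erase_count[coldest] < WEAR_GAP_THRESHOLD)
                return;                            /* wear is still even enough */

            /* Move the static data, update the map, and recycle the cold block. */
            memcpy(flash[spare], flash[coldest], BLOCK_SIZE);
            for (int l = 0; l < NUM_BLOCKS; l++)
                if (logical_to_physical[l] == coldest)
                    logical_to_physical[l] = spare;
            holds_valid_data[spare]   = 1;
            holds_valid_data[coldest] = 0;
            erase_count[coldest]++;                /* erasing it costs one cycle */
        }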
     
  13. wearetheborg

    wearetheborg Notebook Virtuoso

    Reputations:
    1,282
    Messages:
    3,122
    Likes Received:
    0
    Trophy Points:
    105
    Reading data is no problem; we only have restrictions on the number of times a location can be written to.
     
  14. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    No, reading also shortens the SSD's life.

    However, if you can only write to 30% of your SSD, it will die quite a bit faster, as explained in my previous post.
     
  15. lupin..the..3rd

    lupin..the..3rd Notebook Evangelist

    Reputations:
    154
    Messages:
    589
    Likes Received:
    0
    Trophy Points:
    30
    Yes, they do use an algorithm to distribute the I/O load across more of the "disk". The reason is that heavily accessed areas (like the file allocation table) would die very quickly, giving the product a short life, so these heavily accessed areas are dynamically reallocated and moved around to even out the wear (see the sketch after this post).

    The problem I see with SSDs is that traditional disk drives are improving so dramatically that SSDs just aren't that appealing (except for certain extreme temperature or vibration conditions).

    The new Hitachi 7K200 drives are rated for 350 G's of operating shock, and 1000 G's of non-operating shock. That's a whole lotta shock!!

    Secondly, SSDs are not very fast. Yes, they have excellent I/O rates since there is no mechanical seek involved, but they don't have great throughput, particularly on writes. Any current 2.5" SSD has a write speed that's worse than most 4200 RPM notebook drives!!

    When it comes to throughput though, the one advantage an SSD has is consistent performance for the entire length of the drive. It's not a descending curve like a mechanical drive, it's a flat line.

    Mechanical wins again when it comes to capacity though, with 250 GB available now, and 300 GB available this fall. Next year, I expect we'll see 400 and 500 GB in a 2.5" size as the drive manufacturers perfect their PMR technology.
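    On the "dynamically reallocated and moved around" point above: the usual mechanism is that a rewritten logical sector is never reprogrammed in place; it lands on a fresh physical block and a mapping table is updated, so even a constantly rewritten structure like the FAT spreads its wear across the whole flash. A minimal sketch of that remap-on-write idea in C follows; it is a toy model with invented names, not actual SSD firmware.

        #include <stdint.h>
        #include <string.h>

        #define NUM_BLOCKS 16
        #define BLOCK_SIZE 512

        static uint8_t  flash[NUM_BLOCKS][BLOCK_SIZE];
        static uint32_t erase_count[NUM_BLOCKS];
        static int      in_use[NUM_BLOCKS];
        static int      map[NUM_BLOCKS];          /* logical block -> physical block */

        static void ftl_init(void)
        {
            for (int l = 0; l < NUM_BLOCKS; l++)
                map[l] = -1;                      /* nothing mapped yet */
        }

        static int least_worn_free_block(void)
        {
            int best = -1;
            for (int i = 0; i < NUM_BLOCKS; i++)
                if (!in_use[i] && (best < 0 || erase_count[i] < erase_count[best]))
                    best = i;
            return best;
        }

        /* Writing a logical block always lands on a fresh physical block,
           so a "hot" sector like the FAT does not hammer one location.   */
        static int write_logical_block(int logical, const uint8_t *data)
        {
            int fresh = least_worn_free_block();
            if (fresh < 0)
                return -1;                        /* would trigger garbage collection */

            memcpy(flash[fresh], data, BLOCK_SIZE);
            in_use[fresh] = 1;

            int old = map[logical];
            if (old >= 0) {                       /* retire the previous copy */
                in_use[old] = 0;
                erase_count[old]++;               /* its eventual erase costs one cycle */
            }
            map[logical] = fresh;
            return 0;
        }

    Call ftl_init() once before the first write; everything after that is just the remapping loop above.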
     
  16. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    Let me guess... There is no easy way to swap data. It has to involve a "complex algorithm". Do you really have any reason other than speculation for saying a complex algorithm is involved, or do you just think it sounds cool to say "complex algorithm"?
     
  17. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    Still, long term? SSD is here to stay. It's "new tech" as far as the consumer market is concerned. There will be development. Eventually they will have really long lives and faster throughput. SSDs aren't limited by physical moving parts like a hard drive (which hasn't really gotten faster over time), so the SSD has a lot of room to speed up. Eventually it will replace the HDD (or another storage media tech will come out that replaces them both).
     
  18. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    As a 3rd year computer engineering student, I can tell you it's complex.

    Lupin summarized it very well.
     
  19. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    SSDs have lower throughput, but they are much faster when it comes to access time. That's why they are mainly used for storing the OS and apps, which need less throughput but faster access.

    SSD access time is measured in microseconds; typical 7200 RPM HDDs have access times over 8 ms.

    So it's just a matter of how you're going to use it.
     
  20. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    I'm a 5th year computer engineering student. Nice try. Lupin gave no support for it being complex.

    Since we're both computer engineering students, you should have no trouble explaining it to me.
     
  21. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    As an engineering student, you probably know I don't have time for such things, especially not right now at the end of the semester.

    Anyway, the complexity of the algorithm is subjective. I'm just saying it's not simple random allocation; there's more to be done.
     
  22. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    OK, so whenever you write a new file, write it into some random free block. When you modify a file, decide randomly, with some low probability, for it to be reallocated to another random spot. Alternatively, you could store some usage statistics in each page, but randomization should work fine, I think. Not complex at all.
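    For what it's worth, the scheme squeakygeek describes can be sketched in a few lines of C. This only models his proposal (random placement of new data plus occasional random relocation on rewrite); it is not how any real SSD controller is known to work, and the names and the relocation probability are made up.

        #include <stdlib.h>

        #define NUM_BLOCKS       4096
        #define RELOCATE_ONE_IN  64     /* roughly a 1.5% chance to migrate on a rewrite */

        static int block_free[NUM_BLOCKS];

        static void init_blocks(void)
        {
            for (int i = 0; i < NUM_BLOCKS; i++)
                block_free[i] = 1;      /* everything starts free */
        }

        /* Bounded scan from a random starting point; returns -1 if the drive is full. */
        static int random_free_block(void)
        {
            int start = rand() % NUM_BLOCKS;
            for (int k = 0; k < NUM_BLOCKS; k++) {
                int i = (start + k) % NUM_BLOCKS;
                if (block_free[i])
                    return i;
            }
            return -1;
        }

        /* Decide where a (re)written piece of data should live. */
        static int place_write(int current_block)
        {
            if (current_block < 0)                /* brand-new data */
                return random_free_block();
            if (rand() % RELOCATE_ONE_IN == 0)    /* occasionally migrate */
                return random_free_block();
            return current_block;                 /* usually keep the same block */
        }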
     
  23. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    As explained earlier, you have to add swapping with data that is rarely accessed.
     
  24. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    How would that not address the issue?
     
  25. ldiamond

    ldiamond Notebook Evangelist

    Reputations:
    3
    Messages:
    571
    Likes Received:
    0
    Trophy Points:
    30
    If what you suggest includes swapping the data, you're over-simplifying the problem. Where do you put the data during the swap? RAM? At what cost?

    As a computer engineering student, you should understand that this requires an incredible amount of optimization because it happens at a very low level.
     
  26. squeakygeek

    squeakygeek Notebook Consultant

    Reputations:
    10
    Messages:
    185
    Likes Received:
    0
    Trophy Points:
    30
    If you are going to swap the data, then yeah, obviously you would have to work out the details on how the swap would happen, but I don't think that it would be complicated. And yeah, you would take a performance hit but there is no getting around that. There could perhaps be some buffering involved.

    I'm not sure how being "at a very low level" implies that it "requires an incredible amount of optimization" or that it is even very complex. And yes, I have done low-level design...
     
  27. wearetheborg

    wearetheborg Notebook Virtuoso

    Reputations:
    1,282
    Messages:
    3,122
    Likes Received:
    0
    Trophy Points:
    105

    Unfortunately it's not. I looked into this a couple of years back.
    A 6 *inch* fall (onto concrete) is enough to kill a running (or was it stationary?) laptop HDD.
     
  28. lupin..the..3rd

    lupin..the..3rd Notebook Evangelist

    Reputations:
    154
    Messages:
    589
    Likes Received:
    0
    Trophy Points:
    30
    Drives have evolved more than a little in the past six years. They're much more resilient these days. Where a modern 2.5" drive is rated for a maximum of 350 G's of operating shock, an older drive from six years ago will be rated for only 100 or 150 G's of operating shock. That's not a small difference.

    I've seen (modern) running laptops fall off of desks and strike a hard floor, and they continue to run just fine. And I'm not talking about a Toughbook, or one of the new "fall sensor" models, either. I've seen bare 2.5" drives dropped onto a hard floor (non-operating) and they run fine afterwards. In both cases, we're talking about a lot more than six inches.

    Also, was your test dropping a bare 2.5" drive? Or was it installed in a notebook? The notebook casing offers a good deal of shock protection, and it's highly improbable that anyone would drop a running, bare, 2.5" drive onto concrete! ;)
     
  29. wearetheborg

    wearetheborg Notebook Virtuoso

    Reputations:
    1,282
    Messages:
    3,122
    Likes Received:
    0
    Trophy Points:
    105
    It was not 6 years ago, and it was around 350 G's.

    You are absolutely correct that the notebook casing reduces the shock.
    Everything depends on the deceleration time.
    I just wanted to point out that 350 G's is not a big amount.