The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    ASSD and CDM are *not* incompressible

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by T1mur, Jul 18, 2011.

  1. T1mur

    T1mur Notebook Guru

    Reputations:
    0
    Messages:
    55
    Likes Received:
    3
    Trophy Points:
    15
    I just compressed the ASSD 1 GB test file down to 16.3 MB, which is a factor of 0.016, i.e. 1.6%!

    While most compression methods fail on this test file, LZMA obviously does a pretty good job. It needs a dictionary size of exactly 16 MB for the ASSD file (= 192 MB of RAM for compression): anything bigger won't compress the file any further than the 16 MB result, and anything smaller won't compress it at all. So ASSD's "random" data fits exactly into those 16 MB.

    Furthermore, I compressed a CDM 2 GB test file down to 1.3 MB, which is a factor of 0.0006, i.e. 0.06%!

    CDM's test file is easier to compress for other algorithms, too. While WinRAR fails to get anything out of the ASSD file, it has no problem whatsoever with the CDM one (less than 2.9 MB for the 2 GB test file).

    LZMA is public domain, btw, but it's also somewhat "demanding", i.e. slower than other algorithms, so it may or may not be what SandForce uses. But even if not, how do we know that SandForce (now or in future revisions) really doesn't manage to compress the ASSD and CDM test files during benchmarks?
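
    If anyone wants to reproduce this, here is a minimal sketch using Python's lzma module (the file name is just a placeholder; the 16 MB dictionary is the setting described above):

        # Minimal sketch: measure how well LZMA compresses a benchmark test file
        # at a given dictionary size. The path passed in is only a placeholder.
        import lzma, os

        def lzma_ratio(path, dict_size=16 * 1024 * 1024, chunk=1 << 20):
            # LZMA2 filter chain with an explicit dictionary size
            filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
            comp = lzma.LZMACompressor(format=lzma.FORMAT_XZ, filters=filters)
            out = 0
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    out += len(comp.compress(block))
            out += len(comp.flush())
            return out / os.path.getsize(path)

        # e.g. print(lzma_ratio("assd_testfile.bin"))  # placeholder file name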
     
  2. chimpanzee

    chimpanzee Notebook Virtuoso

    Reputations:
    683
    Messages:
    2,561
    Likes Received:
    0
    Trophy Points:
    55
    Because the controller has no concept of a 'file', it can at best compress a contiguous block sent to it (and even that is doubtful). In other words, if it receives just a single 512-byte sector, it has to make a split-second decision: should it wait for more data, or just compress what it has?

    Real-time compression is very different from what you are doing (there are lots of constraints there). When we say 'incompressible', it is in the context of whether the controller can compress the data given its constraints.
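
    A toy illustration of that constraint, with Python's lzma standing in for whatever SandForce actually does (their real-time algorithm isn't public, so this is only an analogy): a compressor that sees the whole stream can exploit a repeating 1 MB block, but one that only ever sees a single 4 KB write cannot.

        import os, lzma

        block = os.urandom(1024 * 1024)          # 1 MB of random data
        stream = block * 16                      # repeated, like a CDM-style test file

        # Whole stream: the dictionary covers the repeats, so output is ~1/16 of the input.
        whole = lzma.compress(stream, preset=2)
        print("whole stream     :", len(whole) / len(stream))

        # A single 4 KB "write": random on its own, so it doesn't compress at all
        # (the ratio comes out slightly above 1.0 because of container overhead).
        one_write = stream[:4096]
        print("single 4 KB write:", len(lzma.compress(one_write, preset=2)) / 4096)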
     
  3. T1mur

    T1mur Notebook Guru

    Reputations:
    0
    Messages:
    55
    Likes Received:
    3
    Trophy Points:
    15
    The controller likely has an internal buffer, which in turn dictates its dictionary size. So the more internal RAM SandForce puts into the controllers (and the faster the chip is clocked), the more it can compress.

    It turns out that ASSD likely uses a 16 MB block of random data that then repeats itself over and over again; CDM seems to use a 1 MB block.
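
    Anyone who wants to check the repeating-block theory themselves can do it with a few lines of Python (the file name below is just a placeholder; point it at whichever test file the benchmark leaves on disk):

        # Minimal sketch: find the period at which a benchmark's test file repeats
        # by comparing the first candidate-sized block against the one right after it.
        def repeat_period_mb(path, candidates=(1, 2, 4, 8, 16, 32)):
            with open(path, "rb") as f:
                data = f.read(64 * 1024 * 1024)            # first 64 MB is enough
            for mb in candidates:
                n = mb * 1024 * 1024
                if len(data) >= 2 * n and data[:n] == data[n:2 * n]:
                    return mb                               # data repeats every <mb> MB
            return None                                     # no repeat found in this range

        # e.g. print(repeat_period_mb("cdm_testfile.tmp"))  # placeholder file name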
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yet another loophole that SF controllers will be programmed to look for. Sigh. :)
     
  5. chimpanzee

    chimpanzee Notebook Virtuoso

    Reputations:
    683
    Messages:
    2,561
    Likes Received:
    0
    Trophy Points:
    55
    And that's why they are not compressible by SF: there is no way it can wait for those 'repeats'. It has nothing to do with the internal buffer; it's about the point at which compression must start. CPU clock plays a role, too. And the SF controller is, if I remember correctly, a MIPS-based system-on-a-chip with very limited embedded memory.
     
  6. Peon

    Peon Notebook Virtuoso

    Reputations:
    406
    Messages:
    2,007
    Likes Received:
    128
    Trophy Points:
    81
  7. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Peon, very good linking of separate threads. :)

    I do selective compression (on folders) and the systems run smoother for me. Whole-drive compression did not show a difference, btw.

    So, with a ~16 MB file read in as a 1 GB file and a 1.3 MB file read in as a 2 GB file, that really explains not only the read speeds CDM reports in the other thread you link, but also my perception that specifically compressed folders make the system 'smoother', in my use.
     
  8. T1mur

    T1mur Notebook Guru

    Reputations:
    0
    Messages:
    55
    Likes Received:
    3
    Trophy Points:
    15
    The high CDM read speeds in the NTFS compression thread are *not* related to this. NTFS does not compress the CDM test file! The reason for the high read numbers with NTFS compression enabled is that the reads get *cached*, even when the benchmark tries to disable caching.
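
    You can check that for yourself on Windows by comparing the file's logical size with what NTFS actually stores on disk. Here's a rough Python/ctypes sketch (the path is just a placeholder); if the two numbers match, NTFS compression isn't shrinking the file.

        import ctypes, os

        def size_on_disk(path):
            # GetCompressedFileSizeW returns the bytes actually allocated on disk,
            # which is what shrinks when NTFS compression has any effect.
            kernel32 = ctypes.windll.kernel32
            kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong
            high = ctypes.c_ulong(0)
            low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
            return (high.value << 32) + low

        path = r"C:\cdm_testfile.tmp"            # placeholder path to the CDM test file
        print("logical size:", os.path.getsize(path))
        print("size on disk:", size_on_disk(path))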