Btrfs LZO Compression Performance


  • devsk
    replied
    Originally posted by crazycheese View Post
    It only needs fsck now!
    Understatement of the year by far!



  • crazycheese
    replied
    It only needs fsck now!



  • energyman
    replied
    Originally posted by mbouchar View Post
    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

    Only that your memory and CPU will be taxed more and you will have a slower computer for other stuff.
    Only that RAM is dirt cheap and CPUs are underworked almost all the time.



  • jebtang
    replied
    Me too. I think this test should also be run on some video files to see the real benefit of compression.

    I don't think a 9x iozone result can apply to the real world; a rough way to check this is sketched after the quote below.

    Originally posted by BenderRodriguez View Post
    Why do I get the feeling that zlib/lzo mode speeds up iozone and fs-mark only because the created files are empty and thus compress almost infinitely well?

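    One rough way to check this (a sketch only; the mount point and file sizes are assumptions, not from the thread) is to write an effectively incompressible file next to a highly compressible one on the compressed btrfs mount and compare the throughput dd reports:

    ```sh
    # /dev/zero compresses almost perfectly; /dev/urandom is effectively incompressible,
    # which is much closer to video or other already-compressed media.
    # (Generating urandom data can itself be slow; this is only an illustration.)
    dd if=/dev/zero    of=/mnt/btrfs/zeros.bin  bs=1M count=512 conv=fsync
    dd if=/dev/urandom of=/mnt/btrfs/random.bin bs=1M count=512 conv=fsync
    ```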


  • buzz
    replied
    I use btrfs + LZO compression on the latest Linux images for the O2 Joggler.

    Using slow USB flash devices (mine does ~9 MB/s write and ~27 MB/s read), btrfs with LZO feels significantly faster than with zlib, and uses less CPU. Not an actual benchmark, of course.



  • extofme
    replied
    Originally posted by mbouchar View Post
    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

    Only that your memory and CPU will be taxed more and you will have a slower computer for other stuff.
    it's a pretty well-established idea that on-disk compression can (and does) lead to impressive performance increases under many workloads. it's not a simple "yay" or "nay". the fact is, in the time your disk seeks once, your CPU has already burned through several million cycles ... it's like light speed vs. the fastest human vehicle -- anything you can shave off the latter is probably a win, even if it already seems "pretty fast". (a back-of-the-envelope calculation follows at the end of this post.)

    there are even several workloads that benefit from _memory_ compression ... because RAM -- the uber spaceship of 2010+ -- is still peanuts compared to the CPU. everything that isn't your CPU is a cache to your CPU; the less time it takes to get data there, the better. data locality is king.



    "Zcache doubles RAM efficiency while providing a significant performance boost on many workloads."

    both zcache and btrfs (not sure about ZFS) use LZO ... the simple truth is your CPU is a lazy bastard that spends most of its time blaming its poor efficiency on the rest of the team ;-)

    C Anthony

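    To put rough numbers on the "millions of cycles per seek" point (the clock rate and seek time here are assumptions for illustration, not figures from the thread):

    ```sh
    # a 3 GHz core runs 3,000,000 cycles per millisecond; one ~8 ms rotational seek therefore costs:
    echo $(( 3000000 * 8 ))   # prints 24000000, i.e. ~24 million cycles spent waiting on the disk
    ```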


  • mbouchar
    replied
    It's not complicated. These tests only show that the data is sometimes compressed in memory before being written to disk. This is the same thing that happens when encrypted disks show better performance than normal disks.

    Only that your memory and CPU will be taxed more and you will have a slower computer for other stuff.



  • extofme
    replied
    Originally posted by energyman View Post
    And because some files are not 'compressible', reiser4 has a simple test that is almost good enough. If it detects that the file cannot be compressed, it doesn't even try.
    btrfs does the same thing -- it's the difference between the `compress` and `compress-force` mount options.

    i am using LZO compression on an S101 netbook and an M4300 notebook with spectacular results. you also have to remember that btrfs only compresses existing data when the data is modified, and even then it only compresses the new extent ... to compress an existing disk completely you need to mount with a compression option and then initiate a rebalance (which can be done online) -- a rough sketch of the commands is at the end of this post.

    it is too bad about reiser4 ... i never used it myself but i've always read very good things about it; it's unfortunate Hans was so difficult to work with and ... well ... other things too. alas, it has no vendor to back it (to get into mainline) -- btrfs is the future here.

    C Anthony

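    A rough sketch of the commands involved, assuming a btrfs filesystem already mounted at /mnt/data (the mount point is an example, and the defragment pass is an alternative way to rewrite existing extents, not something the post mentions):

    ```sh
    # Enable LZO compression; only extents written from now on get compressed.
    mount -o remount,compress=lzo /mnt/data

    # Rewrite existing data so it gets compressed too, while the filesystem stays online:
    btrfs filesystem balance /mnt/data              # rebalance, as described in the post
    btrfs filesystem defragment -r -clzo /mnt/data  # or recompress via recursive defragment
    ```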


  • energyman
    replied
    And because some files are not 'compressible', reiser4 has a simple test that is almost good enough. If it detects that the file cannot be compressed, it doesn't even try. (A crude way to mimic that idea by hand is sketched at the end of this post.)

    I'm using an SSD for /, with /var, /tmp, and /boot on different partitions. With reiser4 I was able to store 5 GB more on an 80% full 64 GB disk compared to ext4.

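    Roughly the same idea can be mimicked by hand (a sketch only; lzop and the file path are assumptions, not anything reiser4 or btrfs actually runs): compress a small leading sample and skip the file if the sample doesn't shrink.

    ```sh
    # Compress the first 128 KiB; if the output isn't smaller, the file is probably not worth compressing.
    sample_in=131072
    sample_out=$(head -c "$sample_in" /path/to/file | lzop -c | wc -c)
    if [ "$sample_out" -lt "$sample_in" ]; then
        echo "looks compressible"
    else
        echo "skip compression"
    fi
    ```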


  • unrulycow
    replied
    SandForce SSD

    I'd be really interested in seeing how this works on an SSD using a SandForce controller. SandForce has the fastest controllers because the drive itself compresses the data. Filesystem compression may actually hurt performance on these fast SSDs.

