Testing Out Btrfs In Ubuntu 10.10


  • cl333r
    replied
    I remember Btrfs being abnormally slow with databases, and such tests are not present in this benchmark.
    To see what I'm talking about, have a look at the previous Btrfs benchmarks.



  • Drago
    replied
    Originally posted by kebabbert View Post
    Do you really expect ZFS development to freeze in time until Btrfs plays catch-up? ZFS deduplication was added just recently. I wonder what functionality will be added in a couple of years.

    BTW, it takes decades to iron out all the bugs in a file system. It takes at least five years after a v1.0 announcement before anyone uses it in production. ZFS was officially announced after being developed in secrecy, and even then it took several years before it was let into production. When Btrfs reaches v1.0, it will take several years before anyone trusts it in production.

    As someone said, "file systems should not be sexy; they should be boring, trusted technology" - implying that he would not let ZFS into his computer halls before at least ten years had passed and ZFS had become mature enough.
    Well, file systems are not bottles of wine that sit in a corner and get better with time. They have to be used to mature...



  • kebabbert
    replied
    Originally posted by waucka View Post
    Btrfs, once complete, should do pretty much everything that ZFS does and some things that ZFS doesn't.
    Do you really expect ZFS development to freeze in time until Btrfs plays catch-up? ZFS deduplication was added just recently. I wonder what functionality will be added in a couple of years.

    BTW, it takes decades to iron out all the bugs in a file system. It takes at least five years after a v1.0 announcement before anyone uses it in production. ZFS was officially announced after being developed in secrecy, and even then it took several years before it was let into production. When Btrfs reaches v1.0, it will take several years before anyone trusts it in production.

    As someone said, "file systems should not be sexy; they should be boring, trusted technology" - implying that he would not let ZFS into his computer halls before at least ten years had passed and ZFS had become mature enough.



  • brent
    replied
    Without CPU usage numbers, these benchmarks are quite useless.
    It would also be very interesting to know what the test data used by the benchmark programs actually looks like. If it is just zeros or an often-repeating pattern, it would yield unrealistically good results.
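    To make this concrete, here is a quick sanity check in plain Python with zlib (the same algorithm btrfs compression uses), comparing all-zero data against random data:

        import os
        import zlib

        size = 1024 * 1024  # 1 MiB of test data

        for name, data in [("zeros", b"\x00" * size), ("random", os.urandom(size))]:
            packed = zlib.compress(data)
            print("%s: %d -> %d bytes (ratio %.3f)"
                  % (name, len(data), len(packed), float(len(packed)) / len(data)))

    The zero buffer shrinks to a fraction of a percent of its size, while the random buffer doesn't shrink at all - a benchmark writing the former tells you very little about the latter.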



  • ChrisXY
    replied
    Originally posted by MrEcho View Post
    Think about Atom CPUs and an FS that uses compression - ouch.
    I use btrfs + compression on my Eee PC, because its hard disk is REALLY slow.

    It feels like the overall performance improved a bit, but I didn't really measure it.
    Some operations, like updating many packages, are slow, but that's acceptable for me.
    I mean, what are you doing on an Atom PC that needs so much disk activity?

    I mainly use Firefox, Thunderbird, Evince, and sometimes Eclipse (yes, it is not very good on the little screen) or Geany. As far as I can tell, these applications actually do run faster.
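    If you want to verify that compression is actually active on a mount, the options listed in /proc/mounts usually show it (depending on kernel version). A small Python check - the mount point path is just an example:

        MOUNT_POINT = "/home"  # example path - use your own btrfs mount point

        with open("/proc/mounts") as f:
            for line in f:
                fields = line.split()
                if fields[1] == MOUNT_POINT and fields[2] == "btrfs":
                    opts = fields[3].split(",")
                    on = any(o.startswith("compress") for o in opts)
                    print("compress option:", "on" if on else "off")
                    break
            else:
                print("no btrfs mount found at", MOUNT_POINT)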



  • Jonno
    replied
    Originally posted by smitty3268 View Post
    Actually, it seems like they could probably make the file system smart enough to heuristically stop compressing files that are already compressed (like video) in order to avoid the performance penalty. I don't have any idea if that's already being done or not.
    It is, but not very intelligently. The first few blocks are compressed, and the result is used to decide whether the entire file should be compressed. This doesn't work very well for files that contain both compressed and uncompressed sections, such as disk images and databases...
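    The real check lives in the btrfs kernel code (in C), but the idea is roughly this - a Python sketch, where the 64 KiB prefix size and 90% threshold are my illustrative guesses, not the kernel's actual values:

        import zlib

        PREFIX_SIZE = 64 * 1024  # illustrative guess, not the kernel's value
        THRESHOLD = 0.9          # illustrative: skip if we save less than 10%

        def looks_compressible(path):
            """Compress only the first few blocks and judge the whole file by them."""
            with open(path, "rb") as f:
                prefix = f.read(PREFIX_SIZE)
            if not prefix:
                return False
            return len(zlib.compress(prefix)) < THRESHOLD * len(prefix)

    A prefix-only test like this mispredicts exactly the mixed files mentioned above, because the first blocks may not be representative of the rest.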



  • drago01
    replied
    Be careful when testing compression: if your benchmark just writes out zeros, the files will of course compress well and show a huge performance gain, which isn't what you would see in a real-world test.



  • smitty3268
    replied
    Originally posted by jetpeach View Post
    Hi, I'm curious about the transparent compression - I searched Google but didn't find a lot of useful information on it (just that it is zlib compression). I'm curious, does it actually make the files stored on the hard drive smaller?
    Yes. Although the main reason to do this is to reduce the amount of data that has to be read from the disk, and therefore the number of seeks, speeding up access by relying on a fast CPU rather than a slow HDD to do the majority of the work.

    And if so, then wouldn't its performance be highly dependent on the type of file and how much it can be compressed? (Like media files performing badly, while text files do well?)
    Yes. Especially when it comes to artificial benchmarks, since they might just write all zeroes or ones out to the HDD, which is more compressible than anything you'd run into in real life. Actually, it seems like they could probably make the file system smart enough to heuristically stop compressing files that are already compressed (like video) in order to avoid the performance penalty. I don't have any idea if that's already being done or not.
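    To put rough numbers on the CPU-vs-seeks trade-off, here is a back-of-envelope model in Python; every figure is an assumption I made up for illustration, not a measurement:

        FILE_MB = 100
        HDD_MB_PER_S = 50    # assumed sequential read speed of a slow HDD
        ZLIB_MB_PER_S = 200  # assumed zlib decompression speed on the CPU
        RATIO = 0.5          # assumed 2:1 compression (e.g. text files)

        plain = FILE_MB / float(HDD_MB_PER_S)
        packed = FILE_MB * RATIO / HDD_MB_PER_S + FILE_MB / float(ZLIB_MB_PER_S)
        print("uncompressed read: %.1f s" % plain)   # 2.0 s
        print("compressed read:   %.1f s" % packed)  # 1.5 s

    With a media file that barely compresses (ratio near 1.0), the disk time stays the same but the decompression cost remains, so you lose - which is why the per-file-type behavior matters.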



  • smitty3268
    replied
    Originally posted by Blue Beard View Post
    "A short history of btrfs" (LWN.net, July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/
    That's a great article on btrfs, everyone should read it.

    btrfs (B-tree file system) really isn't being created to increase performance over existing file systems; the idea is to get a bunch of really cool new features and make it all optimized enough to keep it from slowing down.



  • Blue Beard
    replied
    Btrfs provides the foundation for many useful features.

    Snapshots are point-in-time data captures. Most people would recognize system rollback, which depends on snapshots.

    Backup is the feature most people neglect until the data is lost. With snapshots, a backup is almost instantaneous.

    When combined with distributed data storage systems like Ceph, you get replication, protection, and performance.
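    For example, taking a snapshot is a single near-instant command ("btrfs subvolume snapshot" from btrfs-progs). A minimal Python wrapper, with hypothetical paths, might look like this:

        import subprocess
        import time

        SOURCE = "/mnt/data"  # hypothetical btrfs subvolume
        # assumes the .snapshots directory already exists; needs root
        DEST = "/mnt/data/.snapshots/" + time.strftime("%Y%m%d-%H%M%S")

        subprocess.check_call(["btrfs", "subvolume", "snapshot", SOURCE, DEST])
        print("snapshot created at", DEST)

    Because the snapshot is copy-on-write metadata, it completes almost instantly no matter how much data the subvolume holds. Keep in mind that a snapshot on the same disk protects against mistakes, not against disk failure.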

