HDD/SSD Performance With MDADM RAID, BCache On Linux 4.14

  • ipso
    replied
    If you don't tune bcache, the performance is typically horrible... It was designed when SSDs were in their infancy and significantly slower, so it has all kinds of knobs and monitors to disable itself and just pass I/O through to the backing device. For example, if it detects sequential I/O it bypasses the cache, and if it sees more than a configured amount of latency to the cache device it also bypasses the cache.

    So if not properly tuned it will most likely just add overhead and be slower in most cases.
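
    As a rough illustration of what that tuning involves, these are the kinds of sysfs knobs meant here (a minimal sketch; the /dev/bcache0 name and the <cache-set-uuid> placeholder are assumptions, adjust for your own setup):

        # Don't bypass the cache for sequential I/O (bcache skips the cache above a cutoff by default)
        echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

        # Don't bypass the cache when it looks congested (latency thresholds, in microseconds)
        echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
        echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us

        # Optionally switch from the default writethrough mode to writeback
        echo writeback > /sys/block/bcache0/bcache/cache_mode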

  • suberimakuri
    replied
    The performance hit of RAID1 was surprising... and disappointing.

  • torsionbar28
    replied
    Originally posted by kravemir
    I'm thinking about 1TB HDD + 128GB SSD cache setup for laptop usage.
    Don't bother... just buy a 1TB SSD and be done with it.

  • caligula
    replied
    I really hope some bcache expert can comment on these results. They look really disappointing.

  • vimja
    replied
    I have another idea for a comparative benchmark: mdadm RAID with an SSD for caching vs mdadm RAID with an SSD for LVM thin-provisioning metadata volume.

    LVM thin-provisioning has been around for some time now. The documentation mentions that it is possible to have the metadata volume for an LVM thin-pool on a separate disk. The documentation actually suggests you do that for better performance. However, I have not been able to find *any* benchmarks on how significant these performance benefits might be.

    Now, if I have a large RAID5 or RAID6 array consisting of SSDs or HDDs and then a (significantly faster) SSD for caching, what should I do?

    Should I use the SSD as a cache for mdadm with bcache, or should I do thin provisioned LVM with the metadata on the SSD?

    This question, unfortunately, is a lot deeper. Having the SSD serve as a cache for mdadm allows me to use a filesystem directly on top of that, without the additional layer of LVM. However, for various reasons I might want to use LVM anyway. Still, there are more possibilities there. The mdadm-plus-cache setup also enables me to use non-thin-provisioned LVM. Maybe that performs better?

    Also, when storing the metadata on a separate disk outside the RAID, I probably want to have at least two SSDs in a RAID1 configuration for that. But since the max size for an LVM-thin-pool metadata volume is 16 GiB, I can buy very small SSDs.

    So benchmarks should probably be:

    - mdadm RAID 5/6 plus mdadm RAID1 (SSDs) with LVM thin provisioning, the metadata volume being on the RAID1
    - mdadm RAID 5/6 with an SSD for caching, the FS directly on top of the RAID
    - mdadm RAID 5/6 with an SSD for caching, traditional LVM on top of the RAID and then the FS on top of the LVM
    - mdadm RAID 5/6 with an SSD for caching, LVM thin provisioning on top of the RAID and then the FS on top of the LVM

    And for bonus:
    - mdadm RAID 5/6 with an SSD for caching plus RAID1 (SSDs) with LVM thin provisioning, the metadata volume being on the RAID1
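
    For the metadata-on-SSD variants above, the setup would look roughly like this (a sketch following the pattern in lvmthin(7); the device names, VG name, and sizes are made up for illustration):

        # Assumed layout: /dev/md0 = mdadm RAID5/6 data array, /dev/md1 = small mdadm RAID1 of SSDs
        pvcreate /dev/md0 /dev/md1
        vgcreate vg0 /dev/md0 /dev/md1

        # Data LV on the HDD array, metadata LV on the SSD mirror (well under the ~16 GiB metadata limit)
        lvcreate -n pool0 -l 95%PVS vg0 /dev/md0
        lvcreate -n pool0meta -L 8G vg0 /dev/md1

        # Combine them into a thin pool, then create thin volumes from it
        lvconvert --type thin-pool --poolmetadata vg0/pool0meta vg0/pool0
        lvcreate -V 500G --thinpool vg0/pool0 -n data vg0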

  • tildearrow
    replied
    Typo:

    Originally posted by phoronix
    BlogBench ran into problems with BCache's wirteback mode.

  • nomadewolf
    replied
    For a moment, I thought this was a test of bcachefs... lel

  • Guest
    Guest replied
    Michael, these benchmarks are really useless for average laptop/desktop users, because desktop/laptop users don't need raw HDD throughput but fast responsiveness while performing tasks that operate repeatedly on a small amount of data (<60 GB). All the remaining data is mostly archives of old projects that haven't been opened in a long time, videos, photos, documents, and so on...

    I'm thinking about 1TB HDD + 128GB SSD cache setup for laptop usage. I only need the computer to start fast, applications to open fast, and small projects to build quickly. Most of these operations are based on reads of many small files (source files, application binaries, OS libraries and startup). And the complete use case won't exceed 60GB of read data over a week, and 1GB of new data (well, if I don't count temporary files, which get repeatedly removed).

    For this usage, bcache (in writeback mode) should eliminate the problem of HDD seek times, shouldn't it?
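
    For reference, such a setup is only a couple of commands with bcache-tools (a sketch; /dev/sda as the HDD and /dev/sdb as the SSD are hypothetical names, so double-check before running anything destructive):

        # Assumed devices: /dev/sda = 1TB HDD (backing device), /dev/sdb = 128GB SSD (cache device)
        make-bcache -B /dev/sda -C /dev/sdb

        # Enable writeback so small random writes land on the SSD first and are flushed to the HDD later
        echo writeback > /sys/block/bcache0/bcache/cache_mode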

    Could you just benchmark common tasks? For example, the times for the following actions: repeated computer startup, repeated IDE startup, repeated clean+rebuild of a regular-sized project (not a millions-of-LoC kernel), ...

  • phred14
    replied
    Originally posted by uentity
    I think it would be interesting to test one more option of having an external EXT4 journal on an SSD. And how it performs with various journal sizes.
    I remember reading about a year or so ago - somewhere, I forget where - that using an SSD as the external journal for ext4 on a spinning disk gave much better performance than bcache. It would be interesting to see how this affects the lifetime of the SSD, since it puts the SSD into a write-heavy situation. It might also be interesting to split the SSD, using some of it for an external journal and some for writearound-mode bcache to help reads.
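
    In case anyone wants to try it, the external-journal setup is just two mkfs steps (a sketch; /dev/md0 for the array and /dev/sdb1 for the SSD partition are assumed names, and the filesystem and journal device must use the same block size):

        # Create a dedicated journal device on the SSD partition (assumed /dev/sdb1)
        mke2fs -b 4096 -O journal_dev /dev/sdb1

        # Create ext4 on the array (assumed /dev/md0), pointing it at the external journal
        mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/md0

        # For comparison, an internal journal of a specific size (in megabytes) can be requested with:
        mkfs.ext4 -J size=1024 /dev/md0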

  • uentity
    replied
    I think it would be interesting to test one more option of having an external EXT4 journal on an SSD, and how it performs depending on the journal size.
