
Linux 5.4 EXT4 / XFS / Btrfs RAID Performance On Four HDDs

  • #21
    You're right there. How about testing both options, then? I love the flexibility of Btrfs: the snapshots and the whole volume management side, but also the ability to hot add/remove disks live without any downtime or unmounting.
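    For the record, something like this is what I mean (a rough sketch; /dev/sdX, /dev/sdY and /mnt/pool are made-up names, adjust before running):

    Code:
    # add a new disk to a mounted Btrfs filesystem, no unmount needed
    sudo btrfs device add /dev/sdX /mnt/pool
    # optionally rebalance so existing data is spread onto the new disk
    sudo btrfs balance start /mnt/pool
    # remove a disk; Btrfs migrates its data to the remaining disks first
    sudo btrfs device remove /dev/sdY /mnt/pool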



    • #22
      It is rather strange to see some read tests with a 10x performance loss going from RAID 1 to RAID 10.
      I don't see how any reads could be 10x slower in RAID 10; it seems to me that in the worst case RAID 10 is as slow as RAID 1, and in the best case it is 2x faster, no?



      • #23
        Originally posted by Spam View Post
        Pretty good improvements on Btrfs's part compared to a year ago.

        I think you should run these benchmarks with noatime as default though. atime hurts performance, especially for COW filesystems, and is not really used by anything these days, unless you use mutt.
        I really wonder if ANYONE uses atime anyway, apart from hardcore mutt users, but I thought it was the default mount option on all distros by now; the atime cost is really too much for what it provides in return.
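        For anyone who wants to try it on their own box, roughly (just a sketch; /data is a made-up mount point):

        Code:
        # remount an existing filesystem with noatime for a quick test
        sudo mount -o remount,noatime /data
        # or make it permanent in /etc/fstab, e.g.:
        # UUID=xxxx  /data  btrfs  defaults,noatime  0  0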



        • #24
          Originally posted by gadnet View Post
          It is rather strange to see some read tests with a 10x performance loss going from RAID 1 to RAID 10.
          I don't see how any reads could be 10x slower in RAID 10; it seems to me that in the worst case RAID 10 is as slow as RAID 1, and in the best case it is 2x faster, no?
          Well, if you have a stripe that is 64 bytes long and you have to interleave it across 4 disks, you get 16 bytes per disk.
          Now if you are reading tons of files that are, say, less than 16 bytes long, only one disk needs to be read, but you still have to read back the entire stripe, which leaves the disks busy reading the stripe (RAID 10) instead of being free for parallel reads from other programs/threads (RAID 1).

          Also, since physical addresses do not necessarily match logical ones, reading the stripe can never be faster than the slowest disk (in number of seeks, or the longest seek). So if the stripe data is scattered on one disk, the other disks have to wait for it.

          Whether this explains these benchmarks can probably be discussed until there is no free hard disk space left in the world to hold the transcript of the discussion, but your setup needs to be designed for the workload you want, so it all depends on use.
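          To make the "depends on the workload" point concrete, a quick comparison could look like this (sketch only; /mnt/raid1 and /mnt/raid10 are placeholder mount points for the two arrays, and the fio parameters would need tuning for real testing):

          Code:
          # many parallel small random reads - the case where RAID1 mirrors can
          # serve independent requests while RAID10 keeps whole stripes busy
          fio --name=smallread --directory=/mnt/raid1 --rw=randread --bs=4k \
              --size=1G --numjobs=8 --group_reporting
          fio --name=smallread --directory=/mnt/raid10 --rw=randread --bs=4k \
              --size=1G --numjobs=8 --group_reporting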

          PS! Regarding your comment about atime: if I remember correctly, lazytime (which is POSIX compliant) was supposed to be implemented at the VFS layer, but so far it seems to be pretty filesystem specific. Btrfs, for example, has relatime by default, and I think I remember a discussion some years ago where it was decided that Btrfs would not benefit from lazytime compared to relatime - that may have changed, of course...
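          If anyone wants to check what they are actually running with, roughly (a sketch; /mnt/btrfs is a made-up mount point):

          Code:
          # show the atime-related options currently in effect
          findmnt -no OPTIONS /mnt/btrfs
          # try lazytime, which batches timestamp updates in memory
          sudo mount -o remount,lazytime /mnt/btrfs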



          http://www.dirtcellar.net



          • #25
            Thanks for your insight, waxhead!



            • #26
              Originally posted by loganj View Post
              I cured myself of Seagate a few years ago. I had two 3TB HDDs bought a year apart, and both crashed a year apart. I was lucky that both still had warranty. These drives were mostly for movies or storage; I mostly only read from them after both were full. I didn't even bother to delete what I had on them once they were full. It's strange to have HDD failures when 90% of the usage time was reading, and to make it clear, neither of them was used 2-3 hours/day - probably not even 1 min/week sometimes.
              I've got well over 100 Seagate drives, mostly enterprise ones (2-8TB), some over 5 years old, and I rarely have more than 2-3 drive replacements a year.

              BTW, a big chunk of them are used in a large 24x7 cluster running oVirt (a VMware ESX replacement).

              Your sample is far too small to draw any meaningful conclusions.

              Gilboa
              oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
              oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
              oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
              Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



              • #27
                Originally posted by blacknova View Post
                With 4 drives it is possible to get RAID 5; would it be possible to test that as well?
                I second that.
                At least in my experience, MDRAID5 is faster than MDRAID10.
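                For reference, setting up the two layouts to compare is simple enough with mdadm (sketch only; /dev/sdb through /dev/sde are placeholder devices, this destroys their contents, and you'd tear one array down before reusing the disks for the other):

                Code:
                # 4-disk RAID5: three disks of capacity plus rotating parity
                sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
                # 4-disk RAID10: striped mirrors, two disks of capacity
                sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde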

                Gilboa
                oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
