Btrfs Has Many Nice Improvements, Better Performance With Linux 5.11

  • #51
    Originally posted by nranger View Post

    Yes, you are missing something. At no point did I say anything about overclocking a server.
    You wrote "The only btrfs volumes I've had fail outright were a RAID0 rootfs on a gaming machine a few years ago, and the real culprit was probably an overly aggressive RAM overclock."

    Overclocking and RAM issues are of course not good for any filesystem.



    • #52
      Originally posted by Volta View Post

      It depends what someone wants. I only run one VM for some basic things. I don't want it to stress my SSD, and I expect sane performance from it. That is also the default I expect from Fedora. For more serious tasks I would share your needs.
      As long as it is clear and the user can change the settings, it shouldn't matter. IMHO defaults should be the safer option, because a lot of people don't know better.



      • #53
        Originally posted by S.Pam View Post

        As long as it is clear and the user can change the settings, it shouldn't matter. IMHO defaults should be the safer option, because a lot of people don't know better.
        One more thing. Isn't a qcow2 image enough?

        Raw vs Qcow2: Qemu/KVM provides support for various image formats. The two major disk image formats, widely recommended and used, are raw and qcow2.


        qcow2 is a copy-on-write disk image format in which a file is composed of constant-size units called clusters. A cluster holds both data and image metadata.
        Maybe we don't need COW on qcow?
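
        That's the usual reasoning behind the common workaround on btrfs hosts: keep qcow2 for its features but mark the image file NOCOW so the two CoW layers don't stack. None of this is from the thread, just an illustration: a minimal sketch setting the same flag chattr +C sets, programmatically. The flag only takes effect while the file is still empty, and on btrfs it also disables data checksumming for that file.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <new-image-file>\n", argv[0]);
                return 1;
            }

            /* O_EXCL: NOCOW only sticks while the file is still empty,
             * so create it fresh before qemu-img writes any data to it. */
            int fd = open(argv[1], O_RDWR | O_CREAT | O_EXCL, 0644);
            if (fd < 0) { perror("open"); return 1; }

            int flags = 0;
            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("getflags"); return 1; }
            flags |= FS_NOCOW_FL;                      /* the bit behind chattr +C */
            if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("setflags"); return 1; }

            close(fd);
            return 0;
        }

        In practice most people just run chattr +C on the images directory so new files inherit the flag.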



        • #54
          Originally posted by Volta View Post

          One more thing. Isn't a qcow2 image enough?

          Raw vs Qcow2: Qemu/KVM provides support for various image formats. The two major disk image formats, widely recommended and used, are raw and qcow2.


          Maybe we don't need COW on qcow?
          Thanks for the link. It's an interesting idea. Does qcow2 provide data integrity checks with checksums?



          • #55
            Originally posted by S.Pam View Post
            Thanks for the link. It's an interesting idea. Does qcow2 provide data integrity checks with checksums?


            It provides consistency checks on write, but I'm not sure if it has btrfs-like integrity checks. However, here's an interesting video that may hold an answer:

            Full backups of large storage devices are expensive, slow, and waste a lot of space needlessly by copying sectors that have not changed over and over again. ...
            Last edited by Volta; 16 December 2020, 10:51 AM.
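
            As far as I know, qcow2's checks are metadata-level: qemu-img check validates cluster refcounts and the L1/L2 mapping tables and looks for leaked clusters, but guest data clusters carry no per-block checksums the way btrfs data does. Purely for illustration, a tiny sketch that reads the format's documented first eight bytes (magic plus big-endian version); the file name is whatever you pass in:

            #include <stdint.h>
            #include <stdio.h>

            int main(int argc, char **argv)
            {
                if (argc != 2) {
                    fprintf(stderr, "usage: %s <image.qcow2>\n", argv[0]);
                    return 1;
                }

                FILE *f = fopen(argv[1], "rb");
                if (!f) { perror("fopen"); return 1; }

                uint8_t hdr[8];
                if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) {
                    fprintf(stderr, "short read\n");
                    fclose(f);
                    return 1;
                }
                fclose(f);

                /* Every qcow2 file starts with the magic "QFI\xfb" followed
                 * by a big-endian format version (2 or 3). */
                if (hdr[0] != 'Q' || hdr[1] != 'F' || hdr[2] != 'I' || hdr[3] != 0xfb) {
                    fprintf(stderr, "not a qcow2 image\n");
                    return 1;
                }
                uint32_t version = ((uint32_t)hdr[4] << 24) | (hdr[5] << 16) |
                                   (hdr[6] << 8) | hdr[7];
                printf("qcow2 version %u\n", version);
                return 0;
            }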



            • #56
              Originally posted by S.Pam View Post

              You wrote "The only btrfs volumes I've had fail outright were a RAID0 rootfs on a gaming machine a few years ago, and the real culprit was probably an overly aggressive RAM overclock."

              Overclocking and RAM issues are of course not good for any filesystem.
              Right, I've had ext4, FAT, and NTFS filesystems fail too when the underlying hardware misbehaved. But that particular post by mppix confused an overclocked RAID0 on a gaming PC with my 6+ year old btrfs RAID5, which runs with stock RAM on a media server and has balanced and scrubbed itself regularly many times without issue.



              • #57
                Originally posted by S.Pam View Post

                You wrote "The only btrfs volumes I've had fail outright were a RAID0 rootfs on a gaming machine a few years ago, and the real culprit was probably an overly aggressive RAM overclock."

                Overclocking and RAM issues are of course not good for any filesystem.
                Given the context, while he didn't explicitly state TWO machines, I assumed he meant a file server utilising BTRFS, and a separate gaming rig where he overclocked the hardware and also had a BTRFS filesystem.



                Anyway, as many mentioned above, with solid reasoning, RAID5/6 just doesn't make much sense to me any more. I have masses of video footage to store, and having tested that RAID level on and off, I wasn't happy with the results or with the ever-present fear of impending doom hanging over said FS. For my case, that shitty feeling is more than enough reason not to use it.

                RAID10 seems to me the oldest, simplest and most reliable method, with actually decent performance when needed. I found that low-end devices just get swamped by checksumming, increasing risk via stress and heat, and taking longer to perform a task increases risk again. I archive onto the old drives (detached and stored) as they're replaced, because I don't really have any other options.



                • #58
                  Originally posted by cynic View Post
                  It's not only thrashing.
                  More fragmentation => more metadata, and this may hurt performance on SSDs as well.
                  Fragmentation is no issue on SSDs performance-wise, since there are no actual magnetic heads reading data and thus no seek-time penalty.



                  • #59
                    Originally posted by aht0 View Post
                    Fragmentation is no issue on SSDs performance-wise, since there are no actual magnetic heads reading data and thus no seek-time penalty.
                    There is a massive performance difference on SSDs too. Just look at any benchmark comparing random r/w with sequential. It's only that on spinning HDDs the difference is even greater.
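
                    To put a rough number on that claim yourself, a sketch (not from the thread) that times the same count of 4 KiB reads issued sequentially versus at random offsets over a large file you supply; O_DIRECT bypasses the page cache so the device itself is measured:

                    #define _GNU_SOURCE             /* for O_DIRECT */
                    #include <fcntl.h>
                    #include <stdio.h>
                    #include <stdlib.h>
                    #include <time.h>
                    #include <unistd.h>

                    #define BLK  4096               /* bytes per read */
                    #define NREQ 8192               /* reads per pass */

                    static double timed_pass(int fd, off_t nblocks, int randomize)
                    {
                        void *buf;
                        /* O_DIRECT requires an aligned buffer */
                        if (posix_memalign(&buf, BLK, BLK) != 0) return -1.0;

                        struct timespec t0, t1;
                        clock_gettime(CLOCK_MONOTONIC, &t0);
                        for (int i = 0; i < NREQ; i++) {
                            off_t blk = randomize ? rand() % nblocks : i % nblocks;
                            if (pread(fd, buf, BLK, blk * (off_t)BLK) < 0) {
                                perror("pread");
                                break;
                            }
                        }
                        clock_gettime(CLOCK_MONOTONIC, &t1);
                        free(buf);
                        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                    }

                    int main(int argc, char **argv)
                    {
                        if (argc != 2) { fprintf(stderr, "usage: %s <big-file>\n", argv[0]); return 1; }

                        int fd = open(argv[1], O_RDONLY | O_DIRECT);  /* skip the page cache */
                        if (fd < 0) { perror("open"); return 1; }

                        off_t nblocks = lseek(fd, 0, SEEK_END) / BLK;
                        if (nblocks < NREQ) { fprintf(stderr, "file too small\n"); return 1; }

                        printf("sequential: %.3f s\n", timed_pass(fd, nblocks, 0));
                        printf("random:     %.3f s\n", timed_pass(fd, nblocks, 1));
                        close(fd);
                        return 0;
                    }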



                    • #60
                      Originally posted by aht0 View Post
                      Fragmentation is no issue on SSDs performance-wise, since there are no actual magnetic heads reading data and thus no seek-time penalty.
                      I'll try to make it simple so that you can understand, ok?

                      more fragmentation => more metadata => bigger fs trees => more time required to traverse the trees + more lock contention => performance penalty.

                      As said, you don't have to pay for mechanical delay; still, fragmentation causes performance penalties on SSDs too.
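
                      That chain is easy to see in practice: every extent a fragmented file gains is another record in the filesystem's extent metadata. As an illustration (not from the thread), a small sketch counting a file's extents through the FIEMAP ioctl, the same interface the filefrag tool uses:

                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <string.h>
                      #include <sys/ioctl.h>
                      #include <unistd.h>
                      #include <linux/fs.h>
                      #include <linux/fiemap.h>

                      int main(int argc, char **argv)
                      {
                          if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

                          int fd = open(argv[1], O_RDONLY);
                          if (fd < 0) { perror("open"); return 1; }

                          /* With fm_extent_count == 0 the kernel returns only the number
                           * of extents in fm_mapped_extents, copying no extent records. */
                          struct fiemap fm;
                          memset(&fm, 0, sizeof(fm));
                          fm.fm_length = FIEMAP_MAX_OFFSET;

                          if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) { perror("fiemap"); return 1; }
                          printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);

                          close(fd);
                          return 0;
                      }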

