Fedora Logical Volume Manager Benchmarks
    Phoronix: Fedora Logical Volume Manager Benchmarks

    Last month when publishing Fedora 15 vs. Ubuntu 11.04 benchmarks in some of the disk workloads the Fedora Linux release was behind that of Ubuntu Natty Narwhal. Some users speculated in our forums that SELinux was to blame, but later tests show SELinux does not cause a huge performance impact. With Security Enhanced Linux not to blame, some wondered if Fedora's use of LVM, the Logical Volume Manager, by default was the cause.

    http://www.phoronix.com/vr.php?view=16190

  • #2
    What benefit does it bring me as a casual user to choose LVM during a system installation? I read the benchmark so I know there's a speed difference, but you also mentioned there's a higher risk of losing data. Can you please tell me why?

    I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user friendly) and fast is it?



    • #3
      Originally posted by SkyHiRider View Post
      I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user friendly) and fast is it?
      AFAIK nearly everything partition-wise with LVM using ext* requires no unmounting whatsoever, except reducing the size of a partition. The risk of losing data is pretty slim.
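      For instance, growing an ext4 filesystem on LVM can be done entirely online; a rough sketch, assuming a hypothetical volume group vg0 with a logical volume named home:

```shell
# Grow the logical volume by 10 GiB (vg0/home are made-up names)
lvextend -L +10G /dev/vg0/home

# Grow ext4 into the new space; resize2fs can do this while mounted
resize2fs /dev/vg0/home

# Shrinking is the one exception: the filesystem must be unmounted,
# checked, and shrunk before the logical volume can be reduced
# umount /home
# e2fsck -f /dev/vg0/home
# resize2fs /dev/vg0/home 20G
# lvreduce -L 20G /dev/vg0/home
```

      Newer LVM tools also accept `lvextend -r`, which runs the filesystem resize for you in one step.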



      • #4
        not using barriers = I don't care about data = fedora is unfit for every even slightly serious setup.



        • #5
          Originally posted by energyman View Post
          not using barriers = I don't care about data = fedora is unfit for every even slightly serious setup.
          Good to know. I thought Fedora had barriers enabled.



          • #6
            When the conclusion is "ext4 on LVM is faster than plain ext4, probably because ext4 on LVM doesn't enable write barriers", why isn't a second test done with plain ext4 and write barriers disabled? That would give much more usable information. Right now we don't know exactly how much overhead LVM has, and we don't know how much performance ext4 gains when write barriers are disabled.



            • #7
              Originally posted by kraftman View Post
              Good to know. I thought Fedora had barriers enabled.
              It seems that it does: http://docs.fedoraproject.org/en-US/...rieronoff.html.
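              That page covers the ext4 `barrier` mount option, which can be toggled per filesystem; a hypothetical /etc/fstab line (the device name is made up, barrier=1 is the ext4 default):

```
# barrier=1 (default) enables write barriers; barrier=0 disables them
/dev/mapper/vg_fedora-lv_root  /  ext4  defaults,barrier=1  1 1
```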



              • #8
                Originally posted by liam View Post
                I think the problem is that it only enables barriers on supported devices, and I am unsure if an LVM is treated as a "supported device".



                • #9
                  Originally posted by Xake View Post
                  I think the problem is that it only enables barriers on supported devices, and I am unsure if an LVM is treated as a "supported device".
                  Except that LVM is the default, so I HOPE it is supported



                  • #10
                    Originally posted by SkyHiRider View Post
                    What benefit does it bring me as a casual user to choose LVM during a system installation? I read the benchmark so I know there's a speed difference, but you also mentioned there's a higher risk of losing data. Can you please tell me why?

                    I also read that LVM can dynamically resize a partition if the filesystem supports it. What are the risks of losing data when doing that, and how easy (user friendly) and fast is it?
                    Nobody ever chooses LVM for speed, and you probably shouldn't either. I didn't even know it was faster, and in spite of this benchmark it probably isn't really. The benchmark result is likely an artifact of some other difference, probably the write barrier setting as speculated (what else could it possibly be?). If you've got some mountpoint where you're willing to take a slightly increased risk in exchange for performance, you can turn off write barriers anyway, even without LVM.

                    The biggest reason to use LVM is convenience and ease when moving stuff around or resizing things. No mucking around with repartitioning or having to reboot to re-read partition tables. Your system just magically stays running as though no one had ever pulled the rug out from beneath your filesystem. It's actually pretty damn cool. But it's only useful if you're anticipating changing things. If "casual user" means you're just divvying up one disk (or raided md) into a swap and root partition, then LVM might be overkill.

                    The disabled write barrier risks (or lack thereof) (or controversy about "lack thereof") (or controversy about that supposed controversy) are discussed here. IMHO if you're using a UPS you can blow it off and not worry about write barriers. You're going to lose your data to user errors or simply due to using less-than-decade-debugged filesystems like ext4 or btrfs, long before write barriers are a factor. Again, IMHO.



                    • #11
                      Bullshit.
                      With 2.6.38, 2.6.39, 3.0.0-rc:

                      Disconnect a USB device: boom, kernel panic.
                      Add a USB device: boom, kernel panic.

                      Some other reason: kernel panic.

                      Your PSU gets flaky.
                      Your mobo gets flaky.
                      Your fans clog up unnoticed.
                      Your RAM overheats.
                      Your graphics drivers lock up the system.
                      Some in-kernel driver locks up the system.

                      There are MANY reasons for a hard reboot. Power fluctuations are such a rare occurrence (in civilized countries) that they do not matter compared to kernel bugs or hardware failures.

                      Or even the occasional "oops, tripped over the cord".

                      Barriers are a must. Disabling them is a typical ext3/redhat/fedora move to blind the stupid. They want to look good in benchmarks made by people without a clue. Disabling barriers is like hitting the user in the face and telling him "hey, I don't care if you lose all your files. I want to look good in stupid benchmarks".



                      • #12
                        Originally posted by Zapitron View Post
                        The disabled write barrier risks (or lack thereof) (or controversy about "lack thereof") (or controversy about that supposed controversy) are discussed here. IMHO if you're using a UPS you can blow it off and not worry about write barriers. You're going to lose your data to user errors or simply due to using less-than-decade-debugged filesystems like ext4 or btrfs, long before write barriers are a factor. Again, IMHO.
                        Thanks for the link, it helped me understand what barriers are. But as the commit record is usually written last, when can it happen that a data block is not written but the commit is? Some kind of disk I/O error? Even if that happens, the filesystem should detect the anomaly pretty soon and fix it, so the risk is only those few seconds that the filesystem is in an inconsistent state.



                        • #13
                          It is not only the problem of an 'inconsistent' state, but also that there might be a HUGE window of a minute or more where 'you told the system to save the data and nothing happened' or 'you told the system to rename the file and it is still in progress'.

                          Lots of people have lost lots of data because of idiotic defaults that are only set that way to look good in benchmarks conducted by people who
                          a) don't have a clue or
                          b) don't touch defaults or
                          c) don't care or
                          d) all of the above



                          • #14
                            Write barriers don't directly help against data loss. They help against filesystem corruption.

                            Kernel developers used to think that write barriers are slow and it is mostly safe to disable them[1], but Chris Mason came up with a write barrier torture test[2]. It demonstrated that on certain workloads, there is a 50% chance of your filesystem ending up corrupted when write barriers are disabled and the power fails.

                            [1] http://lwn.net/Articles/283161/
                            [2] http://thread.gmane.org/gmane.comp.f...tems.ext4/6702



                            • #15
                              Originally posted by Zapitron View Post
                              IMHO if you're using a UPS you can blow it off and not worry about write barriers.
                              Unless your UPS breaks.

