Where The Btrfs Performance Is At Today


  • #16
    voyager_biel: Ext4 tuned for performance vs... how do you tune btrfs? Ah hell, who needs it anyway when I have ext4 tuned for performance :P


    • #17
      Originally posted by alec View Post
      voyager_biel: Ext4 tuned for performance vs... how do you tune btrfs? Ah hell, who needs it anyway when I have ext4 tuned for performance :P
      I think btrfs is not yet as common as ext3/4 or XFS... I ran into trouble when I tried to put /root on btrfs with the GRUB boot manager... it should work with GRUB 2, but I haven't tried that yet...
      160 MB/s on an Intel X-25 SSD with tuned ext4, tested with dd on my laptop, is fast enough.
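      (For reference, a dd-style sequential-write test is easy to approximate; here is a minimal Python sketch, where the path and sizes are placeholders rather than the exact command used:)

          import os, time

          # Rough equivalent of: dd if=/dev/zero of=testfile bs=1M count=1000
          path = "testfile"              # placeholder: a file on the filesystem under test
          block = b"\0" * (1024 * 1024)  # 1 MiB block, like bs=1M
          count = 1000                   # ~1 GiB in total, like count=1000

          start = time.time()
          with open(path, "wb") as f:
              for _ in range(count):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())  # force data to disk; otherwise you mostly time the page cache
          elapsed = time.time() - start

          print(f"{count / elapsed:.0f} MB/s")  # 1 MiB blocks, so blocks/s ~ MB/s
          os.remove(path)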


      • #18
        Originally posted by kebabbert View Post
        I would much rather focus on data integrity. Is your data safe with BTRFS? ReiserFS, JFS, XFS, ext3, etc. are not.
        http://www.zdnet.com/blog/storage/ho...ta-at-risk/169

        But researchers show that ZFS is safe, in another research paper.
        ...I guess you mean data integrity in case of a power cut. On laptops you normally have a battery, so you might prefer speed :-).
        Of course, on a server data integrity is more important.


        • #19
          Originally posted by voyager_biel View Post
          I think btrfs is not yet as common as ext3/4 or XFS... I ran into trouble when I tried to put /root on btrfs with the GRUB boot manager... it should work with GRUB 2, but I haven't tried that yet...
          160 MB/s on an Intel X-25 SSD with tuned ext4, tested with dd on my laptop, is fast enough.
          "Not so common" is an understatement for a filesystem that is not available in most major Linux distributions' installers, especially since the filesystem is still marked experimental and the most current toolchain is not readily available through the normal channels.

          GRUB 1 support is quite unlikely, since official development stopped before the btrfs implementation even began, and it is now practically maintained by everybody and thus by nobody. So a different boot manager is probably called for, yes :-)

          But performance is probably the last reason for switching to btrfs, IMHO, though that still doesn't mean it's allowed to suck. The other features that a filesystem like btrfs or ZFS brings make them very interesting for your general all-round filesystem needs, and technologies like SSDs will probably help smooth over some of the performance loss that may come with those features.
          But if raw performance matters more than the features of these next-generation filesystems, then those use cases will (in the short run at least, possibly forever) call for another breed of filesystem.

          So in general the choice will remain "features" vs "performance", but getting those closer together will surely help in winning over the masses :-)


          • #20
            Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs, which is extremely flawed.


            • #21
              Originally posted by LinuxID10T View Post
              Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs, which is extremely flawed.
              Big plus to this idea. The day I need an SSD is the day I only need to store 100 GiB of data.


              • #22
                Originally posted by voyager_biel View Post
                ...I guess you mean data integrity in case of a power cut.
                I didn't understand that sentence. Could you please rephrase?


                • #23
                  Originally posted by devius View Post
                  So it's probably a good idea to stay away from BTRFS for web/email/database servers. Everything else seems fine.
                  If you want your data to be stored in RAM, then stay away from it. Btw, the Phoronix Apache benchmarks don't measure real Apache server performance.


                  • #24
                    Originally posted by kebabbert View Post
                    I would much rather focus on data integrity. Is your data safe with BTRFS? ReiserFS, JFS, XFS, ext3, etc. are not.
                    http://www.zdnet.com/blog/storage/ho...ta-at-risk/169

                    But researchers show that ZFS is safe, in another research paper.
                    The guy you linked is a damn troll. He compares Linux file systems (which are superior to what he compares them to) to Apple's old, messed-up HFS+. He praises Apple's Time Machine as if it will resolve some problems. In summary, the article is pro-Apple, anti-Linux, and a little anti-Windows. He mentions ZFS just because it will be available in OS X. Since you didn't link to a paper, as you probably never have, but to some idiot, stop spreading FUD.


                    • #25
                      Originally posted by LinuxID10T View Post
                      Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs, which is extremely flawed.
                      I already mentioned before that filesystem reviews should ALWAYS include both SSD and HDD tests, instead of what is usually done - randomly using either SSDs or HDDs, thus leaving users with the other type of drive wondering whether the results apply to them as well. The response I got was that my proposal didn't make sense because it would introduce another variable into the mix. Yeah... right. And that's supposed to be a bad thing? Having more information and a more complete review is bad?


                      • #26
                        Originally posted by mutlu_inek View Post
                        I would love some file system tests which include a) cpu usage and b) LUKS encryption.
                        Wouldn't we expect mainly some CPU overhead with LUKS, but little impact on disk performance?
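                        (One way to check would be to time the same write on a plain mount and on a LUKS-backed one; a minimal Python sketch follows, with the two mount points as assumed placeholders. Note that dm-crypt does its encryption in kernel threads, so watch the system-wide CPU load in top while it runs rather than trusting per-process numbers:)

                            import os, time

                            def timed_write(path, mib=256):
                                # Time a sequential write of `mib` MiB, fsync included.
                                block = b"\0" * (1024 * 1024)
                                start = time.time()
                                with open(path, "wb") as f:
                                    for _ in range(mib):
                                        f.write(block)
                                    os.fsync(f.fileno())
                                return time.time() - start

                            # Placeholder paths: one on a plain filesystem, one on LUKS/dm-crypt.
                            for path in ("/mnt/plain/test.bin", "/mnt/luks/test.bin"):
                                secs = timed_write(path)
                                print(f"{path}: {256 / secs:.0f} MB/s")
                                os.remove(path)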


                        • #27
                          Originally posted by kebabbert View Post
                          I didn't understand that sentence. Could you please rephrase?
                          Data integrity in case of a power outage, power cut, or blackout... because the journal is not persisted to disk. But if a write I/O corrupts your data or filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can help you get integrity back then...
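                          (This is also why applications that care about their data do an explicit fsync: until the data is forced down, a power cut can eat whatever still sits in the page cache or an unflushed journal. A minimal Python illustration of that pattern:)

                              import os

                              def persist(path, data):
                                  with open(path, "wb") as f:
                                      f.write(data)
                                      f.flush()             # push Python's buffer into the kernel
                                      os.fsync(f.fileno())  # ask the kernel to put it on stable storage
                                  # fsync the directory too, so the new entry itself survives a power cut
                                  dfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
                                  try:
                                      os.fsync(dfd)
                                  finally:
                                      os.close(dfd)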


                          • #28
                            Originally posted by kraftman View Post
                            The guy you linked is a damn troll. He compares Linux file systems
                            No, you got it wrong again. He is not comparing Linux file systems himself. On his web page he talks about some computer science researchers who compared Linux file systems in a research paper.

                            I hope you don't claim that PhD theses and research papers are "damn trolls"? If it were false and lies, that research would never have passed the PhD defense. He got his PhD title, so that research is valid. If it is not valid, then please mail his professor and point out the errors; then his PhD title will be withdrawn and he will lose his diploma. You, on the other hand, will soon earn a PhD thesis yourself if you can find errors in current research and improve on it. If you cannot point out the errors, then please be more careful before you accuse someone of trolling. As we know, you are very quick to call people trolls, even though you have admitted yourself that you have trolled before.

                            Originally posted by kraftman View Post
                            He mentions ZFS just because it will be available in OS X. Since you didn't link to a paper, as you probably never have, but to some idiot, stop spreading FUD.
                            I've told you that ZFS has also been subject to data-integrity research. ZFS detected all the artificially introduced errors, whereas the Linux filesystems did not even detect all of them - and how can errors be fixed if they are not even detected? Impossible! ZFS would even have corrected all the errors if they had used RAID; in the research they only used ZFS on a single drive, which provides no redundancy.
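                            (The detection mechanism is essentially a checksum stored with every block pointer and verified on every read; here is a toy Python sketch of the idea, not ZFS's actual implementation:)

                                import hashlib

                                store = []  # toy "disk": blocks stored with their checksums

                                def write_block(data):
                                    store.append({"data": bytearray(data),
                                                  "csum": hashlib.sha256(data).hexdigest()})
                                    return len(store) - 1

                                def read_block(idx):
                                    blk = store[idx]
                                    if hashlib.sha256(blk["data"]).hexdigest() != blk["csum"]:
                                        raise IOError(f"block {idx}: silent corruption detected")
                                    return bytes(blk["data"])

                                i = write_block(b"important data")
                                store[i]["data"][0] ^= 0x01  # flip one bit, like the injected errors
                                read_block(i)                # raises: the corruption cannot go unnoticed

                            (With redundancy, such as a mirror or raidz, a detected bad block can then be rewritten from a good copy, which is the correction half.)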

                            Here is a research paper documenting the research on ZFS. If you see some errors, please produce a paper on how to improve the research, and quite soon you too will have a PhD thesis.
                            http://www.zdnet.com/blog/storage/zf...ity-tested/811


                            • #29
                              Originally posted by voyager_biel View Post
                              Data integrity in case of a power outage, power cut, or blackout... because the journal is not persisted to disk. But if a write I/O corrupts your data or filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can help you get integrity back then...
                              With ZFS it doesn't matter too much if you cut the power. If you edit old data, all new changes are written to fresh space on disk while the old data is left intact. Lastly, the file is pointed to the new data, which is a single operation. The old data is not touched; it is still there on disk.

                              This means that either ALL changes were written to disk, or no changes were written at all. It cannot happen that only half of the changes were written while the other half got lost. No corruption. The state is always consistent.

                              If power is cut before the pointer is switched to the new data, all the old data is left intact and no corruption has occurred.
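                              (The same all-or-nothing trick exists at the application level as write-new-file-then-atomic-rename; a minimal Python sketch of the pattern described above, with the rename standing in for the single pointer update:)

                                  import os

                                  def cow_update(path, new_data):
                                      tmp = path + ".new"       # new data goes to fresh space first
                                      with open(tmp, "wb") as f:
                                          f.write(new_data)
                                          f.flush()
                                          os.fsync(f.fileno())  # new version fully on disk before the switch
                                      os.replace(tmp, path)     # the atomic "pointer" flip: all or nothing
                                      # Power cut before os.replace(): the old file is untouched.
                                      # Power cut after: the complete new file. Never half of each.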


                              • #30
                                2.6.35-rc3 got a lot of btrfs fixes, at least one of which is a regression fix. Maybe retest the performance with the final kernel.
