Where The Btrfs Performance Is At Today

  • #21
    Originally posted by LinuxID10T View Post
    Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs, which is extremely flawed.
    Big plus to this idea. The only time an SSD would do for me is when I only need to store about 100GiB of data.

    Comment


    • #22
      Originally posted by voyager_biel View Post
      .. I guess you mean data integrity in case of a power-off.
      I didn't understand that sentence. Could you please rephrase?

      Comment


      • #23
        Originally posted by devius View Post
        So it's probably a good idea to stay away from BTRFS for web/email/database servers. Everything else seems fine.
        If you want your data to be stored in RAM, then stay away from it. Btw, the Phoronix Apache benchmarks don't measure real Apache server performance.

        Comment


        • #24
          Originally posted by kebabbert View Post
          I would much rather focus on data integrity. Is your data safe with BTRFS? ReiserFS, JFS, XFS, ext3, etc. are not.
          56% of data loss due to system & hardware problems - Ontrack. Data loss is painful and all too common. Why?


          But researchers show, in another research paper, that ZFS is safe.
          The guy you linked is a damn troll. He compares Linux file systems (which are superior to the thing he compares them to) to Apple's old, messed-up HFS+. He praises Apple's Time Machine as if it will resolve some problems. In summary, the article is pro-Apple, anti-Linux and a little anti-Windows. He mentions ZFS just because it will be available in OS X. Since you didn't link to a paper (as you probably never have) but to some idiot, stop spreading FUD.

          Comment


          • #25
            Originally posted by LinuxID10T View Post
            Can we please get a filesystem benchmark using a mechanical drive? Everything here is tested using SSDs, which is extremely flawed.
            I already mentioned before that filesystem reviews should ALWAYS include both SSD and HDD tests, not what is usually done - randomly using either SSDs or HDDs, leaving users with the other type of drive wondering whether the results apply to them as well. The response I got was that my proposal didn't make sense because it would introduce another variable into the mix. Yeah... right. And that's supposed to be a bad thing? Having more info and a more complete review is bad?
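
            Just to make the point concrete: running the exact same workload on both drive types is trivial to script. A rough Python sketch (the /mnt/ssd and /mnt/hdd mount points and the sizes are placeholders I made up, not anything Phoronix actually uses):

            Code:
            import os, time

            def sequential_write_mb_per_s(mount_point, size_mb=256, block_size=1 << 20):
                """Time a sequential write of size_mb MiB on the given mount point."""
                path = os.path.join(mount_point, "bench.tmp")
                block = b"\0" * block_size
                start = time.perf_counter()
                fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
                try:
                    for _ in range(size_mb * (1 << 20) // block_size):
                        os.write(fd, block)
                    os.fsync(fd)              # count the flush to disk, not just the page cache
                finally:
                    os.close(fd)
                    os.remove(path)
                return size_mb / (time.perf_counter() - start)

            # Same filesystem, same workload, both drive types:
            for target in ("/mnt/ssd", "/mnt/hdd"):   # placeholder mount points
                print(target, round(sequential_write_mb_per_s(target), 1), "MB/s")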

            Comment


            • #26
              Originally posted by mutlu_inek View Post
              I would love some file system tests which include a) cpu usage and b) LUKS encryption.
              Wouldn't we expect mainly some CPU overhead with LUKS, but little impact on disk performance?

              Comment


              • #27
                Originally posted by kebabbert View Post
                I didn't understand that sentence. Could you please rephrase?
                Data integrity in case of a power outage, power cut or blackout... because the journal is not persisted to disk. And if a write I/O corrupts your data or filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can then help you get integrity back...
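
                To put the "persisted to disk" part in concrete terms: whether a write survives a power cut comes down to whether it was actually flushed to the device before the lights went out. A minimal user-space sketch in Python (illustrative only - this is an application calling fsync, not how any filesystem journal is implemented):

                Code:
                import os

                def write_durably(path, data: bytes):
                    """Write data and force it onto stable storage before returning.

                    Without the fsync() calls the data may still sit only in the page cache
                    when the power is cut, and it is simply lost - no mount option changes that.
                    """
                    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
                    try:
                        os.write(fd, data)
                        os.fsync(fd)          # flush the file's data and metadata to the device
                    finally:
                        os.close(fd)
                    # Also flush the directory entry so the new file itself survives the crash.
                    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
                    try:
                        os.fsync(dfd)
                    finally:
                        os.close(dfd)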

                Comment


                • #28
                  Originally posted by kraftman View Post
                  The guy you linked is a damn troll. He compares Linux file systems
                  No, you got it wrong again. He is not comparing Linux file systems. On his web page he talks about some computer science researchers who compare Linux file systems in a research paper.

                  I hope you don't claim that PhD theses and research papers are "damn trolls"? If it were false and lies, then that research would never have passed the PhD examination. He got his PhD title, so that research is valid. If it is not valid, then please mail his professor and point out the errors; his PhD title will be withdrawn and he will lose his diploma. You, on the other hand, will soon have a PhD thesis of your own if you can find errors in current research and improve on it. If you cannot point out the errors, then please be more careful before you accuse someone of trolling. As we know, you are very quick to call people trolls, yet you have admitted yourself that you have trolled earlier.

                  Originally posted by kraftman View Post
                  He mentions ZFS, just because it will be available in os x. As you didn't link to a paper like you probably never did, but to some idiot then stop spreading FUD.
                  I've told you that ZFS has also been the subject of data-integrity research. ZFS detected all the artificially introduced errors, whereas the Linux filesystems did not even detect all of the errors - and how can an error be fixed if it is not even detected? Impossible! ZFS would also have corrected all the errors if they had used RAID; in the research they only used ZFS on a single drive, which provides no redundancy (see the sketch at the end of this post).

                  Here is a research paper documenting the research on ZFS. If you see any errors, please produce a paper on how to improve the research, and quite soon you too will have a PhD thesis.
                  File systems are supposed to protect your data, but most are 20-30 year old architectures that risk data with every I/O. The open source ZFS from Sun Oracle claims high data integrity - and now that claim has been independently tested.
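
                  Sketch of the detect-versus-repair point above: with a per-block checksum you can tell that a block has gone bad, and with a redundant copy you can also repair it. A minimal Python illustration (my own toy example, nothing to do with ZFS's actual on-disk format):

                  Code:
                  import hashlib

                  def store(block: bytes):
                      """Return (checksum, copies); two copies stand in for mirrored redundancy."""
                      return hashlib.sha256(block).hexdigest(), [bytearray(block), bytearray(block)]

                  def read(checksum, copies):
                      """Return a good copy, repairing a corrupt one from its mirror when possible."""
                      good = [c for c in copies if hashlib.sha256(c).hexdigest() == checksum]
                      if not good:
                          raise IOError("corruption detected, and no redundant copy left to repair from")
                      for i, c in enumerate(copies):          # "self-healing": rewrite the bad copies
                          if hashlib.sha256(c).hexdigest() != checksum:
                              copies[i] = bytearray(good[0])
                      return bytes(good[0])

                  # Flip one byte in one copy: the checksum catches it, the mirror repairs it.
                  csum, copies = store(b"important data")
                  copies[0][0] ^= 0xFF                        # simulate silent on-disk corruption
                  assert read(csum, copies) == b"important data"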

                  Comment


                  • #29
                    Originally posted by voyager_biel View Post
                    Data integrity in case of a power outage, power cut or blackout... because the journal is not persisted to disk. And if a write I/O corrupts your data or filesystem, then it doesn't matter which mount options you used, because the corruption will be successfully written to the filesystem. Only a backup or an old snapshot can then help you get integrity back...
                    With ZFS it doesn't matter too much if you cut the power. If you edit old data, all the new changes are written to disk while the old data is left intact on disk. Lastly, the file is pointed at the new data, which is a single operation. The old data is not touched; it is still there on disk.

                    This means that either ALL of the changes were written to disk, or none of them were. It cannot happen that only half of the changes were written to disk while the other half got lost. No corruption. The state is always correct.

                    If power is cut before the pointer points to the new data, all the old data is left intact and no corruption has occurred.
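
                    That "all or nothing" behaviour is basically the copy-on-write / atomic-pointer-swap pattern. A tiny user-space analogue in Python (write the new version somewhere else, then switch over in one atomic step; this is not Btrfs's or ZFS's actual on-disk mechanism, just the idea):

                    Code:
                    import os

                    def cow_update(path, new_data: bytes):
                        """Update a file so a reader sees either the old or the new contents, never a mix."""
                        tmp = path + ".new"              # the new version goes to a separate location
                        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
                        try:
                            os.write(fd, new_data)
                            os.fsync(fd)                 # the new blocks are fully on disk first
                        finally:
                            os.close(fd)
                        os.rename(tmp, path)             # the single "pointer switch": atomic on POSIX
                        # Power cut before the rename: the old file is untouched.
                        # Power cut after it: the name refers entirely to the new data.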

                    Comment


                    • #30
                      2.6.35-rc3 got a lot of Btrfs fixes; at least one of them is a regression fix. Maybe retest the performance with the final kernel.

                      Comment
