With Linux 2.6.32, Btrfs Gains As EXT4 Recedes


  • #51
    XFS cares about your data just as much as ext4 does: not at all.
    It also sucks with small files (with everything below 10 MB counting as small).

    Comment


    • #52
      Originally posted by movieman View Post
      Is it?

      I believe my netbook puts /tmp into RAM, which means that the data in there is completely lost if the machine reboots. That's trading performance for a complete lack of reliable storage across system restarts.

      I think you'll find that people's needs vary from 'data absolutely must get to the disk and stay there' (e.g. online sales order database) to 'I want performance and if I lose the file on a reboot I don't care' (e.g. said temporary files). Different filesystems serve different needs.

      That said, I agree that a filesystem for general use (e.g. extN) should put reliability above performance.
      Yes, to me the point of storing files on a hard disk is that my data stays safe. Better safe and slow than fast and unsafe. If you could get 600 MB/s but with a 0.0001% chance of corrupted data - or 50 MB/s with your data completely guaranteed safe - which solution would you choose? I know I would choose having my data totally safe. But your mileage may vary. Maybe you don't care if your data slowly gets corrupted as long as you get high speed?

      Kraftman,
      Some would consider you a troll. Regarding the link about my performance problems with ZFS and Solaris: I solved the problem. Isn't it only trolls who draw conclusions about ZFS+Solaris performance from a problematic setup? It is like saying "I tried to install a beta Linux on an unsupported platform and it didn't succeed. Hence, Linux has severe install problems. Everybody, don't install Linux - it will only give you install problems" - not very clever, eh?

      But OTOH, didn't you "prove" that Linux is faster than Solaris by linking to an article where they migrated old 400 MHz Sun machines to new Intel dual-core 2.8 GHz Linux machines? If you believe that proves Solaris is slower than Linux, then I understand why you believe ZFS is slow just because you read about one person who had a problematic install.

      Has it never occurred to you that your conclusions may be wrong? I suggest you benchmark Linux on a 400 MHz CPU vs. Solaris on a dual-core 2.8 GHz CPU - would you then accept that Solaris is faster? No? Why not? But when the same kind of comparison favors Linux, then it is OK? Maybe now you see why it is wrong to reason the way you do?

      Comment


      • #53
        Barriers and JFS

        Please make sure to enable barriers when testing ext3 (and others).

        Also, please include JFS in the mix. Not always the fastest, but very low CPU usage and, IMO, generally faster "under load" than ext2/3/4.
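
        For anyone reproducing such a test, it is worth verifying how the filesystem is actually mounted before benchmarking. Below is a minimal, illustrative Python sketch that scans /proc/mounts for a barrier-related option; the mountpoint is just an example, the exact option strings vary by filesystem and kernel version, and the remount command in the comment (barrier=1 is the ext3/ext4 mount option) is only a pointer to the relevant knob, not a full recipe.

        Code:
        # Sketch: report whether a mounted filesystem appears to have write
        # barriers enabled, based on the options shown in /proc/mounts.
        # Treat the result as a hint; option names differ between filesystems
        # and kernel versions.
        def barrier_status(mountpoint="/"):
            with open("/proc/mounts") as mounts:
                for line in mounts:
                    device, mnt, fstype, options = line.split()[:4]
                    if mnt != mountpoint:
                        continue
                    opts = options.split(",")
                    if "barrier=1" in opts or "barrier" in opts:
                        return fstype, "barriers enabled"
                    if "barrier=0" in opts or "nobarrier" in opts:
                        return fstype, "barriers disabled"
                    return fstype, "barrier option not listed (default applies)"
            return None, "mountpoint not found"

        if __name__ == "__main__":
            fstype, status = barrier_status("/")
            print(fstype, status)
            # To switch barriers on for a test run, remount with something like:
            #   mount -o remount,barrier=1 /
            # and re-check /proc/mounts afterwards.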

        Comment


        • #54
          Originally posted by 900Trophy View Post
          Please make sure to enable barriers when testing ext3 (and others).

          Also, please include JFS in the mix. Not always the fastest, but very low CPU usage and, IMO, generally faster "under load" than ext2/3/4.
          Thank you. I second all of the above, but especially JFS on Linux ("JFS2").
          Last edited by fhj52; 27 January 2010, 04:40 PM. Reason: add ("JFS2")

          Comment


          • #55
            Originally posted by kebabbert View Post
            Yes, to me the point of storing files on a hard disk is that my data stays safe. Better safe and slow than fast and unsafe. If you could get 600 MB/s but with a 0.0001% chance of corrupted data - or 50 MB/s with your data completely guaranteed safe - which solution would you choose? I know I would choose having my data totally safe. But your mileage may vary. Maybe you don't care if your data slowly gets corrupted as long as you get high speed?

            Kraftman,
            Some would consider you a troll. Regarding the link about my performance problems with ZFS and Solaris: I solved the problem. Isn't it only trolls who draw conclusions about ZFS+Solaris performance from a problematic setup? It is like saying "I tried to install a beta Linux on an unsupported platform and it didn't succeed. Hence, Linux has severe install problems. Everybody, don't install Linux - it will only give you install problems" - not very clever, eh?

            But OTOH, didn't you "prove" that Linux is faster than Solaris by linking to an article where they migrated old 400 MHz Sun machines to new Intel dual-core 2.8 GHz Linux machines? If you believe that proves Solaris is slower than Linux, then I understand why you believe ZFS is slow just because you read about one person who had a problematic install.

            Has it never occurred to you that your conclusions may be wrong? I suggest you benchmark Linux on a 400 MHz CPU vs. Solaris on a dual-core 2.8 GHz CPU - would you then accept that Solaris is faster? No? Why not? But when the same kind of comparison favors Linux, then it is OK? Maybe now you see why it is wrong to reason the way you do?
            OpenSolaris and the others were benchmarked on the same hardware, and Solaris sucked:

            Comment


            • #56
              Originally posted by kebabbert View Post
              Kraftman,
              Some would consider you a troll. Regarding the link about my performance problems with ZFS and Solaris: I solved the problem. Isn't it only trolls who draw conclusions about ZFS+Solaris performance from a problematic setup? It is like saying "I tried to install a beta Linux on an unsupported platform and it didn't succeed. Hence, Linux has severe install problems. Everybody, don't install Linux - it will only give you install problems" - not very clever, eh?
              At least learn how to reply properly; I missed this. You're usually just making yourself look like an idiot. A known troll from osnews.com.

              @Energyman

              Thanks for the nice benchmark.
              Last edited by kraftman; 27 January 2010, 04:38 PM.

              Comment


              • #57
                I am shocked that ext4 is being moved into default usage by the Linux distros. It is definitely *not* for any standard workstation or desktop. The new ext* features are not even meant to support desktop environments. They are for servers and the niche applications that actually have TB-sized files. Yeah, it is probably good for Google, Yahoo, Tripod, NSF, genome companies and the like, but those are most definitely not part of the average user base. Web servers don't typically serve multi-GB or TB-sized files!

                All these tests indicate that ext3 is, overall, much better.
                /methinks the tests are not showing the whole truth. In practice, ext3 is dog slow (like NTFS on windoz ...) and a real PITA to fsck. If ext4 is slower, this is a regression.

                JFS on Linux ("JFS2") does better. Of course it is not being tested (nor is ZFS), so the benchmark proof is not available.

                Speaking of benchmarks, programs like IOzone are not simple tests that readily reduce to simple bar charts giving a broad "overview" of the entire process. Testing how a filesystem works means seeing how it behaves for specific programming calls (like fread, fopen, etc.). Such statistics, important to programmers, mean little to the end user unless that user runs a specific application that s|he knows makes heavy use of those calls.
                It has been mentioned before, but I'll repeat it. For this site, timed tests for loading applications, deleting large and small files and directories, multiple simultaneous accesses, different setups (e.g., journal on another disk), plus the CPU usage during those tasks, etc., would be more meaningful.

                We really need some margin of error too (also requested before). I mean, really, who cares if Btrfs is 1 MB/s slower than ext4 if the margin of error is 2 MB/s!?
                If the tests yield results within the margin of error for all the filesystems tested, then the *real truth* is that they all perform the same on that test system and it does not matter which one is chosen. It is only at that point that the per-call details might become important and such detailed results would be useful. (A rough sketch of a repeated, timed test with a margin of error follows at the end of this post.)

                One author wisely pointed out that the results presented are mostly important for people who run benchmarks all the time and have little applicability to real life. How about fixing that by spending the time to run (more) meaningful tests?
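
                As a hedged illustration of the kind of test suggested above, the sketch below times a simple create-and-delete pass over many small files, repeats it, and reports the mean plus the spread. The target directory, file count, file size and run count are arbitrary assumptions, and os.fsync is used so the timing is not just measuring the page cache.

                Code:
                # Sketch of a repeatable small-file test: create and delete many
                # small files several times and report mean +/- standard deviation,
                # so two filesystems can be compared against a margin of error.
                import os
                import shutil
                import statistics
                import time

                TARGET = "/tmp/fs-bench"   # assumption: point this at the fs under test
                NUM_FILES = 2000           # arbitrary workload size
                FILE_SIZE = 16 * 1024      # 16 KiB "small" files
                RUNS = 5

                def one_run():
                    os.makedirs(TARGET, exist_ok=True)
                    payload = os.urandom(FILE_SIZE)
                    start = time.time()
                    for i in range(NUM_FILES):
                        path = os.path.join(TARGET, "f%05d" % i)
                        with open(path, "wb") as f:
                            f.write(payload)
                            f.flush()
                            os.fsync(f.fileno())   # push data to disk, not just cache
                    shutil.rmtree(TARGET)
                    return time.time() - start

                if __name__ == "__main__":
                    times = [one_run() for _ in range(RUNS)]
                    print("create+delete %d files: %.2f s +/- %.2f s"
                          % (NUM_FILES, statistics.mean(times), statistics.stdev(times)))
                    # If two filesystems differ by less than the +/- figure, the honest
                    # conclusion is "no measurable difference", as argued above.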

                Comment


                • #58
                  Originally posted by kebabbert View Post
                  Yes, to me the point of storing files on a hard disk is that my data stays safe. Better safe and slow than fast and unsafe. If you could get 600 MB/s but with a 0.0001% chance of corrupted data - or 50 MB/s with your data completely guaranteed safe - which solution would you choose? [...snip...]
                  Not to beat up on you or anything like that, but one needs to realize that there is no such thing as "completely guaranteed safe" for reads|writes.
                  Typically, safety of data is more dependent upon the hardware setup than the fs of choice.
                  In addition, writes at, say, a more realistic 200 MB/s leave far more headroom for error correction than writes at 50 MB/s, without the user pulling his|her hair out.
                  When errors occur - and they occur more often than one thinks, because one does not see the negative results (corrupted file data) - a faster filesystem should still maintain higher transfer rates, and the interference with the computing process is minimized to the point that the end user probably won't notice it.

                  We all want our data to be uncorrupted, and the way that happens is to have error checking and, if necessary, correction performed at READ|WRITE time. Even so, there is no such thing as a zero uncorrected error rate. The best we can get is an "effectively near zero" rate. All filesystems have that, or they are not filesystems anyone can use. When one exceeds the near-zero rate, as ext4 might have done in some cases, additional measures are taken to bring it back to near-zero uncorrected errors. That does not mean the fs must slow down, but it might have that impact (as it apparently did with ext4).

                  ...
                  So the bottom line for me is that the fs capable of sustaining 200 MB/s (max) is a better choice than the one that does 50-100 MB/s (max), because both have near-zero uncorrected errors, and when errors do occur, the 200 MB/s fs will (should) handle the problems faster than the 50 MB/s one, thereby providing a more pleasant and useful computing experience.
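
                  The "faster fs handles recovery faster" argument is ultimately just arithmetic on how long it takes to re-read or restore the affected data. A tiny, purely illustrative calculation follows; the 500 GB figure is an arbitrary example, not a measurement from this thread.

                  Code:
                  # Back-of-the-envelope: how long re-reading or restoring a given
                  # amount of data takes at the sustained rates discussed above.
                  def hours_to_process(gigabytes, mb_per_sec):
                      return gigabytes * 1024.0 / mb_per_sec / 3600.0

                  for rate in (50, 100, 200):
                      print("%3d MB/s -> %.1f hours for 500 GB"
                            % (rate, hours_to_process(500, rate)))
                  # 50 MB/s  -> about 2.8 hours
                  # 100 MB/s -> about 1.4 hours
                  # 200 MB/s -> about 0.7 hours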

                  Comment


                  • #59
                    Originally posted by fhj52 View Post
                    ...
                    So the bottom line for me is that the fs capable of sustaining 200 MB/s (max) is a better choice than the one that does 50-100 MB/s (max), because both have near-zero uncorrected errors, and when errors do occur, the 200 MB/s fs will (should) handle the problems faster than the 50 MB/s one, thereby providing a more pleasant and useful computing experience.
                    The last part got cut ....
                    It should also say:
                    But neither is a reality. AFAIK, the differences between modern filesystems are very small and certainly nowhere near 150 MB/s (or 550 MB/s!).
                    ...

                    Comment


                    • #60
                      Originally posted by fhj52 View Post
                      The last part got cut ....
                      It should also say:
                      But neither is a reality. AFAIK, the differences between modern filesystems are very small and certainly nowhere near 150 MB/s (or 550 MB/s!).
                      ...
                      When errors happen and cannot be corrected, speed doesn't matter: your data is LOST.

                      If the data is not lost and is recoverable by, say, an fsck, one could argue a fast fs is better. The thing is, though, the speed of fsck is not related to the speed of the filesystem in normal use. The growing gap between drive capacity and drive performance forced designers to take fsck performance into account only recently; before that, no effort was made to design the internal data structures in a way that makes them quick to repair or rebuild with fsck.
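
                      As a rough, hedged illustration of why fsck time tracks scattered metadata access rather than the sequential throughput that benchmarks measure: a traditional fsck walks inodes and directories spread across the disk, so seeks dominate. All the figures below (inode count, seek time, cache hit rate) are invented example numbers, not measurements.

                      Code:
                      # Illustration: fsck time estimated from seeks over metadata rather
                      # than from sequential MB/s. Every number here is an example.
                      INODES = 10 * 1000 * 1000      # files on a large, well-used volume
                      SEEK_MS = 8.0                  # average seek + rotational latency
                      SEEKS_PER_INODE = 0.1          # assume readahead/caching avoids 90%

                      seek_bound_s = INODES * SEEKS_PER_INODE * SEEK_MS / 1000.0
                      print("seek-bound estimate: %.1f hours" % (seek_bound_s / 3600.0))
                      # Roughly 2.2 hours; doubling the drive's sequential MB/s barely
                      # changes this, while laying out metadata so fsck can read it
                      # sequentially changes it dramatically.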

                      Comment
