With Linux 2.6.32, Btrfs Gains As EXT4 Recedes


  • #31
    Originally posted by kraftman View Post
    Definitely.
    What for? I think they're correct. Or maybe you meant to put EDIT: when editing them?
    No no, I mean please merge your posts when you're replying to more than one recipient in a row. For instance:

    Originally posted by SomeDude
    [...]
    Yes, you're right

    Originally posted by SomeOtherDude
    [...]
    No, you're wrong
    edited: It prevents one-liners from "polluting" the thread.
    Last edited by reavertm; 15 December 2009, 02:21 PM.

    Comment


    • #32
      Originally posted by reavertm View Post
      No no, I mean please merge your posts when you're replying to more than one recipient in a row. For instance:

      [...]

      edited: It prevents one-liners from "polluting" the thread.
      Ok, no problem

      Comment


      • #33
        Originally posted by next9 View Post
        Again, a strange comparison based on an Ubuntu system. Why?

        It should be noted that Ubuntu's ext3 does not use barriers by default in order to look much faster. But this is a big lie that puts users' data in danger!

        Typical ext3 speed on a distribution that cares about the safety of its users' data would be much slower in these graphs.
        Ouch! This is lame at best. Editors, please put a big fat flashy red warning on every page of the article that the tests are massively misleading.
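
        If you want to check what your own install actually does, here is a minimal Python sketch (assuming a Linux box with /proc mounted; the "/" mount point is just an example) that reads the active options from /proc/mounts:

            # Minimal sketch: check whether a mount carries an explicit barrier option.
            # /proc/mounts lines look like: "/dev/sda1 / ext3 rw,errors=remount-ro 0 0"
            def mount_info(mount_point):
                with open("/proc/mounts") as f:
                    for line in f:
                        device, mpoint, fstype, opts, _, _ = line.split()
                        if mpoint == mount_point:
                            return fstype, opts.split(",")
                return None

            info = mount_info("/")
            if info:
                fstype, opts = info
                print(fstype, opts)
                if fstype == "ext3" and not any(o.startswith("barrier") for o in opts):
                    # On kernels of this era, ext3 defaulted to barriers OFF, so no
                    # explicit barrier option usually means no barriers at all.
                    print("no explicit barrier option - the unsafe ext3 default likely applies")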

        Comment


        • #34
          The first test (dbench) looks like ext3 never hits the platter - no surprise, since ext3 has barriers off by default, unlike ReiserFS.

          You guys should really stop looking at ext3. It is not a filesystem meant for serious usage. Just pretty numbers.

          Comment


          • #35
            Please do not use SSDs in your reviews

            SSD access times are almost identical for all sectors, which negates the allocation optimizations of modern file systems.

            On rotational disks, the impact of access time is huge.

            AFAIK, the 'nobarrier' and 'data=writeback' mount options might have a performance effect even if you don't have a journal (Theo says that the 'nojournal' feature only disables the journal writes to disk, not the journal logic).
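
            As a rough illustration of trying those options, something along these lines should work - the device and mount point are made up, and it needs root. Note that the data= journaling mode cannot be changed on a remount, so do a fresh mount:

                import subprocess

                # Hypothetical device and mount point - adjust for your system; needs root.
                subprocess.run(["umount", "/mnt/test"], check=False)
                subprocess.run(
                    ["mount", "-t", "ext4", "-o", "nobarrier,data=writeback",
                     "/dev/sdb1", "/mnt/test"],
                    check=True,
                )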
            Last edited by tmo71; 15 December 2009, 07:21 PM.

            Comment


            • #36
              CFQ change in 2.6.33

              I'm surprised it wasn't mentioned that the CFQ scheduler in 2.6.33 underwent a change to increase system responsiveness at the expense of throughput. This would significantly affect the benchmarking results if you are comparing them to results from previous kernels. In addition, CFQ orders requests to minimize seeks on rotational media, which is entirely unnecessary on solid-state drives and just adds overhead. As I said earlier, it'd be better to use a different I/O scheduler, like deadline or noop, when testing SSDs. That would eliminate the extra variable of an I/O scheduler that changes with each kernel release and would likely yield better performance as well.
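
              For reference, the scheduler can be switched per device through sysfs. A small Python sketch (the "sda" device name is an assumption - point it at the SSD under test, and run it as root):

                  from pathlib import Path

                  # Hypothetical device "sda" - adjust to the SSD under test; needs root.
                  sched = Path("/sys/block/sda/queue/scheduler")
                  print(sched.read_text().strip())  # e.g. "noop deadline [cfq]"; brackets mark the active one
                  sched.write_text("deadline")      # take CFQ out of the benchmark entirely
                  print(sched.read_text().strip())  # now "noop [deadline] cfq"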

              Comment


              • #37
                Originally posted by lordmozilla View Post
                I'm tired of these tests where default options are used everywhere. What's the point? Show us the potential of these filesystems, not just the fact that no configuration = crap performance.
                I think the point of using default options is that most users do not know how, or are not able, to use the correct switches to get the greatest performance. Only a few people know which switches to use. So why not bench with the default options that everyone will actually use?

                Another thing is that if you aggressively tailor for performance, you often lose other functionality - for instance, reliability. Who wants to use a very fast file system where your data is unsafe? I prefer a slow filesystem where my data is safe and not subject to the silent corruption and bit rot that all file systems suffer from (except ZFS). Of course, if it is fast, so much the better. But the point of a file system is that your data is safe. Better slow and safe than fast and unsafe?

                Comment


                • #38
                  Originally posted by kebabbert View Post
                  Another thing is that if you aggressively tailor for performance, you often lose other functionality - for instance, reliability. Who wants to use a very fast file system where your data is unsafe? I prefer a slow filesystem where my data is safe and not subject to the silent corruption and bit rot that all file systems suffer from (except ZFS). Of course, if it is fast, so much the better. But the point of a file system is that your data is safe. Better slow and safe than fast and unsafe?
                  Well, take 3D video driver benchmarks, for instance: some comparison articles try not only to measure frame rate but also to judge quality, providing side-by-side screenshots and such. When benchmarking HD video playback, they not only give the raw numbers but also subjectively discuss the quality of the presented video.

                  A filesystem benchmark suite, in my opinion, isn't complete unless it also attempts to index reliability, or at least subjectively mentions it as a caveat. If number 1 and number 2 are separated by microseconds, but number 1 increases your likelihood of data loss by a non-trivial amount... well, you get it.

                  I do like these Phoronix benchmarks, though, since they show the slowdowns and speed-ups as these filesystems evolve.

                  Comment


                  • #39
                    kebabbert, then the distros should set some good defaults in fstab.

                    But the current situation is a mess. extX is unsafe by design and unsafe by default. It is tuned for benchmarks, and people go 'OMG EXTX IS SO ÜBER', while other filesystems put data safety first - and lose in stupid benchmarks like the ones Phoronix runs.

                    Turn on barriers for ext3 and watch it lose badly. Or make ext4 stop doing its stupid-but-fast allocations and watch it completely break down.
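
                    If anyone wants to reproduce that comparison, a rough sketch (hypothetical device and mount point; needs root) that mounts ext3 with barriers explicitly on before re-running a benchmark:

                        import subprocess

                        # Hypothetical device and mount point - adjust for your system; needs root.
                        # barrier=1 makes ext3 issue write barriers: the safe-but-slower
                        # setting that the benchmarked default (barriers off) skips.
                        subprocess.run(
                            ["mount", "-t", "ext3", "-o", "barrier=1",
                             "/dev/sdb1", "/mnt/bench"],
                            check=True,
                        )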

                    Comment


                    • #40
                      Originally posted by energyman View Post
                      But the current situation is a mess. extX is unsafe by design and unsafe by default. It is tuned for benchmarks, and people go 'OMG EXTX IS SO ÜBER', while other filesystems put data safety first - and lose in stupid benchmarks like the ones Phoronix runs.
                      Yes, but try to tell them that ext is neither that reliable nor that fast, and they start calling you names. Even though the ext developers admitted it is unreliable, people would not accept it when you tried to link to the interview.

                      Anyway, I think a tailored benchmark is misleading. It is like benchmarking a normal CPU that has been overclocked and given special functions; that will not help normal users, so the results would be misleading. The same goes for specially tailored benchmarks. Of course, sometimes it is really bad - for instance, benchmarks where OpenSolaris used GCC 3.2 in 32-bit mode versus Linux with GCC 4.3 in 64-bit mode - but such is life. It would be fairer if the Solaris people could tune their compiler the way the Linux people can, but not many people have that expertise.

                      Comment
