
BFQ I/O Scheduler Patches Revised, Aiming To Be Extra Scheduler In The Kernel

  • BFQ I/O Scheduler Patches Revised, Aiming To Be Extra Scheduler In The Kernel

    Phoronix: BFQ I/O Scheduler Patches Revised, Aiming To Be Extra Scheduler In The Kernel

    BFQ's developers had hoped to replace CFQ in the mainline Linux kernel with Budget Fair Queueing for a variety of reasons, but it never ended up landing in mainline. Now the developers are hoping to get BFQ into mainline as an extra available scheduler instead (see the sketch just after this post)...

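    For readers wondering what "extra available scheduler" means in practice: any I/O scheduler built into the kernel can be selected per block device through sysfs. Below is a minimal Python sketch of that interface, not anything taken from the BFQ patches themselves; the device name "sda" is just an example, and "bfq" only appears in the list on kernels where it has actually been built in.

    # Minimal sketch of the sysfs I/O-scheduler interface.
    # Assumes Linux, an example device "sda", and root privileges for the write.
    from pathlib import Path

    DEVICE = "sda"  # example device name; adjust for your system
    SCHED = Path(f"/sys/block/{DEVICE}/queue/scheduler")

    def schedulers():
        # The file lists every selectable scheduler for the device, with the
        # active one in brackets, e.g. "noop deadline [cfq]" on the legacy
        # block layer.
        return SCHED.read_text().split()

    def set_scheduler(name):
        # Writing a scheduler's name makes it the active one for this device.
        SCHED.write_text(name)

    print("before:", schedulers())
    if any(token.strip("[]") == "bfq" for token in schedulers()):
        set_scheduler("bfq")
        print("after: ", schedulers())
    else:
        print("bfq is not built into this kernel")

    The point of merging BFQ as an extra scheduler rather than as a CFQ replacement is that this switch stays opt-in: CFQ remains the default, and BFQ is only used on devices where someone explicitly selects it.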

  • #2
    "nak, make bfq for mq" http://lkml.iu.edu/hypermail/linux/k...0.3/03774.html

    "mq? yes! no! yes!" http://lkml.iu.edu/hypermail/linux/k...0.3/03764.html



    • #3
      Yay, back to the moving goalposts!

      "We could have had BFQ for mq NOW, if we didn't keep coming back to this very point."

      We could have had BFQ years ago, if every submission wasn't hit with the latest arbitrary wishlist. Never mind that it's demonstrably better than CFQ in every metric, that it's more maintainable, that the code already exists...

      It needs to replace CFQ. It needs to be patched from CFQ. Patching from CFQ is too confusing. The fundamental concept is wrong, if you ignore the real behaviour. Now, having been bikeshedded continuously for years, it's behind on the latest churn.

      BFQ doesn't have to be better than any I/O scheduler in the kernel, or planned to be in the kernel; it has to be better than any of the hypothetical alternatives that get invented on the spot whenever the (real, usable-right-now) patches show up.



      • #4
        Jens Axboe and Christoph Hellwig are terrible managers. They may be good programmers, but they are not doing a good job managing their part of the Linux kernel project. They have tunnel vision, seeing only what they are currently concentrating on and not wanting to divert any of their time or effort to anything else, even when a developer comes forward with ready-made, useful code that many Linux users want.




        • #5
          Originally posted by jwilliams View Post
          Jens Axboe and Christoph Hellwig are terrible managers. They may be good programmers, but they are not doing a good job managing their part of the Linux kernel project. They have tunnel vision, seeing only what they are currently concentrating on and not wanting to divert any of their time or effort to anything else, even when a developer comes forward with ready-made, useful code that many Linux users want.

          Well, if they're right that what mq needs in order to support BFQ is only weeks' worth of effort, it may not be so bad.
          If it turns out to be years, though...



          • #6
            Originally posted by FLHerne View Post
            Yay, back to the moving goalposts!

            "We could have had BFQ for mq NOW, if we didn't keep coming back to this very point."

            We could have had BFQ years ago, if every submission wasn't hit with the latest arbitrary wishlist. Never mind that it's demonstrably better than CFQ in every metric, that it's more maintainable, that the code already exists...

            It needs to replace CFQ. It needs to be patched from CFQ. Patching from CFQ is too confusing. The fundamental concept is wrong, if you ignore the real behaviour. Now, having been bikeshedded continuously for years, it's behind on the latest churn.

            BFQ doesn't have to be better than any I/O scheduler in the kernel, or planned to be in the kernel; it has to be better than any of the hypothetical alternatives that get invented on the spot whenever the (real, usable-right-now) patches show up.
            A long-running issue with the kernel has been the difficulty of getting certain features included.
            I've been looking into the history of Linux AIO, and you just wouldn't believe how many attempts have been made (and by no means do all of them add new syscalls). Seriously, take a look for yourselves, and consider how many changes the kernel has incorporated since ~2000 because it didn't properly implement asynchronous behavior; a hint is that two of them have to do with epoll.
            I've felt really bad for Paolo because of how little attention his patches have gotten. He's done everything asked of him. Yet the number of times he's submitted BFQ pales next to the attempts to get kevent in (at least 20, and it never happened), let alone devfs (more than 150, IIRC... and then it was removed a few years later, so we know the process doesn't even guarantee that the solution is any good).
            This is a very broken aspect of kernel development.
            It's almost as though there needs to be a... I don't know... PROSPECTIVE branch that you have to go through before submitting to Linus' branch.



            • #7
              Originally posted by FLHerne View Post
              We could have had BFQ years ago, if every submission wasn't hit with the latest arbitrary wishlist. Never mind that it's demonstrably better than CFQ in every metric, that it's more maintainable, that the code already exists...

              It needs to replace CFQ. It needs to be patched from CFQ. Patching from CFQ is too confusing. The fundamental concept is wrong, if you ignore the real behaviour. Now, having been bikeshedded continuously for years, it's behind on the latest churn.
              Yeah, it's a pretty common situation for me: if you're a "nobody" and you have a patch with serious functionality, you have to wage a long, demoralizing uphill battle that in 90% of cases is not worth it unless you're paid.

              Which is why I smile when I read naive propaganda that in open source you can improve the code and share it upstream and everyone will benefit and bla-bla-bla. In reality there are gatekeepers upstream who will not let you land a big/serious contribution unless you're willing to go a very long way, and if you're doing it for free, 90% of people will give up along the way. Not to mention that you often have to sign certain documents even for open source projects and have to learn their development and source code tracking stacks. So you might end up spending 5% of the effort on creating the change and 95% on trying to get it included upstream.



              • #8
                Originally posted by cl333r View Post

                Yeah, it's a pretty common situation for me: if you're a "nobody" and you have a patch with serious functionality, you have to wage a long, demoralizing uphill battle that in 90% of cases is not worth it unless you're paid.

                Which is why I smile when I read naive propaganda that in open source you can improve the code and share it upstream and everyone will benefit and bla-bla-bla. In reality there are gatekeepers upstream who will not let you land a big/serious contribution unless you're willing to go a very long way, and if you're doing it for free, 90% of people will give up along the way. Not to mention that you often have to sign certain documents even for open source projects and have to learn their development and source code tracking stacks. So you might end up spending 5% of the effort on creating the change and 95% on trying to get it included upstream.
                I think that is because Linux is pretty much business-critical at this point, even if businesses are rather conservative with upgrades.

                Also, if one were so inclined, they could just roll their own Linux repository, which, thanks to the nature of Git, is exactly what many people do. People follow Linus partly for traditional reasons and partly because he is the last line of sanity when something dubious gets past the maintainers.

                You can definitely do so in smaller projects, where things are easier to test and the impact is less critical. I managed to get a couple of things patched in FusionInventory, for instance.



                • #9
                  Originally posted by cl333r View Post
                  Which is why I smile when I read naive propaganda that in open source you can improve the code and share it upstream and everyone will benefit and bla-bla-bla. In reality there are gatekeepers upstream who will not let you land a big/serious contribution unless you're willing to go a very long way, and if you're doing it for free, 90% of people will give up along the way. Not to mention that you often have to sign certain documents even for open source projects and have to learn their development and source code tracking stacks. So you might end up spending 5% of the effort on creating the change and 95% on trying to get it included upstream.
                  Fun fact: In closed source it is far worse.



                  • #10
                    Originally posted by cl333r View Post

                    Which is why I smile when I read naive propaganda that in open source you can improve the code and share it upstream and everyone will benefit and bla-bla-bla. In reality there are gatekeepers upstream who will not let you land a big/serious contribution unless you're willing to go a very long way, and if you're doing it for free, 90% of people will give up along the way. Not to mention that you often have to sign certain documents even for open source projects and have to learn their development and source code tracking stacks. So you might end up spending 5% of the effort on creating the change and 95% on trying to get it included upstream.
                    Bla, bla, bla. In open source you can always fork, which is impossible with closed-source crap. Bla, bla, bla.

