BMQ "BitMap Queue" Is The Newest Linux CPU Scheduler, Inspired By Google's Zircon

  • #11
    Originally posted by PuckPoltergeist View Post

    It's very confusing what you're writing. What do you mean by "already executing on another core there is no context switch"? What is "a function specific to a core"?
    Are you familiar with how Zircon is architected? I'll give a silly example to explain. You could have one core handling IP, another TCP, another HTTP, and then run the application on a couple of cores. If the core is already running IP, then there is no context switch.

    Versus normally, Linux enters the kernel on the core making the request. So take web browsing: you have several things on the screen, where each requires HTTP, TCP, IP, Ethernet. So each core runs the entire stack, versus a core handling each "function".

    Hope that helps.

    There are ways to do some of this with Linux, but Zircon is built from the ground up to work like this. A rough sketch of the idea is below.
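
    To make that hand-off idea concrete, here is a minimal user-space sketch of such a staged pipeline: a rough illustration only, assuming Linux with pthreads and at least four cores. The ring buffer, the stage names, and the whole layout are made up for this post and are not Zircon's actual API; the point is just the shape: each protocol "function" is pinned to its own core, and packets hop between cores through queues instead of each core running the whole stack.

    ```c
    /* Hypothetical sketch of per-core protocol stages (NOT Zircon's real API).
     * Assumes Linux/glibc (pthread_setaffinity_np) and at least 4 CPUs. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define RING_SIZE 64  /* power of two, so index math survives wraparound */

    /* Single-producer/single-consumer ring linking two adjacent stages. */
    struct ring {
        void *slot[RING_SIZE];
        _Atomic unsigned head, tail;
    };

    static int ring_push(struct ring *r, void *p)
    {
        unsigned h = r->head;
        if (h - r->tail == RING_SIZE)
            return -1;                  /* full */
        r->slot[h % RING_SIZE] = p;
        r->head = h + 1;
        return 0;
    }

    static void *ring_pop(struct ring *r)
    {
        unsigned t = r->tail;
        if (t == r->head)
            return NULL;                /* empty */
        void *p = r->slot[t % RING_SIZE];
        r->tail = t + 1;
        return p;
    }

    struct stage {
        const char *name;               /* "ip", "tcp", "http" */
        int cpu;                        /* core this stage is pinned to */
        struct ring *in, *out;          /* out == NULL for the last stage */
    };

    static void *stage_main(void *arg)
    {
        struct stage *s = arg;

        /* Pin the stage to one core so its code and state stay cache-hot
         * there; a packet arriving never context-switches this core away
         * from its one "function". */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(s->cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (;;) {
            void *pkt = ring_pop(s->in);
            if (!pkt) { sched_yield(); continue; }
            printf("%s stage on cpu %d handled packet %p\n",
                   s->name, s->cpu, pkt);
            if (s->out)
                while (ring_push(s->out, pkt))
                    sched_yield();      /* next stage's queue is full */
            else
                free(pkt);
        }
        return NULL;
    }

    int main(void)
    {
        static struct ring r0, r1, r2;  /* app->ip, ip->tcp, tcp->http */
        struct stage stages[] = {
            { "ip",   1, &r0, &r1 },
            { "tcp",  2, &r1, &r2 },
            { "http", 3, &r2, NULL },
        };
        pthread_t tid[3];
        for (int i = 0; i < 3; i++)
            pthread_create(&tid[i], NULL, stage_main, &stages[i]);

        /* The "application" core just feeds packets to the first stage. */
        for (int i = 0; i < 4; i++)
            while (ring_push(&r0, malloc(64)))
                sched_yield();

        sleep(1);                       /* let the demo pipeline drain */
        return 0;
    }
    ```

    Note that every hop hands the packet to another core's cache; that per-step hand-off cost is exactly what the next reply calls out.
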
    Last edited by bartturner; 13 March 2019, 06:40 AM.



    • #12
      Originally posted by bartturner View Post

      Are you familiar with how Zircon is architected? I'll give a silly example to explain. You could have one core handling IP, another TCP, another HTTP, and then run the application on a couple of cores. If the core is already running IP, then there is no context switch.

      Versus normally, Linux enters the kernel on the core making the request. So take web browsing: you have several things on the screen, where each requires HTTP, TCP, IP, Ethernet. So each core runs the entire stack, versus a core handling each "function".
      I think I understand now what you're talking about. First, this has nothing to do with preemption. Preemption isn't related to how finely a task is separated into sub-tasks; preemption means interrupting a task in favor of another task.

      Second, you're right, the network stack isn't separated like that on Linux, and frankly, I don't see the benefit of doing so. HTTP, TCP, and IP are serial tasks that won't benefit from any attempt to parallelize them. Additionally, such a design will suffer from heavy cache thrashing: if you split the workload this way, your data must be moved between cores at every step. Even the instruction cache won't benefit, as long as you don't have dedicated cores for the tasks. With general-purpose cores, a core may run IP one moment, some filesystem-related work the next, HTTP after that, and so on. A minimal sketch of the contrasting run-to-completion model follows.
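
      For contrast, here is a minimal sketch of the run-to-completion model argued for here. The function names are illustrative, not the real Linux network-stack entry points: the core that took the packet walks every layer itself, so the packet data stays warm in that one core's cache.

      ```c
      /* Illustrative run-to-completion sketch; these are NOT the real
       * Linux network-stack entry points. */
      #include <stddef.h>

      struct pkt { unsigned char data[1500]; size_t len; };

      static void handle_http(struct pkt *p) { (void)p; /* parse request */ }
      static void handle_tcp (struct pkt *p) { handle_http(p); }
      static void handle_ip  (struct pkt *p) { handle_tcp(p); }

      /* Runs on whichever core took the interrupt or made the syscall:
       * ip -> tcp -> http, all on this core, so p->data is read from a
       * warm cache at every layer instead of being shipped to a
       * different core between steps. */
      void receive_packet(struct pkt *p)
      {
          handle_ip(p);
      }
      ```
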



      • #13
        CFS / BMQ / MuQSS benchmarks would be appreciated.



        • #14
          Originally posted by overwatch View Post
          CFS / BMQ / MuQSS benchmarks would be appreciated.
          Indeed

          I did some tests quite a while back, but the main point of these schedulers (MuQSS/PDS) is interactivity, not really "increased performance". When I tested the benefits, I ran a few benchmarks (Unigine Valley and stuff like that) WHILE doing a "make -j12". The difference was huge: with the regular CFS (Ubuntu) kernel, the OS was barely usable while compiling, but with PDS/MuQSS you only lost like 15% FPS. Just running the benchmark with no tasks in the background, CFS was slightly ahead though.

          This may have improved a LOT with newer kernels (cgroups and whatnot), so I don't know how things stand today. But being able to watch YouTube/browse/game/whatever while compiling is, TO ME, a huge plus, so it would be nice to see the difference between the schedulers.
          I know the "Xanmod kernel" has some nice tunings for CFS, so I'm pretty sure it can be a close race vs. the stock kernel if you just tune a wee bit.



          • #15
            Found some benchmarks: https://m.youtube.com/watch?v=rM7Rfu...ature=emb_logo

