
Google Makes New Attempt At "UMCG" As Part Of Their Open-Sourcing Effort Around Fibers


    Phoronix: Google Makes New Attempt At "UMCG" As Part Of Their Open-Sourcing Effort Around Fibers

    Since 2013 Google has been working on Fibers, a promising user-space scheduling framework. Fibers has been in use at Google, delivering great results, and recently they began work on open-sourcing the framework for Linux; as part of that effort they are working on the new "UMCG" code...

    https://www.phoronix.com/scan.php?pa...MCG-0.2-Fibers

  • #2
    Does Go use UMCG?
    Is UMCG inspired by Go, or does it have any relation to Go?
    Could Python be adapted to take advantage of UMCG?



    • #3
      This is basically FUTEX_SWAP. I want it. Now!



      • #4
        Is this useful for implementing work stealing or co-routines?



        • #5
          I thought it was established that fast 1:1 threading beats M:N, so M:N is only used by inferior OSes.



          • #6
            Originally posted by pal666
            I thought it was established that fast 1:1 threading beats M:N, so M:N is only used by inferior OSes.
            This suggests that having more user threads than HW threads can be useful for hiding I/O latency. Whether it's profitable probably depends on the latency of your I/O operations; for network I/O, it could be very beneficial.



            • #7
              Originally posted by pal666
              I thought it was established that fast 1:1 threading beats M:N, so M:N is only used by inferior OSes.
              It's not just that. It's also fast context switching. When doing a context switch (e.g. to do IPC to a process you know is waiting for it), you can run the new process's thread directly on the same core while the old one waits, without ditching all the caches, since it uses the same data. It takes around 100 ns, which is insanely low for a context switch, almost a 10x boost. That's why I want it.
