New MGLRU Linux Patches Look To Improve The Scalability Of Global Reclaim


    Phoronix: New MGLRU Linux Patches Look To Improve The Scalability Of Global Reclaim

    Among the many exciting new features in Linux 6.1 is the merging of the Multi-Gen LRU "MGLRU" code, which has shaped up to be one of the best kernel innovations of 2022 for overhauling the Linux kernel's page reclamation code. The performance results are already very promising, and MGLRU is being used successfully at Google and in other large deployments. The work on further advancing the kernel in this area isn't over, though...


  • #2
    This is one of my favorite kinds of improvements: creative ways to get the most performance and capability out of one's hardware.

    I know a lot of people say they don't need any swap, but isn't it nice to have it as a backup, so a misbehaving app doesn't cost you your whole session? Swapping can even serve as an early warning sign when it kicks in unexpectedly. I think it's better than OOM kills and crashes. Does it really hurt to have at least something like ZRAM or ZSwap enabled?



    • #3
      Originally posted by Mitch View Post
      I think it's better than OOM kills and crashes.
      I like my OOM-kills. It reminds me that I need more RAM.
      And I can always do with more RAM.



      • #4
        Originally posted by milkylainen View Post
        I like my OOM-kills.
        ...to each his own.



        • #5
          Is MGLRU likely to benefit tiered memory, or is it too slow for that?

          Tiered memory is really going to heat up as CXL memory devices begin to gain market share.



          • #6
            I think the worst decision was enabling memory overcommit by default.
            This led to the expectation that malloc() always returns a valid pointer.
            I want it to return NULL when the program cannot allocate more memory, not just when it has depleted its address space.

            Default memory overcommit is why Google Chrome can allocate 1TB on a 64-bit machine with far less RAM.
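
            A minimal C sketch of that behavior (my own illustration, assuming the default heuristic policy, vm.overcommit_memory=0): every allocation below "succeeds" because no physical pages are committed until the memory is written, so the total can far exceed RAM plus swap.

```c
/* Under default heuristic overcommit (vm.overcommit_memory=0), each of
 * these 1 GiB allocations succeeds even though the sum (~1 TiB) far
 * exceeds RAM + swap: no physical pages are committed until first write.
 * Touching it all later wakes the OOM killer; malloc() never sees NULL. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t chunk = 1UL << 30;            /* 1 GiB per allocation */
    for (int i = 0; i < 1024; i++) {     /* ask for ~1 TiB in total */
        void *p = malloc(chunk);
        if (p == NULL) {                 /* reachable under strict mode 2 */
            printf("malloc returned NULL after %d GiB\n", i);
            return 1;
        }
    }
    printf("~1 TiB of untouched allocations succeeded\n");
    return 0;
}
```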

            Comment


            • #7
              Originally posted by zboszor View Post
              I think the worst decision was enabling memory overcommit by default.
              This led to the expectation that malloc() always returns a valid pointer.
              I want it to return NULL when the program cannot allocate more memory, not just when it has depleted its address space.

              Default memory overcommit is why Google Chrome can allocate 1TB on a 64-bit machine with far less RAM.
              This is a great point. It seems like overcommit should be turned off in most app development and tuning scenarios, so that no app becomes unnecessarily reliant on it. I'd imagine overcommit is better when a machine needs to run more apps, or run existing ones more fluidly, in non-development scenarios: routers, servers, phones, gaming, etc. I can also imagine exceptional cases where apps ought to be designed around overcommit, such as large databases, and even then you could design such an app to manage its own overcommit / disk caching outside the kernel. The one advantage of the kernel is that it gives you all the overcommit options, such as ZRAM, ZSwap, disk tiering, and moving the swap files off the app's disk, without having to reinvent these tools in your app.
              Last edited by Mitch; 07 December 2022, 12:34 PM.
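
              For the tuning scenario above, the overcommit policy is just a sysctl; a minimal sketch of flipping it to strict accounting, equivalent to running `sysctl vm.overcommit_memory=2` as root:

```c
/* Switch the system to strict overcommit accounting
 * (vm.overcommit_memory=2), so malloc() can actually return NULL when
 * the commit limit is reached. Needs root; 0 restores the default. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("2\n", f);   /* 0 = heuristic (default), 1 = always, 2 = strict */
    fclose(f);
    return 0;
}
```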



              • #8
                Originally posted by Mitch View Post
                It seems like overcommit should be turned off in most app development and tuning scenarios, so that no app becomes unnecessarily reliant on it.
                It's not primarily the app, but the usage scenario that determines whether overcommit is happening. Most code doesn't call malloc() until it needs the memory, and then typically fills/initializes the entire block at that time.

                I suspect it's more the heap-management code (i.e. the guts of malloc()/free()) that's grabbing address ranges before they're entirely used/needed. I wonder if thread stacks are also over-committed...
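
                A hedged sketch of that allocator pattern (my own illustration, not from any particular malloc implementation): reserve a large virtual range up front with MAP_NORESERVE and let the kernel back pages lazily on first touch.

```c
/* Reserve a large arena of address space without committing memory.
 * MAP_NORESERVE asks the kernel not to account swap for the range, so
 * the reservation is nearly free; physical pages arrive on first write. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t arena = (size_t)64 << 30;   /* 64 GiB of address space (64-bit) */
    void *base = mmap(NULL, arena, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(base, 0, 4096);             /* faults in just a single page */
    munmap(base, arena);
    return 0;
}
```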

                Originally posted by Mitch View Post
                you could design such an app to manage its own overcommit / disk caching outside the kernel. The one advantage of the kernel is that it gives you all the overcommit options, such as ZRAM, ZSwap, disk tiering, and moving the swap files off the app's disk, without having to reinvent these tools in your app.
                Why do userspace programs need to deal with this kind of complexity? That properly belongs in the OS, which lets the user set a per-app policy or potentially even change it on-the-fly.



                • #9
                  Originally posted by coder View Post
                  It's not primarily the app, but the usage scenario that determines whether overcommit is happening. Most code doesn't call malloc() until it needs the memory, and then typically fills/initializes the entire block at that time.

                  I suspect it's more the heap-management code (i.e. the guts of malloc()/free()) that's grabbing address ranges before they're entirely used/needed. I wonder if thread stacks are also over-committed...


                  Why do userspace programs need to deal with this kind of complexity? That properly belongs in the OS, which lets the user set a per-app policy or potentially even change it on-the-fly.
                  I'm speaking outside my area of expertise, but I'm just trying to brainstorm some use cases based on my understanding.

                  I can imagine the userspace program having access to better context and heuristics for its own memory than the kernel, which could give the app an advantage in some eviction and reclaim decisions, in terms of both latency and frequency.

                  For example, the app might know that it won't use this 1 GB of recently and frequently processed data for some lengthy timeframe (an hour, maybe), that it will need another 1 GB for a different purpose, and that there is some memory pressure. So it evicts the 1 GB itself, by compressing it or writing it to disk. The kernel might still believe the 1 GB is hot, while the app knows it can safely be evicted and reclaimed later.

                  The kernel will probably figure this out eventually, but the app has more context and could act sooner. If the app can hint to the kernel that the 1 GB of memory is now safe to evict but not delete, letting the kernel do the work would still likely be the best approach.
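
                  Linux already exposes a hint along these lines; a minimal sketch using madvise() (the 1 GB buffer stands in for the hypothetical working set above):

```c
/* MADV_COLD (Linux 5.4+) tells reclaim this range is unlikely to be
 * needed soon, so it is deactivated first, without discarding contents.
 * MADV_FREE would instead let the kernel drop the pages lazily. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1UL << 30;   /* the hypothetical 1 GB working set */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(buf, 1, len);      /* ...heavy processing happens here... */

    /* The app knows it won't touch this for an hour: mark it cold so
     * reclaim evicts it first, but keep the contents for later reuse. */
    if (madvise(buf, len, MADV_COLD) != 0)
        perror("madvise(MADV_COLD)");

    munmap(buf, len);
    return 0;
}
```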



                  • #10
                    Originally posted by Mitch View Post
                    I can imagine the userspace program having access to better context and heuristics for its own memory than the kernel, which could give the app an advantage in some eviction and reclaim decisions, in terms of both latency and frequency.
                    Most userspace doesn't use memory in the flexible way you describe. For userspace that does application-level caching, though, I could agree there should be some sort of API to give the kernel more visibility and control over what it's doing.
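
                    On the per-app policy side, cgroup v2 already offers some of this; a hedged sketch (the cgroup path is a made-up example) asking the kernel to proactively reclaim from one app's cgroup via memory.reclaim (Linux 5.19+):

```c
/* Writing a size to a cgroup's memory.reclaim file (Linux 5.19+) asks
 * the kernel to proactively reclaim that much memory from just that
 * cgroup. The "myapp" path below is a hypothetical example cgroup. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/fs/cgroup/myapp/memory.reclaim", "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("1G\n", f);   /* try to reclaim 1 GiB from this cgroup */
    fclose(f);
    return 0;
}
```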

