MGLRU Continues To Look Very Promising For Linux Kernel Performance


  • MGLRU Continues To Look Very Promising For Linux Kernel Performance

    Phoronix: MGLRU Continues To Look Very Promising For Linux Kernel Performance

    One of many promising kernel patch series at the moment for enhancing Linux kernel performance is the multi-gen LRU framework (MGLRU) devised by Google engineers. They found that the current Linux kernel page reclaim code is too expensive in CPU terms and can make poor eviction choices, while MGLRU aims to yield better performance. These results are quite tantalizing and MGLRU is now up to its ninth revision...


  • #2
    I just rebuilt yesterday's 5.16.13 kernel with the v7 patch. Oh well, here we go again.
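
    For anyone else trying the series, here is a rough sketch of how the feature can be checked and toggled once the patched kernel boots. This assumes the v7 series exposes the same lru_gen sysfs knob and Kconfig names as the revisions that later went upstream; paths may differ between patch versions.

      # Kernel config the series adds (set before building):
      #   CONFIG_LRU_GEN=y            # build the multi-gen LRU code
      #   CONFIG_LRU_GEN_ENABLED=y    # have it on by default at boot

      # Toggle it at runtime and confirm it is active
      echo y | sudo tee /sys/kernel/mm/lru_gen/enabled
      cat /sys/kernel/mm/lru_gen/enabled    # non-zero bitmask means enabled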



    • #3
      This post inspired me to ask a question I have been thinking about for a while. I kind of know, at least in simplistic terms, that the answer is yes. I also get that there is probably a really deep and interesting conversation that could be had around this with people really in the know. Anyway...

      I have read up some on the history of computing. Things like Charles Babbage and Ada Lovelace, various mechanical and electro-mechanical devices, etc. There was some student of logic (Claude Shannon, I believe) who did a summer internship at AT&T/Bell, and when he saw all the electro-mechanical telephone switching equipment, he figured he could take these switches and relays and assemble them in a way that could perform logical computations. All sorts of variations on this appear in early computing history.

      Then we get vacuum tubes, then transistors, then transistors (and other components) etched onto silicon. Per-unit costs for transistors and the rest, etched onto silicon, become vanishingly small, and what we can do becomes limited more and more by what our imaginations can conceive rather than by cost and size.

      Okay, so here is what I am getting at. With the caveat that there is more nuance and detail here if you are in the deeper know: is it safe to say that today's modern processors are just really super complex logic/analytical engines? That if you know and understand the history of where they came from, the evolution of these things makes perfect sense, giants standing on the shoulders of other giants, over and over again?

      Thanks



      • #4
        Oh please yes, let this be the shining light at the end of the dark, long tunnel of memory-pressure issues under Linux. Maybe in combination with the new Facebook OOM-Killer.

        I have gone through different distros, currently rocking Gentoo for a few years now, and have switched schedulers, memory options and systems countless times. One persistent thing I always noticed is how Linux would crap itself under high memory pressure: the system freezing, locking up, slowing down to one cursor redraw every 60 seconds, and the framebuffer login prompt taking minutes to appear.

        On systems with 512 MB, 8 GB or 32 GB of RAM, with swap on disk, swap on SSD, etc.

        Something I never understood is why this is so clunky on Linux. If one rogue process starts allocating gigabytes of memory, why doesn't the OOM killer kill it after a few seconds? Sometimes it takes a very long time, or it never happens at all.
        Or if I have some software that has loaded x gigabytes of data, and I have an interactive shell and desktop, I would expect the kernel to swap out the pages of the large process first, since that only affects one process and is likely a batch job. Instead, all processes lag and freeze, including the shell and such.

        Looking forward to trying those patches, even though last year I upgraded to 32 GB exactly because I wanted to avoid those memory-pressure situations on my laptop.
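
        Until something like MGLRU lands, one common stopgap is a userspace OOM daemon driven by the kernel's pressure stall information (PSI); the "Facebook OOM-Killer" mentioned above is presumably oomd, which now ships as systemd-oomd. A minimal sketch, assuming systemd-oomd is available on the system and with purely illustrative threshold values:

          # See how long tasks have been stalling on memory (needs CONFIG_PSI)
          cat /proc/pressure/memory

          # Run systemd-oomd so sustained memory pressure triggers a kill
          sudo systemctl enable --now systemd-oomd

          # Illustrative thresholds in /etc/systemd/oomd.conf
          #   [OOM]
          #   DefaultMemoryPressureLimit=60%
          #   DefaultMemoryPressureDurationSec=20s
          #
          # Note: oomd only acts on cgroups that opt in, e.g. a slice with
          # ManagedOOMMemoryPressure=kill set in its unit file.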



        • #5
          Originally posted by Draget View Post
          Oh please yes, let this be the shining light at the end of the dark, long tunnel of memory-pressure issues under Linux. […]

          I am also using Gentoo, and I use it to compile Firefox, LLVM and cargo.

          I only have 16 GB, but I never experience any OOM kills, even with my /var/tmp/portage on tmpfs.

          That is because I turned on zswap with zstd at its highest compression level, and I have at least 2 GB of swap (can't remember the exact size right now).

          zswap is really, really useful.
          I hope that it will soon support writing the compressed pages out to swap as-is.
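
          For reference, a minimal sketch of that kind of zswap setup, assuming the kernel was built with CONFIG_ZSWAP and CONFIG_CRYPTO_ZSTD; the pool-size cap is only an example value. (Today, pages evicted from the compressed pool are still decompressed before being written to the swap device, which is what the wish above is about.)

            # Pick the compressor, cap the in-RAM pool, then enable zswap
            echo zstd | sudo tee /sys/module/zswap/parameters/compressor
            echo 20   | sudo tee /sys/module/zswap/parameters/max_pool_percent
            echo 1    | sudo tee /sys/module/zswap/parameters/enabled

            # Or persistently on the kernel command line:
            #   zswap.enabled=1 zswap.compressor=zstd

            # Check stats (needs debugfs mounted): pool size, stored pages, rejects
            sudo grep -r . /sys/kernel/debug/zswap/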



          • #6
            That is very welcome. I've always been baffled by how badly the kernel behaves under memory pressure: paging out pages that were just paged in moments ago, while not touching an application holding 15 GB of memory that is never accessed, leaving the machine inoperable even though the active working set is well below the size of physical RAM.

            It has become mostly obsolete for desktop PCs, since RAM is so cheap that you can easily put 64 GB in every PC, which is enough for most use cases.
            But it is still useful for various kinds of other devices with limited RAM.

            Originally posted by ehansin View Post
            Okay, so here is what I am getting at. With the caveat that there is more nuance and detail here if you are in the deeper know: is it safe to say that today's modern processors are just really super complex logic/analytical engines?
            Sure, CPUs are still big logic circuits that can do basic operations like moving values around in memory and performing some basic mathematical operations. They got more complex over time with things like pipelines and caches being added, but the principle is the same. You can find simple 8-bit CPU designs for education, or even build your own in software like Logisim if you're interested. Or here's a 3D model of a very simple CPU (an ARM Cortex-M0):
            (YouTube video link)



