Fedora To Further Evaluate vm.max_map_count Tuning For Better Linux Gaming Experience


  • Fedora To Further Evaluate vm.max_map_count Tuning For Better Linux Gaming Experience

    Phoronix: Fedora To Further Evaluate vm.max_map_count Tuning For Better Linux Gaming Experience

    There's been a Fedora 39 proposal under evaluation to boost the kernel's vm.max_map_count to help some Windows games on Steam Play, though concerns were raised that bumping this kernel tunable too high may not be wise. As such, further testing is planned for tuning Fedora's stock vm.max_map_count value...
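
    For context, the tunable can be read and adjusted at runtime with sysctl. The value below is purely illustrative, since the final Fedora default is exactly what is still being evaluated:
    Code:
    # Read the current limit (the upstream kernel default is 65530)
    sysctl vm.max_map_count

    # Raise it for the running system (illustrative value, not Fedora's final pick)
    sudo sysctl -w vm.max_map_count=1048576

    # Persist across reboots via a sysctl drop-in
    echo 'vm.max_map_count = 1048576' | sudo tee /etc/sysctl.d/99-max-map-count.conf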


  • #2
    From the comments:

    Speaking of limits and safety.
    Fully updated Fedora 38 can be killed using the simplest fork bomb:
    Code:
    # bash - a function that recursively pipes itself into itself in the background
    fork() {
        fork | fork &
    }
    fork
    So, if people are concerned about DoS attacks, maybe they should start by being concerned that Fedora out of the box is not foolproof at all.
    Don't try to run this on your PC
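
    A common mitigation, independent of the vm.max_map_count discussion, is to cap how many processes a single user may spawn, so a fork bomb hits a ceiling instead of exhausting the machine. A minimal sketch (the username and numbers are illustrative):
    Code:
    # Cap processes for the current shell and its children (bash builtin)
    ulimit -u 2048

    # Or persist per-user via pam_limits, e.g. in /etc/security/limits.d/fork.conf:
    #   someuser  hard  nproc  4096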

    Comment


    • #3
      Not sure what's more likely to crash a system: being OOM, or the kernel's OOM handler.

      Comment


      • #4
        Originally posted by avis View Post
        Don't try to run this on your PC
        It's another example of 'brilliant' Fedora choices. First they messed up the kernel config, and now it seems they're ruining security as well. What's more disappointing, they didn't allow comments criticizing their choices on Fedora's blog.

        I tried this fork bomb, but I'm using a custom kernel config with MGLRU enabled and it had no impact. It runs and gets immediately killed in a loop, but I can't stop it.

        P.S. Let Fedora be more 'inclusive', so more clueless people will mess it up and get paid for it.
        Last edited by Volta; 19 May 2023, 08:05 AM.

        Comment


        • #5
          Originally posted by Volta View Post

          It's another example of 'brilliant' Fedora choices. First they messed up the kernel config, and now it seems they're ruining security as well. What's more disappointing, they didn't allow comments criticizing their choices on Fedora's blog.

          I tried this fork bomb, but I'm using a custom kernel config with MGLRU enabled and it had no impact. It runs and gets immediately killed in a loop, but I can't stop it.

          P.S. Let Fedora be more 'inclusive', so more clueless people will mess it up and get paid for it.
          What would the code be? GCC throws errors on it.

          BTW, is this any different in Fedora 38 than in Debian 12 or Ubuntu 22.04/23.04?

          EDIT: It's not C, it's bash. Just run it with sh... The default Fedora kernel doesn't kill the script; the computer stays surprisingly responsive until it doesn't, after depleting the memory.
          Last edited by jorgepl; 19 May 2023, 08:52 AM.

          Comment


          • #6
            Docker in the past had a systemd unit with `LimitNOFILE=1048576` (a million) to work around a similar issue. That wasn't ideal, since each container then inherited that value as the soft limit for its processes instead of 1024. It was later changed to `LimitNOFILE=infinity`, which at the time resolved to the same value (fs.nr_open). But then systemd v240 arrived in late 2018 and, for most distros that didn't opt out like Debian did, raised `fs.nr_open` to over a billion.

            That caused a tonne of difficult-to-troubleshoot bugs with software running in containers: a MySQL container would consume multiple GB of memory at startup and get OOM-killed, while other processes like Fedora's own DNF became ridiculously slow at installing a package. Many daemons iterate through the whole descriptor range at initialization to close any open file descriptors, as a good practice. For 1024 that's quick, and even for a million you lose less than a second, but a billion is a thousand times that: 1 second becomes roughly 16 minutes, and 4 seconds becomes over an hour.

            FWIW, systemd v240 went with a sane default of `LimitNOFILE=1024:524288` (they initially planned half that, until reports came in asking to bump it for some niche software).
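
            If you ever need to pin this for one service, a systemd drop-in is the usual approach; a minimal sketch, with the path and service name assuming Docker as in the example above:
            Code:
            # /etc/systemd/system/docker.service.d/limits.conf (illustrative drop-in)
            [Service]
            LimitNOFILE=1024:524288

            # Then: systemctl daemon-reload && systemctl restart docker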

            ---

            So it's good to see that they're going to make a similarly large jump, as originally proposed, and then try to find a suitable lower limit. With `LimitNOFILE` it was much easier to associate failures with the limit being too low and adjust upward than it was to go the other way around.

            Comment


            • #7
              Originally posted by jorgepl View Post
              BTW, is this any different in Fedora 38 than in Debian 12 or Ubuntu 22.04/23.04?
              I'd have to try it.

              EDIT: It's not C, it's bash. Just run it with sh... The default Fedora kernel doesn't kill the script; the computer stays surprisingly responsive until it doesn't, after depleting the memory.
              Yes, and it shuts down easily, so there must be a way to kill those processes.

              Comment


              • #8
                Originally posted by Volta View Post

                It's another example of 'brilliant' Fedora choices. First they messed up the kernel config, and now it seems they're ruining security as well. What's more disappointing, they didn't allow comments criticizing their choices on Fedora's blog.

                I tried this fork bomb, but I'm using a custom kernel config with MGLRU enabled and it had no impact. It runs and gets immediately killed in a loop, but I can't stop it.

                P.S. Let Fedora be more 'inclusive', so more clueless people will mess it up and get paid for it.
                This change does not reduce security at all. If you really want to clog your system by creating a huge number of mappings, you can already do so from multiple processes right now (e.g. a fork bomb where each process allocates, dirties, and repeatedly reloads several megabytes, maybe even in huge pages).
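
                For reference, mapping counts are easy to inspect per process, which makes it clear the limit applies to a single process rather than the whole system:
                Code:
                # Count the memory mappings of the current shell
                wc -l < /proc/$$/maps

                # The per-process ceiling being discussed
                sysctl vm.max_map_count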

                And I don't know what you mean by "messed up kernel config" and "not allowing comments criticizing their choices" when I see plenty of them. Combined with your misguided rant against inclusivity, I guess you're just trolling because Fedora doesn't make exactly the decisions you'd like them to make.

                Comment


                • #9
                  Originally posted by Volta View Post
                  Yes, and it shuts down easily, so there must be a way to kill those processes.
                  Run apps you don't trust as a separate user, and have a (possibly hotkeyed) script you can easily run when a DoS happens which just does pkill -9 --uid user. Of course such a script must run as root; since it's your OS, I expect you to be able to figure out how.
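
                  A minimal sketch of such a script, assuming the untrusted apps run as a user named sandbox (the name is illustrative):
                  Code:
                  #!/bin/sh
                  # Run as root (e.g. bound to a hotkey through sudo): force-kill
                  # every process owned by the sandboxed user.
                  pkill -9 --uid sandbox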

                  Comment


                  • #10
                    I'm not sure what the reasoning would be for a default in the 1 million to 16 million range. Is there any system that can reasonably survive such a high number of memory mappings? From a security standpoint, what's the difference between 1 million and 2 billion?

                    Also, giant shame on the Fedora maintainers:

                    Elasticsearch (very common software among devs) explicitly requires raising that limit most of the time, and it demands at least four times the stock default even with a basically flat config and not much in the database. It's literally the first link you see when you google what vm.max_map_count is. Malloc debuggers also love creating a ton of maps, and IBM's Safer Payments documentation recommends raising it.
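
                    For reference, Elasticsearch's documented requirement, which is why so many devs have already had to touch this tunable:
                    Code:
                    # Elasticsearch's bootstrap check fails in production mode below this value
                    sudo sysctl -w vm.max_map_count=262144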
                    Last edited by piotrj3; 19 May 2023, 09:52 AM.

                    Comment
