Glibc 2.34 Will Provide More Helpful Linker Diagnostics

  • Glibc 2.34 Will Provide More Helpful Linker Diagnostics

    Phoronix: Glibc 2.34 Will Provide More Helpful Linker Diagnostics

    With the "HWCAPS" feature of Glibc 2.33+ making it easier to deploy optimized versions of libraries on Linux systems, diagnosing issues around it can be a bit more complicated. On the way for Glibc 2.34 is a welcome improvement to help with such issues...
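
    To see which library variant the dynamic loader actually picked on a given system (the baseline one, or a glibc-hwcaps variant such as an x86-64-v2 subdirectory), one rough approach is to list the shared objects mapped into the process. Below is a minimal sketch using glibc's dl_iterate_phdr(); the hwcaps paths in the comment are only illustrative examples.

    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    /* Print the path of every shared object the dynamic loader mapped into
     * this process, e.g. /usr/lib64/libfoo.so versus
     * /usr/lib64/glibc-hwcaps/x86-64-v2/libfoo.so (paths are illustrative). */
    static int print_object(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size;
        (void)data;
        printf("%s\n", info->dlpi_name[0] ? info->dlpi_name : "(main program)");
        return 0; /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(print_object, NULL);
        return 0;
    }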


  • #2
    Bless, letting non-Gentoo folk get a feel for the real thing



    • #3
      Does anyone know if this is being worked on by the openSUSE team for Tumbleweed? (Glibc's HWCAPS)

      I'm seriously considering switching back to it since I will be able to activate full kernel preemption myself starting with Linux 5.12, which they decided to mysteriously disable with the switch to Linux 5.0 ...

      Anyway, would be cool to be on a rolling-release again AND to have higher performing packages by default if they would offer the option of at least x86_64 level 2 ! (SSE 4.1 as minimum if I remember correctly ...)
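
      (For anyone unsure whether their CPU would even benefit: here is a quick sketch using the GCC/Clang builtins __builtin_cpu_init()/__builtin_cpu_supports() to test for SSE4.1/SSE4.2/POPCNT, some of the features commonly associated with the x86-64-v2 level. The exact feature list per level is an assumption here; the x86-64 psABI is the authoritative reference.)

      #include <stdio.h>

      /* Check a few CPU features commonly associated with the x86-64-v2
       * micro-architecture level (assumed list, not authoritative). */
      int main(void)
      {
          __builtin_cpu_init(); /* initialize CPU feature detection (GCC/Clang builtin) */
          printf("sse4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
          printf("sse4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
          printf("popcnt: %s\n", __builtin_cpu_supports("popcnt") ? "yes" : "no");
          return 0;
      }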



      • #4
        Originally posted by Linuxxx View Post
        Does anyone know if this is being worked on by the openSUSE team for Tumbleweed? (Glibc's HWCAPS)

        I'm seriously considering switching back to it since I will be able to activate full kernel preemption myself starting with Linux 5.12, which they decided to mysteriously disable with the switch to Linux 5.0 ...

        Anyway, would be cool to be on a rolling-release again AND to have higher performing packages by default if they would offer the option of at least x86_64 level 2 ! (SSE 4.1 as minimum if I remember correctly ...)
        https://hackweek.suse.com/all/projec...age-generation - it seems they are considering it.



        • #5
          Originally posted by ms178 View Post

          https://hackweek.suse.com/all/projec...age-generation - it seems they are considering it.
          Awesome - really good to hear!

          About the only thing left that would be desirable then would be the option to dynamically adjust the Linux kernel's timer tick frequency (i.e. CONFIG_HZ).
          openSUSE defaults to 250 Hz; however, 1000 Hz seems to be better for gaming, at least when it comes to minimum framerates (which is the value that actually matters most).
          Unfortunately I've only found this single benchmark where this was actually looked at, and even then only in the context of KVM passthrough:

          Siege’s results were quite interesting. Overall, 1000Hz nets better minimum framerates, while 100Hz nets better maximum framerates. Average framerates are no different across CONFIG_HZ settings. Benchmark was run 6 times per CONFIG_HZ setting and results were averaged out. All settings remained at the lowest preset with the exception of medium texture quality, 4x anisotropic texture filtering, and medium shading quality. The low graphical settings helped put more load on the CPU rather than the GPU.

          Conclusion
          If you’re compiling your kernel from source, it is advisable to pick 1000Hz over 100Hz or 250Hz in order to receive a small but tangible minimum framerate improvement while gaming.
          The Linux kernel's CONFIG_HZ option can modify the balance between system throughput and latency. In this article, we explore its effects on KVM.


          Maybe Michael would be interested in having a closer look at benchmarking the different CONFIG_HZ values of Linux?
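
          (In case anyone wants to check which tick rate their current kernel was built with: besides grepping CONFIG_HZ in /boot/config-$(uname -r), the coarse POSIX clocks tick at the kernel's jiffy rate, so a rough sketch like the one below estimates HZ at runtime. Treat the mapping from resolution to HZ as an assumption, not a guaranteed interface.)

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <time.h>

          /* The resolution of CLOCK_MONOTONIC_COARSE is one kernel tick, so
           * roughly 4 ms suggests CONFIG_HZ=250 and 1 ms suggests CONFIG_HZ=1000. */
          int main(void)
          {
              struct timespec res;
              if (clock_getres(CLOCK_MONOTONIC_COARSE, &res) != 0) {
                  perror("clock_getres");
                  return 1;
              }
              double period = res.tv_sec + res.tv_nsec / 1e9;
              printf("coarse tick: %.6f s (~%.0f Hz)\n", period, 1.0 / period);
              return 0;
          }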



          • #6
            Originally posted by Linuxxx View Post

            Awesome - really good to hear!

            About the only thing left that would be desirable then would be the option to dynamically adjust the Linux kernel's timer tick frequency (i.e. CONFIG_HZ).
            openSUSE defaults to 250 Hz; however, 1000 Hz seems to be better for gaming, at least when it comes to minimum framerates (which is the value that actually matters most).
            Unfortunately I've only found this single benchmark where this was actually looked at, and even then only in the context of KVM passthrough:

            The Linux kernel's CONFIG_HZ option can modify the balance between system throughput and latency. In this article, we explore its effects on KVM.


            Maybe Michael would be interested in having a closer look at benchmarking the different CONFIG_HZ values of Linux?
            Interesting. FYI, on Ubuntu the lowlatency kernel uses 1000 Hz (but it has full preempt enabled, which may not be ideal for gaming).



            • #7
              Originally posted by jacob View Post

              Interesting. FYI, on Ubuntu the lowlatency kernel uses 1000 Hz (but it has full preempt enabled, which may not be ideal for gaming).
              Where did You get the idea that full preempt may not be ideal for gaming?



              • #8
                Originally posted by Linuxxx View Post

                Where did You get the idea that full preempt may not be ideal for gaming?
                What would be helped by making all drivers and kernel subsystems slower, for an application that runs at high load for tens of milliseconds (ridiculously long for realtime)?
                Full preempt/realtime comes at a steep cost and regularly causes issues (e.g. drivers running into timeouts).



                • #9
                  Originally posted by discordian View Post

                  What would be helped by making all drivers and kernel subsystems slower, for an application that runs at high load for tens of milliseconds (ridiculously long for realtime)?
                  Full preempt/realtime comes at a steep cost and regularly causes issues (e.g. drivers running into timeouts).
                  It's quite clear that You seem to think full kernel preemption (soft-realtime [PREEMPT]) equals a hard-realtime kernel (PREEMPT RT), when obviously it does not!

                  Also, the only driver I'm aware of that ran into timeout issues was in fact nVidia's binary blob, and even then only with Linux patched as a hard-realtime kernel, but never ever when configured as a soft-realtime one! (I know that because I have been using nVidia's driver with PREEMPT enabled for years without any issues.)

                  Another thing You got wrong is the notion that full kernel preemption does not help gaming-related system loads, when in fact, again, the opposite is true.
                  Think about it this way:
                  A fully preemptible kernel prioritizes user-space software, which games are obviously a part of.
                  Whenever the game needs something from the kernel because of hardware interrupts (e.g. reacting to user input or drawing the next frame), the Linux kernel will drop whatever it was doing at that exact moment and process whatever the game requires as quickly as the underlying hardware allows. That helps keep the minimum framerates as high as possible and the maximum frametimes as low as possible, since the odds of any latency issues impacting the game are greatly decreased.

                  Hope this explanation dispels the baseless fears a lot of Linux users seem to have against a fully preemptible (once more: SOFT-realtime) kernel...
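
                  (For reference, a distribution kernel built with CONFIG_PREEMPT usually advertises it in its uname version string, the same text `uname -v` prints. Here is a small sketch to check; note that the presence of the "PREEMPT" tag is a build convention I'm assuming, not a guaranteed interface.)

                  #include <stdio.h>
                  #include <string.h>
                  #include <sys/utsname.h>

                  /* Print the kernel version string and look for the PREEMPT tag
                   * that full-preempt builds conventionally include. */
                  int main(void)
                  {
                      struct utsname u;
                      if (uname(&u) != 0) {
                          perror("uname");
                          return 1;
                      }
                      printf("version: %s\n", u.version);
                      printf("full preempt build: %s\n",
                             strstr(u.version, "PREEMPT") ? "probably yes" : "probably no");
                      return 0;
                  }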



                  • #10
                    Originally posted by Linuxxx View Post

                    It's quite clear that You seem to think full kernel preemption (soft-realtime [PREEMPT]) equals a hard-realtime kernel (PREEMPT RT), when obviously it does not!
                    So much wrong with this.

                    First you brought up "full preempt": that's as "realtime" as Linux can get, and it really is not usable for hard-realtime. Solutions like Xenomai are hard-realtime, and they run completely below Linux. There is some effort to get the I-pipe mainlined, at which point Linux would have a hard-realtime subsystem, but we aren't there yet.
                    (PREEMPT_RT, https://github.com/torvalds/linux/bl...config.preempt)

                    Originally posted by Linuxxx View Post
                    Also, the only driver I'm aware of that ran into timeout issues was in fact nVidia's binary blob, and even then only with Linux patched as a hard-realtime kernel, but never ever when configured as a soft-realtime one! (I know that because I have been using nVidia's driver with PREEMPT enabled for years without any issues.)
                    I ran into issues with MMC drivers, which report timeouts if they are preempted while programming registers. There are tons of drivers not prepared for this.

                    Originally posted by Linuxxx View Post
                    Another thing You got wrong is the notion that full kernel preemption does not help gaming-related system loads, when in fact, again, the opposite is true.
                    Think about it this way:
                    A fully preemptible kernel prioritizes user-space software, which games are obviously a part of.
                    Yeah, that is what PREEMPT does; no need for PREEMPT_RT.
                    Originally posted by Linuxxx View Post
                    Whenever the game needs something from the kernel because of hardware interrupts (e.g. reacting to user input or drawing the next frame), the Linux kernel will drop whatever it was doing at that exact moment and process whatever the game requires as quickly as the underlying hardware allows. That helps keep the minimum framerates as high as possible and the maximum frametimes as low as possible, since the odds of any latency issues impacting the game are greatly decreased.
                    Within kernel threads and drivers that means first making them interruptible and replacing a lot of fast synchronization (e.g. atomic compare-exchange) with full mutexes/locks, potentially even locking/unlocking multiple times so as not to block for too long (see the userspace sketch below).
                    Ironically that means that, on average, your latency will be higher (that's the case for most realtime optimizations), with fewer spikes. The critical issue is that:

                    - it's unlikely that a driver will block for any prolonged time that is relevant for games (if it has priority, it will not be interrupted, just possibly delayed once before it runs)
                    - except if you deal with memory faults, at which point RT or non-RT matters little.
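
                    To make the locking trade-off above concrete, here is a userspace analogy (not kernel code, only a sketch): the same counter update done with a single atomic read-modify-write versus a full, sleepable mutex. PREEMPT_RT pushes much in-kernel locking toward the second shape.

                    #include <pthread.h>
                    #include <stdatomic.h>
                    #include <stdio.h>

                    static atomic_long fast_counter = 0;          /* lock-free path */
                    static long slow_counter = 0;
                    static pthread_mutex_t slow_lock = PTHREAD_MUTEX_INITIALIZER;

                    static void bump_fast(void)
                    {
                        /* One atomic instruction; never sleeps, never yields the CPU. */
                        atomic_fetch_add(&fast_counter, 1);
                    }

                    static void bump_slow(void)
                    {
                        /* Mutex-protected; preemptible and fair, but each update may sleep. */
                        pthread_mutex_lock(&slow_lock);
                        slow_counter++;
                        pthread_mutex_unlock(&slow_lock);
                    }

                    int main(void)
                    {
                        bump_fast();
                        bump_slow();
                        printf("fast=%ld slow=%ld\n", (long)atomic_load(&fast_counter), slow_counter);
                        return 0;
                    }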

                    Originally posted by Linuxxx View Post
                    Hope this explanation dispels the baseless fears a lot of Linux users seem to have against a fully preemptible (once more: SOFT-realtime) kernel...
                    PREEMPT is not soft-realtime (delay in the kernel is not bounded).
                    PREEMPT_RT is soft-realtime, and you don't seem to know what that means.

                    (I have been writing hard-realtime software for 12+ years, but hey, what do I know....)

