Linux 6.2 Speeds Up A Function By 715x - kallsyms_lookup_name()


  • #11
    Originally posted by oleid View Post

    The obvious question, which is not answered in the article, would be: when is this function actually used? Just during boot-up? Or permanently while the kernel is running? When modules are loaded?

    A 700x performance gain is great. Yet, if this function is only called e.g. on kernel startup, it doesn't really matter that much.
    From the original patch submission, it seems this was developed to speed up kernel livepatching, which makes sense given that it was written by a telecom company.



    Other than that and module loading, another significant user of this interface appears to be BPF, though that is probably not a hot path for most uses.
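
    For context, kallsyms_lookup_name() resolves a symbol name to its address at runtime. A minimal, illustrative sketch of what an in-kernel caller looks like is below; resolve_addr() is a made-up helper, and since 5.7 the function is no longer exported to out-of-tree modules, so this only shows the shape of the API:

    #include <linux/kallsyms.h>
    #include <linux/printk.h>

    /* Illustrative helper (not from the patch): resolve a symbol's
     * address by name at runtime. kallsyms_lookup_name() returns 0
     * when the symbol is not found; before 6.2 this lookup walked
     * the symbol table linearly. */
    static unsigned long resolve_addr(const char *name)
    {
            unsigned long addr = kallsyms_lookup_name(name);

            if (!addr)
                    pr_warn("symbol %s not found\n", name);
            return addr;
    }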



    • #12
      So, it's said that this will increase the memory footprint. How large could that footprint become?



      • #13
        Originally posted by NobodyXu View Post
        This is because it changed from a linear search to a binary search, O(n) to O(log n), and is now much faster, considering that the kernel has a lot of symbols.
        This kind of thing is a hidden performance cost of using C. Associative arrays are a basic feature of most higher-level languages by this point. It's somewhat disheartening that there are still linear searches in the kernel, and I'd bet this isn't the last one.
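
        To make the O(n) vs. O(log n) point concrete, here is a small userspace C sketch of the two lookup strategies over a name-sorted symbol table. The symbol names and addresses are made up, and the real kallsyms table stores compressed names, so this is only a toy model of the search itself:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct sym {
                const char *name;
                unsigned long addr;
        };

        /* Dummy table, sorted by name (required for the binary search). */
        static const struct sym syms[] = {
                { "do_exit",              0x1000 },
                { "kallsyms_lookup_name", 0x2000 },
                { "printk",               0x3000 },
                { "schedule",             0x4000 },
        };

        #define NSYMS (sizeof(syms) / sizeof(syms[0]))

        /* O(n): scan every entry until the name matches. */
        static unsigned long lookup_linear(const char *name)
        {
                for (size_t i = 0; i < NSYMS; i++)
                        if (strcmp(syms[i].name, name) == 0)
                                return syms[i].addr;
                return 0;
        }

        static int cmp_name(const void *key, const void *elem)
        {
                return strcmp((const char *)key,
                              ((const struct sym *)elem)->name);
        }

        /* O(log n): binary search over the name-sorted table. */
        static unsigned long lookup_binary(const char *name)
        {
                const struct sym *s = bsearch(name, syms, NSYMS,
                                              sizeof(syms[0]), cmp_name);
                return s ? s->addr : 0;
        }

        int main(void)
        {
                printf("%lx %lx\n", lookup_linear("printk"),
                       lookup_binary("printk"));
                return 0;
        }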



        • #14
          Originally posted by cynic View Post
          I do agree, but improvements don't always have to be huge: many small improvements add up and in the long run make the difference.
          Yes and no. If the function is used only during operations like module-loading, then an infinite number of such optimizations won't impact normal runtime performance.

          And yet, optimizations often make the code more complex, resulting in higher maintenance costs and more places for bugs to hide. So, you really want to target optimizations a bit selectively. Sometimes, an optimization makes the code simpler, in which case it's a win even if it doesn't have a measurable performance impact.

          Presumably, the patch was developed to address some pain point, in which case the patch is definitely worthwhile to someone. However, it might have been found through code inspection or using static analysis tools, in which case it might not make any practical difference.



          • #15
            Originally posted by cynic View Post
            the bigger picture is just catastrophic due to the security mitigations of recent years
            According to Michael's recent testing, they're not significantly impacting Zen 4 or Alder/Raptor Lake.



            • #16
              Originally posted by schmidtbag View Post
              So, it's said that this will increase the memory footprint. What is the biggest that footprint would become?
              Tiny. It's a function of the number of symbols in a module. It will always be a small % of overhead, relative to whatever modules you have loaded.
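
              As a purely hypothetical back-of-the-envelope check (neither the bytes-per-symbol figure nor the symbol count below comes from the patch, they are just assumed round numbers):

              #include <stdio.h>

              /* Hypothetical figures, only to show the order of magnitude. */
              int main(void)
              {
                      unsigned int bytes_per_symbol = 4;  /* assumed index entry size */
                      unsigned int module_symbols = 2000; /* assumed symbols in a module */

                      printf("extra footprint: ~%u KiB\n",
                             bytes_per_symbol * module_symbols / 1024);
                      return 0;
              }

              With those assumed numbers that's a handful of KiB per module, i.e. small next to the module itself.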



              • #17
                Originally posted by coder View Post
                Yes and no. If the function is used only during operations like module-loading, then an infinite number of such optimizations won't impact normal runtime performance.

                And yet, optimizations often make the code more complex, resulting in higher maintenance costs and more places for bugs to hide. So, you really want to target optimizations a bit selectively. Sometimes, an optimization makes the code simpler, in which case it's a win even if it doesn't have a measurable performance impact.

                Presumably, the patch was developed to address some pain point, in which case the patch is definitely worthwhile to someone. However, it might have been found through code inspection or using static analysis tools, in which case it might not make any practical difference.
                If you had quoted my entire post, there would also be the statement "Also, when someone finds a way to solve a problem in a smarter/better way, that is in itself good news."
                I assumed that improvements are supposed to be made not only for speed's sake but also to improve the quality of the code.

                However, I don't know whether this applies to this specific patch.



                • #18
                  Originally posted by coder View Post
                  According to Michael's recent testing, they're not significantly impacting Zen 4 or Alder/Raptor Lake.
                  Yes, I saw the news some time ago.
                  Anyway, to me security mitigations are still a sore spot because I'm still running on an older architecture.



                  • #19
                    As far as I understand this, you now have to sort the symbols one more time: once by name, once by address (the latter might be more or less free, depending on linker tricks).

                    So that's an n*log2(n) additional cost for sorting, and you will need roughly log2(n) lookups before you "break even". And you use more memory.

                    Not sure in which use cases this is a win.
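
                    A quick numeric version of that break-even estimate, assuming a round symbol count of 2^17 (not a figure taken from the patch):

                    #include <math.h>
                    #include <stdio.h>

                    int main(void)
                    {
                            double n = 1 << 17;             /* assumed symbol count */
                            double sort_cost = n * log2(n); /* one-time sort, comparisons */
                            double saved = n / 2 - log2(n); /* avg linear scan vs. binary */

                            printf("break even after ~%.0f lookups\n",
                                   sort_cost / saved);
                            return 0;
                    }

                    With those assumed numbers the sort pays for itself after a few dozen lookups.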



                    • #20
                      Originally posted by Danny3 View Post
                      That's absolutely wonderful!
                      I wish this was backported to 6.1, especially since it's so simple.
                      This is only useful for kernel debugging. Do you do this?

