BOLT Merged Into LLVM To Optimize Binaries For Faster Performance

    Phoronix: BOLT Merged Into LLVM To Optimize Binaries For Faster Performance

    Merged into LLVM's mono repository minutes ago was BOLT! This is the Facebook-developed tool for optimizing the layout of binaries to deliver greater performance. Facebook (now Meta) has already been using BOLT internally to great success on production workloads, the tool has continued advancing in public as open source for a while, and it is now upstream in LLVM to foster its future development...

    https://www.phoronix.com/scan.php?pa...LVM-Lands-BOLT

  • #2
    Yeah!

    • #3
      Does this improve performance if used on applications?

      • #4
        and up to 52.1% if the binaries are built without FDO and LTO
        Wow. I can't wait to see what the real world results are. I also wonder how compile times are with BOLT: faster, slower, the same?

        • #5
          So what is built with LLVM that we normally use and could potentially take advantage of this?

          • #6
            Originally posted by Danny3 View Post
            So what is built with LLVM that we normally use and could potentially take advantage of this?
            Doesn't Google build Chrome with LLVM by default?

            • #7
              According to a 2019 paper - I only glanced at the abstract - it is sample-based profile-guided optimization (much like PGO). So I do not expect wide adoption, because:
              - Setting up profile generation with a predefined workload is cumbersome for developers and distributions. That is why PGO is not used that much.
              - Defining a workload that is generic enough for most users to benefit from the profile, yet not too generic, may well be a challenge.

              But I certainly hope I am wrong on both counts.
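              For context, a typical instrumented PGO cycle with Clang looks roughly like the following sketch; the file names and the workload are placeholders, and the extra profiling run in the middle is exactly the setup burden described above:

```shell
# 1. Build with instrumentation (Clang's -fprofile-generate).
clang -O2 -fprofile-generate=profdir app.c -o app

# 2. Run a representative workload to collect raw profiles.
./app --some-typical-workload

# 3. Merge the raw profiles into a single .profdata file.
llvm-profdata merge -output=app.profdata profdir/

# 4. Rebuild, letting the compiler optimize using the profile.
clang -O2 -fprofile-use=app.profdata app.c -o app
```

              If the workload in step 2 is not representative of real use, the profile can steer the optimizer in the wrong direction, which is why picking it is the hard part.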

              • #8
                How does this work with features like KARL (I think it is called KARL) and PIE executables that the *BSDs and Linux are moving towards, where the layout of binaries like the kernel and applications is randomized so that an attack is harder to mount from a buffer overflow? This BOLT feature seems to require a particular, predictable layout of the binary file, which runs against the trend of the last decade towards introducing more randomness into the layout of executable binaries.

                • #9
                  Originally posted by Danny3 View Post
                  So what is built with LLVM that we normally use and could potentially take advantage of this?
                  Any code you make with Nim - just switch the compiler to LLVM?


                  To change the compiler from the default (at the command line):

                  nim c --cc:llvm_gcc --compileOnly myfile.nim
                  This uses the configuration defined in config\nim.cfg for llvm_gcc.

                  • #10
                    How does this work with features like KARL (I think it is called KARL) and PIE executables that the *BSDs and Linux are moving towards, where the layout of binaries like the kernel and applications is randomized so that an attack is harder to mount from a buffer overflow? This BOLT feature seems to require a particular, predictable layout of the binary file, which runs against the trend of the last decade towards introducing more randomness into the layout of executable binaries.
                    BOLT works fine with ASLR (not KARL) and does not impact the randomness. BOLT works by reordering and recombining basic blocks and functions so that hot code better fits into caches and cache lines.
                    Also, all binaries (save a few exceptions) have been PIE on most distros for years.

                    So what is built with LLVM that we normally use and could potentially take advantage of this?
                    BOLT works with gcc-built binaries too, and it works on every compiled program - it's a second compilation pass, much like PGO.

                    Wow. I can't wait to see what the real world results are. I also wonder how compile times are with BOLT: faster, slower, the same?
                    It's a feedback-driven optimization pass: you build the binary, collect profiling data, then process it again. Thus the build pipeline roughly doubles in length.

                    Does this improve performance if used on applications?
                    Yes, but it's not a generic optimization pass you can simply toggle on. It requires an application profile, much like PGO.
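                    The two-pass flow described in this post can be sketched with BOLT's own tooling; the binary name and workload below are placeholders, and exact flags vary between BOLT versions:

```shell
# 1. Build normally, keeping relocations so BOLT can later rewrite the binary.
clang -O2 -Wl,--emit-relocs app.c -o app

# 2. Sample a representative run with Linux perf (LBR branch sampling).
perf record -e cycles:u -j any,u -o perf.data -- ./app --typical-workload

# 3. Convert the perf samples into BOLT's profile format.
perf2bolt -p perf.data -o perf.fdata ./app

# 4. Rewrite the binary with an optimized code layout.
llvm-bolt ./app -o app.bolt -data=perf.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort -split-functions
```

                    Note that ASLR still applies unchanged at load time: BOLT only fixes the layout within the binary image itself.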
