GCC 11 PGO With The AMD Ryzen 9 5950X For Faster Performance


  • #11
    Originally posted by Paradigm Shifter View Post
    is definitely worth it if it shaves 10% of the runtime off a program for which a single run will often be weeks.
    Yes, if you use something CPU-heavy on a very regular basis, then PGO is a very attractive option. I personally compile Blender with PGO, as it is a piece of software I use quite a lot for rendering, 3D sculpting, simulation, etc.



    • #12
      Originally posted by Paradigm Shifter View Post
      Yeah, I understand that. But compiling twice (something which takes about 10 minutes) is definitely worth it if it shaves 10% of the runtime off a program for which a single run will often be weeks.
      Compiling twice is the easy part. It's coming up with a valid automated testing process in between that's hard.

      It's trivial for a benchmark, because you can just run it. But most apps have tons of functionality, and you have to decide how much of it you can set up tests for, and whether some parts need more focus while rarely used functionality is ignored, to avoid mis-prioritizing the profile. If you do a PGO build with bad data, you can easily end up with an app that runs slower rather than faster.



      • #13
        Originally posted by smitty3268 View Post
        If you do a PGO build with bad data, you can easily end up with an app that runs slower rather than faster.
        That's extremely unlikely unless you truly botch the data-gathering stage by profiling a very uncommon workflow, and why would you?

        It's not as if it makes 'non-touched' code slower; the compiler just doesn't have the runtime data with which to optimize that code any better than it usually does.

        So for 'non-touched' code you are not really worse off than if you had compiled without PGO at all, at least if you use "-fprofile-partial-training" with GCC, as GCC otherwise optimizes such code for size rather than performance.



        • #14
          There's also FDO (feedback-directed optimization), which is similar but slightly different. I think FDO makes gathering profiling data easier, but I don't remember exactly how; I think it allows running programs normally and collects profile data transparently. Anyway, there are tools that make it possible to combine profiles from different runs, so in theory a public database could be created that users could upload profiles to. These would then get merged together and made available for everyone to use, without everyone redoing the profiling themselves.

          Originally posted by Grinch
          It's not as if it is making 'non-touched' code slower, it just doesn't have the runtime data for it with which to do better optimization than it usually does.
          IIRC, unless you use a special option, GCC will compile code lacking profiling data using the equivalent of -Os, under the assumption that it's not performance-critical and that conserving cache keeps more of the important code from being flushed. So yes, without further configuration that is exactly what happens.

          Originally posted by PuckPoltergeist
          So building gcc with PGO is speeding up applications built with this gcc? This doesn't make sense. Enabling PGO should (must) not impact binaries built with this version of gcc.
          No, compiling gcc with PGO just makes gcc itself faster. You need profiling data for every program/library you want to optimize this way.
          Last edited by binarybanana; 01 September 2021, 03:37 PM.

