
Benchmarking The New Optimization Level In GCC 4.8


    Phoronix: Benchmarking The New Optimization Level In GCC 4.8

    GCC 4.8 is set to introduce a new optimization level that provides fast compilation performance and decent run-time performance of the resulting binary while still providing a superior debugging experience. Here are benchmarks of this new GCC general optimization level (-Og) compared to the other long-standing compiler optimization levels.

    http://www.phoronix.com/vr.php?view=18470

  • #2
    Did you bother to try debugging with those binaries? Did you check to make sure variables are not being compiled away? I'm curious about how the compiler is supposed to do any optimization at all if it has to preserve variables under all circumstances.

    I don't doubt that debugging with those binaries is a dubious proposition at best.

    What is the point of compiling binaries in this mode? You don't get full optimal performance, so you can't do performance testing.

    It's NOT executing the same code that it would in "debug" mode, so you are NOT going to see every instruction get executed, because the compiler IS removing code; look at the benchmarks.

    I wonder how long it takes to compile the binaries that save you a few seconds of debugging time? Even the time spent modifying the build scripts is a waste.

    If this new mode is so sweet, why isn't it the default? If it actually works, what would be the point of NOT using it as the default compilation mode?
    Last edited by frantaylor; 02-12-2013, 09:54 AM.



    • #3
      There's a good "OG/Original Gangsta" joke in here somewhere, but I'm too tired to think of it at the moment.



      • #4
        Originally posted by frantaylor View Post
        What is the point of compiling binaries in this mode?
        I'm sure the point is to allow debugging, in gdb, of applications that run really slowly when compiled without any optimizations. Like Mesa, for example. You might want optimizations so that testing doesn't take forever to reach the spot of code you are trying to debug, and so you don't have to be a gdb wizard when you finally get there.



        • #5
          Originally posted by frantaylor View Post
          It's NOT executing the same code that it would in "debug" mode, so you are NOT going to see every instruction get executed, because the compiler IS removing code, look at the benchmarks
          You know, that could be a good thing when it comes to reproducibility.
          From time to time I have been unable to reproduce bugs when compiling with -O0, but it also happens the other way around. Most of the time this seems to come down to __OPTIMIZE__ and friends in some headers producing two different codepaths between -O0 and -O1, while the codepaths used from -O1 to -O3 are more alike. So my debugging usually happens on something closer to -O1 than -O0, just so I know that the code I compiled behaves the same way when I debug it as it did when someone hit the bug.



          • #6
            Originally posted by frantaylor View Post
            What is the point of compiling binaries in this mode? You don't get full optimal performance, so you can't do performance testing.
            Imagine an application that runs acceptably on current hardware only when optimized.
            Imagine being able to debug that application while still having acceptable running speed.

