ARM Proposes Changing GCC's Default Optimization Level To -Og


  • #11
    Originally posted by microcode View Post
    I find that -Og optimizes too much out to be useful for debugging. I'm frequently met with <optimized out> after triggering a rare state, only to have to try again after rebuilding with -O0.

    That said, I always set it explicitly, so this wouldn't hurt my debugging experience. Either way, -Og isn't flawless.
    Yeah, I hit that. It's mostly arguments to functions, though. I assume they end up in registers and for some reason gdb won't or can't read them.

    Also, -Og seems to trigger a lot of false-positive warnings that the other levels do not.

    Though, who builds without an explicit optimization flag, and why not -O2 then? That seems to be pretty much the de facto default.



    • #12
      I've always used -O2 or -O3 as the build system's default. If I get a segfault or abort, I'll peek at the backtrace and see if the cause is obvious. If not, the first thing I'll do is rebuild the involved code (but not everything) with -O0.

      That said, I'd love an optimization level that were truly optimized for debugging and valgrind. Especially for valgrind, which is so slow that combining it with -O0 can be excruciating.
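The "rebuild the involved code, but not everything" workflow can be done with a per-object flag override; a hedged sketch using GNU make's target-specific variables (file names are hypothetical):

```make
# Default: optimized build with debug info.
CFLAGS := -O2 -g

# While chasing a crash in suspect.c, build only that object at
# -O0 -g3 so its locals stay inspectable in gdb and valgrind.
suspect.o: CFLAGS := -O0 -g3

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```

GCC also offers `__attribute__((optimize("O0")))` to pin a single function to -O0 without touching the build system, though its interaction with other options is less predictable than a plain per-file override.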



      • #13
        Originally posted by carewolf View Post

        Yeah, I hit that. It's mostly arguments to functions, though. I assume they end up in registers and for some reason gdb won't or can't read them.

        Also, -Og seems to trigger a lot of false-positive warnings that the other levels do not.

        Though, who builds without an explicit optimization flag, and why not -O2 then? That seems to be pretty much the de facto default.
        -Os might be a better fit than -O2; it often even gets better performance on a real system, due to improved cache utilization.



        • #14
          Originally posted by microcode View Post

          -Os might be a better fit than -O2; it often even gets better performance on a real system, due to improved cache utilization.
          In theory, but I have never seen that happen, unless you're talking about launch speed, which is measurably and reliably faster with -Os due to the smaller binaries.



          • #15
            -Os can hurt performance in a major way due to alignment. When I last tried building ffmpeg with -Os it became considerably slower.

            However, in some cases -Os can indeed improve performance. Mozilla has switched its default to and from -Os in the past as compiler behavior changed.
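If the -Os slowdown really is alignment, GCC lets you add the alignment back explicitly while keeping the size savings elsewhere; a hedged flags sketch (the values are illustrative, and -O2's per-target defaults differ, so benchmark before adopting):

```make
# Keep the -Os code-size wins, but restore the code alignment
# that -Os omits; 16 is a common x86 choice, not a recommendation.
CFLAGS = -Os -falign-functions=16 -falign-loops=16 -falign-jumps=16
```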



            • #16
              Originally posted by rene View Post

              I have the feeling that newcomers, especially, will not run gcc/g++ directly, and will instead click something together in some kind of IDE, which certainly has its own default settings for this anyway ;-)
              You're correct if they're following an introductory C++ course, but every introductory C course I have ever seen has introduced the compiler from the command line, even going so far as to explain why gcc's default is to produce a binary called a.out if you don't specify -o outfile.ext. (History lesson: Ken Thompson wrote an assembler for DECSys that he used to implement Unics, an operating system for the game Space Travel he was working on, and the output of that assembler was "a.out", i.e. assembler output. Stallman started writing GNU on a DEC PDP system and adopted the convention for GCC as well.)
              Last edited by linuxgeex; 30 October 2017, 09:20 AM.

