Linux 5.15 Enabling "-Werror" By Default For All Kernel Builds

  • #21
I'd say fixing all warnings saves debugging time in the long run. However, that may apply more to writing and maintaining new code than to (re-)building existing code that you wouldn't debug anyway.

    It's probably a time saver especially for Linus and other maintainers who need to go through source code written by thousands of engineers.



    • #22
Originally posted by brad0:
      The fact this has not been enabled until now is ridiculous.
      Why? Linus has *never* tolerated warnings in his builds. It is for the increasing number of robots test building things that might otherwise not flag a warning. So hopefully the warnings will be less likely to make it to him. Did anyone even read the article?



      • #23
Originally posted by set135:

        Why? Linus has *never* tolerated warnings in his builds. It is for the increasing number of robots test building things that might otherwise not flag a warning. So hopefully the warnings will be less likely to make it to him. Did anyone even read the article?
        Linus' comment at the bottom makes it clear how stupid the situation is and why this should have been done a long time ago.



        • #24
Originally posted by set135:

          This does not change the bar for code getting in.
True. It also helps devs who build with the defaults: it might be one less patch cycle they need to submit, which will also make maintainers' jobs easier.



          • #25
Originally posted by indepe:
            I'd say fixing all warnings saves debugging time overall, in the long run.
            Depends on what you mean by "all warnings". Some developers tend to be fairly aggressive, in terms of which they enable. Unfortunately, the more advanced warnings tend to be less reliable, which becomes problematic when someone decides to enact a -Werror policy.

            I've seen significant amounts of time wasted, and bugs actually getting introduced, in the course of fixing false-positive warnings (i.e. those which didn't indicate any real problem). In many instances, the resulting code is worse, in terms of being less readable or maintainable. I would argue that most of these warnings shouldn't even have been enabled.

            I've also seen developers take the easy way out. Such as getting rid of unused variables holding return code values, rather than actually checking them and handling errors appropriately. But hey, the warning count was reduced, so the code must be better!



            • #26
Originally posted by coder:
              Depends on what you mean by "all warnings". Some developers tend to be fairly aggressive, in terms of which they enable. Unfortunately, the more advanced warnings tend to be less reliable, which becomes problematic when someone decides to enact a -Werror policy.
I do not agree with this at all. The false-positive rate of the more advanced warnings is not that high.
              https://gcc.gnu.org/onlinedocs/gcc/D...c-Pragmas.html

There are Diagnostic Pragmas in GCC, Clang, and other compilers to selectively turn off an advanced warning in the case of a false positive.

The point is that even the worst of the more advanced warnings has less than a 10% false-positive rate. So yes, it is less reliable, but is that justification for not enabling it, at the cost of letting the 90% of detections that are correct slip past?

Yes, it is a pain to add code to selectively turn the advanced warning off when the compiler is truly wrong.

              Code:
              #pragma GCC diagnostic push
              #pragma GCC diagnostic ignored "-Wuninitialized"
              foo(b); /* no diagnostic for this one */
              #pragma GCC diagnostic pop
But we should be looking at three lines of code max per compiler to work around an advanced-warning false positive.

The case where it turns into horribly ruined code is when the person doing the repair does not recognize the problem as a false positive. The cause is that over 90% of the time the advanced warning is right: there is a problem and the code does need to be changed. And when the error is real and the code does need to change, there is an equal risk of making it worse.

On what you say next, coder:
Originally posted by coder:
              I've seen significant amounts of time wasted, and bugs actually getting introduced, in the course of fixing false-positive warnings (i.e. those which didn't indicate any real problem). In many instances, the resulting code is worse, in terms of being less readable or maintainable. I would argue that most of these warnings shouldn't even have been enabled.
Take away the false positives and consider only the cases where the warnings are correct: we still have the same set of problems, where the people fixing the code make it worse and less readable or maintainable.

This is about having the people doing the repairs know what they are doing. A lot of people don't understand how little code is required to turn off advanced warnings selectively. Yes, there is a big problem with people lacking the skill to work out that something is a false positive, which should be a three-line patch or less and should be reported to the compiler developers so they can fix it.

Originally posted by coder:
              I've also seen developers take the easy way out. Such as getting rid of unused variables holding return code values, rather than actually checking them and handling errors appropriately. But hey, the warning count was reduced, so the code must be better!
              https://gcc.gnu.org/onlinedocs/gcc/C...ion-Attributes
              __attribute__ ((warn_unused_result))

The fact that a developer can do that points to a different bug in the code. With warn_unused_result on the function, removing an unused variable holding the return value still produces a warning/error. MSVC has an equivalent declaration for functions as well.

We need to get into the habit of putting proper attributes on functions when coding, so that functions returning important error codes, or results that must be freed, cannot be black-holed, causing future issues.



              • #27
I had a situation recently that surprised me at the time. The architecture I work on has 512 general-purpose registers and the ability to address them like RAM. Not too surprisingly, the C compiler has no notion of using pointers in this manner. It could auto-allocate a variable to a register, but all my attempts to define or reassign a pointer into general-register space got either warnings or errors.

                In the end I used inline assembly to eliminate any confusion.



                • #28
Originally posted by brad0:

                  That is very evident.
                  Really? What did I say that was very stupid? I'm being serious. I don't want to look stupid on national TV again. Tell me.



                  • #29
Originally posted by coder:
                    Depends on what you mean by "all warnings". Some developers tend to be fairly aggressive, in terms of which they enable. Unfortunately, the more advanced warnings tend to be less reliable, which becomes problematic when someone decides to enact a -Werror policy.

                    I've seen significant amounts of time wasted, and bugs actually getting introduced, in the course of fixing false-positive warnings (i.e. those which didn't indicate any real problem). In many instances, the resulting code is worse, in terms of being less readable or maintainable. I would argue that most of these warnings shouldn't even have been enabled.

                    I've also seen developers take the easy way out. Such as getting rid of unused variables holding return code values, rather than actually checking them and handling errors appropriately. But hey, the warning count was reduced, so the code must be better!
Well, let's assume that with newer GCC/LLVM there are fewer and fewer false positives. Just having a clean log, and seeing new warnings as they appear (not even talking about -Werror), is beneficial.

I have tried to write patches for many projects where the build system prints warning after warning, and in the end I couldn't even quickly tell from the build log whether my code introduced any new warnings...

Yes, you can create wrong fixes, but I believe the review process for the Linux kernel is good enough to catch these wrong solutions before they go in.



                    • #30
Create a combination matrix of supported architectures and compilers: it grows really fast. -Werror prevented me from doing stupid things like assuming int size, or relying on arch-specific overflow behaviour (BE/LE differences).
Imagine writing a new driver and testing it on amd64, where it works fine. Then somebody tries running it on a big-endian MIPS machine. What happens can be very, very surprising... these compiler warnings/errors help catch *a lot* of the low-hanging fruit.
