Intel Makes ControlFlag Open-Source For Helping To Detect Bugs In Code


  • #11
    Originally posted by ddriver View Post
    If you make such trivial errors that can be caught by a static analyzer, you clearly don't "already know math", and you are not using the "calculator" as a time saver, but as a mitigation of the absence of skill.
    People code when they're tired. People code when they're distracted. People code when they're taking responsibility for codebases they didn't write. People code as part of teams working on massive codebases in languages like C and C++ that require global reasoning about correctness. People also just sometimes make mistakes.

    Honestly, after more than 25 years of C (and C++), I've become very frustrated with the average C code I've seen in the wild. OpenSSL is fairly typical, in a lot of ways. So much C code has buffer overflows, numeric overflows, memory leaks, double frees, undefined behavior, and an endless number of bugs. There are exceptions—djb's code is quite good, dovecot seems reasonable, OpenBSD audits aggressively—but when I dive into most C code, I expect problems… I'm tired. I don't want to rely on programmers practicing constant, flawless vigilance.
    -- emk @ https://www.reddit.com/r/rust/commen...not_a/ds0u68p/
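    To make the point concrete, here is a small invented C fragment (not taken from OpenSSL or any project named above) showing the kind of defects the quote lists, a buffer overflow and a double free, both of which a decent static analyzer flags regardless of how skilled or tired the author was.

    /* Invented example: "trivial" defects of the kind a static analyzer flags. */
    #include <stdlib.h>
    #include <string.h>

    char *copy_name(const char *src)
    {
        char buf[16];
        strcpy(buf, src);                  /* overflows buf if src needs 16+ bytes */
        char *out = malloc(strlen(buf) + 1);
        if (out == NULL)
            return NULL;
        strcpy(out, buf);
        return out;                        /* caller owns the allocation */
    }

    void greet(const char *src)
    {
        char *name = copy_name(src);
        /* ... use name ... */
        free(name);
        free(name);                        /* double free of the same pointer */
    }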



    • #12
      Originally posted by ddriver View Post

      For you maybe. I'd rather stick to usage models that increase rather than reduce my capabilities.

      You can write good software, or you can write bad software and rely on more bad software to fix it.



      There is nothing wrong with using a calculator after you know and understand math. There is a lot of merit to understanding and being able to perform math that you will never acquire if you always use a calculator. Chances are, you will be quite limited in what you can do with that calculator.
      > For you maybe
      I believe you misunderstood me.

      > There is nothing wrong with using a calculator, after you know and understand math
      I agree that everything should be used with understanding, and you will not get dumber in that case.

      However, I don't understand why you are against using AI tools.

      No one is talking about using it as a primary tool without understanding what's behind it.

      In my opinion, for example, a helper tool that could warn a developer when he is opening a merge request would be a time saver for code reviewers (they would write fewer comments).
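      As an illustration (my own example, not ControlFlag output), this is the kind of anomaly such a pre-merge check could flag automatically, so the reviewer never has to write the comment:

      #include <stdio.h>

      int first_byte(const char *path)
      {
          FILE *f = fopen(path, "r");
          int c = fgetc(f);                /* f is never checked for NULL */

          if (c = EOF) {                   /* assignment; == was almost certainly meant */
              fclose(f);
              return -1;
          }
          fclose(f);
          return c;
      }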



      • #13
        Originally posted by sinepgib View Post

        So we agree then. You can use tools if you understand what they do and they just take part of the cognitive burden of mechanical tasks off you. On a side note (though keep in mind I don't think this is generally true), I used static analysis many times to learn. I'd run it over existing projects (mostly open source) and try to understand whether the warnings were false positives and, if not, what they meant and why they were happening. It's all about the mindset and how you use the tools, not about whether you use them or not.
        Absolutely agree
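        A made-up example of that exercise, i.e. deciding whether a warning is justified: some analyzers report "value" below as possibly uninitialized, but the same flag guards both the write and the read, so it is a false positive here, and working out why the warning fires anyway teaches you how the analysis (and the language) actually works.

        #include <stdio.h>

        void report(int have_input, int input)
        {
            int value;

            if (have_input)
                value = input * 2;

            /* ... unrelated work ... */

            if (have_input)
                printf("%d\n", value);     /* only reached when value was assigned */
        }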



        • #14
          Originally posted by ddriver View Post
          If you make such trivial errors that can be caught by a static analyzer, you clearly don't "already know math", and you are not using the "calculator" as a time saver, but as a mitigation of the absence of skill.
          They're so trivial and typical of unskilled programmers that they plague even some of the best software out there :shrug:
          Mistakes happen. Edge cases slip. That's the reality. You can develop some mechanisms to cope with them, but they will still happen sometimes.
          I used Coverity Report at some point; it follows the flow of execution to an astounding degree and would catch really complex scenarios that are definitely non-trivial.
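          For instance (an invented fragment, not Coverity output), the defect below only exists on one path and spans two functions; catching it means following the NULL return out of lookup() and into the caller, which is far beyond a grep-level check.

          #include <string.h>

          struct entry { const char *name; };

          static struct entry *lookup(const char *key)
          {
              static struct entry cached = { "default" };
              if (key == NULL || key[0] == '\0')
                  return NULL;               /* NULL only on this path */
              return &cached;
          }

          size_t name_len(const char *key, int allow_fallback)
          {
              struct entry *e = lookup(key);

              if (e == NULL && allow_fallback)
                  return 0;

              return strlen(e->name);        /* NULL deref when e == NULL and no fallback */
          }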

          Lots of people I know who already know math (some of them actual mathematicians) make typos and miscalculate things from time to time. Where I studied there's a common piece of advice: if you're in an exam, use your calculator even for simple addition. You're working with integrals and differentials, so you clearly know math if you're able to do that; nobody wants to fail you because of a brain fart under pressure while computing a product.

          I remember one occasion where I lost a point in a discrete algebra final due to a very stupid misread of my own handwriting. At the end of an exercise, I had written 4*4 in some combinatorics problem. I read it as 4*7, so the result I reported was 28 instead of 16. That would have happened even if I had used a calculator (I didn't; I never brought one to those exams because the numeric part tends to be really simple, it's the symbolic part that's the bulk of the work), because the error was in reading the number I had written; that kind of mistake simply happens. For reference, I scored 9/10 on that exam, so I clearly knew what I was doing. That was the only mistake, but it was enough to make it imperfect.

          Originally posted by ddriver View Post
          Could you learn from issues, detected by a static analyzer? Theoretically yes, just like you could and should have learned from actual language learning materials. Does this happen in practice? From my experience mostly no - people will use just about anything to indulge their laziness, in the process reinforcing it further.
          Well, I learned a lot from them when I was a novice. It was more practical than looking for a technical, closed standard and delving into it without prior experience with the language. I agree that it's not the common case. But in my experience using those tools is not the common case anyway. Most people I've worked with barely know what a static analyzer is, let alone use one as part of their development cycle. And they're not the greatest programmers either, despite not using those tools.

          Linters didn't get developed because programmers are lazy and unskilled. They're developed by skilled programmers, and I doubt they'd cater to a problem they hadn't experienced themselves, especially the open source ones developed in volunteer time.



          • #15
            With this "training on Open Source", it is most likely a giant GPL violation that is too obscure to be testable in any court... When I make an error, I may generate a novel "idiosyncratic pattern", but do I author it? What is its license? Does it go away when Intel encodes it? People are blogging... on LinkedIn?



            • #16
              Originally posted by RedEyed View Post
              In my opinion, for example, a helper tool that could warn a developer when he is opening a merge request would be a time saver for code reviewers (they would write fewer comments).
              Exactly. It's a new team member with savant syndrome signing on as a full-time volunteer to be an additional reviewer for each merge.



              • #17
                Originally posted by ssokolow View Post
                People code as part of teams working on massive codebases in languages like C and C++ that require global reasoning about correctness.
                Any language requires global reasoning about correctness, and your quote goes on to list examples of bad C code. I said it before: every Rust success story starts with pain caused by sticking to C instead of using C++.



                • #18
                  Originally posted by ddriver View Post
                  You can write good software, or you can write bad software and rely on more bad software to fix it.
                  1. The programmer who never creates bugs is also one that never writes code.
                  2. You act as though the only code which affects you is that which you personally write.
                  Obviously, we should use better languages, libraries, and methodologies to minimize the rate at which we create bugs. Tools and testing fit into this formula, as well.

                  As for #2, unless your code runs on bare hardware, using no 3rd party libraries, and you write 100% of your own tools or program in machine code, you're depending on other people's software. Wouldn't you want easier & more powerful tools for them (and even possibly yourself) to check their stuff for bugs, or to help locate defects when an anomalous behavior is observed?

                  However, even that view is somewhat myopic. Consider that you probably interact with billions of lines of code, on a daily basis (if we're counting not just operating systems, apps, firmware, and microcode in your computers, but also internet router firmware, web services, firmware in countless devices, software powering financial systems, code managing the utilities you use, code powering your online shopping & deliveries, billing, etc.). That's not even getting into the broader societal dependence on software, such as use in healthcare & by first responders, military, etc. So, you have an implicit interest in the quality of that code. Better static analysis tools are one means to that end.



                  • #19
                    Originally posted by sinepgib View Post

                    I used to hear that about calculators.
                    I used to hear that about iron and copper.



                    • #20
                      Originally posted by pal666 View Post
                      Any language requires global reasoning about correctness, and your quote goes on to list examples of bad C code.
                      Fine. You want me to nitpick? "That requires global reasoning about correctness just to avoid person A introducing a memory bug because they forgot or didn't realize that person B changed some little thing on the other side of the codebase."

                      I was trying to preserve C and C++'s dignity, so that's on you.
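                      To make that concrete, an invented sketch (no real project implied): person B changes a function to return a pointer into an internal table instead of a heap copy, and person A's unchanged call site on the other side of the codebase becomes an invalid free, even though neither change looks wrong locally.

                      #include <stdlib.h>

                      /* cache.c, after person B's change: no longer returns a heap copy */
                      static char table[64][32];

                      char *cache_get(unsigned idx)
                      {
                          return table[idx % 64];
                      }

                      /* elsewhere.c, person A's code, still written against the old contract */
                      void use_entry(unsigned idx)
                      {
                          char *entry = cache_get(idx);
                          /* ... fill in and use entry ... */
                          free(entry);      /* frees a pointer that was never malloc'd: undefined behavior */
                      }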

                      Originally posted by pal666 View Post
                      i said it before, every rust success story starts with pain caused by sticking to c instead of using c++
                      Given that it's you, I know it'd be a waste of my time showing how that statement is false.

