
Intel's ControlFlag 1.2 Released To Use AI To Provide Full Support For Spotting C++ Bugs

  • #11
    Originally posted by Nth_man View Post

    Have you tried putting intentional bugs to see if it detects any of them? 🤔
    No, I did not try that. Nor do I plan to. Assuming the runtime scales roughly linearly with code size, I don't see this tool being practical to use. It is simply too slow.

    Also, there is no clear list of the types of issues this tool can detect, as far as I can find. That is somewhat understandable, given that machine-learning approaches are black boxes and hard to interpret. Nor is there a list of CVEs or high-profile bugs that it has found. That is far less understandable.

    Comment


    • #12
      Originally posted by Vorpal View Post
      Also, there is no clear list of the types of issues this tool can detect, as far as I can find. That is somewhat understandable, given that machine-learning approaches are black boxes and hard to interpret.
      That's not accurate, generally speaking. Training an AI model requires continual validation against a test set, which is how you know the model's accuracy (or some approximation thereof) and whether it's converging.

      Having a test set, Intel should be able to provide plenty of examples of bugs it can find.
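
      To illustrate the point (a toy sketch, nothing to do with Intel's actual pipeline): the held-out test set is precisely what would let them report accuracy and enumerate concrete bugs the model catches. All names and data below are made up.

      ```python
      def evaluate(model, test_set):
          """Score a bug-detection model on a labeled test set.

          Returns (accuracy, list of labeled bugs the model actually found).
          A vendor with such a test set can publish both numbers and examples.
          """
          found = []
          correct = 0
          for snippet, is_bug in test_set:
              pred = model(snippet)
              if pred == is_bug:
                  correct += 1
              if pred and is_bug:
                  found.append(snippet)
          return correct / len(test_set), found

      def flags_assignment_in_condition(s):
          """Toy 'model': flags `=` (but not `==`) inside an if-condition."""
          if "if (" not in s:
              return False
          cond = s.split("if (", 1)[1].split(")", 1)[0]
          return "=" in cond and "==" not in cond

      # Tiny hypothetical test set of labeled snippets.
      test_set = [
          ("if (x = 5) { }", True),    # classic typo: assignment, not comparison
          ("if (x == 5) { }", False),  # correct comparison
      ]
      acc, bugs = evaluate(flags_assignment_in_condition, test_set)
      ```

      Even for a pure black-box model, running it over such a labeled set yields exactly the kind of "here are bugs it finds" list that seems to be missing.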

      Originally posted by Vorpal View Post
      Nor is there a list of CVEs or high-profile bugs that it has found. That is far less understandable.
      Not sure about that. This seems like a project/approach still early in its maturity. CVE bugs tend to be rather subtle. I'd expect the initial focus of the project to be on bugs towards the simpler end of the spectrum, though hopefully not a complete subset of what conventional analyzers already catch.

      Comment


      • #13
        Originally posted by coder View Post
        That's not accurate, generally speaking. Training an AI model requires continual validation against a test set, which is how you know the model's accuracy (or some approximation thereof) and whether it's converging.

        Having a test set, Intel should be able to provide plenty of examples of bugs it can find.
        That depends on the approach. If you only do anomaly detection, all you can say is "this does not conform to the expected pattern of the data". According to the README, Intel appears to be combining anomaly detection with fix suggestions based on nearest neighbours (which, again, could plausibly be done without classifying types of bugs).

        That said, I have not read the white paper, so maybe they do classify bug types too.
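
        Roughly what I mean by anomaly detection plus nearest-neighbour suggestions, as a toy sketch (my reading of the idea, not ControlFlag's actual implementation; the corpus, threshold, and similarity metric here are all arbitrary):

        ```python
        import difflib

        # Hypothetical corpus of "common" code patterns the model was trained on.
        CORPUS = [
            "if (x == NULL)",
            "if (ptr != NULL)",
            "while (i < n)",
        ]

        def nearest(snippet):
            """Return (most similar corpus pattern, similarity in [0, 1])."""
            scored = [(ref, difflib.SequenceMatcher(None, snippet, ref).ratio())
                      for ref in CORPUS]
            return max(scored, key=lambda t: t[1])

        def check(snippet, threshold=0.97):
            """Flag the snippet if no corpus pattern is similar enough.

            Note what this does NOT do: it never names a bug *class*. All it
            can say is "this deviates from the expected pattern", and the
            nearest pattern doubles as the suggested fix.
            """
            ref, sim = nearest(snippet)
            if sim < threshold:
                return f"anomaly: did you mean '{ref}'?"
            return "ok"
        ```

        So `check("if (x = NULL)")` would point at `if (x == NULL)` as the nearest known pattern, without ever knowing it was an "assignment in condition" bug specifically.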

        Comment


        • #14
          Without really knowing too much about this, wouldn't "AI learning" need a huge dataset of "possible bugs" to actually detect bugs? I mean, it will only learn what bugs are once they are reported as such... and only if the training dataset is large enough to be useful.

          Not sure what sort of dataset we are talking about here, but it would not magically detect bugs just because it is AI. With no prior runs, it would by definition find no errors; then, when the coder marks a known bug and runs it again with something marked as "a bug", it would learn to recognise that pattern... and so on and so forth. Depending on how many code samples there are and the quality of the included dataset, it will probably get progressively faster at learning, and thus be able to use its prior knowledge to detect new bugs in the code, adding to the dataset.

          At least, this is how I thought this kind of AI learning would work.

          Comment


          • #15
            Originally posted by Cybmax View Post
            Without really knowing too much about this, wouldn't "AI learning" need a huge dataset of "possible bugs" to actually detect bugs? I mean, it will only learn what bugs are once they are reported as such... and only if the training dataset is large enough to be useful.

            Not sure what sort of dataset we are talking about here, but it would not magically detect bugs just because it is AI. With no prior runs, it would by definition find no errors; then, when the coder marks a known bug and runs it again with something marked as "a bug", it would learn to recognise that pattern... and so on and so forth. Depending on how many code samples there are and the quality of the included dataset, it will probably get progressively faster at learning, and thus be able to use its prior knowledge to detect new bugs in the code, adding to the dataset.

            At least, this is how I thought this kind of AI learning would work.
            I believe it is more accurate to call it machine learning, a.k.a. ML.

            Anyway, most ML solutions are trained ahead of time, and the product ships only a pre-trained model.

            Some solutions may additionally provide a model that can adapt to new input, so its performance can be further improved for specific use cases.
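
            The distinction in miniature (a deliberately simplified sketch, not how any real product represents its model; all names here are invented):

            ```python
            class ShippedModel:
                """Toy model: 'parameters' are just a set of known-good patterns."""

                def __init__(self, pretrained_patterns):
                    # Learned ahead of time, at the vendor's site; this is what
                    # ships with the product.
                    self.patterns = set(pretrained_patterns)

                def predict(self, snippet):
                    """Inference only: flag anything not seen during training."""
                    return "anomaly" if snippet not in self.patterns else "ok"

                def adapt(self, user_snippets):
                    """Optional on-site adaptation: fold the user's own idioms
                    into the model so project-specific style stops being flagged."""
                    self.patterns.update(user_snippets)

            model = ShippedModel(["if (p != NULL)"])
            model.predict("if (p != nullptr)")   # flagged: never seen in training
            model.adapt(["if (p != nullptr)"])   # user teaches their own idiom
            model.predict("if (p != nullptr)")   # now accepted
            ```

            Without the `adapt` step you get the common "frozen model" case; with it, the model keeps improving on the user's codebase.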

            Comment
