GCC's New Ranger Infrastructure Aims To Be In Good Shape For GCC 11


  • GCC's New Ranger Infrastructure Aims To Be In Good Shape For GCC 11

    Phoronix: GCC's New Ranger Infrastructure Aims To Be In Good Shape For GCC 11

    Making waves just over a year ago in the GNU Compiler Collection community was the "Ranger" project, an on-demand range generator that has been in development for several years at Red Hat. While the goal of landing it for GCC 10 didn't pan out, it looks like more of the Ranger infrastructure will land in the next few months, putting it in the window for GCC 11...

    http://www.phoronix.com/scan.php?pag...Ranger-In-2020

  • #2
    Just when Fedora has become really stable for me.



    • #3
      Originally posted by wizard69 View Post
      Just when Fedora has become really stable for me.
      I am not sure how to parse this. GCC is used across many different systems, services, and platforms, and this is a GCC infrastructure change that will eventually reach all of them. It is not a Fedora-specific issue (although, as usual, Fedora and a few of the other leading distros are likely to get the improvements before some of the lagging ones).



      • #4
        Could someone clarify what this thing is all about?



        • #5
          Originally posted by Brane215 View Post
          Could someone clarify what this thing is all about?
          Seems to be an update to how GCC calculates the valid ranges for variables, which can be used in various compiler passes.

          For example, given a while (i > 0) loop, the compiler knows that i is always positive within that loop (between 1 and 2^31 − 1 for a 32-bit signed int). That knowledge can feed various optimizations, warnings, etc.

          It sounds like GCC already has a way of doing this, but the new framework here will significantly speed up cases where they only need to calculate ranges on a few variables rather than all of them.
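          A minimal C sketch of what that range knowledge buys (the function and its body are illustrative only, not GCC internals): inside the loop, range analysis proves i is at least 1, so the dead branch can be deleted and i % 3 is known non-negative.

          ```c
          /* Inside the loop body, range analysis knows i is in [1, INT_MAX]. */
          int sum_positive_steps(int i) {
              int sum = 0;
              while (i > 0) {
                  /* Provably dead: i == 0 contradicts i's range here,
                   * so the optimizer can remove this branch entirely. */
                  if (i == 0)
                      sum += 1000000;  /* never executed */
                  sum += i % 3;        /* known non-negative, since i > 0 */
                  i -= 1;
              }
              return sum;
          }
          ```

          The point of the new framework, as described above, is that such ranges could be computed on demand for just the variables a pass cares about, instead of for everything up front.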



          • #6
            Originally posted by Brane215 View Post
            Could someone clarify what this thing is all about?
            Scalar evolution analysis is one part of setting up the data used in compiler optimization passes. The upshot of this particular feature is that in some cases, like checking printf arguments against the format string, or allocating memory on the stack, compile times may be improved by up to several microseconds.
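            A hypothetical sketch of the format-string case (demo() and its clamping are made up for illustration, not a GCC API): once the compiler can prove n lies in [0, 999], it knows "%d" needs at most three digits plus the terminating NUL, so the 4-byte buffer is provably safe and no truncation warning is warranted.

            ```c
            #include <stdio.h>

            int demo(int n) {
                char buf[4];
                if (n < 0)   n = 0;    /* range info: n >= 0 after this */
                if (n > 999) n = 999;  /* range info: n <= 999 after this */
                /* With n in [0, 999], snprintf writes at most "999" + NUL. */
                return snprintf(buf, sizeof buf, "%d", n);
            }
            ```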

            Another possible benefit of this particular development is that it may be a stepping stone on the way to better auto-vectorization and parallelization by the compiler.

            Of course, it's also yet another source of bugs, both in the compiler and in user code. I can't count how many "compiler bugs" I've seen that turn out to be undefined behaviour in user code, coming from things like assumptions about signed integer overflow.
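            The classic signed-overflow case looks like this (illustrative, not from the article): signed overflow is undefined behaviour in C, so the compiler may assume x + 1 > x holds for every signed x and fold the whole function to "return 1". A programmer expecting wraparound at INT_MAX then sees a "wrong" result and files a compiler bug, but the code itself is at fault.

            ```c
            /* Compilers commonly optimize this to "return 1", because
             * signed overflow is undefined and may be assumed absent. */
            int always_greater(int x) {
                return x + 1 > x;
            }
            ```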

