AMD's R600 GPU LLVM Back-End To Be Renamed

  • #21
    Originally posted by haplo602 View Post
    Thanks for the explanation bridgman. So from a performance perspective, is it better not to use LLVM for shaders on R600 hardware, since there's one step less in the pipeline?

    I have not yet tested it since I expected LLVM to replace a step and not add one in the pipeline.
    Most likely, although if the LLVM compiler were able to optimize things better than the custom r600 backend in Mesa, that would probably far outweigh the minimal overhead of having an extra stage present.

    The real reason not to use LLVM on r600 hardware is simply that the existing backend is already fairly well tested and working, and rewriting it all in LLVM would likely be a long, frustrating task. No one seems to be quite sure how well LLVM could optimize for a VLIW architecture like the one r600 hardware uses, either, so there is no compelling reason to try right now other than getting compute running (on older, slower hardware, where it's less interesting for most people anyway). I'm sure someone could make an interesting doctoral project out of it.

    GCN hardware is a little more similar to the other hardware LLVM is used on, and the hardware also runs much more complex programs, which is why it was decided that a full-blown compiler stack like LLVM would be better suited there.
    Last edited by smitty3268; 05 August 2014, 02:42 AM.

    Comment


    • #22
      While Intel does ship a new-generation GPU architecture every few months, those are mostly extensions of the previous hardware.

      The basic rules are the same for the whole i965 family (and the optimization techniques, too).

      They have yet to switch to something fundamentally different.

      That may be coming (DX12/Mantle/OpenGL "AZDO"/whatever new fancy stuff conquers the world does change the environment a bit, and Intel may decide to switch to a "superscalar" architecture; such a move would mean a new driver, maybe even an LLVM-based one).

      But for Broadwell it will still be GEN.

      Nvidia switched to "superscalar" years ago, and now they only change "details" (like the actual commands that need to be submitted to the GPU).

      AMD switched to "superscalar" for good only with GCN.

      Comment


      • #23
        Originally posted by smitty3268 View Post
        Most likely, although if the LLVM compiler is able to optimize things better than the custom r600 backend in mesa it would probably far outweigh the minimal overhead of having an extra stage present.
        I am pretty sure that the LLVM backend can handle VLIW. http://www.phoronix.com/scan.php?pag...tem&px=MTQxMDM

        Comment


        • #24
          Originally posted by log0 View Post
          I am pretty sure that the LLVM backend can handle VLIW. http://www.phoronix.com/scan.php?pag...tem&px=MTQxMDM
          It can generate code for R600-class hardware, yes. Whether it can really handle it properly is up for debate.

          Comment


          • #25
            Originally posted by smitty3268 View Post
            Most likely, although if the LLVM compiler is able to optimize things better than the custom r600 backend in mesa it would probably far outweigh the minimal overhead of having an extra stage present.
            I tested stock, R600_DEBUG=sb, and R600_LLVM=1 in Unigine Valley for a quick comparison. R600_DEBUG=sb is fastest.

            Comment


            • #26
              Originally posted by haplo602 View Post
              I tested stock, R600_DEBUG=sb, and R600_LLVM=1 in Unigine Valley for a quick comparison. R600_DEBUG=sb is fastest.
              Interesting. What driver version did you use?

              I'm only bumping this topic because R600_LLVM has likely been gone for many months; you need to use R600_DEBUG=llvm instead to actually make the LLVM shader backend work. Also, as far as I remember, it's been compatible with "sb" for even longer.

              Though it's sad that the LLVM compiler doesn't actually work on R600...
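
For anyone wanting to repeat the comparison above, a minimal sketch of how the backends can be selected from the shell. This assumes a Mesa r600g build and uses glxgears purely as a placeholder test program; substitute your actual benchmark (e.g. Unigine Valley). The R600_DEBUG=sb and R600_DEBUG=llvm spellings are the ones discussed in this thread:

```shell
# Default shader backend (no special flags)
glxgears

# Force the "sb" optimizing shader backend on r600g
R600_DEBUG=sb glxgears

# Force the LLVM shader backend (newer Mesa; replaces the old R600_LLVM=1 switch)
R600_DEBUG=llvm glxgears
```

Note that these are per-process environment variables, so different backends can be compared side by side without changing any system-wide configuration.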

              Comment


              • #27
                Originally posted by _SXX_ View Post
                Interesting. What driver version did you use?

                I'm only bumping this topic because R600_LLVM has likely been gone for many months; you need to use R600_DEBUG=llvm instead to actually make the LLVM shader backend work. Also, as far as I remember, it's been compatible with "sb" for even longer.

                Though it's sad that the LLVM compiler doesn't actually work on R600...
                Hmm... seems I have to retest :-) I got the info from Phoronix articles where Michael tested sb and llvm...

                Comment
