AMD's R600 GPU LLVM Back-End To Be Renamed


  • #11
    Originally posted by Daktyl198 View Post
    Question time:
    Why does AMD have so many different FOSS drivers? Intel and Nouveau drivers manage to support most (not "legacy", but several years' worth) of their cards in a single driver, no? As an example, the newest Intel driver still supports my crappy Core2Duo integrated graphics from 6 years ago.

    Is there not an effort to bring r600 support into the latest RadeonSi code? I understand r300 and lower being left out (as that can be considered legacy), but r600 could use some love.
    And even if you don't bring over ALL of the r600 cards, surely you could bring the newer half or more, then possibly push the older half (or less) down to the r300 driver or something (or move the r300 driver up to r600 and just rename it to RadeonLegacy).

    It's probably that I'm being an idiot here, I'd just like a clear explanation :P
    Both Intel (i915 and i965) and Nouveau (classic Mesa nouveau and Gallium nouveau) have several drivers as well, depending on how old the hw is. Most of the code is shared internally where applicable anyway, so the actual driver binaries are largely meaningless. It only makes sense to share code where it's applicable.

    Regarding the i965 driver supporting your 6-year-old IGP, the r600 driver supports 6-8 year old hw as well. The radeon/r200/r300 drivers support cards that are more like 10-15 years old. They don't really have much in common with newer hardware, so there's not much that can be shared.

    I'm not sure I understand your comment about bringing r600 up to the level of radeonsi. r600 and radeonsi are pretty much at feature parity at this point. Most of the applicable code is shared between them already.



    • #12
      It's funny because LLVM actually suggested that they rename it back when they wanted to mainline it. AMD's argument then was that they consistently name their different back ends after the first supported generation of chips.



      • #13
        Originally posted by jrch2k8 View Post

        r300 hardware support is done beyond bug fixing; that hardware can't support anything beyond what r300 already has to offer.
        Drivers support what developers decide to support; not everything the hardware can do ever gets exposed.

        I was impressed with fglrx when I saw that GL_ATI_fragment_shader, GL_ATI_envmap_bumpmap... still work fine on Kabini. Blob drivers seem to never remove features on the grounds of "no one uses this or that", "it is a maintenance burden, so let's remove it", or "we will remove or not implement this because it is obsolete and newer is better"...

        Even if a miracle happened and AMD decided to open fglrx, developers could just scream at that. And of course remove something.

        -------

        Just my 2 cents: a house is better kept clean, but users like having all possible features at hand.
        Last edited by dungeon; 04 August 2014, 04:59 PM.



        • #14
          Originally posted by dungeon View Post
          Drivers support what developers decide to support; not everything the hardware can do ever gets exposed.

          I was impressed with fglrx when I saw that GL_ATI_fragment_shader, GL_ATI_envmap_bumpmap... still work fine on Kabini. Blob drivers seem to never remove features on the grounds of "no one uses this or that", "it is a maintenance burden, so let's remove it", or "we will remove or not implement this because it is obsolete and newer is better"...

          Even if a miracle happened and AMD decided to open fglrx, developers could just scream at that. And of course remove something.

          -------

          Just my 2 cents: a house is better kept clean, but users like having all possible features at hand.
          Well, r300-era hardware just doesn't have the silicon to support more than it already does, and even where it does have the silicon for certain additional features, those should not be exposed by the driver since they aren't compliant with the rest of the specification.

          About the additional old extensions: I personally think they should be removed, or at least restricted to legacy contexts, to make sure developers are forced to pick more efficient methods instead of relying on old cruft that pollutes the drivers regardless of the vendor. This is exactly one of the reasons FGLRX is a white whale of millions of LoC where even the simplest of details takes months to fix (especially true of the lazy CAD developers).
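
          As a concrete illustration of the "restricted to legacy contexts" idea, here is a minimal sketch (not from the thread; it assumes a current GL 3.0+ context has already been created and that GL_GLEXT_PROTOTYPES is defined, otherwise fetch glGetStringi through your loader) of how an application would check whether a legacy extension such as GL_ATI_fragment_shader is still exposed:

          #include <stdio.h>
          #include <string.h>
          #define GL_GLEXT_PROTOTYPES
          #include <GL/gl.h>
          #include <GL/glext.h>

          /* Probe the extension list of the current context by name. */
          static int has_extension(const char *name)
          {
              GLint count = 0;
              glGetIntegerv(GL_NUM_EXTENSIONS, &count);
              for (GLint i = 0; i < count; i++) {
                  const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
                  if (ext && strcmp(ext, name) == 0)
                      return 1;
              }
              return 0;
          }

          /* After context creation (GLX/EGL setup omitted):
           *   printf("GL_ATI_fragment_shader: %s\n",
           *          has_extension("GL_ATI_fragment_shader") ? "yes" : "no");
           * A driver that hides old extensions in core contexts simply would not
           * list them here, which is the "blocked into legacy contexts" behaviour
           * argued for above. */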



          • #15
            Originally posted by AJenbo View Post
            It's funny because LLVM actually suggested that they rename it back when they wanted to mainline it. AMD's argument then was that they consistently name their different back ends after the first supported generation of chips.
            You actually got that backwards. AMD originally called it AMDGPU, but the LLVM community suggested we rename it in case we added a different backend later.



            • #16
              Originally posted by jrch2k8 View Post
              About the additional old extensions: I personally think they should be removed, or at least restricted to legacy contexts, to make sure developers are forced to pick more efficient methods instead of relying on old cruft that pollutes the drivers regardless of the vendor. This is exactly one of the reasons FGLRX is a white whale of millions of LoC where even the simplest of details takes months to fix (especially true of the lazy CAD developers).
              The world is full of lazy people and binary-only programs. But it is better to have a clean solution, OpenCL 2.0 on top of the HSA kernel, than to support HSA with OpenCL 1.2 + extensions.

              I guess the blobs will support both, but we, once again, only the better one.
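
              For what it's worth, which of the two a given stack ends up exposing is easy to check from the application side; a minimal sketch (generic OpenCL host code, not tied to any particular driver):

              #include <stdio.h>
              #include <CL/cl.h>

              int main(void)
              {
                  cl_platform_id platform;
                  cl_uint num_platforms = 0;
                  char version[256] = "";

                  /* Grab the first available platform and print the version it reports,
                   * e.g. "OpenCL 1.2 ..." vs "OpenCL 2.0 ...". */
                  if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
                      fprintf(stderr, "no OpenCL platform found\n");
                      return 1;
                  }
                  clGetPlatformInfo(platform, CL_PLATFORM_VERSION, sizeof(version), version, NULL);
                  printf("Platform reports: %s\n", version);
                  return 0;
              }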



              • #17
                What is the relationship between LLVM and the Gallium driver? I thought it was GLSL -> TGSI -> Gallium driver. Is it now GLSL -> LLVM -> TGSI -> Gallium driver? Is this the same for all AMD and NVIDIA Gallium drivers?

                And now there's also LunarGLASS, which is also an LLVM-based compiler that has an agnostic IR kind of like TGSI and a bottom level that gets optimized for the different chips. Isn't that basically the same as Gallium?



                • #18
                  Originally posted by blackout23 View Post
                  What is the relationship between LLVM and the Gallium driver? I thought it was GLSL -> TGSI -> Gallium driver. Is it now GLSL -> LLVM -> TGSI -> Gallium driver? Is this the same for all AMD and NVIDIA Gallium drivers?

                  And now there's also LunarGLASS, which is also an LLVM-based compiler that has an agnostic IR kind of like TGSI and a bottom level that gets optimized for the different chips. Isn't that basically the same as Gallium?
                  Remember that there are essentially two different paths for a driver -- runtime calls and shader programs. The Gallium3D driver handles runtime calls, so the runtime path is essentially GL => Gallium3D => PM4 packets submitted to the hardware via the kernel driver.

                  Assuming that nothing has changed recently:

                  For GL shaders on VLIW hardware, the default path is GLSL (passed to GL) => GLSL IR (internal to Mesa driver**) => TGSI (passed to Gallium3D driver) => HW ISA (passed to hardware). The optional (for VLIW) LLVM path is GLSL => GLSL IR => TGSI => LLVM IR => HW ISA, where the r600 LLVM back end takes care of the last step.

                  For GL shaders on GCN hardware the path is always GLSL => GLSL IR => TGSI => LLVM IR => HW ISA, and again the r600 LLVM back end takes care of the last step.

                  For OpenCL kernels the path is CL C => LLVM IR => HW ISA, i.e. the Gallium3D driver has been modified to optionally accept LLVM IR directly rather than always receiving TGSI and converting it to LLVM IR. Again, the r600 LLVM back end takes care of converting LLVM IR to HW ISA.

                  ** If I remember correctly the Intel GL driver converts GLSL IR directly to HW ISA, without going through TGSI or Mesa IR.
                  Last edited by bridgman; 04 August 2014, 09:32 PM.
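
                  To make the branching above easier to follow, here is a toy sketch (plain C, not Mesa code; the function and its arguments are illustrative only) that spells out the same shader paths as data:

                  #include <stdio.h>
                  #include <string.h>

                  /* Toy model of the shader-compilation paths described in the post above.
                   * "hw" is "VLIW" (r600g) or "GCN" (radeonsi); use_llvm toggles the
                   * optional LLVM back end on VLIW parts. */
                  static const char *shader_path(const char *hw, const char *api, int use_llvm)
                  {
                      if (strcmp(api, "CL") == 0)             /* OpenCL: LLVM IR handed straight to the back end */
                          return "CL C => LLVM IR => HW ISA";
                      if (strcmp(hw, "GCN") == 0 || use_llvm) /* GCN always, VLIW optionally */
                          return "GLSL => GLSL IR => TGSI => LLVM IR => HW ISA";
                      return "GLSL => GLSL IR => TGSI => HW ISA"; /* default VLIW path */
                  }

                  int main(void)
                  {
                      printf("VLIW GL (default): %s\n", shader_path("VLIW", "GL", 0));
                      printf("VLIW GL (LLVM):    %s\n", shader_path("VLIW", "GL", 1));
                      printf("GCN  GL:           %s\n", shader_path("GCN",  "GL", 0));
                      printf("Any  CL:           %s\n", shader_path("GCN",  "CL", 0));
                      return 0;
                  }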



                  • #19
                    Lattner okayed the change about ten days back.



                    • #20
                      Originally posted by bridgman View Post
                      Remember that there are essentially two different paths for a driver -- runtime calls and shader programs. The Gallium3D driver handles runtime calls, so the runtime path is essentially GL => Gallium3D => PM4 packets submitted to the hardware via the kernel driver.

                      Assuming that nothing has changed recently:

                      For GL shaders on VLIW hardware, the default path is GLSL (passed to GL) => GLSL IR (internal to Mesa driver**) => TGSI (passed to Gallium3D driver) => HW ISA (passed to hardware). The optional (for VLIW) LLVM path is GLSL => GLSL IR => TGSI => LLVM IR => HW ISA, where the r600 LLVM back end takes care of the last step.

                      For GL shaders on GCN hardware the path is always GLSL => GLSL IR => TGSI => LLVM IR => HW ISA, and again the r600 LLVM back end takes care of the last step.

                      For OpenCL kernels the path is CL C => LLVM IR => HW ISA, i.e. the Gallium3D driver has been modified to optionally accept LLVM IR directly rather than always receiving TGSI and converting it to LLVM IR. Again, the r600 LLVM back end takes care of converting LLVM IR to HW ISA.

                      ** If I remember correctly the Intel GL driver converts GLSL IR directly to HW ISA, without going through TGSI or Mesa IR.
                      Thanks for the explanation bridgman. So from a performance perspective, is it better on R600 HW not to use LLVM for shaders, since there's one less step in the pipeline?

                      I haven't tested it yet, since I expected LLVM to replace a step in the pipeline rather than add one.

