AMDVLK vs. RADV vs. AMDGPU-PRO 17.50 Vulkan Performance


  • #51
    I couldn't help but giggle a little while reading this thread. Whenever a Linux beginner complains about the many different ways one can do things, the fragmentation of all kinds of components, and the general requirement to learn a bit instead of expecting plug and play, people here tend to mumble something about diversity, freedom of choice, and alternative solutions competing to reach the best possible code quality. Now we are presented with multiple open-source graphics drivers which all perform at least decently, and suddenly it is "a mess". Weird!
    Also, if I understand correctly, the closed-source AMD Vulkan driver is basically the same as AMDVLK except for the compiler. This will eventually be remedied, so in reality there are two drivers, not three. Concerning the LLVM situation, one has to realize that upstreaming changes for AMD's various drivers is not only up to them. So it is completely valid to maintain a fork that does exactly what the respective driver requires until things get upstreamed. And again: this too will be ironed out over time.
    Lastly, it cannot be so hard to understand that AMD does not want to maintain a separate, Linux-only driver if there is a chance to use AMDVLK on all platforms. So please: stop asking them to drop everything and contribute to RADV (as impressive as it might be). Say thank you and be glad that they are providing the resources required to have decent open-source implementations at all.
    Last edited by GruenSein; 25 December 2017, 07:40 AM.



    • #52
      Originally posted by puleglot View Post
      Why do you need such a GUI?
      Why do you need a Linux desktop?



      • #53
        It looks like the structure of AMDVLK is as follows:
        1) PAL - shared with the pro driver, other OSes and APIs
        2) LLVM compiler - shared with ROCm and Mesa
        3) XGL - exclusive to AMDVLK.
        Which part does the relatively poor Vulkan performance on Linux come from? Honestly, I have not looked closely at PAL, but it has to have an OS-specific part. Is that part the main bottleneck?
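
        Tangent, but since RADV, AMDVLK and the closed driver can all be installed side by side, it is worth confirming at runtime which ICD a benchmark actually loaded before blaming any particular layer. A minimal C sketch (the device-name convention mentioned in the comments is just what I have seen; exact strings vary by driver and version):
        Code:
        /* list_vk_drivers.c - build with: cc list_vk_drivers.c -lvulkan */
        #include <stdio.h>
        #include <vulkan/vulkan.h>

        int main(void) {
            VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
            VkInstance inst;
            if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
                fprintf(stderr, "no Vulkan ICD could be loaded\n");
                return 1;
            }

            uint32_t count = 0;
            vkEnumeratePhysicalDevices(inst, &count, NULL);
            VkPhysicalDevice devs[8];
            if (count > 8) count = 8;
            vkEnumeratePhysicalDevices(inst, &count, devs);

            for (uint32_t i = 0; i < count; ++i) {
                VkPhysicalDeviceProperties p;
                vkGetPhysicalDeviceProperties(devs[i], &p);
                /* RADV tends to put "RADV" (and the LLVM version) into deviceName,
                   while AMDVLK and the closed driver report the marketing name. */
                printf("GPU %u: %s, API %u.%u.%u, driverVersion 0x%x\n", i, p.deviceName,
                       VK_VERSION_MAJOR(p.apiVersion), VK_VERSION_MINOR(p.apiVersion),
                       VK_VERSION_PATCH(p.apiVersion), p.driverVersion);
            }
            vkDestroyInstance(inst, NULL);
            return 0;
        }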



        • #54
          I think XGL is in the same category as PAL - it is shared with the closed-source Vulkan driver. Why wouldn't it be, given that it is the part that is not OS-specific at all (other than the presentation extensions, which should be relatively tiny)?

          I guess it is the compiler that makes the performance worse at times, with a few notable exceptions. I wonder if the amdgpu scheduling is enabled in the LLVM fork used by AMDVLK...



          • #55
            The same compiler is used in radeonsi, and radeonsi's performance is much more competitive with Windows. XGL is made to interact with the LLVM compiler, which is not present in the pro driver, so I doubt it is shared.



            • #56
              Originally posted by valici View Post
              Is the team working on Vulkan different from the one working on Mesa?
              Yes, different team. Mesa work is done in twriter's team.

              Originally posted by Wielkie G View Post
              I think XGL is in the same category as PAL - it is shared with the closed-source Vulkan driver. Why wouldn't it be, given that it is the part that is not OS-specific at all (other than the presentation extensions, which should be relatively tiny)?

              I guess it is the compiler that makes the performance worse at times, with a few notable exceptions. I wonder if the amdgpu scheduling is enabled in the LLVM fork used by AMDVLK...
              Correct, XGL is shared between the open and closed drivers. The main difference is the shader compiler.



              • #57
                Hmm. According to the description on GitHub, XGL uses LLVM IR for its internal work. Does that mean the pro driver's shader compiler works with LLVM IR?



                • #58
                  Originally posted by difron View Post
                  Hmm. According to the description on GitHub, XGL uses LLVM IR for its internal work. Does that mean the pro driver's shader compiler works with LLVM IR?
                  Not sure, although my guess is that it does use LLVM IR but converts it to another representation as input to the proprietary shader compiler. We have LLVM backends that can generate ISA (HW instructions), AMDIL or HSAIL IIRC.

                  So subject to confirmation, something like:

                  Closed: SPIR-V -> LLVM IR -> AMDIL -> HW ISA

                  Open: SPIR-V -> LLVM IR -> HW ISA
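
                  From the application side, both paths start from the same SPIR-V: the app only ever hands SPIR-V to vkCreateShaderModule, and whatever lowering chain the driver uses runs inside the ICD when the pipeline object is created - which is also where the compile-time cost shows up. A rough C sketch of where that happens (assumes GPU 0 is the AMD card and a trivial compute shader with no descriptors and a "main" entry point; error checking mostly omitted):
                  Code:
                  /* time_pipeline.c - build with: cc time_pipeline.c -lvulkan
                     Times vkCreateComputePipelines, which is where the driver's pipeline
                     compiler (LLPC for AMDVLK, SCPC for the closed driver, NIR/LLVM for
                     RADV) turns the SPIR-V into GPU ISA. */
                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <time.h>
                  #include <vulkan/vulkan.h>

                  static double now_ms(void) {
                      struct timespec ts;
                      clock_gettime(CLOCK_MONOTONIC, &ts);
                      return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
                  }

                  int main(int argc, char **argv) {
                      if (argc < 2) { fprintf(stderr, "usage: %s shader.comp.spv\n", argv[0]); return 1; }

                      /* Load the SPIR-V blob - this is all the application ever hands over. */
                      FILE *f = fopen(argv[1], "rb");
                      if (!f) { perror("fopen"); return 1; }
                      fseek(f, 0, SEEK_END);
                      long size = ftell(f);
                      fseek(f, 0, SEEK_SET);
                      uint32_t *code = malloc(size);
                      if (fread(code, 1, size, f) != (size_t)size) { perror("fread"); return 1; }
                      fclose(f);

                      VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
                      VkInstance inst;
                      vkCreateInstance(&ici, NULL, &inst);

                      uint32_t n = 1;
                      VkPhysicalDevice phys;
                      vkEnumeratePhysicalDevices(inst, &n, &phys);   /* assumes GPU 0 is the AMD card */
                      if (n == 0) { fprintf(stderr, "no Vulkan device\n"); return 1; }

                      float prio = 1.0f;
                      VkDeviceQueueCreateInfo qci = { .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
                          .queueFamilyIndex = 0, .queueCount = 1, .pQueuePriorities = &prio };
                      VkDeviceCreateInfo dci = { .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                          .queueCreateInfoCount = 1, .pQueueCreateInfos = &qci };
                      VkDevice dev;
                      vkCreateDevice(phys, &dci, NULL, &dev);

                      /* No compilation yet: the module just wraps the SPIR-V words. */
                      VkShaderModuleCreateInfo smci = { .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
                          .codeSize = (size_t)size, .pCode = code };
                      VkShaderModule mod;
                      vkCreateShaderModule(dev, &smci, NULL, &mod);

                      VkPipelineLayoutCreateInfo plci = { .sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO };
                      VkPipelineLayout layout;
                      vkCreatePipelineLayout(dev, &plci, NULL, &layout);

                      /* SPIR-V -> (driver-internal IR) -> GPU ISA happens in here. */
                      VkComputePipelineCreateInfo cpci = { .sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO,
                          .stage = { .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
                                     .stage = VK_SHADER_STAGE_COMPUTE_BIT, .module = mod, .pName = "main" },
                          .layout = layout };
                      VkPipeline pipe;
                      double t0 = now_ms();
                      vkCreateComputePipelines(dev, VK_NULL_HANDLE, 1, &cpci, NULL, &pipe);
                      printf("pipeline creation (shader compile) took %.2f ms\n", now_ms() - t0);

                      vkDestroyPipeline(dev, pipe, NULL);
                      vkDestroyPipelineLayout(dev, layout, NULL);
                      vkDestroyShaderModule(dev, mod, NULL);
                      vkDestroyDevice(dev, NULL);
                      vkDestroyInstance(inst, NULL);
                      free(code);
                      return 0;
                  }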
                  Last edited by bridgman; 25 December 2017, 10:42 AM.



                  • #59
                    Well, I misunderstood it myself. The description on GitHub is for LLPC, not for the whole of XGL as I thought. But LLPC is still a major part of XGL; at least, LLPC is located in the XGL repo. So we cannot just say that XGL is shared between the closed- and open-source drivers - it is only partially shared. bridgman Thank you for the clarification about the IR conversion! So does this conversion take place on Windows as well?
                    So the picture now is:
                    1) PAL, OS-independent part - shared with the pro driver, other OSes and APIs
                    2) PAL, OS-dependent part - shared with the pro driver
                    3) LLVM compiler backend - shared with ROCm and Mesa
                    4) XGL except LLPC - shared with the pro driver, other OSes and APIs
                    5) LLPC except the backend - exclusive to AMDVLK
                    So is the bottleneck in the OS-specific part of PAL or in LLPC? Or is this yet to be discovered?
                    Last edited by difron; 25 December 2017, 11:16 AM.



                    • #60
                      Originally posted by difron View Post
                      Well, I misunderstood it myself. The description on GitHub is for LLPC, not for the whole of XGL as I thought. But LLPC is still a major part of XGL; at least, LLPC is located in the XGL repo. So we cannot just say that XGL is shared between the closed- and open-source drivers - it is only partially shared. bridgman Thank you for the clarification about the IR conversion! So does this conversion take place on Windows as well?
                      So the picture now is:
                      1) PAL, OS-independent part - shared with the pro driver, other OSes and APIs
                      2) PAL, OS-dependent part - shared with the pro driver
                      3) LLVM compiler backend - shared with ROCm and Mesa
                      4) XGL except LLPC - shared with the pro driver, other OSes and APIs
                      5) LLPC except the backend - exclusive to AMDVLK
                      So is the bottleneck in the OS-specific part of PAL or in LLPC? Or is this yet to be discovered?
                      It depends on whether you look at "XGL the component" (which is shared) or "XGL the folder on GitHub" (which includes LLPC and so is only partially shared). Strictly speaking the pipeline compiler (LLPC/SCPC) is a separate component from XGL - it was just grouped in the XGL folder for convenience.

                      In your list above, XGL is shared with other OSes but not other APIs.

                      It's safer to say "closed source Vulkan driver" than "pro driver" since it's the workstation OpenGL driver that really distinguishes the PRO stack from the regular stack.

                      Strictly speaking the LLVM compiler backend is shared with HCC rather than ROCm, although the definition of "ROCm" is becoming a bit elastic these days as more people get involved with it. The ROCm stack itself has always been defined as "amdkfd + thunk/libhsakmt + ROC runtime", although if you talk to enough people you will find someone using the term for pretty much any combination of components you can imagine.

                      Which "bottleneck" are you talking about - the performance delta relative to the closed driver, or the delta relative to NVidia? The delta relative to the closed driver comes from the different compiler back end, while more investigation is needed to resolve the delta relative to NVidia, since we don't see it on Windows and (with one exception) it does not appear to be obviously related to the kernel driver. As I said earlier, it appears to be a workload difference as much as anything else.
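
                      For the delta relative to the closed driver, the cleanest A/B test is to pin the Vulkan loader to one specific ICD per run. The loader honours VK_ICD_FILENAMES for that; here is a tiny wrapper sketch (manifest names like radeon_icd.x86_64.json or amd_icd64.json are only examples - the actual paths depend on the distro and packaging):
                      Code:
                      /* icd_run.c - build with: cc icd_run.c -o icd_run
                         Usage: ./icd_run /usr/share/vulkan/icd.d/amd_icd64.json ./some_vulkan_benchmark
                         Pins the Vulkan loader to a single ICD manifest so RADV, AMDVLK and the
                         closed driver can be compared on the exact same workload. */
                      #include <stdio.h>
                      #include <stdlib.h>
                      #include <unistd.h>

                      int main(int argc, char **argv) {
                          if (argc < 3) {
                              fprintf(stderr, "usage: %s <icd.json> <program> [args...]\n", argv[0]);
                              return 1;
                          }
                          /* The loader reads this at instance creation; listing one manifest hides
                             every other installed driver for the child process. */
                          setenv("VK_ICD_FILENAMES", argv[1], 1);
                          execvp(argv[2], &argv[2]);
                          perror("execvp");   /* only reached if launching the program failed */
                          return 1;
                      }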
                      Last edited by bridgman; 25 December 2017, 11:42 AM.

