Radeon RX 6800 "Sienna Cichlid" Firmware Added To Linux-Firmware.Git


  • #11
    Originally posted by bridgman View Post
    I keep forgetting that I **really** need to make sure I'm accurate on the technical material before posting in these forums



    • #12
      Originally posted by baryluk View Post
      Nice. Obviously it would have been perfect to do this a day earlier, but this is still speedy.
      It's not like anyone other than reviewers has the cards anyway.



      • #13
        Can someone please tell me which parts of these drivers are actually open source and which are not? They are public but still require binaries? I am a bit lost.



        • #14
          Originally posted by piorunz View Post
          Can someone please tell me which parts of these drivers are actually open source and which are not? They are public but still require binaries? I am a bit lost.
          The packaged drivers are generally binaries built from our internal working branches of open source trees (kernel, amdgpu DDX, libdrm, Mesa etc.).

          The exceptions are the AMDGPU-PRO OpenGL and Vulkan components, which are closed source. The legacy ("Orca") back end for OpenCL is also closed source, but the rest of the OpenCL component is part of the open source ROCm stack. I don't expect we will bother to go back and open source the legacy OpenCL back end, but we plan to use the ROCm back end for all current and future chips on Linux anyway.
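
          As a rough illustration (not an official AMD tool), a minimal C sketch like the following can enumerate the OpenCL platforms the ICD loader exposes, which is one way to see which stack is installed; the exact platform strings differ between the legacy and ROCm stacks and vary by release, so treat them as assumptions:

          #include <stdio.h>
          #include <CL/cl.h>

          /* Minimal sketch: list the OpenCL platforms the ICD loader finds.
             Build with: gcc check_platforms.c -lOpenCL
             The reported name/version strings differ between the legacy and
             ROCm stacks; which ones appear depends on the installed driver. */
          int main(void)
          {
              cl_platform_id platforms[8];
              cl_uint count = 0;

              if (clGetPlatformIDs(8, platforms, &count) != CL_SUCCESS || count == 0) {
                  fprintf(stderr, "no OpenCL platforms found\n");
                  return 1;
              }
              for (cl_uint i = 0; i < count; i++) {
                  char name[256], version[256];
                  clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
                  clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
                  printf("platform %u: %s (%s)\n", i, name, version);
              }
              return 0;
          }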

          For graphics drivers we tend to push commits out to the public mailing list (amd-gfx) at the same time they go into our internal trees and upstream every week or so, while for ROCm components we tend to push to "upstream" (our own public GitHub repos) once a month. LLVM changes are somewhere in between... I think we push every few weeks on average.

          We are trying to move the ROCm components to a more open development model (like the graphics driver) as well.
          Last edited by bridgman; 19 November 2020, 06:49 PM.


          • #15
            Originally posted by bridgman View Post
            We are trying to move the ROCm components to a more open development model (like the graphics driver) as well.
            Do you know why Red Hat and Fedora are slow-walking AMD's ROCm runtime changes? Will any of the changes AMD is making to its development model improve that situation?



            • #16
              Originally posted by MadeUpName View Post
              Do you know why Red Hat and Fedora are slow-walking AMD's ROCm runtime changes? Will any of the changes AMD is making to its development model improve that situation?
              I think the main issue was that the code was just changing too fast while we were building up the stack and getting our datacenter GPUs supported. Delivering the ROCm stack via Fedora (or any non-rolling-release distro) would mean taking a snapshot and living with it for at least 9 months, while for RHEL the code would get even older by the time the latest point release was ready to be replaced by a new one. Delivering the ROCm stack via separate drivers was pretty much the only way to keep pace.

              Now that the stack is largely built out and our first CDNA part has launched, we are reaching the point where distro integration can start to make sense, at least for faster-moving distros like Fedora. There is still some build & packaging cleanup we probably need to finish first though... e.g. we would all prefer to avoid adding yet another version of llvm.


              • #17
                Originally posted by bridgman View Post
                The legacy ("Orca") back end for OpenCL is also closed source, but the rest of the OpenCL component is part of the open source ROCm stack.
                Do you mean the PAL OpenCL stuff from AMDGPU-PRO is ROCm repackaged?

                I don't expect we will bother to go back and open source the legacy OpenCL back end, but we plan to use the ROCm back end for all current and future chips on Linux anyway.
                Even without open-sourcing it, would it be possible to fix that bug? https://gitlab.com/illwieckz/i-love-compute/-/issues/2

                The packaged drivers are generally binaries built from our internal working branches of open source trees (kernel, amdgpu DDX, libdrm, Mesa etc.).
                Would it be possible to package the same clinfo usually used in distros like Debian (I believe it's http://github.com/Oblomov/clinfo)?
                AMD's clinfo seems to be featureless in comparison.
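
                For reference, a rough C sketch of the sort of per-device queries a fuller clinfo performs; the handful of properties shown here is an illustrative subset, not clinfo's actual output format:

                #include <stdio.h>
                #include <CL/cl.h>

                /* Rough clinfo-style sketch: print a few properties for every
                   device on every platform. Build with: gcc mini_clinfo.c -lOpenCL */
                int main(void)
                {
                    cl_platform_id plats[8];
                    cl_uint np = 0;
                    clGetPlatformIDs(8, plats, &np);

                    for (cl_uint p = 0; p < np; p++) {
                        cl_device_id devs[8];
                        cl_uint nd = 0;
                        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &nd) != CL_SUCCESS)
                            continue;
                        for (cl_uint d = 0; d < nd; d++) {
                            char name[256];
                            cl_uint cus = 0;
                            cl_ulong mem = 0;
                            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cus), &cus, NULL);
                            clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
                            printf("%s: %u compute units, %llu MiB\n",
                                   name, cus, (unsigned long long)(mem >> 20));
                        }
                    }
                    return 0;
                }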



                • #18
                  Originally posted by bridgman View Post

                  In fairness, CPU cores have the same problem once you start using SIMD instructions (SSE/AVX), and I don't think they have the same degree of automatic predication that GPU cores offer.

                  GPU SIMD units used to be a lot wider than CPU SIMDs but the gap is closing, with AVX512 in particular getting close to the 1024 or 2048 bit SIMD width you find on GPUs.

                  I do agree with your comment in the context of Peter Fodrek's post, however, since the GPU has to run each of the branches sequentially and so there is no effective speedup AFAIK.
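
                  As a minimal sketch of the predication idea in the quote above (assuming an AVX-512F-capable CPU; the example is illustrative, not from any driver): both sides of the branch are computed and a per-lane mask selects each result, which is also why divergent branches give no speedup.

                  #include <stdio.h>
                  #include <immintrin.h>

                  /* Per-lane predication on a CPU SIMD unit, in the spirit of what
                     GPU hardware does automatically. Each of the 16 float lanes gets
                     the "if" or "else" result according to its own mask bit; both
                     sides execute regardless. Build with: gcc -mavx512f predicate.c */
                  int main(void)
                  {
                      float in[16], out[16];
                      for (int i = 0; i < 16; i++) in[i] = (float)i - 8.0f;

                      __m512 x = _mm512_loadu_ps(in);
                      /* mask of lanes where x < 0 (the "if" branch) */
                      __mmask16 neg = _mm512_cmp_ps_mask(x, _mm512_setzero_ps(), _CMP_LT_OS);
                      __m512 if_side   = _mm512_sub_ps(_mm512_setzero_ps(), x); /* -x  */
                      __m512 else_side = _mm512_mul_ps(x, x);                   /* x*x */
                      /* per-lane: r = (x < 0) ? -x : x*x */
                      __m512 r = _mm512_mask_blend_ps(neg, else_side, if_side);

                      _mm512_storeu_ps(out, r);
                      for (int i = 0; i < 16; i++) printf("%g ", out[i]);
                      printf("\n");
                      return 0;
                  }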


                  I am just speculating about an RX 6800 XT running stable at 2750 MHz:
                  [embedded YouTube video]

                  That is 22% more than the max boost clock and 29% more than the game clock, yet it only gets us at most 9% more performance. So there is a lot of compute/geometry performance that cannot be used by games (maybe because there are not enough of some other GPU units, or because there is not enough data bandwidth), and the question is how to use this unused compute capability to boost performance.
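
                  Working through the arithmetic above, a quick sketch of how little of the extra clock translates into performance; the 2250 MHz rated boost is an assumption consistent with the quoted 22% figure:

                  #include <stdio.h>

                  /* Clock gain vs. performance gain from the figures in the post. */
                  int main(void)
                  {
                      double rated_boost = 2250.0;   /* MHz, assumed rated boost    */
                      double oc_clock    = 2750.0;   /* MHz, from the post          */
                      double perf_gain   = 0.09;     /* ~9% gain, from the post     */

                      double clock_gain = oc_clock / rated_boost - 1.0;  /* ~22% */
                      printf("clock +%.0f%%, perf +%.0f%% -> scaling %.0f%%\n",
                             clock_gain * 100, perf_gain * 100,
                             perf_gain / clock_gain * 100);              /* ~41% */
                      return 0;
                  }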



                  • #19
                    Originally posted by illwieckz View Post
                    Do you mean the PAL OpenCL stuff from AMDGPU-PRO is ROCm repackaged?
                    More correct to say that the PAL OpenCL stuff from AMDGPU-PRO has been replaced with "ROCm repackaged". The PAL paths (like the legacy paths) go through the graphics driver user/kernel interface while the ROCm paths go through ROCR and KFD (aka ROCK).

                    Originally posted by illwieckz View Post
                    Even without open-sourcing it, would it be possible to fix that bug? https://gitlab.com/illwieckz/i-love-compute/-/issues/2
                    Yes, that seems like the kind of bug we would still want to fix.

                    Originally posted by illwieckz View Post
                    Would it be possible to package the same clinfo usually used in distros like Debian (I believe it's http://github.com/Oblomov/clinfo )?
                    AMD's clinfo seems to be featureless in comparison.
                    I don't know why we have a different clinfo yet - once I find out it should be easier to answer the question.


                    • #20
                      Originally posted by Peter Fodrek View Post
                      I am just speculating about an RX 6800 XT running stable at 2750 MHz:
                      [embedded YouTube video]

                      That is 22% more than the max boost clock and 29% more than the game clock, yet it only gets us at most 9% more performance. So there is a lot of compute/geometry performance that cannot be used by games (maybe because there are not enough of some other GPU units, or because there is not enough data bandwidth), and the question is how to use this unused compute capability to boost performance.
                      I suspect that part of the issue was simply that the board running stock was achieving higher-than-rated clocks, and so the degree of overclock (which the poster calculated relative to rated clocks AFAICS) was actually quite a bit less. There were also some cases where the "overclocked" performance was lower than stock, which is usually (but not always) an indication that you aren't overclocking as much as you think.
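
                      A small numeric sketch of that point, using a hypothetical observed stock clock of 2550 MHz (not a measured figure): the effective overclock is far smaller than the rated-clock math suggests.

                      #include <stdio.h>

                      /* Effective overclock when the board already boosts past its
                         rated clock at stock. The 2550 MHz figure is hypothetical. */
                      int main(void)
                      {
                          double rated_boost  = 2250.0;  /* MHz, rated (assumed)       */
                          double stock_actual = 2550.0;  /* MHz, hypothetical observed */
                          double oc_clock     = 2750.0;  /* MHz, from the post         */

                          printf("vs rated:  +%.1f%%\n", (oc_clock / rated_boost  - 1) * 100);  /* +22.2% */
                          printf("vs actual: +%.1f%%\n", (oc_clock / stock_actual - 1) * 100);  /*  +7.8% */
                          return 0;
                      }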
