Radeon ROCm 4.1 Released - Still Without RDNA GPU Support


    Phoronix: Radeon ROCm 4.1 Released - Still Without RDNA GPU Support

    ROCm 4.0 released back in December with "CDNA" GPU support while now ROCm 4.1 has been released as the newest quarterly feature release to this open-source Radeon compute stack focused primarily on HPC/data-center needs...


  • #2
Let's see if this doesn't smash my 5.8 kernel on Ubuntu 20.04 LTS like the last 4.0.1 release did. Here we go! 🙏 (Vega 20, so no "RDNA is for gaming" excuses.)

    EDIT:

    Pah! No point, judging from the README. I'm not going to waste my time; same old story as 4.0.1.

    Time to flog the Radeon VII to the crypto guys who can actually make use of its compute, and succumb to buying a 3070 or 3080 for which I won't be begging for software support.
    Last edited by vegabook; 23 March 2021, 09:47 PM.


    • #3
      ROCm is dead. It is amazing that many years after the release of RDNA 1, AMD still doesn't have ROCm support for it, yet AMD wants to sell an overclocked 192-bit RDNA2 chip without compute support at Nvidia Ampere 256-bit GA104 prices, intentionally busting its 8 GB VRAM buffer because it is "designed for gaming at max 1440p settings". What a joke. Milan, as reviewed by AnandTech, is also partly a regression in idle power due to a subpar I/O hub chip L3 cache design. What a disappointment.


      • #4
        Originally posted by vegabook View Post
        Let's see if this doesn't smash my 5.8 kernel on Ubuntu 20.04 LTS like the last 4.0.1 release did. Here we go! 🙏 (Vega 20, so no "RDNA is for gaming" excuses.)
        The kernel version restrictions only apply to the rock-dkms packaged kernel driver. If you install the rocm-dev metapackage over your existing kernel driver that should give you what you need.

        https://rocmdocs.amd.com/en/latest/I...r-AMD-GPU.html

        And yes that information is much harder to find than it should be. Trying to get that improved.

        The dkms driver from the 20.50 amdgpu packaged driver includes support for the 5.8 kernel and is tested with the ROCm components up to OpenCL, but that only shipped a couple of days ago and hasn't made it into the ROCm stack releases yet.
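        For anyone following this advice, the sequence would look roughly like the sketch below. This is an assumption-laden illustration, not official AMD instructions: it assumes Ubuntu 20.04 with the repo.radeon.com apt repository already configured, and uses the `rocm-dev` metapackage and `rocminfo` tool named in the post and the ROCm docs.

        ```shell
        # Keep the distro's in-tree amdgpu kernel driver; install only the
        # user-space ROCm stack (no rock-dkms, so no kernel-version restriction).
        sudo apt update
        sudo apt install rocm-dev        # userspace metapackage

        # GPU access typically requires membership in the 'video' (or 'render') group.
        sudo usermod -aG video "$USER"

        # After logging back in, verify the runtime can enumerate the GPU:
        /opt/rocm/bin/rocminfo
        ```

        The point of this split is that `rocm-dev` only pulls in userspace components, so the kernel-version restrictions that apply to the `rock-dkms` packaged driver never come into play.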
        Last edited by bridgman; 23 March 2021, 10:29 PM.


        • #5
          Originally posted by phoronix_is_awesome View Post
          ROCm is dead. It is amazing that many years after the release of RDNA 1, AMD still doesn't have ROCm support for it, yet AMD wants to sell an overclocked 192-bit RDNA2 chip without compute support at Nvidia Ampere 256-bit GA104 prices, intentionally busting its 8 GB VRAM buffer because it is "designed for gaming at max 1440p settings". What a joke. Milan, as reviewed by AnandTech, is also partly a regression in idle power due to a subpar I/O hub chip L3 cache design. What a disappointment.
          I'll skip over how 20 months becomes "many years" but I do need to point out that the IO chip does not include L3, just data fabric and memory controllers. The IO hub is actually pretty much the same between Zen2 and Zen3.

          My understanding (subject to confirmation) is that the higher idle power was a consequence of running the Infinity fabric at 1:1 rather than 1:2 with 3200 MHz memory, not sure if that is something we can address with power management firmware over time.
          Last edited by bridgman; 24 March 2021, 12:15 AM.


          • #6
            Surely there is real brand damage caused by entire student bodies -- year one through year four -- of four-year comp-sci programs running Nvidia cards on their personal machines. This is damage that can't be measured just by watching the market for purely gaming-oriented consumers.


            • #7
              Originally posted by atomsymbol
              If you had tried to run OpenCL apps on a RDNA card with ROCm 4 then you would know that it works fairly well. bridgman In my opinion, it is a mistake that the partial/unofficial support for RDNA GPUs isn't mentioned in ROCm README files.
              Good point - I don't think that all the fixes from 20.45 have worked their way into the ROCm release stream yet but we should definitely mention that once the functionality is there.

              It's possible that we may have to work through at least conceptually separating "the upstream for our compute components" from "our datacenter releases" as a prerequisite, since right now the ROCm releases kinda serve as both. That is going to be an increasing problem as we expand support to consumer hardware.

              Thanks !
              Last edited by bridgman; 24 March 2021, 12:19 AM.


              • #8
                Originally posted by bridgman View Post

                I'll skip over how 20 months becomes "many years" but I do need to point out that the IO chip does not include L3, just data fabric and memory controllers. The IO hub is actually pretty much the same between Zen2 and Zen3.
                I missed a comma between "IO chip" and "L3 cache". Without shrinking the IO die, Milan is a very small incremental upgrade. AnandTech is right: the 19% IPC uplift is partially nerfed by the increase in power consumption of the IO die (40% of socket power).

                Two years is practically an eternity to go without releasing ROCm compute support. The most disgusting part of "NAVY FLOUNDERS" is the lie that it is designed for 1440p max settings, when your own presentation shows that 1440p max settings require 9.5-10 GB of VRAM, thus forcing Nvidia's 8 GB GA104 to swap data over the PCIe bus. Then you overclock the 192-bit chip to a 220 W TGP (do you realize that 250 W used to be Nvidia's 384-bit TDP range?). You are doing it only because you wanted to price the chip in the 256-bit tier to make up for the 100 mm² design mistake called Infinity Cache. This is the most insidious design and marketing attempt by AMD in the past 5 years. And you refuse to enable ROCm support simply as a matter of policy. A single third-party engineer made a few lines of modifications to enable ROCm even on APUs:
                https://bruhnspace.com/en/bruhnspace-rocm-for-amd-apus
                So it is clear that ROCm support is a political decision, not a technical one. Bruhnspace has not opened up its code to enable ROCm, and the code base is not up to date. Make compute work on your weaker GPU design; otherwise the RX 6700 XT is worth $299 in my mind without ROCm support, when your competitor prices the 3060 at $329 (eventually you will be able to buy one at MSRP; all we need is for Bitcoin to fall back to the $1k where it belongs).


                • #9
                  Originally posted by bridgman View Post

                  The kernel version restrictions only apply to the rock-dkms packaged kernel driver. If you install the rocm-dev metapackage over your existing kernel driver that should give you what you need.

                  https://rocmdocs.amd.com/en/latest/I...r-AMD-GPU.html

                  And yes that information is much harder to find than it should be. Trying to get that improved.

                  The dkms driver from the 20.50 amdgpu packaged driver includes support for the 5.8 kernel and is tested with the ROCm components up to OpenCL, but that only shipped a couple of days ago and hasn't made it into the ROCm stack releases yet.
                  Can you confirm that no DKMS driver is required for mainline kernel starting 5.9?


                  • #10
                    Originally posted by oleid View Post
                    Can you confirm that no DKMS driver is required for mainline kernel starting 5.9?
                    It depends on the userspace version - we do some testing of release-candidate userspace against recent upstream kernels, but over time newer kernels are going to be needed. Or is the question specifically about the 4.1 release?
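                    A quick way to sanity-check a given kernel/userspace combination without installing rock-dkms is sketched below. This is a hedged illustration: `/dev/kfd`, the render nodes, and `rocminfo` are the standard ROCm interfaces, but whether a particular mainline kernel is new enough still depends on the ROCm release in question, as noted above.

                    ```shell
                    # For ROCm userspace to work on the in-tree amdgpu driver, the kernel
                    # must expose the KFD compute interface and a DRM render node.
                    uname -r                      # running mainline kernel version
                    ls -l /dev/kfd                # KFD device node (compute interface)
                    ls -l /dev/dri/renderD*       # DRM render node(s)

                    # If those nodes exist, the userspace runtime should enumerate the GPU:
                    /opt/rocm/bin/rocminfo | grep -i "gfx"
                    ```

                    If `rocminfo` lists a `gfx` agent, the installed userspace and the running kernel are compatible; if not, either a newer kernel or the packaged DKMS driver is still needed.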
