Radeon Open Compute 3.3 Released But Still Without Official Navi Support


  • #11
    Does Folding@home work with this version?



    • #12
      Originally posted by xcom View Post
      Does Folding@home work with this version?
      I assume it doesn't, since I can't even install ROCm in the first place.
      And I wish it would work: I haven't used it in more than 10 years, and right now contributing matters the most.
      But with AMD, which doesn't seem to care how much time passes before it adds support for something, I bet I'll get it working next year, when it will be too late to matter.



      • #13
        Originally posted by Danny3 View Post
        Because they haven't tested with a current, up-to-date distro instead of the very old LTS distros that they said they used.
        If they would've tested their hardware with an up-to-date distro, they would've seen their hardware bug sooner.
        You can't test against a moving target; that defeats the purpose of testing during development and ends up being wasteful, especially since once a lot of the hardware has been released and bought, it's going to get supported anyway.



        • #14
          Originally posted by Tuxee View Post

          Depends. I am happy with my Ryzen 3700X on my X570 mobo. I am NOT HAPPY with the support for my RX5700 - and no one will convince me otherwise.

          ROCm might never surface, and OpenCL is a PITA to install. (Am I supposed to buy "professional grade" hardware because I like to do some rendering in Blender or work on my photos in darktable?)
          As for the Mesa drivers and the kernel - that's a bumpy road at best. A plain Ubuntu 20.04 with kernel 5.4 and Mesa 20.0.3 just "doesn't work": SOTR gives me 30fps in its benchmark, but when switching to a mainline 5.5 or 5.6 kernel (everything else the same) it jumps to the expected 130fps. Having two displays connected frequently gives me problems when booting (powerplay reports that it can't determine the display frequencies; after a lengthy boot process it seems to fall back to some "reasonable" default). And all of this works only when avoiding the most recent BIOS files, which gave me hard crashes... There are setups that work: my Ubuntu 18.04.4 with kernel 5.3, Mesa 20.0.3 (the Kisak PPA; others won't cut it) and BIOS files from back in November works reliably. And people frequently tell me that with Arch everything might be fine... I suppose I am just asking too much when I expect working drivers and a working setup on a (or the most) popular Linux distribution 8 or 9 months after the hardware hit the shelves.
          Hopefully the Mesa OpenCL driver sees some input... currently OpenCL is pretty much neglected outside of ROCm, which means no competition, which means little incentive to improve.

          That said, I think the current release is a step toward fixing the installation difficulties, since it helps decouple a few things, even though the drivers must still target the same kernel driver. Not being stuck on a single version that works with one piece of software but not another is a major boon.
          Last edited by cb88; 04 April 2020, 03:13 PM.



          • #15
            So people know just how close this stuff is to working: I am running a Ryzen 7 CPU and an RX 580 card with FC-31. All of the packages are from the distro, not the AMD repo. The output is too long to post here, but I have created a link to the output from rocminfo, clinfo, and then launching darktable -d opencl. 99% of this works right out of the box. But that dang last 1% might as well be 100%.
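The checks described above can be sketched as a quick shell probe. This is only an illustration, assuming the stock tool names (rocminfo, clinfo, darktable); it reports which pieces of the stack are even on the PATH before you dig into the OpenCL layer itself:

```shell
# Report which parts of the ROCm/OpenCL tool chain are installed.
# rocminfo: ROCm runtime view; clinfo: OpenCL ICD view;
# darktable: the application that ultimately consumes OpenCL.
for tool in rocminfo clinfo darktable; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found at $(command -v "$tool")"
    else
        echo "$tool: not installed"
    fi
done
```

If all three are found, `darktable -d opencl` is the step that shows whether darktable actually initializes its OpenCL kernels - the "last 1%" in question.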



            • #16
              Originally posted by Danny3 View Post
              People like miners are losing
              There are no miners anymore....



              • #17
                Originally posted by Meteorhead View Post
                As for the install process and packaging experience, yes, AMD has a lot of room for improvement, but you really have to cut them some slack for having a top-to-bottom open stack. The only reason you are bashing them for the packaging and build experience is because they have one! The binary blobs of Nvidia are certainly easier to deploy, but is it worth it?
                Originally posted by Tuxee View Post
                ROCm might never surface, and OpenCL is a PITA to install. (Am I supposed to buy "professional grade" hardware because I like to do some rendering in Blender or work on my photos in darktable?)
                As the second quote illustrates, I would not dismiss the packaging problems of ROCm as a minor thing. Installing ROCm on a Linux system is a pain, and for distros, getting ROCm packaged is a pain.

                Imagine someone who has previously only packaged Windows software. Now this person is given the task of packaging something for Linux. They do everything the way they were used to on Windows, with some makeshift workarounds for the most obvious things that failed. ROCm packaging is literally that bad.

                And it is not as if you can say "poor AMD, how can anyone satisfy the pesky Linux community?" - the competition is miles, if not light years, ahead here. Intel employs a decent part of the Linux community, not only kernel developers but also people from the distros themselves. Nvidia provides contacts for distro developers who accept feedback, reply to it, and actually influence driver packaging based on it. I mean, how cool is that if you are only used to being able to post to https://community.amd.com/ (a horrible, slow mess of a website), where you may or may not get a competent answer but reach nobody who could actually change things, or tweet @AMDSupport and encounter some first-level support drone who is barely computer literate (that is hyperbole, but you get the idea)?

                Originally posted by cb88 View Post
                You can't test against a moving target; that defeats the purpose of testing during development and ends up being wasteful, especially since once a lot of the hardware has been released and bought, it's going to get supported anyway.
                That's bollocks. There is a concept called "continuous integration testing", which appears to be alien to AMD's CPU kernel team. If they had any testing of the sort against the Linux kernel, they would have immediately noticed that it doesn't even boot, FFS.

                Plus, about the moving-target thing: there are few distros more popular than Ubuntu (Android? ChromeOS? OpenWrt?). Just booting the latest Ubuntu release every 6 months to check that things still work is not too much to ask of AMD.



                • #18
                  Originally posted by ms178 View Post

                  I don't really know what to think of this separation, as I wanted to see GPGPU take off in the desktop space sooner and this could slow that advancement down (and right as cache coherency is finally around the corner on desktop platforms). Or are we supposed to buy a compute card separately soon, or will we see CDNA becoming their new architecture path in iGPUs? I can see that this would make sense - a great iGPU + dGPU combination, so that a consumer could get the best of both worlds by going the all-AMD route when building a computer. On the other hand, their Vega approach also seems the more appealing these days as a middle ground between both worlds, and it would potentially be less work for AMD to support just one architecture going forward. I fear that RDNA won't get the same level of support as CDNA in their compute stack, and it already shows: if ROCm on RDNA were a priority, it would not take AMD over half a year to build that support. And as Tuxee already mentioned, the current situation is less than desirable, even more so for RDNA users who want to use their cards for desktop compute tasks.
                  The separation of CDNA and RDNA is very unfriendly to developers.

                  To promote CUDA, NVIDIA has deployed it on all of its graphics cards for over 10 years, from the professional Tesla series down to the lowest end.
                  With this scheme, learning CUDA couldn't be easier: all you need is any NVIDIA graphics card.
                  (This is also why I'm not very optimistic about the TPU -- you can find it nowhere outside Google's cloud.)

                  However, I'm also sure AMD has evaluated those advantages internally, and I'm guessing they have some technical reasons for not doing the same.
                  For example, if CDNA is designed as a socket-based (co-)processor that supports cache coherence with Ryzen through Infinity Fabric, it would be a natural design to separate CDNA from RDNA.
                  A gaming card needs neither of those. (But I still hope CDNA can be used as a graphics card, so I don't need two cards.)



                  • #19
                    Originally posted by cb88 View Post

                    You can't test against a moving target; that defeats the purpose of testing during development and ends up being wasteful, especially since once a lot of the hardware has been released and bought, it's going to get supported anyway.
                    What moving target? It was the next Ubuntu release, which shipped a newer systemd that used the random number generator.
                    If they would've spared 15 minutes to download and test it instead of only testing the LTS release, they would've seen the stupid hardware bug in their processors and not have had to rely on motherboard manufacturers to release updated BIOSes to fix their problem.
                    Is it that hard for AMD to test one LTS and one non-LTS release with more up-to-date software?

                    If they don't want to test multiple Linux distributions, or the ones with the updated software that will be used in the future, why don't they make their own program to test all the available instructions in their processors?
                    That way there are no moving targets for them.

                    Expecting people to upgrade only the hardware, like to Ryzen 3000, which is harder because it costs money, while not upgrading the software, which is easier because it's free, is really stupid, if you ask me.
                    I don't know how many of us are willing to erase 2 or 4 years' worth of improvements and go back to the AMD-supported distros just to have OpenCL running.



                    • #20
                      They still haven't added back laptop/APU support after removing it in version 2.8.0.
                      Basically, ROCm only works with a select number of cards. The support list in the docs lays out just how limited it is.

                      I really wish we had a project that didn't require special /opt/ installations of various forks of already-established projects, and having to change paths to compensate.
                      ROCm can be made to work ... but it's extremely time-consuming, especially if you aren't using a supported distro.
                      It's very difficult to guess where things are supposed to be installed, what needs to be patched or hacked, which symlinks are needed, etc.
                      The fact that there are 113 AMDGPU/AMDGPU-PRO .deb packages in the .zip file from AMD says it all ... that's insane. Who knows what's in them, what they are for, or which ones conflict with packages already on a distro.
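To make the path-changing complaint concrete, the manual wiring for an /opt install typically looks something like the following. The /opt/rocm prefix is AMD's default layout at the time, but exact subdirectories varied between releases, so treat every path here as an assumption to check against your own install:

```shell
# Point the shell and the dynamic linker at a hand-installed ROCm tree.
export ROCM_PATH=/opt/rocm
export PATH="$ROCM_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$ROCM_PATH/lib:${LD_LIBRARY_PATH:-}"
# The OpenCL ICD loader looks in /etc/OpenCL/vendors for vendor .icd files;
# on unsupported distros that registration often has to be done by hand too.
echo "PATH now starts with: ${PATH%%:*}"
```

None of this is discoverable without reading the docs and other people's bug reports, which is exactly why distro-native packages would be such an improvement.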

                      When ANY distro packager can download the source and create packages that support all recent hardware, then they are doing well.
                      Too much time is spent reinventing the wheel, forking projects, and maintaining those forks. All this stuff needs to be pushed into existing projects like LLVM, Mesa, Clover, etc. At least they've managed to remove the DKMS requirement and upstream the kernel parts.

                      Here's to hoping things get streamlined, upstreamed, and simpler.
                      Last edited by Soul_keeper; 04 April 2020, 08:36 PM.

