AMDGPU-PRO 17.20 Emerges With Vega, ROCm Compute Support

  • #31
Is there any info on installing just the ROCm compute components? I'm a little confused that there are ROCm installers here, since ROCm is a whole stack by itself. Would these ROCm packages overwrite the currently installed AMDGPU-PRO OpenCL components?

I currently install only the compute components of the AMDGPU-PRO driver (with the --compute flag). I don't see anything equivalent for the ROCm packages.
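
(For reference, a compute-only install with the packaged driver looks roughly like the sketch below; the --compute flag is the one mentioned in this post, while the archive name and version string are only illustrative placeholders.)

# Unpack the driver archive (name/version illustrative)
tar -xf amdgpu-pro-17.20-XXXXXX.tar.xz
cd amdgpu-pro-17.20-XXXXXX
# Install only the compute components (per the post above)
./amdgpu-pro-install --compute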

    • #32
      Originally posted by pal666 View Post
He can't help you. NMS is Windows-only scum with mostly negative reviews, written by morons. Just forget about its existence.
Yikes!!! It's the only game I've ever really liked in my entire life, pal666. For decades I'd searched and longed for a game where I could just explore the universe. No shooting at things, no things shooting back, and for goodness' sake, no inane puzzles. As an engineer my life was full of real puzzles to solve, and shooting at things held my attention for about two hours - and as I said, that was decades ago.

I've always wanted a game where, finally, I could just relax, have fun, and explore a universe. No Man's Sky is the only game I've ever found that allows just that. However, if anyone knows of a different game, especially an open-source one, that enables the same kind of fun, I'd be more than happy to play it. But it's far more complicated than just another shooting game with different characters and backgrounds. It takes true genius to create a procedural universe with ever-changing inhabitants. And while NMS is still a very primitive iteration of my dream, it's the only iteration I've yet discovered.

      • #33
        Originally posted by jstefanop View Post
For Vega and up, the ROCm packages *are* the compute path, so installing the compute components should be sufficient. That said...

- the ROCm paths in 17.20 have only been tested on Vega AFAIK (in fact, the testing focus for the entire driver was Vega), so unless you already have Vega HW I would not recommend 17.20

- the install instructions do not mention the --compute flag (although I don't think previous ones did either), just installing the rocm-amdgpu-pro packages on top of the core install (a sketch of that sequence follows at the end of this post) - not sure if the ROCm paths are supported by the --compute option yet, will ask

Part of the work adding Vega support was replacing the current OpenCL runtime with newer code that can use either the ROCm or libdrm paths, but I don't know if there was much, if any, testing of the libdrm paths, since those are only used on pre-Vega HW. So it really boils down to whether you have Vega HW or not, AFAICS.
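
A minimal sketch of that two-step sequence, assuming the package name that shows up later in this thread:

# Step 1: the core driver install (the standard installer run)
./amdgpu-pro-install
# Step 2: layer the ROCm compute packages on top
sudo apt install -y rocm-amdgpu-pro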
        Last edited by bridgman; 29 June 2017, 12:08 AM.

        • #34
          Originally posted by bridgman View Post

Cool, thanks. I do have a Frontier card coming in for testing, but our current system has RX 400/500 Polaris hardware in it as well, so it would be good to know whether I can currently mix the two.

I guess I'll just give it a shot and see what happens.

BTW, is there any plan to ever give the AMDGPU-PRO/ROCm stack Catalyst-era installers and user-friendly CLIs? It doesn't really matter for me, since I'm technical and can work around the packages/scripts (and I guess that's true for most Linux users on here), but you guys are now selling 1K enterprise hardware again, and I'm sure some of your professional clients on the Linux side aren't as technical as I am.

Also, the driver side is starting to get pretty confusing/messy: there is the open-source amdgpu side, amdgpu-pro with the closed-source compute components, and now ROCm with its compute components thrown in and mixed with amdgpu-pro for fun. A new user buying your 1K card and trying to install it on Linux will have no idea what the hell is going on.

          • #35
            There should be a new ROCm stack coming out today-ish (1.6) which will support both Polaris and Vega, so that would probably be your best bet.

Once we finish merging trees (including upstreaming the current amdkfd code), things should settle down in terms of what each stack includes, and that in turn will make it a lot easier to provide a simple install experience.

Initially our highest priority was getting the core functionality in place and working well, but now that most of that is done, things like upstreaming and OS coverage/integration are moving to the top of the priority heap.

            • #36
              Originally posted by bridgman View Post
Ok, I lied... looks like I can't even get this Vega card running on my Ubuntu system. I installed the 17.20 drivers for the Frontier Edition; all the Polaris cards are detected and work fine for OpenCL apps, but the Vega card is nowhere to be found. During the install I didn't see any of the ROCm packages go in, so I also tried manually installing the non-dev packages, but that had no effect either.

The card IS posting properly on kernel boot (card init, all ring tests, memory allocations, etc.), so I'm assuming something is not linked correctly in the OpenCL stack for Vega.

(FYI, clinfo only returns the Polaris cards as well; nothing on Vega.)
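
For anyone hitting the same thing, a quick way to compare what the kernel brought up against what the OpenCL stack enumerates - clinfo, dmesg, and lspci are standard tools here, and the grep patterns are only illustrative:

# What did the kernel driver initialize at boot?
dmesg | grep -i amdgpu
# Which GPUs are physically present on the bus?
lspci | grep -iE 'vga|display'
# Which platforms/devices does the OpenCL ICD actually see?
clinfo | grep -iE 'platform name|device name'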

Can you get the Vega team to put together some install instructions or scripts for the Vega cards (or, at the very least, a list of the required Vega packages in that driver directory)?

The card installed and OpenCL apps worked with no problem on the Windows side, BTW.

              • #37
Ok, so I stumbled across the GPUOpen site and found some additional instructions that were not posted on AMD's driver site. Looks like you need to run

sudo apt install -y rocm-amdgpu-pro

after running the installer script. Sadly, performance is pretty horrible under Linux; it does about 20% of what the Windows drivers do.
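
A quick sanity check that the ROCm packages actually landed after that step - the package-name pattern is just a guess based on the names used in this thread:

# List installed packages matching the ROCm naming from this thread
dpkg -l | grep -i rocm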

                • #38
                  Originally posted by jstefanop View Post
There were some other instructions on the main driver install page about setting LLVM_BIN etc. - did you see those?

There is still performance tuning going on for the ROCm paths, but 20% doesn't sound right at all - if that is still happening, could you post some more details?
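
For reference, the release-notes step presumably amounts to something like the line below; the LLVM_BIN variable name comes from this post, but the path shown is only an assumed example - the real value is whatever the 17.20 install notes specify.

# Assumed example only: point LLVM_BIN at the driver's bundled LLVM tools
# (the actual path comes from the 17.20 release notes, not from this sketch)
export LLVM_BIN=/opt/amdgpu-pro/bin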

                  • #39
bridgman Yeah, looks like those were posted after I figured everything out. There is some detailed discussion going on with gstoner over at the ROCm GitHub page regarding performance, and they are looking into it. Long story short, it seems the Vega memory pipeline has some driver-side bugs limiting its HBM2 memory bandwidth to about half of what it should be on the Linux side, which is amplified by the fact that a lot of the algorithms we run are memory-bound.
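
(For anyone who wants to reproduce the comparison, clpeak is a common third-party OpenCL benchmark for exactly this; using it here is my suggestion rather than something from the thread:)

# Measure achieved global memory bandwidth on every OpenCL device
clpeak --global-bandwidth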

Are you involved with the Vega-specific optimizations for the Linux stack?

                    • #40
                      Originally posted by jstefanop View Post
I learned a couple of hours ago that there was some kind of website problem with the initial release notes, so most of the important instructions were not visible (although they were in the file). Sorry about that...

                      Originally posted by jstefanop View Post
I was mostly working on future products/features (including future Vega/Linux work), but I did just get pulled into the issue you mentioned.
