AMDKFD Looking To Be Merged Into AMDGPU Linux DRM Kernel Driver


  • #21
    Originally posted by bridgman View Post
    Mesa OpenCL has pretty much stalled as far as I can see. We worked on it for a few years hoping it would catch on as a community standard, but it didn't happen. There was still a LOT of work to do before it could be a viable alternative to our in-house OpenCL driver, so we focused on open sourcing our in-house OpenCL driver instead
    Speaking of which, since ROCm doesn't support all GCN hardware, are there any plans to open-source PAL OpenCL?

    Comment


    • #22
      Originally posted by LinAGKar View Post
      Speaking of which, since ROCm doesn't support all GCN hardware, are there any plans to open-source PAL OpenCL?
      I am trying to get that discussion going internally, but it's probably not going to make much progress until the summer vacation rush dies down.

      My initial (unconfirmed though) impression is that some of the hard work has been done as part of the AMDVLK effort, but I'm not sure yet if the PAL back end is using a compiler toolchain that we can open source or if we need to plumb in the direct-to-ISA open source compiler.



      • #23
        First let me thank you for all of the insight you bring to the forums!

        I need to seek clarification on why APU support is not a priority. In my mind, GPU compute is a big win for APUs, especially low-power ones. Is it an issue of demand from the community, or something else taking focus off APUs?

        Originally posted by bridgman View Post

        Yep, you are wrong

        Kaveri is supported by both radeon and amdgpu kernel drivers today, although radeon is still the upstream default while we work through remaining amdgpu issues on SI/CI. That said, we are using amdgpu in the packaged drivers for all GCN generations right back to SI/GCN 1.0 although the focus there is on dGPU rather than APU at the moment.

        Nothing is being removed from the kernel - we just plumbed amdkfd into amdgpu rather than into radeon for CIK/GCN 1.1. I believe we did that almost a year ago. EDIT - about 9 months ago:

        https://cgit.freedesktop.org/~agd5f/...a3d93d1fe371af

        Anyways, all this means is that you will need to use amdgpu rather than radeon on Kaveri if you want ROCm support. That can be done with boot parms now (one to disable radeon CIK support and another to enable amdgpu CIK support), although if you are using the radeon X driver you will need to tweak xorg.conf as well IIRC.
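A sketch of the boot-parameter switch described above, assuming a GRUB-based distro (the module parameters `radeon.cik_support` / `amdgpu.cik_support` are the mainline names for CIK parts, with `si_support` as the SI equivalent; adjust for your bootloader):

```shell
# Hand CIK (GCN 1.1) GPUs such as Kaveri to amdgpu instead of radeon.
# Add the two parameters to the kernel command line in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.cik_support=0 amdgpu.cik_support=1"
# For SI (GCN 1.0) hardware the equivalent pair is:
#   radeon.si_support=0 amdgpu.si_support=1
# Then regenerate the config and reboot, e.g.:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/openSUSE
```

After rebooting, `lspci -k` or dmesg should show the card bound to amdgpu rather than radeon.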

        I have received maybe 50 "OMG why are you making me still use radeon instead of making amdgpu the default for SI/CI" posts, but yours is the first "OMG you are going to make me use amdgpu" post so far.

        I should mention for completeness that right now dGPUs (and mostly high end dGPUs) are the main focus area for ROCm, but I think our intent is to keep it running on the APUs we used for initial development as well.



        Mesa OpenCL has pretty much stalled as far as I can see. We worked on it for a few years hoping it would catch on as a community standard, but it didn't happen. There was still a LOT of work to do before it could be a viable alternative to our in-house OpenCL driver, so we focused on open sourcing our in-house OpenCL driver instead:

        https://github.com/RadeonOpenCompute...OpenCL-Runtime

        I'm not saying that Mesa OpenCL (aka the clover state tracker) is dead - there is still occasional work being done on it - but it is not part of our plans at the moment.

        https://cgit.freedesktop.org/mesa/me...rackers/clover



        • #24
          Originally posted by bridgman View Post

          I am trying to get that discussion going internally, but it's probably not going to make much progress until the summer vacation rush dies down.

          My initial (unconfirmed though) impression is that some of the hard work has been done as part of the AMDVLK effort, but I'm not sure yet if the PAL back end is using a compiler toolchain that we can open source or if we need to plumb in the direct-to-ISA open source compiler.
          I wish you good luck! Having a completely open-source -pro stack would be very... Epyc :P



          • #25
            Originally posted by bridgman View Post

            I am trying to get that discussion going internally, but it's probably not going to make much progress until the summer vacation rush dies down.

            My initial (unconfirmed though) impression is that some of the hard work has been done as part of the AMDVLK effort, but I'm not sure yet if the PAL back end is using a compiler toolchain that we can open source or if we need to plumb in the direct-to-ISA open source compiler.
            I would be very interested in more information: what software is worth waiting for, and when? As far as I know, only LibreOffice Calc and a few samples use HSA, along with some OpenCL implementations that don't use HSA.



            • #26
              Originally posted by Qaridarium
              with Ethereum ROCm is 20-40% faster than PAL...
              Interesting. I've been running ethminer on my two Furys, previously with ROCm, but now on Orca (a.k.a. Legacy). ROCm ran at about 28MH/s per card while Orca does about 29.5MH/s per card. Not that I normally run them at full speed... I plan to go back to ROCm once it runs on an upstream kernel, though.



              • #27
                Originally posted by wizard69 View Post
                I need to seek clarification on why APU support is not a priority. In my mind, GPU compute is a big win for APUs, especially low-power ones. Is it an issue of demand from the community, or something else taking focus off APUs?
                I'm going to tweak your question a bit to "why APU support is not highest priority".

                Four factors, I guess...

                1. There was a lot of interest in compute on APUs, but in practice a good chunk of that turned out to be "compute on really big APUs that we didn't actually make". In order to get the kind of performance gain that people were hoping for, something closer to a mid-range dGPU was required once you got past the added overhead of moving data between CPU and GPU.

                2. You're thinking "but one important part of HSA was eliminating the need to move data back and forth" and you're right, but the combination of having to learn a new environment of any kind AND learn a new style of programming was a fairly high bar for most application developers. They were OK with a new environment OR a new style of programming but not both at the same time, particularly if the resulting software would only run on APUs. Our major customers were and still are very interested in GPU compute on APUs but they had already invested in OpenCL and weren't in any rush to change.

                3. This is kind of re-stating #1 and #2 but from a different perspective - nearly all of the applications that did/could make good use of compute were coded for OpenCL, CUDA, or both, and nearly all of them were built around dGPU performance levels and dGPU programming models where the dGPU relies on separate high-speed memory for performance.

                4. Most of the high growth areas needed configurations with a LOT more power than any current APU was going to provide - typical systems have multiple nodes, each with between 4 and 16 high end dGPUs, with high speed interconnect between nodes. Trying to accumulate that much computing power with APUs would be neat but expensive, particularly with high speed interconnect added in.

                When you put all those together it became pretty clear that we needed to get dGPU support (including large multi-GPU configurations) and dGPU-style programming models supported, and supported well, before we could expect much success moving developers to the APU-style programming models.

                Now that dGPU support, porting tools and ML / HPC libraries are all pretty much in place and we have dGPU KFD support upstream we should be able to start pulling APU and dGPU support back together rather than having "APU upstream and dGPU out-of-tree" like we had for a while.

                We kept all of the important HSA features in ROCm, although we had to replace unpinned GPU access to system memory on APUs with a combination of pinned memory and process / user queue eviction on current dGPUs. Going forward, we can start moving back to more of an HSA APU-style model by making use of recoverable page faults.
                Last edited by bridgman; 07 July 2018, 02:14 AM.



                • #28
                  Originally posted by Qaridarium
                  But i am sure there is a lot of interest to write special software for a Open-Source GPU with open-source firmware
                  Haven't seen it yet (not zero people but very few) although continuing to monitor and ask about it.

                  It's not that people don't care about avoiding the risk of "big brother" in the microcode; it's more that groups/companies buying large numbers of GPUs take it for granted that their big compute rigs should be firewalled off from the rest of the internet and communicate only through controlled APIs.

                  There is definitely strong interest in open source drivers for the ability to tailor/tweak/optimize, but that doesn't generally extend to open source microcode. Once you get to a corporate level there is also a better understanding that microcode built into the chip is no different from microcode loaded by the driver, so the threshold shifts from "open source microcode" to "fully auditable hardware" anyways.

                  None of that discounts the interest from individuals in more hardware - it just says that larger companies already firewall off their datacenters (other than very specific address+port+protocol combinations allowed through the firewall) and once you do that you worry less about closed source hardware.
                  Last edited by bridgman; 07 July 2018, 11:10 AM.



                  • #29
                    bridgman Is the video coding engine (VCE) enabled through the open source driver or do I have to install the proprietary Radeon driver?



                    • #30
                      Originally posted by menneskelighet View Post
                      bridgman Is the video coding engine (VCE) enabled through the open source driver or do I have to install the proprietary Radeon driver?
                      We use the same code in the open source driver and the proprietary driver - so yes, enabled in both cases.
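For context, VCE is exposed in the open stack through Mesa's VA-API driver, so any VA-API-aware application can use it. A hypothetical ffmpeg invocation (the render node path and filenames are assumptions; adjust for your system) might look like:

```shell
# Check that an encode entrypoint is present (vainfo is in libva-utils);
# look for VAProfileH264* with VAEntrypointEncSlice in the output.
vainfo

# Hardware H.264 encode on VCE through VA-API:
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 5M output.mp4
```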

