It Looks Like AMDGPU DC (DAL) Will Not Be Accepted In The Linux Kernel


  • Originally posted by bridgman
    So "open source is hard, particularly if you ignore the benefits, so we shouldn't do it" ?
    Not really what I meant. I'm just highlighting the fact that people always complain about Nvidia not open-sourcing drivers, and moments like this are why. I'd like things to be open source where possible, but I'm not inclined to complain about something being closed if it grants a better user experience.

    Comment


    • Right, but generally the better user experience comes from being open source and integrated into upstream.



      • Originally posted by bridgman View Post

        We will also still be lighting up hardware with "DC the code" whether or not it is upstream at the moment.
        That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libDRM + Mesa stack? What should I expect to lose?



        • Originally posted by krelian View Post
          @bridgman Late to the party, but have you guys ever considered building some DSLs and abstracting the tricky bits with source translation at compile time, rather than a runtime abstraction layer? Maybe with a PEG/packrat parser and a good templating engine -- something as simple as awk can work wonders? That way you would still be reasonably confident that things will work on Linux, while both making the upstream maintainers happy and reducing the burden of conformance/validation testing?

          I'm a long-time game programmer with a technical background, I've worked on lots of different engines over the years, and it's been my observation that almost everyone has been doing abstraction layers wrong, including myself. Extra layers of fine-grained abstractions end up being a nightmare to maintain and can end up causing performance issues.

          Anyway, I've recently had quite a bit of success doing our physically based shading and material pipeline with a custom DSL that generates backends for HLSL, GLSL, PSSL and MSL. Also currently using the same idea to build and generate backends for our particle rendering system, allowing us to abstract rendering from where geometry is generated and draw calls are batched on the CPU for OpenGL ES 2.0 profiles, all the way up to where everything is done on the GPU.

          In both cases we're generating orders of magnitude more code (and a lot of it is high-performance stuff and very readable; code generation doesn't have to be a mess) while only having to maintain the stuff written in the DSL plus the DSL front-end and code generation templates. Literally generating several hundred thousand lines of code for all the various combinations and different platforms from a base of around 8000 LoC.

          Got the ideas a few years back from a talk by Alan Kay entitled "Programming and Scaling". Look it up.
          This sounds very cool.
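To make the source-translation idea krelian describes more concrete, here is a minimal sketch of template-based shader code generation: one DSL-level function description expanded into per-backend source. All names here are hypothetical, and a production DSL would use a real parser (e.g. PEG/packrat) and AST rewriting rather than dict literals and string substitution.

```python
# Minimal sketch of DSL -> multi-backend code generation (hypothetical
# names throughout). One declarative description of a function is
# expanded through per-backend templates; only the surface syntax and
# intrinsic names differ between targets.

# "DSL" input: a declarative description of one function.
blend_fn = {
    "name": "blend",
    "params": [("float", "a"), ("float", "b"), ("float", "t")],
    "return": "float",
    "body": "mix(a, b, t)",
}

# Per-backend templates for emitting a function definition.
TEMPLATES = {
    "glsl": "{ret} {name}({params}) {{ return {body}; }}",
    "hlsl": "{ret} {name}({params}) {{ return {body}; }}",
}

# Backend-specific spellings of DSL intrinsics (GLSL mix == HLSL lerp).
INTRINSICS = {
    "glsl": {"mix": "mix"},
    "hlsl": {"mix": "lerp"},
}

def generate(fn, backend):
    """Expand one DSL function description into backend shader source.

    Note: a real implementation would rewrite an AST; plain string
    replacement is only safe here because the example is tiny.
    """
    body = fn["body"]
    for dsl_name, native in INTRINSICS[backend].items():
        body = body.replace(dsl_name, native)
    params = ", ".join(f"{t} {n}" for t, n in fn["params"])
    return TEMPLATES[backend].format(
        ret=fn["return"], name=fn["name"], params=params, body=body
    )

print(generate(blend_fn, "hlsl"))
# float blend(float a, float b, float t) { return lerp(a, b, t); }
```

The leverage krelian points at comes from the asymmetry: adding a new target (PSSL, MSL, ...) means adding one template and one intrinsic table, while the maintained source of truth stays the small DSL description.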



          • bridgman Thank you for the breakdown and clarification. I actually read all 14 pages. My panic didn't actually fade until your explanation on page 13 that basically "nothing has changed and this is all a miscommunication". I can admit my disappointment as well, but I'm still planning on going Zen Summit Ridge and Vega HBM2. I'll be honest, looking for the end of the tunnel for support can get depressing. Still rocking Sabayon/Gentoo, but have no AMDGPU-PRO driver due to it being .deb or .rpm only. I myself had considered going Nvidia right before my R9 Nano, and I have been an AMD puritan since the AMD K6 days and an ATI puritan since the Rage 128. AMDGPU was what gave me hope to continue and purchase an R9 Nano. Since this is all going the way of open source eventually, is there a way we can get a non-packaged version we could download to compile and install on our vanilla systems?



            • Originally posted by Sonadow View Post
              That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libDRM + Mesa stack? What should I expect to lose?
              Maybe we are interpreting "lighting up" differently. To me that means "we just got first silicon back from the fab, months before launch, and need to start bringing up the driver and testing".

              What does it mean to you?



              • Originally posted by Darksurf View Post
                Still rocking Sabayon/Gentoo, but have no AMDGPU-PRO driver due to .deb or .rpm only. I myself had considered going Nvidia right before my R9 Nano and I have been an AMD puritan since AMD K6 Days and an ATI puritan since the Rage 128. AMDGPU was what gave me hope to continue and purchase an R9 Nano. Since this is all going the way of opensource eventually, is there a way we can get a non packaged version we could download to compile and install on our vanilla systems?
                We are working on pushing trees for all the open source bits out to the public, in between other things like new HW and new releases... getting pretty close AFAIK.



                • Originally posted by bridgman View Post

                  Maybe we are interpreting "lighting up" differently.
                  That may be entirely possible.

                  Originally posted by bridgman View Post
                  To me that means "we just got first silicon back from the fab, months before launch, and need to start bringing up the driver and testing".

                  What does it mean to you?
                  Obviously the general populace will not have access to the hardware until launch day, so my definition of "lighting up" is:
                  1) Works on the latest shipping version of the kernel (not git!) on launch day or shortly after launch, with KMS
                  2) Can output at up to 1440p over HDMI
                  3) Can hook onto the modesetting DDX driver
                  4) Can talk with the latest libdrm and Mesa to hardware-accelerate a typical DE (like Gnome or Plasma)

                  #1, #2 and #3 are about getting a usable display. #4 is about actually using the hardware, and not falling back to software and llvmpipe to get the desktop drawn onscreen. I consider these four aspects the bare minimum of expectations for any driver; if all four requirements are met, I deem the hardware successfully lit up.

                  Anything else, like HDMI audio, FreeSync, etc., is a bonus and icing on the cake that I can afford to wait longer for or live without for an extended period of time.
                  Last edited by Sonadow; 12-09-2016, 01:50 PM.



                  • Ahh, OK, so we are talking about completely different things then. I have to head out for a while (trying to be on vacation) but will go through the posts and reply differently when I get back.



                    • Originally posted by Sonadow View Post

                      That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay on with a vanilla kernel + libDRM + Mesa stack? What will i expect to lose?
                      IMHO the best and most succinct question in this thread. I'm curious about it too.


                      And just for the record: Could people please read and re-read all of @bridgman's posts before posting?


                      THREAD SUMMARY:

                      - On behalf of the Linux kernel DRM subsystem maintainers, Dave Airlie notes that they have prior experience with merging a HAL, which in turn makes them very skeptical about ever doing it again.
                      -- So even in light of the DRM maintainers' policy of accepting good-enough code and then chipping away at it, merging the DC-the-HAL code in its present state would be a BIG mistake, since it is all but guaranteed to cause major headaches down the road.

                      ermo's personal observation #1: The above seems to be a clearly engineering-driven decision which accurately reflects the experience/culture/philosophy/policy of the Linux kernel dev model, given its available DRM subsystem dev resources.

                      - On behalf of AMD, bridgman notes that AMD respects this and that the developers assigned to DC-the-project will continue to work on refactoring the cross-platform DC-the-HAL code to bring it more in line with what the Linux kernel DRM devs will accept.
                      -- In the meantime, and purely for business reasons, AMD will have to keep bringing up yet-to-be-introduced new hardware on the evolving DC-the-HAL code, because this code is shared across Linux (AMDGPU-PRO) and Windows (Radeon Software Crimson), and because it makes little business sense to delay introducing new hardware support for the 99% of its customers that run Windows due to the very specific technical requirements of the platform (Linux) that represents the remaining 1% of its customers.

                      ermo's personal observation #2: bridgman considers the reaction to the article a bit of a storm in a teacup when in fact the ongoing RFC process is pretty much business as usual.


                      bridgman and airlied: I hope the above is a sufficiently accurate representation of your positions? Feel free to correct/expand on this post as warranted.

