It Looks Like AMDGPU DC (DAL) Will Not Be Accepted In The Linux Kernel


  • Gee, and people wonder why Nvidia doesn't open source anything. It's exactly moments like this that make me support closed-source software. I'm not saying AMD should close this source code, and I'm not saying closed-source is better, but if they had approached this in a closed-source manner, they wouldn't be in this situation.

    As an AMD user, I don't blame Dave for his decisions - AMD should have known what they were getting into. Yes, this situation sucks, but they put themselves in it.



    • Originally posted by Sonadow View Post
      In case Bridgman does not see this, anyone else out here able to answer some of my questions? Thanks very much...
      Bridgman sees all, although he doesn't always have time to respond to all of it.



      • Originally posted by bridgman View Post

        The situation before was:

        - initial code pushed for public review
        - lots of issues raised
        - we're working on implementing changes that address the issues raised
        - as we resolve the initially identified issues, it's likely new ones will be identified
        - we don't know how long it will take before we can get upstream but are continuing to work on it

        After this dramatic event, the situation is:

        - initial code pushed for public review
        - lots of issues raised
        - we're working on implementing changes that address the issues raised
        - as we resolve the initially identified issues, it's likely new ones will be identified
        - we don't know how long it will take before we can get upstream but are continuing to work on it

        A lot of the confusion here is that DC has two meanings - it's the big chunk of developer effort & code we want to re-use across platforms for all kinds of good reasons, and it is also a very specific abstraction layer inside that code. It's the second one that Dave and Daniel have concerns about, because it internalizes things that they feel should be part of the driver rather than part of what is, to them, effectively a blob (despite being publicly developed open source). We understand that.

        Again, this was primarily a miscommunication, so other than making for some good reading it doesn't mean much.
        Thanks, but I think you misunderstood me.

        What I meant was: without the DC abstraction layer, what will the state of hardware support for AMDGPU be? I don't own any GCN hardware right now so it's not like I can even fool around with the driver to see what works and what doesn't. I'm more interested in knowing whether, without the DC layer, AMDGPU will still be capable of providing launch-day (or close to launch-day) compatibility with Linux, assuming the latest kernel is used. Or will it take much longer for AMD to get new hardware to at least activate KMS on Linux?

        And if it does light up, what works and what won't work without the DC layer? Is 1440p over HDMI achievable?
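
        To illustrate the distinction bridgman draws above between "DC the shared code" and "DC the abstraction layer", here is a rough, self-contained C sketch. It is not amdgpu or DC code and every name in it is invented; it only shows the shape of the two approaches: display logic hidden behind a driver-private ops table (a "mid-layer") versus logic expressed through the framework's own callbacks.

        /* Hypothetical, self-contained illustration -- not actual amdgpu/DC code.
         * "framework_*" loosely stands in for the DRM core's atomic helpers;
         * every name below is invented for this sketch. */
        #include <stdio.h>

        /* Callbacks the core framework expects a driver to fill in. */
        struct framework_funcs {
                int  (*check)(void);   /* validate a requested display state */
                void (*commit)(void);  /* program the hardware               */
        };

        /* Style A: a driver-internal abstraction layer ("mid-layer").
         * The shared code keeps its own vtable and state; the framework
         * callbacks are thin trampolines into an opaque core. */
        struct hal_ops {
                int  (*validate)(void);
                void (*program)(void);
        };
        static int  hal_validate(void) { puts("HAL validates internally"); return 0; }
        static void hal_program(void)  { puts("HAL programs the hardware internally"); }
        static struct hal_ops hal = { hal_validate, hal_program };

        static int  a_check(void)  { return hal.validate(); }  /* just forwards */
        static void a_commit(void) { hal.program(); }          /* just forwards */

        /* Style B: the logic lives in the framework callbacks themselves.
         * Shared helpers can still be called, but control flow and state
         * handling belong to the framework, not to a private layer. */
        static int  b_check(void)  { puts("driver validates using framework state"); return 0; }
        static void b_commit(void) { puts("driver programs hw from the framework callback"); }

        static void framework_modeset(const struct framework_funcs *f)
        {
                if (f->check() == 0)
                        f->commit();
        }

        int main(void)
        {
                struct framework_funcs mid_layer_style = { a_check, a_commit };
                struct framework_funcs upstream_style  = { b_check, b_commit };

                framework_modeset(&mid_layer_style);  /* the shape reviewers pushed back on */
                framework_modeset(&upstream_style);   /* the shape upstream asked for */
                return 0;
        }

        In the real discussion the framework is DRM's atomic modesetting infrastructure; the objection was to the first shape, not to sharing display code as such.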



        • Originally posted by Sonadow View Post
          Thanks, but I think you misunderstood me.

          What I meant was: without the DC abstraction layer, what will the state of hardware support for AMDGPU be? I don't own any GCN hardware right now so it's not like I can even fool around with the driver to see what works and what doesn't. I'm more interested in knowing whether, without the DC layer, AMDGPU will still be capable of providing launch-day (or close to launch-day) compatibility with Linux, assuming the latest kernel is used. Or will it take much longer for AMD to get new hardware to at least activate KMS on Linux?

          And if it does light up, what works and what won't work without the DC layer? Is 1440p over HDMI achievable?
          I didn't misunderstand you at all. We will still be sharing code; it's just that the current DC abstraction layer will probably need to change, at least for Linux (ideally for all platforms), which implies a change in the code sharing model.

          We will also still be lighting up hardware with "DC the code" whether or not it is upstream at the moment.



          • Originally posted by bridgman
            So "open source is hard, particularly if you ignore the benefits, so we shouldn't do it" ?
            Not really what I meant. I'm just highlighting the fact that people always complain about Nvidia not open-sourcing its drivers, and moments like this are exactly why. I'd like things to be open source where possible, but I'm not inclined to complain about something being closed if it grants a better user experience.



            • Right, but generally the better user experience comes from being open source and integrated into upstream.



              • Originally posted by bridgman View Post

                We will also still be lighting up hardware with "DC the code" whether or not it is upstream at the moment.
                That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libDRM + Mesa stack? What should I expect to lose?



                • Originally posted by krelian View Post
                  @bridgman Late to the party, but have you guys ever considered building some DSLs and abstracting the tricky bits with source translation at compile time, rather than a runtime abstraction layer? Maybe with a PEG/packrat parser and a good templating engine -- something as simple as awk can work wonders? That way you would still be reasonably confident that things will work on Linux, while both making the upstream maintainers happy and reducing the burden of conformance/validation testing?

                  I'm a long-time game programmer with a technical background; I've worked on lots of different engines over the years, and it's been my observation that most everyone has been doing abstraction layers wrong, including myself. Extra layers of fine-grained abstractions end up being a nightmare to maintain and can end up causing performance issues.

                  Anyway, I've recently had quite a bit of success doing our physically based shading and material pipeline with a custom DSL that generates backends for HLSL, GLSL, PSSL and MSL. We're also currently using the same idea to build and generate backends for our particle rendering system, allowing us to abstract rendering from configurations where geometry is generated and draw calls are batched on the CPU (for OpenGL ES 2.0 profiles) all the way up to ones where everything is done on the GPU.

                  In both cases we're generating orders of magnitude more code (and a lot of it is high-performance stuff and very readable; code generation doesn't have to be a mess) while only having to maintain the stuff written in the DSL plus the DSL front-end and code generation templates. We're literally generating several hundred thousand LoC for all the various combinations and different platforms from a base of around 8000 LoC.

                  Got the ideas a few years back from a talk by Alan Kay entitled "Programming and Scaling". Look it up.
                  This sounds very cool.
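
                  A minimal sketch of the compile-time source-translation idea described above, with everything in it invented for illustration (the tiny "DSL" table, the backend templates, and all names come from no real pipeline): a single declaration table is expanded into per-backend source when the generator runs as a build step, so only the table and the templates need maintaining.

                  /* Hypothetical sketch of compile-time source translation; the "DSL"
                   * and the backend templates below are invented for illustration. */
                  #include <stdio.h>

                  /* The "DSL": one declaration of the shader parameters wanted on every backend. */
                  struct dsl_param { const char *type; const char *name; };
                  static const struct dsl_param params[] = {
                          { "vec4",  "base_color" },
                          { "float", "roughness"  },
                  };
                  enum { NPARAMS = sizeof(params) / sizeof(params[0]) };

                  /* Per-backend templates: how each target spells the same concept. */
                  struct backend { const char *lang; const char *vec4; const char *decl_fmt; };
                  static const struct backend backends[] = {
                          { "GLSL", "vec4",   "uniform %s %s;\n" },
                          { "HLSL", "float4", "%s %s;\n" },  /* would sit inside a cbuffer in real use */
                  };
                  enum { NBACKENDS = sizeof(backends) / sizeof(backends[0]) };

                  int main(void)
                  {
                          /* Run as a build step: emit one generated block per backend
                           * from the single DSL declaration table. */
                          for (int b = 0; b < NBACKENDS; b++) {
                                  printf("// --- generated %s ---\n", backends[b].lang);
                                  for (int p = 0; p < NPARAMS; p++) {
                                          const char *type = params[p].type;
                                          if (type[0] == 'v')  /* map the abstract vec4 to the backend's type */
                                                  type = backends[b].vec4;
                                          printf(backends[b].decl_fmt, type, params[p].name);
                                  }
                          }
                          return 0;
                  }

                  A real generator would use a proper parser and templating engine as suggested (PEG/packrat, or even awk), but the division of labour is the same: one source of truth, many generated backends.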



                  • bridgman Thank you for the breakdown and clarification. I actually read all 14 pages. My panic didn't actually fade until your explanation on page 13 that basically said "nothing has changed and this is all a miscommunication". I can admit my disappointment as well, but I'm still planning on going Zen Summit Ridge and Vega HBM2. I'll be honest, looking for the end of the tunnel for support can get depressing. Still rocking Sabayon/Gentoo, but I have no AMDGPU-PRO driver since it ships as .deb or .rpm only. I myself had considered going Nvidia right before my R9 Nano, and I have been an AMD puritan since the AMD K6 days and an ATI puritan since the Rage 128. AMDGPU was what gave me hope to continue and purchase an R9 Nano. Since this is all going the way of open source eventually, is there a way we can get a non-packaged version we could download to compile and install on our vanilla systems?



                    • Originally posted by Sonadow View Post
                      That implies the use of AMDGPU-PRO, doesn't it? What if I do not want to use AMDGPU-PRO, but instead stay with a vanilla kernel + libDRM + Mesa stack? What should I expect to lose?
                      Maybe we are interpreting "lighting up" differently. To me that means "we just got first silicon back from the fab, months before launch, and need to start bringing up the driver and testing".

                      What does it mean to you?

