
It Looks Like AMDGPU DC (DAL) Will Not Be Accepted In The Linux Kernel


  • Well this is definitely disappointing news to hear...

    Don't get me wrong, I can understand both parties here. Linux kernel maintainers obviously don't want to deal with code containing abstraction layers they consider unnecessary; they prefer code with as few abstraction layers as possible, which is cleaner and less susceptible to errors and bugs. In the end, their main goal is to ensure that all code going into the kernel is built specifically for Linux and not re-used from other operating systems. AMD, on the other hand, doesn't have anything close to infinite development resources, especially for an operating system with a fairly marginal market share for the purposes of this driver, and would benefit greatly from being able to re-use code from the driver they created for the OS that holds something like 90+% of the market in question.

    The talk about a lack of communication unfortunately continues AMD's recent trend of not being able to communicate very well with parties outside of their own organization. However, the sudden flat-out rejection of DAL/DC like this is a sign that this really isn't a problem specific to AMD. I'd say it's even something of a dick move on the maintainers' part, considering how much time and effort AMD has spent on this and how they've previously been told by the maintainers to continue improving the code on the promise that it would be merged once it's good enough. I'd go ahead and call it a "bait 'n switch" if I thought it was deliberate rather than just the result of bad communication and/or indecisiveness on the part of kernel maintainers.

    As I mentioned, I can understand David Airlie's position on this subject, but knowing that Nvidia does something fairly similar inside its binary blob drivers, I'm having a hard time understanding why he'd take such a harsh attitude to this. The only real difference between AMDGPU with DAL/DC and Nvidia's in-house efforts is that Nvidia ships its drivers as a binary blob rather than as an open source kernel driver. He talks about breaking things for older cards, but I can't recall Nvidia having any issues like this in its recent and semi-recent history, so I can't really take his horror-scenario reasoning fully seriously.

    Something that genuinely worries me now is what's going to happen after this. If I worked in management at AMD, this would make me have some serious doubts as to the point of continuing to invest as much into the further development of in-house Linux drivers. For me the whole thing would boil down to two options if the maintainers can't be convinced to mainline DAL/DC for newer hardware, with full mainlining progressing as the DAL/DC code improves. Either give the idea of open source drivers the proverbial finger, go the Nvidia route and just have binary blobs, or drop further development resources to the point where there's barely more than maintenance and rely on community efforts as much as possible.

    AMD was making such progress, and I really hoped that when DAL/DC was finally merged we'd see them re-focus a lot of their attention on improvements rather than just features, and at some point finally reach the same level as Nvidia, except with open rather than closed source drivers. Because of that I really hope the kernel maintainers can be talked into a gradual mainlining of DC/DAL as it matures, rather than the flat-out rejection we're seeing right now.
    Last edited by L_A_G; 12-09-2016, 09:33 AM.


    • Dave is such a massive two-faced, hypocritical wanker.

      He's one of the main people who worked real hard and applied all sorts of bullshit tricks to kill the proper open source driver for ATI. He's been very busy making sure that the ATI way of things got implemented. And now he is trying to act like the big hero with RADV2 and now this, because guess what, ATI is doing things the ATI way.

      Why is this guy given any credit whatsoever?


      • @bridgman Late to the party, but have you guys ever considered building some DSLs and abstracting the tricky bits with source translation at compile time, rather than a runtime abstraction layer? Maybe with a PEG/packrat parser and a good templating engine -- something as simple as awk can work wonders. That way you would still be reasonably confident that things will work on Linux, while both making the upstream maintainers happy and reducing the burden of conformance/validation testing.

        I'm a long-time game programmer with a technical background. I've worked on lots of different engines over the years, and it's been my observation that almost everyone has been doing abstraction layers wrong, including myself. Extra layers of fine-grained abstractions end up being a nightmare to maintain and can end up causing performance issues.

        Anyway, I've recently had quite a bit of success building our physically based shading and material pipeline with a custom DSL that generates backends for HLSL, GLSL, PSSL and MSL. I'm also currently using the same idea to build and generate backends for our particle rendering system, allowing us to abstract rendering from where geometry is generated and draw calls are batched on the CPU for OpenGL ES 2.0 profiles, all the way up to where everything is done on the GPU.

        In both cases we're generating orders of magnitude more code (and a lot of it is high-performance stuff and very readable; code generation doesn't have to be a mess) while only having to maintain the stuff written in the DSL, plus the DSL front-end and code generation templates. We're literally generating several hundred thousand LoC for all the various combinations and different platforms from a base of around 8000 LoC.

        Got the ideas a few years back from a talk by Alan Kay entitled "Programming and Scaling". Look it up.
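        To make the idea above concrete, here is a minimal sketch of template-driven shader code generation, not the poster's actual pipeline: the DSL declaration, type map and `generate` function are all hypothetical, but they show how one backend-neutral source can be expanded into per-backend code so only the declaration and templates need maintaining.

```python
# Minimal sketch of the template-driven code-generation idea: a single
# "DSL" function declaration is expanded into per-backend shader source
# (GLSL and HLSL here). Names and structure are illustrative only.

# Hypothetical DSL: a function is a name, typed parameters, and a body
# written once in backend-neutral form.
DSL = {
    "name": "lambert",
    "params": [("vec3", "n"), ("vec3", "l"), ("vec3", "albedo")],
    "body": "return albedo * max(dot(n, l), 0.0);",
}

# Per-backend spelling of the shared type names.
TYPE_MAP = {
    "glsl": {"vec3": "vec3"},
    "hlsl": {"vec3": "float3"},
}

def generate(backend, func):
    """Expand one DSL function into source code for the given backend."""
    types = TYPE_MAP[backend]
    params = ", ".join(f"{types[t]} {name}" for t, name in func["params"])
    ret = types["vec3"]  # return type, mapped per backend
    return f"{ret} {func['name']}({params}) {{ {func['body']} }}"

glsl = generate("glsl", DSL)
hlsl = generate("hlsl", DSL)
print(glsl)
print(hlsl)
```

        A real pipeline would of course parse a proper grammar (e.g. with a PEG parser) instead of hand-built dicts, but the maintenance win is the same: one source of truth, many generated backends.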


        • This constant war between blind Linux idealists and GPU driver guys affects users, gamers and game developers the worst, and no one cares. I just want my goddamn $400+ GPU to work out of the box like it does on Windows and play my games at the full FPS / potential of the graphics card, with audio/video/VR working on my Linux desktop!! Screw the Red Hat, Canonical, AMD, Nvidia and idealist Linux kernel developers' political BS. I've paid good money to buy a card that should work for what it was intended to do... And no, I don't care whether the source is open or blobs. I need the performance I have paid for.


          • Originally posted by bridgman View Post

            Actually Dave was the first person I called to ask for advice when I volunteered to set up the open source graphics initiative back in 2007, and you are probably running quite a bit of his code on your system today.

            Dave had already accepted an offer from Red Hat by the time we first talked.
            I know the subject is important to you, bridgman, but you are the only one of us still replying seriously to debianxfce, as his posts are, on the whole... no comment


            • Bridgman:

              Can you tell us what this will mean moving forward for upcoming and future AMD hardware? Now that upstream has made clear their intent to never mainline AMDGPU DC, what kind of loss in functionality can the average user expect? Will it mean that newer and future hardware will take a much longer time to just light up on Linux?

              For example, when Zen is released I intend to assemble a Zen-powered computer and a 1440p monitor with a mid-range GPU (probably also an AMD) for live streaming some browser games over Twitch and Facebook using OBS (Intel is getting a tad expensive over here). What can I expect to not work? Will my card be able to light up the display and start KMS, with at least basic OpenGL acceleration for the DE on launch day? Will I be able to get native resolution over HDMI and DP?

              Will future releases take much longer to at least be capable of lighting up the display, activating KMS and attaining OpenGL acceleration to at least drive the desktop GUI? And of the above, how many of these minimum must-work features can we expect to see supported on Linux on the hardware's launch day?

              I would rather not use Windows, since I have reached a point where I am much more comfortable on Linux than on Windows, and not needing to buy a copy of Windows is money saved for other uses. But by Jove, I will sink that money into a copy of Windows if I have to in order to use my new hardware at a baseline level of functionality.
              Last edited by Sonadow; 12-09-2016, 10:47 AM.


              • Originally posted by Linuxhippy View Post
                I guess nobody seriously expects anybody other than AMD to put serious effort and maintenance into this codebase (development of the open-source radeonsi driver is also almost exclusively done by paid AMD OSS developers).
                I'm afraid that's not how it works.
                If the devs need to rework some of the common DRM code, that is used by DC, they will have to also touch DC.
                If the DC code there is overly complicated, it makes their job harder.

                I really don't understand the hate on Dave's goal: to have a maintainable codebase, which to users translates as faster and less error prone development of our beloved GPU drivers. Nothing prevents anyone from using a pre-built kernel that includes DC or building it themselves (it's open source after all...).


                • Originally posted by Sonadow View Post
                  Now that upstream has made clear their intent to never mainline AMDGPU DC.
                  I don't believe upstream stated that. The HAL should not be mainlined, but not DC itself.


                  • Originally posted by Ansla View Post
                    My Acer XF270HU works fine here over DP with either RX 480 or the onboard GPU of Kaveri. Except for audio or freesync, of course. If it doesn't work over HDMI it probably has to do with missing HDMI 2.0 support.
                    Ofc, HDMI 1.4 can't handle wqhd@144, I'm using DP1.2.

                    Well, some users (including myself) have reported this problem at least with Hawaii and also Tonga, iirc.

                    Interesting that it works for you with Kaveri, whose DCE is a few iterations older than Hawaii's and Tonga's. Could you tell me which exact kernel version you are using? Are you using radeon or amdgpu for Kaveri?


                    • This feels bad. For a second I thought: buy a Mac.