
AMD's Modern Graphics Driver In Linux 5.14 Exceeds 3.3 Million Lines Of Code


  • #11
    Slartifartblast


    If you don't have an AMD GPU then of course it will not be compiled in, unless you build the kernel yourself and decide to build a monolithic image with support for a lot of drivers you don't use, instead of building them as modules.

    Comment


    • #12
      Originally posted by microcode View Post

      I don't get why you have to write this like it's some unequivocal bad thing. The Linux driver policy is good,
      You are the one framing this as globally "good" or globally "bad"; I didn't use those words. I am simply pointing out that this is a disadvantage of a monolithic kernel, so unless things change, people should get used to it (for better or for worse).

      Originally posted by microcode View Post
      "stable" ABIs are not only a pain to maintain,
      So Linux people keep parroting this like gospel, but there hasn't been any convincing argument for it. Right now, Linux developers have to maintain entire drivers, whereas with a stable ABI you are basically maintaining an API. There are of course different sets of challenges (i.e. designing a good API requires a lot of upfront effort and wisdom), but in the long term it actually reduces maintenance. You eventually get to a point where you just periodically test drivers against your API to make sure you don't have regressions and that new features work.

      Originally posted by microcode View Post
      but a direct source of bloat,
      Are you seriously saying that an API is going to be as bloated as the entire AMD graphics driver?

      Originally posted by microcode View Post
      and they are not useful if you are willing to build all the drivers from source anyhow.
      Not sure what you are implying here.

      Originally posted by microcode View Post
      Not to mention, if not for Linux, AMD and Intel most likely would never have published and maintained open source drivers to begin with.
      This is actually an orthogonal concern; there is no reason why the drivers cannot be open source under another model. Furthermore, if this is your metric for success, it's done a terrible job, because NVIDIA's main driver is still closed source (and let's be honest, this is not going to change anytime soon). AMD did release their driver as open source, but it was more of a "we ditched our entire old driver stack because it was total garbage, and since we are rebuilding things anyway, why not make it open source".

      Originally posted by microcode View Post
      Then on top of all of this, there are all of the secondary benefits. For example, because of the way the Linux driver ecosystem works, you can bring up a new CPU architecture and have working graphics, wireless, ethernet, and USB drivers with little or no porting effort, and without having to run drivers in a DBT or something lol.
      Uhh, this is not entirely correct. You may have a point for entirely different architectures (i.e. x86 vs ARM vs RISC-V), but if you are talking about newer CPUs then you are massively overblowing this advantage. Furthermore, closed-source drivers do take advantage of newer CPU architectures; it's not like this doesn't happen.

      Originally posted by microcode View Post
      Having millions of lines of header files in the kernel is a sign that things are going well, on the scale of things where you should "be careful what you wish for", the consequences of this wish are almost all positive, and the negative consequences are almost all minor. Just having these headers sitting around in the kernel tree costs almost nothing, and is useful.
      Your assessment of minor or major is completely subjective and entirely depends on context. For servers/mainframes, yes, it's a good model. But for a gaming machine (especially for AAA games), or arguably even a good desktop experience, it leaves a lot to be desired.

      As I have mentioned elsewhere, I do not see how Linux can capture the Windows gaming/desktop market when userspace games have no stability guarantees about the quality or features of the graphics driver, since it's tied to the Linux kernel version, and as you know people can be running anything from the latest kernel to one that is 10 years old. Game developers complain about how fragmented Windows is because of all of its hardware configurations; well, Linux is 10x worse, because you have to deal with software configurations that userspace programs cannot do anything about (and no, forcing users to change distributions or to patch their kernels is not a solution here).

      Comment


      • #13
        Why isn't the kernel portion rather simple stuff like setting up queues and shared memory, and a few state/synchronization methods?
        The rest should be handled in a userspace driver, pushing down command streams.
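A minimal sketch of that split, with hypothetical names (none of this is any real driver's uAPI): userspace writes the command stream straight into a shared ring buffer, and the only thing that crosses into the kernel is a tiny submit request, which the kernel merely bounds-checks before queueing.

```c
#include <stdint.h>

/* Hypothetical, simplified submit interface -- illustrative only. */
#define RING_SIZE 4096u

struct gpu_ring {
    uint8_t  buf[RING_SIZE];   /* shared memory: commands live here  */
    uint32_t tail;             /* last byte queued for the device    */
};

struct submit_req {
    uint32_t offset;           /* start of the command stream in buf */
    uint32_t size;             /* length of the stream in bytes      */
};

/* Kernel-side stand-in: validate that the stream fits inside the
 * ring, then "schedule" it by advancing the tail pointer. */
int kernel_submit(struct gpu_ring *ring, const struct submit_req *req)
{
    if (req->offset > RING_SIZE || req->size > RING_SIZE - req->offset)
        return -1;             /* reject out-of-bounds command streams */
    ring->tail = req->offset + req->size;
    return 0;
}
```

The point of the sketch is that the bulk data (the commands themselves) never passes through the kernel; only the small descriptor does.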

        Comment


        • #14
          Originally posted by discordian View Post
          Why isn't the kernel portion rather simple stuff like setting up queues and shared memory, and a few state/synchronization methods?
          The rest should be handled in a userspace driver, pushing down command streams.
          What you are describing is actually a microkernel, and they do exist, although typically either in embedded or high-assurance situations (power plants, spaceships, airplanes, etc.).

          Linux is adamantly/stubbornly (or however you want to put it) a monolithic kernel; it basically expects everything apart from userspace programs to sit in the kernel.

          One issue with the typical microkernel design is that it does have performance problems: nothing is really stopping you from running a graphics driver in userspace, but you lose a lot of performance due to context switching. There are microkernels out there like seL4 which avoid this context switching with some unique ideas (i.e. memory spaces), but that's the general gist of it. Fun fact: Windows 98 actually had the GPU driver sitting in userspace, but Microsoft redesigned it in Vista with WDDM, where there is a low-level stable ABI and drivers basically run in ring 0 and interface with the kernel directly (for performance reasons).

          Comment


          • #15
            Originally posted by mdedetrich View Post

            What you are describing is actually a microkernel, and they do exist, although typically either in embedded or high-assurance situations (power plants, spaceships, airplanes, etc.).

            Linux is adamantly/stubbornly (or however you want to put it) a monolithic kernel; it basically expects everything apart from userspace programs to sit in the kernel.

            One issue with the typical microkernel design is that it does have performance problems: nothing is really stopping you from running a graphics driver in userspace, but you lose a lot of performance due to context switching. There are microkernels out there like seL4 which avoid this context switching with some unique ideas (i.e. memory spaces), but that's the general gist of it. Fun fact: Windows 98 actually had the GPU driver sitting in userspace, but Microsoft redesigned it in Vista with WDDM, where there is a low-level stable ABI and drivers basically run in ring 0 and interface with the kernel directly (for performance reasons).
            I was specifically talking about GPU drivers, which end up processing command streams. Other than verifying the streams, the kernel shouldn't do much with them.
            Greatly simplified, but unlike other drivers, these are pointed directly at userspace.

            Several GPU drivers (e.g. Mali) already do this AFAIK: you have a small display driver handling the interaction with devices (HDMI, audio, serial buses) and setting up display modes. Creating the command streams is all done via a userspace module.

            Comment


            • #16
              Originally posted by skeevy420 View Post
              Why is this becoming an issue with AMD GPUs now? How come Intel GPUs don't require all the extra code and headers that AMD's seem to need?
              Nobody said it was an issue.

              Comment


              • #17
                Originally posted by marek View Post

                Nobody said it was an issue.
                The existence of this article implies otherwise; it's a concern, at least.

                Comment


                • #18
                  mdedetrich

                  The existence of this article implies otherwise
                  How so? Please explain. The article does not say anything about it being an issue.

                  Comment


                  • #19
                    Originally posted by mdedetrich View Post

                    Integrated GPUs (within a CPU) tend to be a lot simpler/smaller in scope compared to actual discrete GPUs.

                    In any case, as I have stated elsewhere, this is going to be the new reality, so get used to it. If Linux devs are adamant about wanting EVERY driver to be in the tree and not providing a lower-level stable ABI (so that drivers can be independent of the kernel), then yeah, this is the result.

                    It's not unfathomable that at some point the majority of the code in the kernel is just going to be graphics drivers. I guess be careful what you wish for.
                    It has nothing to do with that. AMD's GPU drivers include massive amounts of machine-generated code that is based on the hardware design code; AMD then writes its drivers against these definitions. Intel, on the other hand, doesn't have such big machine-generated code drops.

                    Linux allows these gigantic code drops in because the maintenance burden is essentially zero, as the code is machine generated.
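For readers wondering what that machine-generated code looks like: it is mostly register offsets plus per-bit-field shift/mask constants, regenerated for each hardware generation. The names below are invented in that style (illustrative only, not copied from any real header):

```c
#include <stdint.h>

/* Illustrative, made-up register definitions in the style of
 * machine-generated headers: one offset per register, plus a shift
 * and mask per bit-field.  Real headers contain tens of thousands
 * of lines like these per hardware generation. */
#define mmEXAMPLE_CB_COLOR0_BASE                 0x0318
#define mmEXAMPLE_CB_COLOR0_INFO                 0x031c

#define EXAMPLE_CB_COLOR0_INFO__FORMAT__SHIFT    0x2
#define EXAMPLE_CB_COLOR0_INFO__FORMAT_MASK      0x0000007CuL

/* The comparatively small hand-written driver code then programs
 * individual fields through these constants: */
uint32_t example_set_format(uint32_t reg, uint32_t fmt)
{
    reg &= ~EXAMPLE_CB_COLOR0_INFO__FORMAT_MASK;
    reg |= (fmt << EXAMPLE_CB_COLOR0_INFO__FORMAT__SHIFT) &
           EXAMPLE_CB_COLOR0_INFO__FORMAT_MASK;
    return reg;
}
```

That split is why the line count balloons while the maintenance burden stays low: the generated constants are regenerated rather than edited by hand.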

                    Comment


                    • #20
                      Originally posted by discordian View Post
                      Why isn't the kernel portion rather simple stuff like setting up queues and shared memory, and a few state/synchronization methods?
                      The rest should be handled in a userspace driver, pushing down command streams.
                      Depends what you mean by simple. The user-mode drivers set up the command buffers and send them to the kernel for the various engines on the GPU (GFX, compute, video, transfer, etc.). The kernel driver handles scheduling of those command buffers, memory management, interrupts, power management, basic engine initialization, display configuration and validation, engine resets, providing telemetry information, etc. If you want to support all of that at the bandwidth, power, and performance edges, the code gets complex fast.

                      Comment
