The AMD Radeon Graphics Driver Makes Up Roughly 10.5% Of The Linux Kernel


  • #11
    Originally posted by tpiepho View Post
    ACPI for the BIOS. Using a GPU-specific function. A bunch of stuff. Do they really need all that? Probably not. I figured the code that talked to the hardware was probably the best bet and looked for it. And found a function for Volcanic Islands cards that does it. Lots of unsafe pointer casts. There's another entire copy of the function for Southern Islands. Exactly the same code. SOC15 copy. Identical. And so on.

    It looks like for each new family, they copy the entire driver from the previous family and make a few changes.

    I didn't do an extensive survey. Maybe I just saw the identical bits. But it looks like the driver is really about five nearly identical drivers that all get loaded and then one gets used, because that's easier than refactoring your design to share common code, if you don't care about bloat or the cost of supporting five copies of your code. That cost isn't much if you don't care about anything but the latest version.
    Part of the SOC15 idea (rebuilding the chips around IP blocks connected by data fabric) was to reduce the duplication you saw. For the last few generations of chips, IIRC the only duplication you should see is pre-SOC15 at the top level, and between "similar but too different to share code" IP block revisions.
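
    For illustration, one plausible way to structure that (a hypothetical sketch; the names are invented, not the actual amdgpu structures): each IP block revision supplies a set of function pointers, and a common core just walks the list, so a new chip mostly means a new table rather than another copied driver.

    /* Hypothetical per-IP-block callback table; names are illustrative only,
     * not the real amdgpu definitions. */
    struct ip_block_funcs {
            int (*hw_init)(void *dev);
            int (*hw_fini)(void *dev);
    };

    struct ip_block {
            const char *name;
            const struct ip_block_funcs *funcs;   /* shared across chip families */
    };

    /* Common core: one loop instead of several copied drivers. */
    static int soc_hw_init(void *dev, const struct ip_block *blocks, int count)
    {
            int i, r;

            for (i = 0; i < count; i++) {
                    r = blocks[i].funcs->hw_init(dev);
                    if (r)
                            return r;   /* teardown of already-initialized blocks omitted */
            }
            return 0;
    }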

    Originally posted by SilverFox
    Could this be considered bloat?
    Given that 5/6 of the lines are register headers that do not produce code, I would argue "no".

    The driver code itself is less than 2% of the kernel.

    Originally posted by rene View Post
    The obvious result of importing corporate, Windows-derived source bases? Display Engine, AtomBIOS, etc.?
    Display code is shared with Windows and other OSes but was written for Linux first and then shared. AtomBIOS is just the VBIOS, so OS independent. Powerplay shares data tables with other OSes but the code is all Linux-specific.
    Last edited by bridgman; 11 October 2020, 06:21 PM.


    • #12
      Originally posted by rene View Post

      . yes .
      I'm gonna have to argue against you on that one, Rene: it may be code bloat, but it isn't binary bloat.

      It isn't maintenance bloat either, as it has the effect of ensuring that the header definitions are machine-generated and almost certainly exactly match the hardware... which is an insanely valuable thing to be able to count on. And as bridgman pointed out, the AMD GPU driver typically compiles out to around 2% of the kernel binary. You of all people should appreciate having accurate hardware definitions, considering your scanner/printer reverse-engineering work and how painstaking that can be (and even then your REed work may not be 100% accurate and may require fixing later on).
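
      As a rough illustration of why those generated headers are essentially free (register names invented here, not actual AMD ones): such a file is nothing but offset and bit-field defines, which add zero bytes to the binary until real code references them.

      /* Hypothetical auto-generated register header (invented names). */
      #define mmEXAMPLE_CLK_CNTL               0x0123
      #define EXAMPLE_CLK_CNTL__ENABLE_MASK    0x00000001
      #define EXAMPLE_CLK_CNTL__ENABLE__SHIFT  0x0
      #define EXAMPLE_CLK_CNTL__DIV_MASK       0x000000f0
      #define EXAMPLE_CLK_CNTL__DIV__SHIFT     0x4

      /* Only code that actually uses the defines ends up in the binary: */
      static unsigned int example_set_div(unsigned int reg, unsigned int div)
      {
              reg &= ~EXAMPLE_CLK_CNTL__DIV_MASK;
              reg |= (div << EXAMPLE_CLK_CNTL__DIV__SHIFT) & EXAMPLE_CLK_CNTL__DIV_MASK;
              return reg;
      }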



      • #13
        Originally posted by bridgman View Post
        Given that 5/6 of the lines are register headers that do not produce code, I would argue "no".
        The driver code itself is less than 2% of the kernel.
        Can you give some examples of how much bigger or smaller it is compared to the old closed-source "AMD Catalyst (previously fglrx: FireGL and Radeon for X)" driver?

        I think the closed-source driver is bloat and the open-source driver is very small compared to it.


        • #14
          The printer/scanner and fglrx drivers are outside the Linux kernel. Separating things makes them more maintainable ('It's really hard to find maintainers...' Linus Torvalds ponders the future of Linux).



          • #15
            Originally posted by Qaridarium View Post
            Can you give some examples of how much bigger or smaller it is compared to the old closed-source "AMD Catalyst (previously fglrx: FireGL and Radeon for X)" driver?

            I think the closed-source driver is bloat and the open-source driver is very small compared to it.
            I think you are right. I don't remember fglrx numbers offhand (but can probably find the old repo next week) but from what I do remember I'm pretty sure that the components we pulled in from Windows on their own were quite a bit larger than all of amdgpu code.

            That said, the fglrx kernel driver might have been smaller than the amdgpu kernel driver because one of the largest driver components (display) was in userspace rather than kernel.


            • #16
              Isn't there some way of separating these things into modules or something rather than having it all crammed in there?



              • #17
                Originally posted by bridgman View Post
                I think you are right. I don't remember fglrx numbers offhand (but can probably find the old repo next week) but from what I do remember I'm pretty sure that the components we pulled in from Windows on their own were quite a bit larger than all of amdgpu code.

                That said, the fglrx kernel driver might have been smaller than the amdgpu kernel driver because one of the largest driver components (display) was in userspace rather than kernel.
                If I remember correctly, you told us in the past that FGLRX was 55 million lines of code, meaning bigger than the complete Linux kernel with all its drivers.

                I bring this up because people who are new here think AMDGPU is a big driver...

                No, it's not. It replaces a complete monster full of bloat, the FGLRX driver.

                The complete Linux kernel: "As of today in Linux 5.9 Git, the kernel is about 20.49 million lines of code."

                And the FGLRX closed-source driver was 55 million lines of code.


                • #18
                  Parkinson's Law



                  • #19
                    While a lot of people like to throw shit at NVidia for the whole binary blob issue, if you step back and think about it, you can make an argument that the whole issue is mainly the result of a technical decision rather than a political/licensing one.

                    An argument can be made either way about whether graphics should sit in the kernel or in userland (the latter moves you slowly towards a hybrid/microkernel approach). This is purely a technical decision, with pros and cons either way, and it's appropriate to point out that over time the Windows graphics driver has actually moved to userland for stability reasons (i.e. in Windows XP, if your graphics card crashed it brought the whole system down, whereas in Vista+ Windows can recover from such cases).

                    What I am getting at here is that if Linux/Linus made a decision that graphics drivers should be in userland, there wouldn't even be all of these arguments and wasted time about "NVidia is a blob"; there are plenty of userland applications with non-GPL2 licenses and no one really complains. The reason why I am bringing this up is that:
                    1. Graphics drivers are f'in complicated and large (in this case taking up 10.5% of the kernel).
                    2. Graphics drivers have many more software features and aren't really just "pure graphics drivers" anymore. E.g. for DLSS (Deep Learning Super Sampling), the neural network models are stored in the driver (which for obvious competitive-advantage reasons NVidia doesn't want to make open source). You also have things like NVidia Voice (which uses AI cores on the graphics card to remove background noise, and it's better than any other solution I have seen). NVidia drivers also have a ginormous table of hotfixes for games that are programmed incorrectly, in order to get them to run properly (because game development sucks).
                    In conclusion, I think the whole premise of Linux forcing all drivers to be part of the kernel may run into issues when we start dealing with non-trivial hardware, in this case GPUs. GPUs are now ridiculously complicated, change very often, and the "software" part of the stack is becoming both larger and more important (and this is stuff that companies should never be forced to GPL). Having the GPU driver be part of the kernel also adds an extra barrier when it comes to updating drivers, because now all changes have to be approved by the Linux kernel team (vs. the kernel just having an interface and the driver being userspace, which would allow companies to update the driver whenever they want).

                    This is also the main reason why Google is creating a new kernel, called Zircon, to replace Linux on phones. One of the main features of Zircon is that drivers sit in userspace, which fixes a big problem that currently exists with Android phones, where it's very difficult to update the Linux version on the phone separately from the drivers.

                    Originally posted by Bobby Bob View Post
                    Isn't there some way of separating these things into modules or something rather than having it all crammed in there?
                    Yes, it would mean putting the drivers into userland and maintaining a kernel interface that speaks with userland graphics drivers. This has the added advantage of making the whole "NVidia is a blob" issue entirely a non-issue.

                    Also, the graphics drivers being so big isn't "bloat". They really are that big and complicated. It's like calling Postgres "bloat"; not everything can be ultra-small and Unixy.
                    Last edited by mdedetrich; 12 October 2020, 03:07 AM.



                    • #20
                      Originally posted by mdedetrich View Post
                      An argument can be made either way about whether graphics should sit in the kernel or in userland (the latter moves you slowly towards a hybrid/microkernel approach). This is purely a technical decision, with pros and cons either way, and it's appropriate to point out that over time the Windows graphics driver has actually moved to userland for stability reasons (i.e. in Windows XP, if your graphics card crashed it brought the whole system down, whereas in Vista+ Windows can recover from such cases).

                      What I am getting at here is that if Linux/Linus made a decision that graphics drivers should be in userland, there wouldn't even be all of these arguments and wasted time about "NVidia is a blob"; there are plenty of userland applications with non-GPL2 licenses and no one really complains. The reason why I am bringing this up is that:
                      1. Graphics drivers are f'in complicated and large (in this case taking up 10.5% of the kernel).
                      2. Graphics drivers have many more software features and aren't really just "pure graphics drivers" anymore. E.g. for DLSS (Deep Learning Super Sampling), the neural network models are stored in the driver (which for obvious competitive-advantage reasons NVidia doesn't want to make open source). You also have things like NVidia Voice (which uses AI cores on the graphics card to remove background noise, and it's better than any other solution I have seen). NVidia drivers also have a ginormous table of hotfixes for games that are programmed incorrectly, in order to get them to run properly (because game development sucks).
                      In conclusion, I think the whole premise of Linux forcing all drivers to be part of the kernel may run into issues when we start dealing with non-trivial hardware, in this case GPUs. GPUs are now ridiculously complicated, change very often, and the "software" part of the stack is becoming both larger and more important (and this is stuff that companies should never be forced to GPL). Having the GPU driver be part of the kernel also adds an extra barrier when it comes to updating drivers, because now all changes have to be approved by the Linux kernel team (vs. the kernel just having an interface and the driver being userspace, which would allow companies to update the driver whenever they want).
                      You are confusing the kernel part, which does all the hardware communication, with the userland part of the driver, which around here we call Mesa. Mesa is where all the complicated parts sit, and it sits in userland. This is no different for the NVidia driver, which brings a kernel part and a userland driver part. The same concept is also true for Windows, btw.
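
                      To make that split concrete, here is a minimal userspace sketch using libdrm (it assumes a render node at /dev/dri/renderD128; adjust the path for your system). The kernel driver only exposes a device node plus ioctls; everything above that, Mesa included, lives in userland and talks to it through exactly this kind of call.

                      /* Minimal sketch: ask the kernel DRM driver who it is, from userspace.
                       * Build with: gcc drm_version.c $(pkg-config --cflags --libs libdrm) */
                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <unistd.h>
                      #include <xf86drm.h>

                      int main(void)
                      {
                              int fd = open("/dev/dri/renderD128", O_RDWR);
                              if (fd < 0) {
                                      perror("open");
                                      return 1;
                              }

                              /* drmGetVersion() wraps the DRM_IOCTL_VERSION ioctl. */
                              drmVersionPtr v = drmGetVersion(fd);
                              if (v) {
                                      printf("kernel driver: %s %d.%d.%d\n", v->name,
                                             v->version_major, v->version_minor,
                                             v->version_patchlevel);
                                      drmFreeVersion(v);
                              }

                              close(fd);
                              return 0;
                      }

                      The only kernel-side surface a userland driver such as Mesa needs is that device node and its ioctl interface.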

                      Originally posted by mdedetrich View Post
                      This is also the main reason why Google is creating a new kernel, called Zircon, to replace Linux on phones. One of the main features of Zircon is that drivers sit in userspace, which fixes a big problem that currently exists with Android phones, where it's very difficult to update the Linux version on the phone separately from the drivers.
                      A microkernel experiment that will fail like all the other microkernel experiments out there. Turns out userland hardware drivers are a really stupid idea, but we have only known that since the '90s.

                      Originally posted by mdedetrich View Post
                      Yes, it would mean putting the drivers into userland and maintaining a kernel interface that speaks with userland graphics drivers. This has the added advantage of making the whole "NVidia is a blob" issue entirely a non-issue.
                      Besides the fact that this is already the case for all Mesa-compatible drivers that sit in the kernel: they all share an interface connecting to this userland driver framework.
