The AMD Radeon Graphics Driver Makes Up Roughly 10.5% Of The Linux Kernel


  • #31
    Meanwhile, NVIDIA's graphics driver makes up roughly 0.0% of the Linux kernel.

    Bloat is not nice and should be cleaned up... but having NO supported open-source driver at all is even worse.

    p.s.: don't use that argument, otherwise NVIDIA's marketing department will use it to praise their efforts at keeping the kernel clean....

    Comment


    • #32
      Nvidia train mad because AMD train so successful.

      Comment


      • #33
        Originally posted by mdedetrich View Post

        Which is the same reason why the NVidia driver has to patch the kernel in order for the blob to work http://download.nvidia.com/XFree86/L...alldriver.html ?
        Oh, I already guessed that you have no damn clue what you are talking about. DKMS does not patch the kernel; it only builds a .ko, a kernel module object compatible with the kernel you are using, because, you know, Linux isn't a micro/hybrid kernel. The Linux kernel has no blob loader, so Nvidia has to supply its own shim to load the blob.
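        For reference, building a .ko out of tree is nothing special; all it needs is the headers of the running kernel. A trivial sketch (file name and module name are made up):

        Code:
        /* hello_mod.c - minimal out-of-tree module; builds into hello_mod.ko
         * against the headers of the running kernel, no kernel patching involved.
         * Build with:  make -C /lib/modules/$(uname -r)/build M=$PWD modules
         * plus a one-line Kbuild file:  obj-m += hello_mod.o */
        #include <linux/module.h>
        #include <linux/init.h>

        static int __init hello_init(void)
        {
            pr_info("hello_mod: loaded\n");
            return 0;
        }

        static void __exit hello_exit(void)
        {
            pr_info("hello_mod: unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);
        MODULE_LICENSE("GPL");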


        Originally posted by mdedetrich View Post
        On top of this (and at least when talking about GBM on Linux), due to its design NVidia is unable to use it with their blob; otherwise they would suffer a severe performance hit due to the design of their driver (this is why they were pushing EGLStreams: it's a very generic asynchronous interface that doesn't make many assumptions about how the driver is meant to behave).
        EGLStream is better suited for applications, not desktops, and Nvidia's claim that GBM would be bad for their performance is not really true, but it is fine for them to support what they want to support. It does show, though, that you blindly repeat a company's claims about a buffer manager you have no clue about.

        Originally posted by mdedetrich View Post
        I think you are being disingenuous when describing how the Linux stack works.
        I guess the main difference is that I know how the stack works, something you clearly lack. You don't even know what a kernel module object is.


        Originally posted by mdedetrich View Post
        I have had ~10 devices in the past, half of which are notebooks, all running dual boot Windows/Linux (I use Windows for gaming/CUDA and other reasons), and I haven't had a single kernel panic since the Vista days. When Vista came out graphics drivers were unstable and caused crashes, but often this just brought you back to the desktop.

        This is taking into account graphics cards that were released with severe problems (e.g. the NVidia 3x series and Vega, which draw ridiculous power and can crash the GPU when starved of it). The only exception I can think of is a cheap/under-provisioned PSU that can't handle both the GPU and the rest of the system's power draw, in which case yes, your entire system would crash because it's not getting enough power (and there isn't much you can do here).
        I was a Microsoft fanboy for years too, but even I was able to think critically and not lie to myself.

        Originally posted by mdedetrich View Post
        Curiously what laptop do you have?
        A high-end Dell business machine with a 7th Gen Core CPU.


        Originally posted by mdedetrich View Post
        Yes, I mentioned this before: Windows used to be mainly a monolithic kernel and turned into a hybrid kernel over time (the graphics stack being an area that has moved to more of a microkernel approach).
        NT never was monolithic.

        Comment


        • #34
          Surely, since the source code is open, intelligent and resourceful people will be able to look at it and make suggestions to reduce its size and increase its efficiency. At the moment this does not reflect poorly on AMDGPU in my view.

          Comment


          • #35
            Originally posted by Alexmitter View Post

            Oh, I already guessed that you have no damn clue what you are talking about. DKMS does not patch the kernel; it only builds a .ko, a kernel module object compatible with the kernel you are using, because, you know, Linux isn't a micro/hybrid kernel. The Linux kernel has no blob loader, so Nvidia has to supply its own shim to load the blob.
            I wasn't talking about DKMS but about the patching of the kernel interface/headers which is done; it's why the installer needs kernel sources. But thanks for putting words into my mouth.

            Originally posted by Alexmitter View Post
            EGLStream is better suited for applications, not desktops, and Nvidia's claim that GBM would be bad for their performance is not really true, but it is fine for them to support what they want to support. It does show, though, that you blindly repeat a company's claims about a buffer manager you have no clue about.
            Also wrong. EGLStream is basically the same as what Android uses for its window manager (which is not X11 or Wayland). There is a reason why NVidia suggested it: it's basically the same as what is already being used in production right now on other OSes.

            GBM (which is what the open source drivers are expected to use) requires buffer frames to be submitted synchronously, which works fine if your graphics driver is designed to expect this (i.e. AMD's/Intel's). The NVidia driver does not work this way (if you don't know, the NVidia driver is mainly a cross-platform blob with an interface for every OS, so Linux is the ugly duckling/exception when it comes to interfacing with the blob; every other OS can use it fine). NVidia has a very high bar for performance in the drivers they officially distribute, so they are not going to use an interface that is going to gimp their driver's performance (in this case GBM on Linux).
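            To make that concrete, here is a rough sketch of the GBM side (device path, size and format below are placeholders): the compositor allocates concrete buffer objects itself and hands their handles straight to KMS, which is the explicit hand-off I am describing.

            Code:
            /* gbm_sketch.c - illustrative only; build with: gcc gbm_sketch.c -lgbm */
            #include <fcntl.h>
            #include <stdio.h>
            #include <unistd.h>
            #include <gbm.h>

            int main(void)
            {
                int fd = open("/dev/dri/card0", O_RDWR);   /* assumed DRM node */
                if (fd < 0) { perror("open"); return 1; }

                struct gbm_device *gbm = gbm_create_device(fd);
                if (!gbm) { fprintf(stderr, "gbm_create_device failed\n"); return 1; }

                /* The compositor allocates a concrete buffer object up front... */
                struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                                  GBM_FORMAT_XRGB8888,
                                                  GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
                if (!bo) { fprintf(stderr, "gbm_bo_create failed\n"); return 1; }

                /* ...and passes its handle/stride straight to KMS (drmModeAddFB2,
                 * drmModePageFlip) once rendering into it has finished, i.e. the
                 * buffer is handed over explicitly rather than hidden behind a
                 * stream abstraction. */
                printf("bo handle %u, stride %u\n",
                       gbm_bo_get_handle(bo).u32, gbm_bo_get_stride(bo));

                gbm_bo_destroy(bo);
                gbm_device_destroy(gbm);
                close(fd);
                return 0;
            }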

            In general the Linux graphics stack is the one that's outdated, not Android's or Windows'.

            Originally posted by Alexmitter View Post
            I guess the main difference is that I know how the stack works, something you clearly lack. You don't even know what a kernel module object is.
            Considering you have issues reading what I am writing, I doubt it.

            Originally posted by Alexmitter View Post
            I was a Microsoft fanboy for years too, but even I was able to think critically and not lie to myself.
            I am not a Microsoft fanboy (otherwise I wouldn't be using Linux 90% of the time), but there are areas where Microsoft is way ahead of Linux in how it does things, and the graphics stack is one of them. In other areas (a lot more of them, actually) Linux is ahead of Windows.

            I am not even talking about general stability here, but basic functionality. The state of triple buffering is a joke that took something like half a decade to fix properly, and don't even get me started on multi-monitor setups (with things like different DPI/refresh rates on separate screens). Also have a look at the AMD FreeSync limitations at https://www.amd.com/en/support/kb/faq/gpu-754 which don't exist on Windows.

            Originally posted by Alexmitter View Post
            A high-end Dell business machine with a 7th Gen Core CPU.
            I was more interested in the GPU (or the integrated graphics, if you are using Intel's). If Windows is crashing for reasons other than the GPU and/or the graphics part of the CPU, then your point is irrelevant.

            Originally posted by Alexmitter View Post
            NT never was monolithic.
            NT was also a newer kernel, used from Windows 2000 onwards; it's not the first kernel used in Windows. Not sure what your point here is; the first version of Windows (or even DOS) was not NT.
            Last edited by mdedetrich; 12 October 2020, 09:30 AM.

            Comment


            • #36
              Originally posted by mdedetrich View Post

              I wasn't talking about DKMS but about the patching of the kernel interface/headers which is done; it's why the installer needs kernel sources. But thanks for putting words into my mouth.
              There is no patching going on; the installer needs the kernel headers, not the kernel sources, so that it can build the nVidia shim.

              Comment


              • #37
                Originally posted by F.Ultra View Post

                There is no patching going on; the installer needs the kernel headers, not the kernel sources, so that it can build the nVidia shim.
                Quoting directly from http://download.nvidia.com/XFree86/L...alldriver.html (emphasis mine)

                The NVIDIA kernel module has a kernel interface layer that must be compiled specifically for each kernel. NVIDIA distributes the source code to this kernel interface layer.

                When the installer is run, it will check your system for the required kernel sources and compile the kernel interface. You must have the source code for your kernel installed for compilation to work. On most systems, this means that you will need to locate and install the correct kernel-source, kernel-headers, or kernel-devel package; on some distributions, no additional packages are required.

                After the correct kernel interface has been compiled, the kernel interface will be linked with the closed-source portion of the NVIDIA kernel module. This requires that you have a linker installed on your system. The linker, usually /usr/bin/ld, is part of the binutils package. You must have a linker installed prior to installing the NVIDIA driver.
                So either NVidia's documentation for how their own installation works is unclear/outdated or something else is going on.
                Last edited by mdedetrich; 12 October 2020, 11:11 AM.

                Comment


                • #38
                  Originally posted by mdedetrich View Post

                  Quoting directly from http://download.nvidia.com/XFree86/L...alldriver.html (emphasis mine)



                  So either NVidia's documentation for how their own installation works is unclear/outdated or something else is going on.
                  You seem to be misinterpreting this.

                  Nvidia needs the kernel headers in order to compile its code for whatever kernel is on the system. That's all that documentation is saying. It doesn't do any modification to the kernel itself, it just builds its own driver against that kernel's API.
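                  Roughly, the split looks like this; just a sketch, and the nv_core_* names are made up for illustration, not NVIDIA's actual symbols. The small open interface file is recompiled per kernel and then linked with the precompiled object that holds the real driver logic:

                  Code:
                  /* shim.c - illustrative sketch of a "kernel interface layer": rebuilt
                   * for every kernel from the installed headers, then linked (ld -r)
                   * with a precompiled object. nv_core_* are hypothetical names. */
                  #include <linux/module.h>

                  extern int  nv_core_init(void);   /* provided by the precompiled object */
                  extern void nv_core_exit(void);

                  static int __init shim_init(void)
                  {
                      /* Only this file needs the kernel headers; the blob itself never does. */
                      return nv_core_init();
                  }

                  static void __exit shim_exit(void)
                  {
                      nv_core_exit();
                  }

                  module_init(shim_init);
                  module_exit(shim_exit);
                  MODULE_LICENSE("Proprietary");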

                  Comment


                  • #39
                    Originally posted by smitty3268 View Post

                    You seem to be misinterpreting this.

                    Nvidia needs the kernel headers in order to compile its code for whatever kernel is on the system. That's all that documentation is saying. It doesn't do any modification to the kernel itself, it just builds its own driver against that kernel's API.
                    I understand that you need the kernel headers to compile NVidia's own driver against the Linux kernel, but the documentation also clearly says you need the kernel sources (this is the part that is confusing).

                    To me the definition of kernel sources is the actual source code of the kernel, so why/what is this needed for?

                    Comment


                    • #40
                      Originally posted by bridgman View Post
                      Given that 5/6 of the lines are register headers that do not produce code, I would argue "no".

                      The driver code itself is less than 2% of the kernel.
                      But without those register headers, the code is still 75% larger than the Intel drm driver, which presumably also includes a considerable number of register defines in its 209 klines. Certainly one could argue the amdgpu driver does more than the i915 driver and should be larger.

                      If I look at my currently running system, the amdgpu kernel module is 5,861,376 bytes. The next largest modules are kvm at 823,296, drm at 626,688, and sunrpc at 565,248 (when did rpc get so huge???). The total size of all modules is 12,898,304 bytes. So the amdgpu driver is the largest single module by over a factor of seven and accounts for 45% of the total size of all loaded modules! And yes, I know this doesn't account for compiled-in drivers, but basically anything that can be a module is a module in the Fedora distro kernel.
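                      (For anyone who wants to redo this accounting, the numbers come straight out of /proc/modules, which is the same data lsmod prints; a quick sketch:)

                      Code:
                      /* modsize.c - sum module sizes from /proc/modules, report the largest. */
                      #include <stdio.h>
                      #include <string.h>

                      int main(void)
                      {
                          FILE *f = fopen("/proc/modules", "r");
                          if (!f) { perror("/proc/modules"); return 1; }

                          char line[512], name[64], biggest[64] = "";
                          unsigned long size, total = 0, max = 0;

                          while (fgets(line, sizeof line, f)) {
                              /* each line starts with: <name> <size> ... */
                              if (sscanf(line, "%63s %lu", name, &size) != 2)
                                  continue;
                              total += size;
                              if (size > max) { max = size; strcpy(biggest, name); }
                          }
                          fclose(f);

                          printf("largest: %s, %lu of %lu bytes (%.0f%%)\n",
                                 biggest, max, total, total ? 100.0 * max / total : 0.0);
                          return 0;
                      }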

                      Certainly seems possible that amdgpu is bloated.

                      Originally posted by Bobby Bob
                      Isn't there some way of separating these things into modules or something rather than having it all crammed in there?
                      Part of the reason the module is huge is that it is one module for all architectures. It could be designed for better modularity, so that one could load only the VI support module and not what is probably a nearly identically sized module for SI support, another for CIK, and so on.

                      We designed a system for the dvb drivers that allowed for greater modularity. The hardware usually had a main capture chip, from just a couple of choices, with e.g. a PCI interface like bt849 or cx88, and then one or more RF demodulators attached to it, from dozens of choices. A different card would use a different combination of capture chip and demod. We came up with a demod interface that all the demod drivers could use, so the same demod driver could work with bt849 or cx88. And instead of building every possible demod driver into the cx88 driver, we came up with a dynamic attachment system so that only the demod driver(s) actually in use needed to be loaded into the kernel.
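                      Roughly, the pattern was a shared ops interface plus on-demand symbol resolution; a simplified sketch of the idea (identifiers here are illustrative, not the exact dvb API):

                      Code:
                      /* Sketch: the bridge driver (cx88 etc.) only pulls in the demod module
                       * that is actually on the board, via symbol_request(), instead of
                       * linking every possible demod driver into itself. */
                      #include <linux/module.h>

                      /* One interface that every demodulator driver implements. */
                      struct demod_ops {
                          const char *name;
                          int (*init)(void *priv);
                          int (*set_frequency)(void *priv, unsigned int hz);
                      };

                      /* Each demod module exports an attach function returning its ops; the
                       * bridge resolves it at runtime, which loads the module on demand. */
                      #define demod_attach(attach_fn, args...) ({                       \
                          typeof(&attach_fn) __fn = symbol_request(attach_fn);          \
                          struct demod_ops *__ops = __fn ? __fn(args) : NULL;           \
                          if (!__ops && __fn)                                           \
                              symbol_put(attach_fn);                                    \
                          __ops;                                                        \
                      })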

                      But this would just address the runtime bloat of loading drivers for every AMD video card made in the last decade into RAM.



                      Last edited by tpiepho; 12 October 2020, 03:46 PM.

                      Comment
