AMD GPU Linux Driver Becoming So "Really Really Big" That It's Starting To Cause Problems


  • stargeizer
    replied
    Hmmm...

    The system where this is reported is an A10-9620P, which is an Excavator part, so it's quite slow, but certainly not *THAT* slow to produce this on its own. Also, there's a GCN 3rd-gen part in it, so it's also *NOT* that slow, even for today's usage...

    From the original dmesg log...
    Code:
    [ 1.559363] integrity: Loaded X.509 cert 'Hewlett-Packard Company: HP UEFI Secure Boot 2013 DB key: 1d7cf2c2b92673f69c8ee1ec7063967ab9b62bec'
    Well... that died quite fast.

    HP BIOSes/UEFIs of the era (2015-2019) were known to be quite buggy and problematic, and "PROBLEMATIC" goes in all caps with luminous lights on the letters, cartoon style, and that is still an understatement. Maybe this is one of those problems that requires bisecting the kernel to find out where somebody did something that annoyed these gremlins.

    It's probably not something that's easy to debug without having the gremlin (or a similar gremlin with a buggy BIOS and a not-so-slow processor), but it's also not something worth making the main page of Phoronix over... YMMV IMHO.
    Last edited by stargeizer; 15 September 2024, 12:12 PM.



  • skeevy420
    replied
    Originally posted by Ironmask View Post
    While I prefer microkernel designs as well, this has nothing to do with kernel architecture. Linux GPU drivers run in usermode, and this specific problem is because it's taking a long time to load the GPU driver *after* the kernel is loaded and initialized, and the boot screen is waiting for the GPU driver to load.
    I think the microkernel suggestions are working on the assumption that each GPU family would have its own driver instead of there being one driver to rule them all, which could have kept sizes down enough to prevent an issue like this from happening in the first place. It'd be nice if AMD could do something similar to NVIDIA, where the kernel has enough capability to detect the GPU or GPUs installed and then load up vega.ko or kaveri.ko or whatever individual drivers are necessary. It's not like I need a driver that can run an R7 260X or a Radeon VII when I have a 6700 XT and a Zen 4 iGPU installed.
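
    To illustrate that idea (a purely hypothetical sketch, not how amdgpu is actually organized; the module name and PCI ID are made up for illustration), a per-family module would only need to export the PCI IDs it handles, and udev/modprobe would then autoload just the module matching the installed GPU:
    Code:
    // hypothetical "vega.ko": only knows about its own family's PCI IDs
    #include <linux/module.h>
    #include <linux/pci.h>

    /* Example ID for illustration only */
    static const struct pci_device_id vega_pci_ids[] = {
        { PCI_DEVICE(0x1002, 0x687f) },  /* e.g. a Vega 10 board */
        { }                              /* sentinel */
    };
    /* Emits the modalias data that lets userspace pick this module automatically */
    MODULE_DEVICE_TABLE(pci, vega_pci_ids);

    static int vega_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        dev_info(&pdev->dev, "hypothetical per-family Vega driver bound\n");
        return 0;
    }

    static void vega_remove(struct pci_dev *pdev)
    {
    }

    static struct pci_driver vega_driver = {
        .name     = "vega",
        .id_table = vega_pci_ids,
        .probe    = vega_probe,
        .remove   = vega_remove,
    };
    module_pci_driver(vega_driver);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Illustrative per-GPU-family driver split");
    The point being that only the .ko matching the hardware actually present would ever get loaded and initialized.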



  • user556
    replied
    Originally posted by intelfx View Post

    I'd rather wonder how on Earth can header files be responsible for the final object size, much less its initialization time.

    Michael doesn't understand systems programming, and apparently y'all don't either.
    Michael just said that the bulk of the six million lines, which he'd linked, is from the auto-generated headers. But those headers could very well be holding a large amount of config data for the wide swath of GPUs supported, which then all ends up in the compiled binary.
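
    To make that concrete (a made-up, minimal example; these names are not from the real amdgpu headers), a header that is "just" auto-generated tables still turns into real bytes in the final object once a .c file includes it and references the data:
    Code:
    #include <stdio.h>

    /* Pretend this struct and table came from an auto-generated header,
     * e.g. a per-ASIC register/"golden settings" table (invented here). */
    struct reg_init { unsigned int offset; unsigned int value; };

    static const struct reg_init fake_golden_settings[] = {
        { 0x0000, 0xdeadbeef },
        { 0x0004, 0xcafef00d },
        /* ...a real driver would carry thousands of entries per ASIC... */
    };

    int main(void)
    {
        /* Because the table is referenced, every entry ends up in the
         * .rodata of the compiled binary, "header-only" or not. */
        size_t n = sizeof(fake_golden_settings) / sizeof(fake_golden_settings[0]);
        printf("entries: %zu, bytes: %zu\n", n, sizeof(fake_golden_settings));
        return 0;
    }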



  • Ironmask
    replied
    This is why I like NVIDIA's design of putting all the code in the GPU itself, with the driver being pretty much a thin wrapper over it. Yes, I know that was just done as a way to bypass licensing stuff, but it's a genuinely novel idea.

    Originally posted by avis View Post
    Monolithic kernel continues to bear fruit.
    While I prefer microkernel designs as well, this has nothing to do with kernel architecture. Linux GPU drivers run in usermode, and this specific problem is because it's taking a long time to load the GPU driver *after* the kernel is loaded and initialized, and the boot screen is waiting for the GPU driver to load.



  • mmstick
    replied
    This is why we need to migrate to a microkernel.



  • Quackdoc
    replied
    Sounds like it might be time to split it up indeed...

    And I know JUST the language to do it in; it would make quite a bit of ad revenue if they did lol



  • abiswas
    replied
    I would suggest switching to a UKI splash, which is what I am using. It's static, but it provides a clean boot logo; in my case it's the "Arch Linux" logo.
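
    For reference, a UKI with a static splash can be put together roughly like this (a sketch assuming systemd's ukify tool is available; kernel, initrd, and splash paths, and the exact option names, depend on your distro and systemd version):
    Code:
    # rough sketch; adjust paths to your system
    ukify build \
        --linux=/boot/vmlinuz-linux \
        --initrd=/boot/initramfs-linux.img \
        --cmdline="root=/dev/nvme0n1p2 rw quiet" \
        --splash=/usr/share/systemd/bootctl/splash-arch.bmp \
        --output=/efi/EFI/Linux/arch.efi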



  • oleid
    replied
    Originally posted by Danny3 View Post
    Is that auto-generated stuff really needed?
    Can't it be auto-generated on the fly, only for the GPUs present in the computer?
    I think AMD should put some work into this!
    Then everybody would need to compile their own kernel (that's not really feasible these days), or amdgpu would need to be converted to DKMS.
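
    For context, converting it to DKMS would basically mean shipping the driver source with a dkms.conf so the module gets rebuilt against each installed kernel. A minimal sketch (package name and paths invented for illustration, not an actual AMD package) might look like:
    Code:
    PACKAGE_NAME="amdgpu-dkms-example"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="amdgpu"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/gpu/drm/amd/amdgpu/"
    MAKE[0]="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
    CLEAN="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
    AUTOINSTALL="yes"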



  • mrg666
    replied
    Originally posted by Danny3 View Post
    Is that auto-generated stuff really needed?
    Can't it be auto-generated on the fly, only for the GPUs present in the computer?
    I think AMD should put some work into this!
    Those headers are only used when the kernel is built; they are never read again during boot.



  • mrg666
    replied
    Originally posted by petteyg View Post
    Obvious solution: separate drivers for each model. Stop cooking the entire damn thing into one ginormous blob to be loaded.
    The amdgpu kernel module is a separate file from the kernel itself, and it is not loaded unless hardware that needs it is detected.

    Actually, there is no problem with the AMD driver that I can notice during boot. There is no delay I can notice compared to the Intel and Nouveau drivers.

