2021 Could Be The Year That AMD Radeon Graphics Can Hot Unplug Gracefully On Linux


  • #11
    "hot unplugging becoming more common for cases like external Thunderbolt-connected GPUs"
    The news sounds good in any case. I wonder, though, whether this might also help with possible BACO (Bus Active, Chip Off) issues, even though BACO isn't a complete removal of the card/chip from the PCIe slot. Or hybrid systems with extra GPUs in addition to an APU.
    Stop TCPA, stupid software patents and corrupt politicians!



    • #12
      Originally posted by doomie View Post
      y'know this maybe isn't the primary use-case, but for desktop Linux it does seem like it's time someone came up with a way to reattach to a running X or Wayland or whatever people won't hate me for mentioning. Windows actually recovers beautifully from GPU resets, which... might be one of the few things Windows does beautifully, but that just makes it more sad
      GPU resets are handled basically the same way on Windows and Linux. Both fairly badly, in fact, with the same set of problems, like misaligned colour maps.

      It's not the GPU reset where the difference is. Under Windows you can restart DWM.exe, the Windows compositor, and have everything hook back up like nothing happened. That has not really been possible with Linux; I will explain why.

      With Wayland, the argument with Nvidia has been over the lack of support for restarting the compositor without needing to restart applications. With the open source drivers, restarting a Wayland compositor without restarting the applications has been possible since 2013. This is why Nvidia is finally implementing DMA-BUF support in its closed source driver. There is only so much you can do when a major party like Nvidia is serving up defective drivers that cripple what is possible.

      Nvidia's versions of EGLStreams, right up to the current day, are designed so that when your compositor restarts, all the buffers the compositor created and handed out to applications become void. To be able to restart the compositor and keep the applications alive, the applications must have their own per-application buffers; this is where the GBM/DMA-BUF combination of the open source Linux drivers comes in, and on Windows your graphics buffers are also per process/application. Historic X11 did not have this split, but modern X11 could. There has been a fun problem: the x.org version of X11 has been maintaining interface functions that only the closed source Nvidia driver uses, and that are only needed if your driver cannot hand out per-application buffers.

      Yes, the Linux world has had xpra for ages, but GPU support for it was badly lacking until 2013. xpra also got better in 2013, once you had open source drivers on your GPU.

      doomie, remember you cannot build a decent house without solid foundations. Nvidia has been giving the Linux desktop crappy foundations to build on, then claiming to be doing great good for Linux by providing drivers.
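
      To give a feel for what per-application buffers mean in practice, here is a minimal sketch using GBM and DMA-BUF. The render node path and buffer size are just examples and error handling is trimmed; link with -lgbm. Because the application, not the compositor, owns the exported fd, a compositor restart does not void the buffer, which is exactly what EGLStreams cannot offer.

      /* Minimal sketch: allocate a per-application buffer with GBM and
         export it as a DMA-BUF fd. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>
      #include <gbm.h>

      int main(void)
      {
          /* Render node: no DRM master needed, any client can open it. */
          int drm_fd = open("/dev/dri/renderD128", O_RDWR);
          if (drm_fd < 0) { perror("open"); return 1; }

          struct gbm_device *gbm = gbm_create_device(drm_fd);
          if (!gbm) { fprintf(stderr, "gbm_create_device failed\n"); return 1; }

          struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                            GBM_FORMAT_XRGB8888,
                                            GBM_BO_USE_RENDERING);
          if (!bo) { fprintf(stderr, "gbm_bo_create failed\n"); return 1; }

          /* Export the buffer as a DMA-BUF file descriptor. The fd can be
             sent to a (freshly restarted) compositor over a Unix socket;
             the memory stays valid as long as the application holds it. */
          int dmabuf_fd = gbm_bo_get_fd(bo);
          printf("exported DMA-BUF fd: %d\n", dmabuf_fd);

          close(dmabuf_fd);
          gbm_bo_destroy(bo);
          gbm_device_destroy(gbm);
          close(drm_fd);
          return 0;
      }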



      • #13
        An interesting, but heretical, use for such things is single-GPU passthrough. Although it is currently doable, you have to kill everything that uses the GPU (including the desktop environment and the display manager) before "detaching" it. Being able to launch a VM with passthrough, do your stuff in the VM (of course the GPU will not be accessible to the host during that time), and seamlessly return to the same session once the VM is turned off looks useful.

        PS. Yes, I want to play some Windows-only games without dual booting or losing my Plasma session.
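
        For reference, the painful "detach" step today looks roughly like this sketch. The PCI address 0000:0a:00.0 is hypothetical, it must run as root, and step 1 only succeeds after everything using the GPU has been killed; the hope in this thread is that graceful hot-unplug support makes that step survivable for the rest of the session.

        /* Sketch: hand an AMD GPU from amdgpu to vfio-pci via sysfs so a
           VM can take it. Hypothetical PCI address; run as root. */
        #include <stdio.h>

        static int sysfs_write(const char *path, const char *val)
        {
            FILE *f = fopen(path, "w");
            if (!f) { perror(path); return -1; }
            fputs(val, f);
            return fclose(f); /* the write really lands on close */
        }

        int main(void)
        {
            const char *dev = "0000:0a:00.0"; /* hypothetical GPU address */

            /* 1. Unbind from amdgpu; fails while the desktop still holds the GPU. */
            sysfs_write("/sys/bus/pci/devices/0000:0a:00.0/driver/unbind", dev);

            /* 2. Prefer vfio-pci for this device on the next probe. */
            sysfs_write("/sys/bus/pci/devices/0000:0a:00.0/driver_override",
                        "vfio-pci");

            /* 3. Re-probe so vfio-pci binds; QEMU can now pass it through. */
            sysfs_write("/sys/bus/pci/drivers_probe", dev);
            return 0;
        }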



        • #14
          Originally posted by doomie View Post
          y'know this maybe isn't the primary use-case, but for desktop Linux it does seem like it's time someone came up with a way to reattach to a running X or Wayland or whatever people won't hate me for mentioning. Windows actually recovers beautifully from GPU resets, which... might be one of the few things Windows does beautifully, but that just makes it more sad
          Yeah, Windows handles GPU resets and faults pretty well, but its GUI architecture is not that great either. On Windows the kernel is responsible for parts of the interface, while on Linux the interface is a userland process that you can easily restart or disable if you don't need it. Removing the GUI completely from Windows wouldn't be an easy process, and I think it would make Windows less usable, as a lot of applications need the GUI or APIs that are provided by the GUI subsystem. Fun fact: Windows used to handle this a lot better. Windows NT 3.x had the whole interface in user mode, and the kernel could easily work without it. It was moved to kernel space in Windows NT 4.0 to improve performance and reduce requirements, at the expense of stability. It changed again with Windows Vista, when part of the GUI subsystem was moved back to userspace. The new driver architecture (WDDM) also moved a big part of the driver to userspace. Of course we are speaking about "parts", because some things are still handled in kernel space. Vista (and later releases) didn't revert to the NT 3.x architecture. I don't know why, but I can only guess compatibility was the reason.

          I remember some people claimed that Linux should also move drawing into the kernel, probably to make it a pure GUI operating system like Windows. That would be a bad idea, because it would hurt Linux's flexibility. Besides, it's not impossible to build a pure (or almost pure) GUI on the current Linux architecture; Android is a good example. There is also macOS, which likewise handles the GUI in userspace. You can even force it to boot in text mode without any GUI at all. Wayland compositors are not that far away from the Quartz Compositor.



          • #15
            Originally posted by marios View Post
            An interesting, but heretical, use for such things is single-GPU passthrough. Although it is currently doable, you have to kill everything that uses the GPU (including the desktop environment and the display manager) before "detaching" it. Being able to launch a VM with passthrough, do your stuff in the VM (of course the GPU will not be accessible to the host during that time), and seamlessly return to the same session once the VM is turned off looks useful.

            PS. Yes, I want to play some Windows-only games without dual booting or losing my Plasma session.
            This is in fact doable without killing everything, given the right parts: Intel GVT-g, Nvidia vGPU and AMD MxGPU are exactly this. These let you keep your allocated buffers intact, so you don't need to detach the GPU from the host before allowing a VM to connect up via passthrough, as long as the GPU supports this stuff.

            There is a problem if you are talking consumer graphics: the only vendor with it on consumer parts is Intel, and Intel isn't really that great for games at this stage. It's also fun that on hybrid Intel/Nvidia systems it does not work right either, mostly due to Nvidia memory management. If Nvidia fixed their driver, those systems could keep Plasma on the Intel GPU, use GVT-g to pass a slice of the Intel GPU into the Windows VM, and pass the complete Nvidia GPU through as well. Yes, if hot plug worked...

            This is where things just keep on being a problem: hot plug is another thing that for a long time has not worked correctly with consumer card drivers, even under Windows.

            So some of the problem is how hardware vendors have decided to split the consumer and enterprise markets along GPU features.
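
            To sketch why the GVT-g/vGPU/MxGPU approach sidesteps detaching: the host driver stays bound, and you carve out a mediated device (mdev) for the VM through sysfs. The PCI address and type name below are examples and vary by kernel and hardware; run as root.

            /* Sketch: create an Intel GVT-g mediated device. The host i915
               driver keeps the physical GPU; the VM only gets a slice. */
            #include <stdio.h>

            int main(void)
            {
                /* Instance UUID, normally generated with uuidgen. */
                const char *uuid = "a297db4a-f4c2-11e6-90f6-d3b88d6c9525";

                /* Example path: integrated GPU at 0000:00:02.0 offering the
                   GVT-g type i915-GVTg_V5_4 (names differ per machine). */
                FILE *f = fopen("/sys/bus/pci/devices/0000:00:02.0/"
                                "mdev_supported_types/i915-GVTg_V5_4/create", "w");
                if (!f) { perror("mdev create"); return 1; }
                fputs(uuid, f);
                fclose(f);

                /* QEMU then attaches the slice with something like:
                   -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid> */
                return 0;
            }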



            • #16
              Originally posted by dragon321 View Post
              Yeah, Windows handles GPU resets and faults pretty well, but its GUI architecture is not that great either. On Windows the kernel is responsible for parts of the interface, while on Linux the interface is a userland process that you can easily restart or disable if you don't need it. Removing the GUI completely from Windows wouldn't be an easy process, and I think it would make Windows less usable, as a lot of applications need the GUI or APIs that are provided by the GUI subsystem. Fun fact: Windows used to handle this a lot better. Windows NT 3.x had the whole interface in user mode, and the kernel could easily work without it. It was moved to kernel space in Windows NT 4.0 to improve performance and reduce requirements, at the expense of stability. It changed again with Windows Vista, when part of the GUI subsystem was moved back to userspace. The new driver architecture (WDDM) also moved a big part of the driver to userspace. Of course we are speaking about "parts", because some things are still handled in kernel space. Vista (and later releases) didn't revert to the NT 3.x architecture. I don't know why, but I can only guess compatibility was the reason.

              I remember some people claimed that Linux should also move drawing into the kernel, probably to make it a pure GUI operating system like Windows. That would be a bad idea, because it would hurt Linux's flexibility. Besides, it's not impossible to build a pure (or almost pure) GUI on the current Linux architecture; Android is a good example. There is also macOS, which likewise handles the GUI in userspace. You can even force it to boot in text mode without any GUI at all. Wayland compositors are not that far away from the Quartz Compositor.
              The Windows NT 3.x to NT 4.0 changes have a lot in common with the change from UMS (user mode setting) to KMS (kernel mode setting) under Linux and FreeBSD with X11.

              It turns out that for a lot of graphical stuff, user mode is not the best choice. The catch with user mode is that it is entirely possible to start it more than once. Implementing particular parts of the graphics stack in kernel mode makes sense so that a single party is in fact in charge. NT 3.x's original architecture had the same defects as UMS: two things can start at the same time, both take charge of the GPU, and give it conflicting instructions, like asking the screen to be in two completely different modes, possibly repeatedly. One of the fun bugs of Windows NT 3.x was the flicker from hell that happened when two particular things had been started twice. You can have the same thing happen with X11 under Linux using old UMS drivers by running X11 twice on two different TTYs. KMS on Linux makes this problem go away, and some of the changes from NT 3.x to NT 4.0 made this problem go away too.
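
              A rough sketch of that "single party in charge" rule as Linux KMS exposes it through the DRM master concept (needs libdrm, link with -ldrm; acquiring mastership may require root or a free VT):

              /* Sketch: only one process can be DRM master per card, so two
                 display servers can no longer fight over the GPU like under UMS. */
              #include <fcntl.h>
              #include <stdio.h>
              #include <unistd.h>
              #include <xf86drm.h>

              int main(void)
              {
                  int fd = open("/dev/dri/card0", O_RDWR);
                  if (fd < 0) { perror("open"); return 1; }

                  /* Succeeds for the first display server; fails while another
                     process holds mastership of this card. */
                  if (drmSetMaster(fd) == 0) {
                      printf("we are DRM master: modesets are ours alone\n");
                      drmDropMaster(fd); /* hand control back, e.g. on VT switch */
                  } else {
                      perror("drmSetMaster (someone else is in charge)");
                  }

                  close(fd);
                  return 0;
              }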



              • #17
                Originally posted by doomie View Post

                oh interesting...

                also, shame on you for using X; you're obviously afraid of change and improvements and are trying to hold the world back. i'm politically offended.

                ty though.
                Until Wayfire learns to use hot corners the same way Compiz does, Wayland is not an option. I still use X only because of Compiz, as it is the best environment ever created, and Wayland has nothing even close to it so far.



                • #18
                  Originally posted by asriel View Post
                  Until Wayfire learns to use hot corners the same way Compiz does, Wayland is not an option. I still use X only because of Compiz, as it is the best environment ever created, and Wayland has nothing even close to it so far.
                  Time will tell. Compiz could end up superseded under X11 as well, by Wayfire. Once the Nvidia drivers finally get the DMA-BUF stuff, compositors under X11 can even be done differently, since they will not have to support as many of the old legacy X11 server interfaces.

                  This is the hard point going forward: the existing compositors under X11, even those not moving to Wayland, will need core rewrites to improve stability by dropping the obsolete X11 interfaces that have been kept around because Nvidia would not fix their driver.



                  • #19
                    Does it handle the case where you had a display connected to the eGPU?



                    • #20
                      99% of all GPU users don't care about this feature.
                      Linux doesn't have HDR support, even in an early implementation, for AMD or Nvidia. This is sad.

