
2021 Could Be The Year That AMD Radeon Graphics Can Hot Unplug Gracefully On Linux


  • dragon321
    replied
    Originally posted by oiaohm View Post

    The developer who actually made the change from Windows NT 3.x to Windows NT 4.0 wrote that it started with the mode setting problem. The UMS-to-KMS transition and the NT 3.x-to-NT 4.0 transition start from the same problem. The only difference is that once the Windows NT developers started putting things in kernel space and saw the performance gains, they did not stop, and took the process too far.

    The trick here is striking the right balance between user space and kernel space. The introduction of DWM was Microsoft correcting its position a bit.

    A full GUI in kernel space and a full GUI in user space are both bad, just with different failure modes. Some GUI pieces, like buffer management, you want protected on the kernel side; other pieces, not so much.
    Yeah, you are right about this one.



  • DeepDayze
    replied
    It would be a good idea to somehow preserve the state of open windows and X before a graceful GPU disconnect, or when the GPU driver restarts, so that the desktop is restored once the GPU comes back up or is physically reconnected. Any screen redraws could be redirected to system memory and then reloaded. That way the user's session is not lost to a GPU shutdown or restart.
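    A compositor can at least see the unplug coming: the kernel announces DRM device add/remove events through udev. Below is a minimal C sketch of such a watch loop, assuming libudev (compile with -ldrm is not needed here, just -ludev); a real compositor would start migrating client buffers from VRAM to system memory when it sees "remove".

        /* Watch for DRM (GPU) hotplug events via udev. */
        #include <libudev.h>
        #include <poll.h>
        #include <stdio.h>

        int main(void)
        {
            struct udev *udev = udev_new();
            struct udev_monitor *mon =
                udev_monitor_new_from_netlink(udev, "udev");

            /* Only listen for events on the DRM subsystem. */
            udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
            udev_monitor_enable_receiving(mon);

            struct pollfd pfd = { .fd = udev_monitor_get_fd(mon),
                                  .events = POLLIN };
            while (poll(&pfd, 1, -1) > 0) {
                struct udev_device *dev = udev_monitor_receive_device(mon);
                if (!dev)
                    continue;
                /* action is "add", "remove", or "change" */
                printf("drm event: %s on %s\n",
                       udev_device_get_action(dev),
                       udev_device_get_sysname(dev));
                /* A compositor would snapshot window state here on "remove",
                 * before the device file descriptor goes dead. */
                udev_device_unref(dev);
            }
            udev_monitor_unref(mon);
            udev_unref(udev);
            return 0;
        }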



  • oiaohm
    replied
    Originally posted by dragon321 View Post
    You are right that a pure user space interface is not without issues, but it's probably a better idea than a kernel mode interface. The GUI was moved into the kernel in Windows NT 4.0 not because of NT 3.x issues but to provide better performance and reduce hardware requirements.
    The developer who actually made the change from Windows NT 3.x to Windows NT 4.0 wrote that it started with the mode setting problem. The UMS-to-KMS transition and the NT 3.x-to-NT 4.0 transition start from the same problem. The only difference is that once the Windows NT developers started putting things in kernel space and saw the performance gains, they did not stop, and took the process too far.

    The trick here is striking the right balance between user space and kernel space. The introduction of DWM was Microsoft correcting its position a bit.

    A full GUI in kernel space and a full GUI in user space are both bad, just with different failure modes. Some GUI pieces, like buffer management, you want protected on the kernel side; other pieces, not so much.



  • dragon321
    replied
    Originally posted by oiaohm View Post

    The changes from Windows NT 3.x to Windows NT 4.0 have a lot in common with the changes from UMS (user mode setting) to KMS (kernel mode setting) under Linux and FreeBSD with X11.

    It turns out that for a lot of graphical work, user mode is not the best choice. There is a catch with user mode code: it is entirely possible to start it more than once. Implementing particular parts of the graphics stack in kernel mode does make sense, so that a single party is genuinely in charge. The original NT 3.x architecture has the same defect as UMS, where two things can start at the same time, both take charge of the GPU, and give it conflicting instructions, such as asking the screen to be in two completely different modes, possibly repeatedly. Yes, one of the fun bugs of Windows NT 3.x was the flicker from hell that happened when two particular components had been started twice. You can have the same thing happen with X11 under Linux using old UMS drivers when running X11 twice on two different TTYs. KMS on Linux makes this problem go away, and some of the changes from NT 3.x to NT 4.0 make it go away as well.
    Yeah, I'm aware of the switch from UMS to KMS, but that is driver-level stuff and not directly linked to the GUI. Remember that Windows used to draw UI elements in the kernel, while on Linux that's not the case - UI elements are rendered by a user space toolkit. I think even some window management was handled in the Windows kernel prior to the introduction of DWM. On Linux some things were moved out of X11 into the drivers, but the most important GUI components stayed in user space, and the kernel can easily live without them. You can remove X11 or Wayland entirely and you simply won't get a GUI. On Windows it's not that easy, and it took Microsoft years to provide a Server edition without a GUI. On Linux it has been straightforward for years - if you don't want a GUI, simply don't install one.

    You are right that a pure user space interface is not without issues, but it's probably a better idea than a kernel mode interface. The GUI was moved into the kernel in Windows NT 4.0 not because of NT 3.x issues but to provide better performance and reduce hardware requirements. Windows NT was very heavy compared to classic Windows, so Microsoft wanted to make it lighter. With the NT 3.x architecture, adding GPU acceleration for the GUI, or disabling the GUI entirely, wouldn't have been a difficult task. It's just like macOS, where Apple implemented GPU acceleration quickly and it was always possible to boot without a GUI.



  • Ladis
    replied
    Originally posted by Paradigm Shifter View Post
    I've only used an external GPU (nVidia) once, on a laptop; neither Linux nor Windows handled having it disappear without warning (when there was a power cut) with any sort of grace or stability. That said, that was a few years ago; it may have improved.
    Nowadays it works perfectly (my brother has an RTX 2070 Super and connects it to a thin company HP notebook with just an Intel iGPU - he played Cyberpunk 2077 with DLSS upscaling from 2K to 2.5K with raytracing, and it ran smoothly). You can connect and disconnect the cable to the eGPU as you like. It works the same way as connecting to a standard USB-C docking station (meaning one cable for everything - power delivery, monitor, ethernet, audio, USB hub). In fact, the eGPU box is technically just an oversized USB-C dock, and you can use it that way (his has 100 W power delivery, which is totally fine for a notebook with a 15 W quad-core Intel CPU).

    The only catch is that if you connect a desktop monitor directly to the eGPU (because you don't want to use the internal notebook display or the monitor ports on the notebook), you have to log out and log back in. Otherwise the picture rendered on the eGPU travels from the eGPU to your notebook's iGPU, then back to the eGPU box, and from there to the directly connected monitor (which causes dropped frames every few seconds).
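    For the curious: while the eGPU is attached, both GPUs show up as separate DRM nodes, which is why the session has to be restarted to start rendering on the right one. A minimal C sketch that lists them, assuming libdrm (compile with -ldrm); the device count limit of 16 is arbitrary:

        /* List the DRM (GPU) devices currently present. */
        #include <xf86drm.h>
        #include <stdio.h>

        int main(void)
        {
            drmDevicePtr devices[16];
            int n = drmGetDevices2(0, devices, 16);
            if (n < 0) {
                fprintf(stderr, "drmGetDevices2 failed: %d\n", n);
                return 1;
            }
            for (int i = 0; i < n; i++) {
                /* Primary nodes (/dev/dri/cardN) are the ones that
                 * can drive displays. */
                if (devices[i]->available_nodes & (1 << DRM_NODE_PRIMARY))
                    printf("GPU %d: %s (bus type %d)\n", i,
                           devices[i]->nodes[DRM_NODE_PRIMARY],
                           devices[i]->bustype);
            }
            drmFreeDevices(devices, n);
            return 0;
        }

    With the eGPU plugged in you would see two entries here; with it unplugged, one.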



  • dimko
    replied
    99% of all GPU users don't care about this feature.
    Meanwhile Linux doesn't have HDR support, even in an early implementation, for AMD or Nvidia. This is sad.



  • Venemo
    replied
    Does it handle the case where you had a display connected to the eGPU?



  • oiaohm
    replied
    Originally posted by asriel View Post
    Until Wayfire learns to use hot corners the same way Compiz does, Wayland is not an option. I still use X only because of Compiz, as it is the best environment ever created, and Wayland has nothing even close to it so far.
    Time will tell. Compiz could end up superseded even under X11 by Wayfire. Once the Nvidia drivers finally get DMA-BUF support, compositors under X11 can be built differently, because they no longer have to support as many of the old legacy X11 server interfaces.

    This is the hard point going forward: even without moving to Wayland, the existing compositors under X11 will need core rewrites to improve stability by dropping the obsolete X11 interfaces that were only kept around because Nvidia would not fix their driver.
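    For reference, the DMA-BUF path in question boils down to the PRIME export/import calls in libdrm. A minimal C sketch of the export side, assuming libdrm (compile with -ldrm); the /dev/dri/card0 path and the small dumb-buffer allocation are just examples to have something to export:

        /* Export a GPU buffer as a dma-buf fd another process can import. */
        #include <xf86drm.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/dri/card0", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* Allocate a small "dumb" buffer so there is something to share. */
            struct drm_mode_create_dumb create = {
                .width = 64, .height = 64, .bpp = 32
            };
            if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) != 0) {
                perror("DRM_IOCTL_MODE_CREATE_DUMB");
                return 1;
            }

            /* PRIME export: the kernel wraps the buffer in a dma-buf and
             * returns a sharable fd; the importer calls drmPrimeFDToHandle. */
            int prime_fd = -1;
            if (drmPrimeHandleToFD(fd, create.handle, DRM_CLOEXEC,
                                   &prime_fd) != 0) {
                perror("drmPrimeHandleToFD");
                return 1;
            }
            printf("GEM handle %u exported as dma-buf fd %d\n",
                   create.handle, prime_fd);
            close(prime_fd);
            close(fd);
            return 0;
        }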



  • asriel
    replied
    Originally posted by doomie View Post

    oh interesting...

    also, shame on you for using X; you're obviously afraid of change and improvements and are trying to hold the world back. i'm politically offended.

    ty though.
    Until Wayfire learns to use hot corners the same way Compiz does, Wayland is not an option. I still use X only because of Compiz, as it is the best environment ever created, and Wayland has nothing even close to it so far.



  • oiaohm
    replied
    Originally posted by dragon321 View Post
    Yeah, Windows handles GPU resets and faults pretty well, but its GUI architecture is not that great either. On Windows the kernel is responsible for parts of the interface, while on Linux the interface is a userland process that you can easily reset, or disable if you don't need it. Removing the GUI completely from Windows wouldn't be an easy process, and I think it would make it less usable, as a lot of applications need the GUI or the APIs provided by the GUI subsystem. The fun fact is that Windows used to handle this a lot better - Windows NT 3.x had the whole interface in user mode, and the kernel could easily work without it. It was moved to kernel space with Windows NT 4.0 to improve performance and reduce requirements, at the expense of stability. It changed again with Windows Vista, when part of the GUI subsystem was moved back to user space. The new driver architecture (WDDM) also moved a big part of the driver to user space. Of course we are speaking about "parts", because some things are still handled in kernel space. Vista (and later releases) didn't revert to the NT 3.x architecture. I don't know why, but I can only guess compatibility was the reason.

    I remember some people claimed that Linux should also move drawing into the kernel, probably to make it a pure GUI operating system like Windows. That would be a bad idea, because it would hurt Linux's flexibility. Besides, it's not impossible to build a pure (or almost pure) GUI system on the current Linux architecture - Android is a good example. There is also macOS, which likewise handles the GUI in user space; you can even force it to boot into text mode without any GUI at all. Wayland compositors are not that far away from the Quartz Compositor.
    The changes from Windows NT 3.x to Windows NT 4.0 have a lot in common with the changes from UMS (user mode setting) to KMS (kernel mode setting) under Linux and FreeBSD with X11.

    It turns out that for a lot of graphical work, user mode is not the best choice. There is a catch with user mode code: it is entirely possible to start it more than once. Implementing particular parts of the graphics stack in kernel mode does make sense, so that a single party is genuinely in charge. The original NT 3.x architecture has the same defect as UMS, where two things can start at the same time, both take charge of the GPU, and give it conflicting instructions, such as asking the screen to be in two completely different modes, possibly repeatedly. Yes, one of the fun bugs of Windows NT 3.x was the flicker from hell that happened when two particular components had been started twice. You can have the same thing happen with X11 under Linux using old UMS drivers when running X11 twice on two different TTYs. KMS on Linux makes this problem go away, and some of the changes from NT 3.x to NT 4.0 make it go away as well.
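    That "single party in charge" rule is literally enforced by the kernel today: only one process may hold DRM master on a device node, and only the master may perform mode setting. A minimal C sketch, assuming libdrm (compile with -ldrm; the /dev/dri/card0 path is an example):

        /* Demonstrate KMS master arbitration: one modesetting owner per GPU. */
        #include <xf86drm.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/dri/card0", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* Fails if another process (an X server or Wayland compositor)
             * already owns the device - exactly the conflict two UMS X
             * servers on two TTYs could never arbitrate. */
            if (drmSetMaster(fd) != 0) {
                perror("drmSetMaster");
                close(fd);
                return 1;
            }
            puts("we are DRM master; modesetting ioctls are now allowed");
            drmDropMaster(fd);
            close(fd);
            return 0;
        }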

