New X.Org Server Release While Maintaining Separate XWayland Being Discussed


  • #91
    Originally posted by oiaohm View Post
    xrandr is quite a common way to break your desktop.
    (Lots of deranged rambling snipped)
    This is you presuming a feature of X11 makes sense. You have totally ignored the existing defects of xrandr.
    No, this is me pointing out that a feature of X11 that *is required* is unavailable in Wayland; that Wayland is inherently incapable of supporting it "well" because of "design reasons"; and that the Wayland devs are as a result unwilling to implement it. Whether that's from pride, technical reasons, or a mix of both is utterly irrelevant.

    The argument that randr can "break your desktop" is an enormous crock of bovine excrement, and a blatantly apologist attempt to redirect blame away from your sacred cow. Nobody in the history of Ever has actually experienced your hypothetical disaster scenario, because there is ALWAYS more than enough trivially-discardable VRAM available to guarantee that it doesn't happen. Not even on Windows. Ever. And both OGL and DX *explicitly* guarantee that it can't, since the resolution change invalidates the context and thus also all resources loaded within that context like textures etc.

    inb4 the inevitable "No, really, it's true and it happened to me!" claim: if a driver is defective enough that a simple mode change causes it to fall over, it's got bigger problems for you to worry about.



    • #92
      Originally posted by AJSB View Post
      Taking into account that for my gaming needs I use mostly WINE, that I made a fully functional AutoHotKey script that does everything automatically without the intervention of the game, *and* that I made a way for launchers to do that directly using XRANDR without messing up the WM, I care less about Wayland, but I guess the needs of some are not the needs of others.
      And

      Originally posted by arQon View Post
      No, this is me pointing out that a feature of X11 that *is required* is unavailable in Wayland; that Wayland is inherently incapable of supporting it "well" because of "design reasons"; and that the Wayland devs are as a result unwilling to implement it. Whether that's from pride, technical reasons, or a mix of both is utterly irrelevant.

      The argument that randr can "break your desktop" is an enormous crock of bovine excrement, and a blatantly apologist attempt to redirect blame away from your sacred cow. Nobody in the history of Ever has actually experienced your hypothetical disaster scenario, because there is ALWAYS more than enough trivially-discardable VRAM available to guarantee that it doesn't happen. Not even on Windows. Ever. And both OGL and DX *explicitly* guarantee that it can't, since the resolution change invalidates the context and thus also all resources loaded within that context like textures etc.
      Both need the same answer here. The reality is that we would still be talking about this, in this way, even if bare-metal X.org X11 development were not dead. Over 12 months ago Nvidia proposed breaking xrandr on the bare-metal X11 server, to solve the HiDPI and HDR issues. For HDR, the application colour space could be different from the monitor colour space. For HiDPI, the resolution the application believes the screen has via xrandr would basically be fiction: whatever size the application thinks the screen is, a scaling factor would map it to the real resolution of the screen.

      Originally posted by arQon View Post
      trivially-discardable VRAM available to guarantee that it doesn't happen.
      Sorry, no, this is not the case. The output buffer RAM on the GPU is not your generic VRAM allocation on all makes of video card. This is something Nvidia explained in its request for change to bare-metal X.org X11. If I could find the post I could point you at the Nvidia "oh my god" error: with a particular card, plug in a monitor, scale it down, then plug in a few more monitors that are 4K, then attempt to ask the first monitor to return to its original resolution, and you get an error. The card was reporting basically zero VRAM used, yet you could not change the monitor resolution. This could also happen due to slightly suspect cables. So no, it is not a hypothetical disaster scenario; there are real-world GPUs that will do this to you. As people get 4K and 8K HDR monitors, fragmented output buffer memory on the GPU becomes more likely, and that makes being unable to change resolution more likely. Intel and AMD may not suffer from this, but Nvidia's design does.

      The reality here, arQon, is that one of Nvidia's lead Linux driver developers experienced this fault and attempted to get bare-metal X.org X11 changed to avoid it.

      The cause of this is something quite simple: HDR at 4K or 8K, which is 10 bits per colour, has quite a bit larger RAM requirements than 8 bits per colour.
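
      A rough back-of-the-envelope sketch of that arithmetic (the pixel formats are illustrative assumptions of mine, not anything taken from the Nvidia proposal: 4 bytes/pixel for a classic 8-bit-per-channel buffer, 8 bytes/pixel for an FP16 HDR buffer; a packed 10-bit format would sit in between):

      ```python
      # Scanout buffer sizes at common resolutions, using assumed formats:
      # 4 bytes/pixel for 8-bit-per-channel SDR, 8 bytes/pixel for FP16 HDR.
      modes = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}

      for name, (w, h) in modes.items():
          sdr = w * h * 4  # 8 bits per channel
          hdr = w * h * 8  # FP16 HDR
          print(f"{name}: {sdr / 2**20:6.1f} MiB SDR vs {hdr / 2**20:6.1f} MiB HDR per buffer")
      ```

      Double or triple buffering multiplies those numbers again, which is where the fragmentation concern described above comes from.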

      The problem with xrandr is real. Wayland not having xrandr, and Nvidia wanting to change xrandr under bare-metal X11 into per-application state, are attempts to solve exactly the same problem. We cannot keep going down the path of "we can always change the monitor output resolution", because there is hardware out there where things can go horribly wrong if you do.

      Next is HiDPI: what gamescope does, where the X11 application thinks it has one xrandr setting but is in reality being upscaled because the output is a HiDPI monitor, is going to happen more often now, and yes, Nvidia wants this as a feature of the bare-metal X11 server as well. XRANDR being a true, correct value for the screen output is ceasing to be the case. So if you use AutoHotkey with an application, in that future world you will be fighting the cases where AutoHotkey and the application think the screen is two different XRANDR values. In a lot of ways you need something like gamescope, as in a proxy Wayland compositor, to do the AutoHotkey stuff: that way you know what the application thinks the screen is and can provide input to it correctly, because by the time you are interfacing with the host bare-metal X11 server or host Wayland, working out what the scaling is past that point is going to be trouble.
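
      To make the mismatch concrete, a minimal sketch (all names and numbers are hypothetical) of what injecting input has to do once the size the application believes in and the real output size diverge:

      ```python
      # Remap a synthetic click from the resolution the application believes it has
      # (what XRANDR reports to it) to the real output resolution it is scaled to.
      # The sizes below are hypothetical examples.

      def remap(x, y, app_size, real_size):
          """Scale a point from the application's believed screen to the real output."""
          ax, ay = app_size
          rx, ry = real_size
          return round(x * rx / ax), round(y * ry / ay)

      app_size = (1920, 1080)   # what the game (and an AutoHotkey-style script) thinks
      real_size = (3840, 2160)  # what the HiDPI output actually is

      print(remap(960, 540, app_size, real_size))   # (1920, 1080): centre maps to centre
      print(remap(100, 100, app_size, real_size))   # (200, 200): unscaled injection would miss
      ```

      A gamescope-style proxy compositor sits on the application side of that transform, which is why it can keep the script and the application agreeing on what the screen is.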

      The reality here is that people using AutoHotkey on Linux have had a good run, but sooner or later some Linux-specific change was going to happen that AutoHotkey was not going to be compatible with.
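
      For reference, the kind of XRANDR-based launcher workflow AJSB describes in the quote above can be as simple as the following sketch (the output name, modes and game command are placeholders; a real script would query xrandr for the current mode instead of hard-coding it):

      ```python
      # Hypothetical launcher: switch the output to the game's resolution via the
      # xrandr CLI, run the game, then restore the desktop mode afterwards.
      import subprocess

      OUTPUT = "HDMI-1"            # placeholder output name
      GAME_MODE = "1280x720"       # placeholder in-game mode
      DESKTOP_MODE = "3840x2160"   # placeholder desktop mode
      GAME_CMD = ["wine", "game.exe"]

      subprocess.run(["xrandr", "--output", OUTPUT, "--mode", GAME_MODE], check=True)
      try:
          subprocess.run(GAME_CMD)
      finally:
          subprocess.run(["xrandr", "--output", OUTPUT, "--mode", DESKTOP_MODE], check=True)
      ```

      As the posts above argue, this only works as long as the mode reported and set through XRANDR is the real output mode, which is exactly the assumption that per-application scaling breaks.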



      • #93
        ah. So in other words,

        > The argument that randr can "break your desktop" is an enormous crock of bovine excrement, and [snip] Nobody in the history of Ever has actually experienced your hypothetical disaster scenario

        was exactly correct, and the problem only exists in a heavily-fabricated case that requires additional externalities like hotplugging monitors, etc - and even then would require a fairly substantial amount of deliberate effort to trigger.

        Thanks for the additional details, but I think you've really only proven my point. (Not that it was all that relevant though, since the requirement for randr remains and we can't just wish that away even if the problem case was more realistic).



        • #94
          Originally posted by arQon View Post
          was exactly correct, and the problem only exists in a heavily-fabricated case that requires additional externalities like hotplugging monitors, etc - and even then would require a fairly substantial amount of deliberate effort to trigger.
          Hot plugging allows you to simulate old/damaged cables in a far more dependable way. As your cables get questionable, the GPU has to reconnect to the monitor more often, and those reconnects can cause the output memory buffers to move so they stack more tightly against each other. The Nvidia demo was so that you could see the problem without the random nature of worn hardware: a repeatable failure instead of a random "when I feel like it" failure.

          This issue is going to get worse for those with multiple monitors as their hardware ages.

          Originally posted by arQon View Post
          Thanks for the additional details, but I think you've really only proven my point. (Not that it was all that relevant though, since the requirement for randr remains and we can't just wish that away even if the problem case was more realistic).
          The issue with randr remains as well: there is a hardware fault here, and people with multiple monitors had better not be changing modes and expecting to always be able to return to a larger mode.

