SDL2 Reverts Its Wayland Preference - Goes Back To X11 Default


  • #91
    Originally posted by Myownfriend View Post

    That's not true. Wayland was and always has been developed hand-in-hand with an implementation: Weston. Wayland's and Weston's first release was on the same day and that was before the client and server APIs stabilized.
    I am aware of what Weston is, but it doesn't count because it was a joke reference implementation. When I talk about an actual implementation, I mean something along the lines of Gnome/KDE running the Wayland protocol, not something like Weston, which is just a bare implementation of a protocol (and this is in light of the fact that the Wayland protocol at the time didn't even support basic functionality).

    But you don't have to take my word for it; the fact that Wayland has already taken over a decade and it's still not ready is evidence enough. No software project in its right mind would adopt a protocol that is completely unproven (and by unproven I mean the example I gave above).



    • #92
      Originally posted by ezst036 View Post

      Oh, now I understand. You are Artem S. Tashkinov, right? I've run into your writing online at times. You are definitely colorful and passionate in what you believe. I don't think you and I have had a discussion before. Nice to make your acquaintance.

      Unfortunately, you're not correct in the details, though. Today is not a new day where everything just started in this instant. All of the things that bother you about Wayland could have been resolved four or more years ago had Nvidia not tried to strong-arm the Wayland devs with EGLStreams. This problem is overwhelmingly a timeline problem. SDL devs can't code for an Nvidia thing that isn't supported in conjunction with a Wayland thing that isn't supported because of the Nvidia dependency. These are just simple coding problems; all developers run into dependency woes at some point or another, right?

      https://www.phoronix.com/scan.php?pa...ice-Memory-API

      In the end, the brave David-esque Mesa developers stared down Goliath Huang and were ultimately triumphant. Nvidia folded and is now on board with GBM, though it only kind of supports it. It would be myopic to claim that Nvidia waging a war with the developers to insist on EGLStreams as their own personal preference, when they don't even contribute on the open source side anyway, wouldn't have some sort of repercussions. It does have repercussions.

      https://www.phoronix.com/scan.php?pa...sa-Backend-Alt

      I'm only kidding with you about the David and Goliath thing though. :-) In all seriousness, you should note that in both the 2016 and the 2021 articles, it is widely acknowledged that the GBM/EGLStreams fight was (and is) detrimental on the Wayland front.



      Now, we can all be nice and civil and have a decent conversation around here, right?
      While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is a set of serious technical limitations, both with the Linux graphics ecosystem and with Wayland itself.

      The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.
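
      To make the difference concrete, here is a rough sketch (mine, not from any driver) of what explicit synchronization looks like from the client side, using the EGL_ANDROID_native_fence_sync extension. Under implicit sync, none of this code exists, because the kernel attaches fences to the buffer behind your back:

      ```c
      /* Sketch only: exporting an explicit fence after rendering a frame.
       * Assumes an initialized EGLDisplay/context and a driver that exposes
       * EGL_ANDROID_native_fence_sync; error handling omitted. */
      #include <EGL/egl.h>
      #include <EGL/eglext.h>
      #include <GLES2/gl2.h>

      int export_render_fence(EGLDisplay dpy)
      {
          PFNEGLCREATESYNCKHRPROC create_sync =
              (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
          PFNEGLDESTROYSYNCKHRPROC destroy_sync =
              (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");
          PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fence_fd =
              (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)eglGetProcAddress("eglDupNativeFenceFDANDROID");

          /* ... render the frame with GL calls here ... */

          /* Insert a fence behind the rendering commands, then flush so the
           * fence is actually submitted to the GPU. */
          EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
          glFlush();

          /* Export a sync file descriptor that the consumer (e.g. a
           * compositor) can wait on explicitly before touching the buffer. */
          int fence_fd = dup_fence_fd(dpy, sync);
          destroy_sync(dpy, sync);
          return fence_fd;
      }
      ```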

      And we are now at the point where Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that NVidia had been saying for a while but was constantly ignored because they were "evil".

      I expand on this point here https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't treat it seriously, and no one was going to listen to them anyway.



      • #93
        Originally posted by mdedetrich View Post
        I am aware of what Weston is, but it doesn't count because it was a joke reference implementation. When I talk about an actual implementation, I mean something along the lines of Gnome/KDE running the Wayland protocol, not something like Weston, which is just a bare implementation of a protocol (and this is in light of the fact that the Wayland protocol at the time didn't even support basic functionality).
        Why should a reference implementation require having more than what's in the spec when its purpose is to test the spec? Not every reference implementation of something is supposed to strive to be its only implementation. The purpose of a reference implementation is to work as, get this, a reference, so that when Gnome, KDE, or wlroots implement a protocol and aren't sure if they're getting the right behavior, they can test it against Weston. If they get the same results, then they've implemented it correctly.

        It makes no sense to say that it "doesn't count".

        Originally posted by mdedetrich View Post
        But you don't have to take my word for it; the fact that Wayland has already taken over a decade and it's still not ready is evidence enough. No software project in its right mind would adopt a protocol that is completely unproven (and by unproven I mean the example I gave above).
        You're showing your ass. Check the Wayland governance rules.



        "Protocols in the "xdg" and "wp" namespaces must have at least 3 open-source implementations (either 1 client + 2 servers, or 2 clients + 1 server) to be eligible for inclusion."

        And again, it's not an issue of Wayland not being ready; it's a matter of adoption. It's been just under 10 years since the core client and server protocols were stabilized, 6 since the first non-Weston compositors shipped with support for it, and less than a year since any of them has been usable for daily use on hardware from Nvidia, the desktop GPU vendor with the largest market share.

        Of the handful of Wayland compositors that exist, only two of them supported the EGLStreams backend that was required to run on Nvidia hardware at all: Gnome and KDE. On KDE, the backend was written entirely by Nvidia themselves and was so buggy that it was dropped immediately after Nvidia started to support GBM. Hardware acceleration for XWayland applications was not available on either of these compositors because Nvidia refused to support DMA-buf, a kernel feature.

        DMA-buf support was only added to their driver in June of 2021: 10 months ago.

        GBM support was added in October 2021: 6 months ago.



        • #94
          Originally posted by mdedetrich View Post
          While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is a set of serious technical limitations, both with the Linux graphics ecosystem and with Wayland itself.

          The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.
          James Jones commented on explicit vs. implicit sync just last month, in a GitLab issue about XWayland sending earlier frames and juddering on Nvidia hardware (something I've seen myself), and he addresses the effect of implicit sync on Mesa.

          "I'm well aware implicit sync is the semantic of the Linux graphics stack, as I've been pushing back on assuming that semantic for years, and at this point I think everyone agrees, all else being equal, explicit sync is preferable. The only major component of the graphics stack that doesn't support explicit sync is X. Hence, while I don't dispute that this is an NVIDIA driver issue, I still assert that X not supporting explicit sync is a valid X issue as well, and my opinion is that addressing that is better for end users than adopting implicit sync semantics in the NVIDIA driver. While OSS driver stacks have supported Wayland and GBM for years, the fact is our driver already requires a more or less bleeding-edge set of supporting OSS components to support these use cases, so it doesn't necessarily need to be burdened by backwards compatibility with a host of older implicit-sync-only userspace or kernel components. They already aren't going to work for other reasons."

          I don't feel it's correct to say that Nvidia didn't support Wayland because the Linux graphics stack uses implicit synchronization when James himself said that X is the only major component that requires it. That feels like it would be a reason for Nvidia to be at the forefront of Wayland support.

          I also can't find anything saying that GBM and dma-buf don't support explicit synchronization either.
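
          For what it's worth, Wayland itself has had an explicit-sync protocol extension (zwp_linux_explicit_synchronization_unstable_v1) since 2019. Here's a rough client-side sketch of how a fence travels with a frame, assuming bindings generated by wayland-scanner and that the surface_sync object was already created from the compositor's global; error handling omitted:

          ```c
          /* Sketch: attaching an explicit acquire fence to a Wayland commit.
           * The header name below is the one wayland-scanner would generate
           * for the unstable explicit-synchronization protocol. */
          #include <wayland-client.h>
          #include "linux-explicit-synchronization-unstable-v1-client-protocol.h"

          void commit_with_acquire_fence(struct wl_surface *surface,
                                         struct zwp_linux_surface_synchronization_v1 *surface_sync,
                                         struct wl_buffer *buffer, int fence_fd)
          {
              wl_surface_attach(surface, buffer, 0, 0);
              /* Tell the compositor exactly which fence guards this buffer,
               * instead of letting it infer readiness from implicit
               * dma-buf fences. */
              zwp_linux_surface_synchronization_v1_set_acquire_fence(surface_sync, fence_fd);
              wl_surface_commit(surface);
          }
          ```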

          Originally posted by mdedetrich View Post
          And we are now at the point where Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that NVidia had been saying for a while but was constantly ignored because they were "evil".

          I expand on this point here https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't treat it seriously, and no one was going to listen to them anyway.
          He commented on this as well.

          "Yes, we could theoretically attach/consume implicit fences to buffers from userspace to mimic OSS driver behavior to some extent. I followed Jason's patch series to this effect, and we do some of this for synchronization in PRIME situations now, as it's vastly simpler when the only implicit synchronization boundary we care about is SwapBuffers() for consumption on a 3rd-party driver. It gets much harder to achieve correct implicit sync with arbitrary modern API usage (direct read/write of pixels in images in compute shaders, sparse APIs, etc.), and this has been a big pain point with Vulkan implementations in OSS drivers from my understanding."

          I'm not gonna act like I know better than him about implicit and explicit fencing either. I'm not well-read on the subject at all. He does say something else though, so I'll continue.

          "I don't know what the current state is, but I know it limited the featureset exposed in OSS Vulkan drivers when they first came out. I assume it's technically achievable, but I'd prefer not to add that complexity to our driver stack, nor do something like downgrade functionality of dmabuf-based surfaces to account for such limitations just for something everyone agrees is outdated in a world where rendering isn't as simple as read-only textures and write-only render buffers occupying the entirety of a single kernel-side allocation in a given command buffer, which was a pretty accurate mental model of GPU usage when implicit sync was developed."

          To me, this reads as him saying that he feels implicit synchronization is holding back the functionality of dmabuf-based surfaces, not that dmabuf is preventing explicit synchronization.



          • #95
            Originally posted by ezst036 View Post
            In the end, the brave David-esque Mesa developers stared down Goliath Huang and were ultimately triumphant. Nvidia folded and is now on board with GBM, though it only kind of supports it. It would be myopic to claim that Nvidia waging a war with the developers to insist on EGLStreams as their own personal preference, when they don't even contribute on the open source side anyway, wouldn't have some sort of repercussions. It does have repercussions.
            Could you explain this? I assumed that their GBM backend must be incomplete or immature just because of how new the support is but I don't know the details.



            • #96
              Originally posted by mdedetrich View Post
              While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is a set of serious technical limitations, both with the Linux graphics ecosystem and with Wayland itself.

              The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.

              And we are now at the point where Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that NVidia had been saying for a while but was constantly ignored because they were "evil".

              I expand on this point here https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't treat it seriously, and no one was going to listen to them anyway.
              I had an idea in general about some of the limitations, but your post and additional links helped clarify some things. This isn't an area I have studied intensely.

              Knowing this, I think I probably do agree that, long term, Mesa developers will likely have to re-do this in the future. But the pure technical merits, as explained, took a back seat. And when it does get fixed, it will have to see leadership from an entity that has earned the community's trust, such as Valve.

              Regardless of that, Nvidia has locked themselves out. There's a trust gap here, and it's as big as the moon. The open source community has for years pleaded with Nvidia to play nicer in a lot of areas, and those pleas have gone unheard. Past Phoronix articles are littered with them. When Linus gave his famous Nvidia speech, I think there were multiple reasons for it. I don't think the lack of an OSS driver was even on the list, but it may have been. They have fostered huge amounts of bad will.

              And even if the technical facts prove them correct in the end, it only causes additional heartburn that Nvidia chose not to take a conciliatory attitude from day one and say "ok, let's do both then so we don't hold you up." We would've all seen Wayland a decade ahead of where it is today. Heck, it might very well be (in an alternate universe) that EGLStreams would've become the norm in 2022, after the inferior way had failed for a decade. Instead, we won't see EGLStreams (or whatever might be the better/more correct way forward) adopted until 2032. [I'm only using EGLStreams here as an example, for conversational purposes.]

              Had Nvidia chosen a conciliatory tone a decade ago, they could've come back more gracefully and said "see, told you so. It's been a decade. This isn't technically sound. Can we please implement EGLStreams now that play time is over?"

              Nvidia does receive undue hate at times; let's remember they were gracious enough to create a well-supported video driver long before AMD or Intel went down the OSS driver route, back when Linux usage was a quarter of a quarter of a percent. However, it's impossible to miss that this stunt of holding up GBM really only accomplished one thing for Nvidia, and that was additional scorn.

              It is going to take many years for Nvidia to fix this PR quagmire.



              • #97
                Originally posted by Myownfriend View Post
                Could you explain this? I assumed that their GBM backend must be incomplete or immature just because of how new the support is but I don't know the details.
                Whoops, I think I wrote that in too much haste. That's poorly written, probably to the point of being completely incorrect.

                You're ahead of me; I had in mind your post (#20), where you wrote:

                The reason that OBS doesn't work properly on Wayland on Nvidia hardware is that Nvidia's drivers don't support EGL_NATIVE_RENDERABLE. The reason why Gnome's night light doesn't work on Nvidia hardware in Wayland is that the driver doesn't support GAMMA_LUT, and according to Nvidia it's part of the reason why Gamescope has issues running on Nvidia hardware.
                I should have used the phrase "missing extensions" or something more of that sort.
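
                For anyone who wants to check their own setup, the presence of that property can be probed with libdrm. A rough sketch (my own illustration, not from either post; crtc_id must come from drmModeGetResources(), and error handling is minimal):

                ```c
                /* Sketch: check whether a CRTC exposes the GAMMA_LUT
                 * property via libdrm. */
                #include <stdint.h>
                #include <string.h>
                #include <xf86drm.h>
                #include <xf86drmMode.h>

                int crtc_has_gamma_lut(int drm_fd, uint32_t crtc_id)
                {
                    drmModeObjectProperties *props =
                        drmModeObjectGetProperties(drm_fd, crtc_id, DRM_MODE_OBJECT_CRTC);
                    if (!props)
                        return 0;

                    int found = 0;
                    for (uint32_t i = 0; i < props->count_props; i++) {
                        drmModePropertyRes *prop = drmModeGetProperty(drm_fd, props->props[i]);
                        if (prop && strcmp(prop->name, "GAMMA_LUT") == 0)
                            found = 1;
                        drmModeFreeProperty(prop);
                    }
                    drmModeFreeObjectProperties(props);
                    return found;
                }
                ```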



                • #98
                  Originally posted by Myownfriend View Post

                  I don't feel it's correct to say that Nvidia didn't support Wayland because the Linux graphics stack uses implicit synchronization when James himself said that X is the only major component that requires it. That feels like it would be a reason for Nvidia to be at the forefront of Wayland support.
                  That is actually the main reason why NVidia pushed EGLStreams; you can go back and read the mailing lists. The fundamental difference with EGLStreams is that it has explicit synchronization baked into the API, and that's why it wasn't really compatible with GBM at the time (precisely because GBM had a different synchronization model, i.e. it was implicit).
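
                  For contrast, here is roughly what the GBM side looks like. Notice that the API is purely about allocating buffers and says nothing about synchronization, which is exactly the implicit model I'm describing (a minimal sketch; the device node and sizes are made up for illustration, and error handling is omitted):

                  ```c
                  /* Sketch: buffer allocation in the GBM model. No fence or
                   * stream appears anywhere; fences are attached to the
                   * buffer implicitly by the kernel. */
                  #include <fcntl.h>
                  #include <gbm.h>

                  struct gbm_bo *allocate_scanout_buffer(void)
                  {
                      int fd = open("/dev/dri/card0", O_RDWR);
                      struct gbm_device *gbm = gbm_create_device(fd);

                      /* Ask for a buffer usable both as a render target and
                       * for display scanout. */
                      return gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                           GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
                  }
                  ```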

                  Originally posted by Myownfriend View Post
                  I also can't find anything saying that GBM and dma-buf don't support explicit synchronization either.

                  Read https://lwn.net/Articles/814587/

                  It's a combination of either not supporting it or supporting it with hacks/workarounds that often hurt performance (although in some cases the performance hit isn't an issue). There are, for example, issues with Vulkan integration, which has the same fundamental problem (Vulkan's API is explicit-sync only).
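
                  To see what "explicit-sync only" means in Vulkan's case, here is a minimal sketch of a queue submission: every dependency is named in the submit call itself, and nothing is inferred from the buffers being touched (semaphores and command buffer are assumed to be created elsewhere):

                  ```c
                  /* Sketch: Vulkan makes synchronization explicit in the API
                   * itself. Each submission names the semaphores it waits on
                   * and signals. */
                  #include <vulkan/vulkan.h>

                  void submit_frame(VkQueue queue, VkCommandBuffer cmd,
                                    VkSemaphore image_acquired, VkSemaphore render_done)
                  {
                      VkPipelineStageFlags wait_stage =
                          VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
                      VkSubmitInfo submit = {
                          .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
                          .waitSemaphoreCount = 1,
                          .pWaitSemaphores = &image_acquired,  /* wait: image ready */
                          .pWaitDstStageMask = &wait_stage,
                          .commandBufferCount = 1,
                          .pCommandBuffers = &cmd,
                          .signalSemaphoreCount = 1,
                          .pSignalSemaphores = &render_done,   /* signal: work done */
                      };
                      vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
                  }
                  ```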

                  Originally posted by Myownfriend View Post
                  He commented on this as well.

                  "Yes, we could theoretically attach/consume implicit fences to buffers from userspace to mimic OSS driver behavior to some extent. I followed Jason's patch series to this effect, and we do some of this for synchronization in PRIME situations now, as it's vastly simpler when the only implicit synchronization boundary we care about is SwapBuffers() for consumption on a 3rd-party driver. It gets much harder to achieve correct implicit sync with arbitrary modern API usage (direct read/write of pixels in images in compute shaders, sparse APIs, etc.), and this has been a big pain point with Vulkan implementations in OSS drivers from my understanding."

                  I'm not gonna act like I know better than him about implicit and explicit fencing either. I'm not well-read on the subject at all. He does say something else though, so I'll continue.

                  "I don't know what the current state is, but I know it limited the featureset exposed in OSS Vulkan drivers when they first came out. I assume it's technically achievable, but I'd prefer not to add that complexity to our driver stack, nor do something like downgrade functionality of dmabuf-based surfaces to account for such limitations just for something everyone agrees is outdated in a world where rendering isn't as simple as read-only textures and write-only render buffers occupying the entirety of a single kernel-side allocation in a given command buffer, which was a pretty accurate mental model of GPU usage when implicit sync was developed."

                  To me, this reads as him saying that he feels implicit synchronization is holding back the functionality of dmabuf-based surfaces, not that dmabuf is preventing explicit synchronization.
                  You have the wrong summary; he said it quite well here:

                  1. Explicit sync everywhere. Of course, it would help if our driver supported sync FD first. Working on that one. Then, X devs would need to relent and let the present extension support sync FD or similar. I'm not clear why there has been so much pushback there. Present was always designed to support explicit sync, it just unfortunately predated sync FD by a few months. glamor would also need to use explicit sync for internal rendering. I believe it has some code for this, but it uses shmfence IIRC, which in turn relies on implicit sync.
                  2. Ensure all work is finished before submitting frames from GL/Vulkan/etc. to Xwayland or wayland. Without hacky/protocol-breaking changes (Or the shmfence thing Erik mentions, though it's specific to Xwayland) to defer sending the updates, this means doing a hard CPU stall until the GPU has idled, which is what I mean by tanking perf. We've measured ~30% perf drops for one game using this solution, but impact could vary from 0-50% depending on the workload. Also, this solution alone doesn't fix glamor rendering in X, nor any composition rendering the Wayland compositor does.
                  3. Implement implicit sync in the NV kernel driver. This would also have unacceptable perf impact, though we haven't measured it explicitly in a long time. Regardless, it's essentially at odds with our software architecture, and I don't view it as a forward-looking solution.
                  In other words, NVidia could support implicit sync now, but because it's a massive hack/workaround, the performance penalty would be massive.
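
                  To put option 2 in concrete terms, the "hard CPU stall" amounts to something like the following sketch (my illustration of the idea, not NVidia's actual code):

                  ```c
                  /* Sketch: stall the CPU until the GPU is idle so the buffer
                   * is guaranteed finished before the compositor sees it.
                   * This is exactly the stall behind the ~30% perf drop
                   * James Jones mentions. */
                  #include <EGL/egl.h>
                  #include <GLES2/gl2.h>

                  void swap_with_cpu_stall(EGLDisplay dpy, EGLSurface surface)
                  {
                      /* Block this thread until every submitted GL command
                       * has completed. */
                      glFinish();
                      /* Hand over the frame only once the buffer is already
                       * idle, which is why this "works" without any fence
                       * being communicated. */
                      eglSwapBuffers(dpy, surface);
                  }
                  ```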

                  And one thing to note is that this is NOW. Half a decade ago (or even more), NVidia's driver was the same in design (built only for explicit sync); however, the Linux graphics stack community hadn't even gotten to the point of contemplating explicit sync, and scoffed at being told by NVidia, essentially, "you are using an inferior design from the pre-2000s era; maybe you should consider changing it?"

                  (Note that I am not saying EGLStreams was perfect; it evidently didn't support the full range of features necessary for desktop compositing, but what it did support properly, it supported at much better performance because of its explicit synchronization model.)
                  Last edited by mdedetrich; 21 April 2022, 06:39 PM.



                  • #99
                    Originally posted by ezst036 View Post

                    I had an idea in general about some of the limitations, but your post and additional links helped clarify some things. This isn't an area I have studied intensely.

                    Knowing this, I think I probably do agree that, long term, Mesa developers will likely have to re-do this in the future. But the pure technical merits, as explained, took a back seat. And when it does get fixed, it will have to see leadership from an entity that has earned the community's trust, such as Valve.

                    Regardless of that, Nvidia has locked themselves out. There's a trust gap here, and it's as big as the moon. The open source community has for years pleaded with Nvidia to play nicer in a lot of areas, and those pleas have gone unheard. Past Phoronix articles are littered with them. When Linus gave his famous Nvidia speech, I think there were multiple reasons for it. I don't think the lack of an OSS driver was even on the list, but it may have been. They have fostered huge amounts of bad will.

                    And even if the technical facts prove them correct in the end, it only causes additional heartburn that Nvidia chose not to take a conciliatory attitude from day one and say "ok, let's do both then so we don't hold you up." We would've all seen Wayland a decade ahead of where it is today. Heck, it might very well be (in an alternate universe) that EGLStreams would've become the norm in 2022, after the inferior way had failed for a decade. Instead, we won't see EGLStreams (or whatever might be the better/more correct way forward) adopted until 2032. [I'm only using EGLStreams here as an example, for conversational purposes.]

                    Had Nvidia chosen a conciliatory tone a decade ago, they could've come back more gracefully and said "see, told you so. It's been a decade. This isn't technically sound. Can we please implement EGLStreams now that play time is over?"

                    Nvidia does receive undue hate at times; let's remember they were gracious enough to create a well-supported video driver long before AMD or Intel went down the OSS driver route, back when Linux usage was a quarter of a quarter of a percent. However, it's impossible to miss that this stunt of holding up GBM really only accomplished one thing for Nvidia, and that was additional scorn.

                    It is going to take many years for Nvidia to fix this PR quagmire.
                    I am only going to make the following remark: NVidia cares about their brand, which also means their performance, more than anything else. This means that compelling them to work on a solution that is technically inferior and will harm their image is not going to work, regardless of whether it's open source or not. It's highly arrogant of the Linux community to expect to be able to compel NVidia to "work with them" when that same community is stubbornly refusing changes.

                    If you actually go through the mailing lists, you will see the intention is very clear; the main point of contention was that the Linux side was telling NVidia "if you want to work with us, you have to do things our way and use GBM and other technologies that were inferior at the time." The Linux community was not at all open, back then, to changing the design of its stack. So don't be surprised that, in NVidia's position, they wouldn't take the Linux community seriously or in good faith; and they didn't.

                    I would highly recommend you read this article https://lwn.net/Articles/814587/ ; it's very clear that we got to this place because the Linux graphics stack community was very stubborn about sticking to implicit sync, and they are only changing now because their hand is being forced. This is also demonstrated in comments like this one: https://gitlab.freedesktop.org/xorg/...7#note_1273350.

                    Ultimately, you should be thanking NVidia: because of their persistence in not compromising on technically inferior solutions, the Linux graphics community has finally realized that it needs to change things, and NVidia was a primary driver of that (along with things like Vulkan existing).
                    Last edited by mdedetrich; 21 April 2022, 06:41 PM.



                    • As predictable as ever...

                      SDL makes Wayland the default, fanboys rejoice like they were actually involved in the project at all, like football fans thinking their team only won because they wore their lucky shirt that day or whatever. Then crow about how X is "dead" and Wayland is awesome etc etc.

                      SDL reverts to X because Wayland still isn't ready after 15 years, fanboys "defend" it saying it's only been 10 years, and besides, it's all the compositor's fault. Or Mesa's, or nvidia's, or a specific distro's, or literally *anything* except where the blame actually belongs: with the team that keeps failing to deliver something adequately functional, over and over again.

                      All the nuance and honesty of modern political arguments. Every. Goddamn. Time.
                      Last edited by arQon; 21 April 2022, 06:44 PM.

