SDL2 Reverts Its Wayland Preference - Goes Back To X11 Default


  • Myownfriend
    replied
    Originally posted by ezst036 View Post
    In the end, the brave David-esque Mesa developers stared down Goliath Huang and ultimately were triumphant. Nvidia folded and now supports GBM, though only partially. It would be myopic to claim that Nvidia waging a war with the developers, insisting on EGLStreams as their own personal preference when they don't even contribute on the open-source side anyway, wasn't going to have some sort of repercussions. It does have repercussions.
    Could you explain this? I assumed that their GBM backend must be incomplete or immature just because of how new the support is, but I don't know the details.



  • Myownfriend
    replied
    Originally posted by mdedetrich View Post
    While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is the serious technical limitations of both the Linux graphics ecosystem and Wayland itself.

    The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (this is mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.
    James Jones commented on explicit vs. implicit sync just last month, in an issue about XWayland presenting earlier frames and juddering on Nvidia hardware (something I've seen myself), and he addresses the effect of implicit sync on Mesa.

    "I'm well aware implicit sync is the semantic of the Linux graphics stack, as I've been pushing back on assuming that semantic for years, and at this point I think everyone agrees, all else being equal, explicit sync is preferable. The only major component of the graphics stack that doesn't support explicit sync is X. Hence, while I don't dispute that this is an NVIDIA driver issue, I still assert that X not supporting explicit sync is a valid X issue as well, and my opinion is that addressing that is better for end users than adopting implicit sync semantics in the NVIDIA driver. While OSS driver stacks have supported Wayland and GBM for years, the fact is our driver already requires a more or less bleeding-edge set of supporting OSS components to support these use cases, so it doesn't necessarily need to be burdened by backwards compatibility with a host of older implicit-sync-only userspace or kernel components. They already aren't going to work for other reasons."

    I don't feel it's correct to say that Nvidia didn't support Wayland because the Linux graphics stack uses implicit synchronization when James himself said that X is the only major component that requires it. If anything, that seems like a reason for Nvidia to be at the forefront of Wayland support.

    I also can't find anything saying that GBM and dma-buf don't support explicit synchronization either.
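
    As far as I can tell, the opposite holds: Mesa's EGL already exposes explicit fences that coexist with dma-buf, via the EGL_ANDROID_native_fence_sync extension. Here's a minimal sketch of exporting one (the helper name is mine; it assumes an initialized EGLDisplay, a current GLES context, and a driver that advertises the extension, which Mesa does):

    ```c
    /* Sketch, not any project's actual code: export an explicit fence fd
     * for GL work already submitted on the current context. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    int export_fence_fd(EGLDisplay dpy)  /* hypothetical helper */
    {
        PFNEGLCREATESYNCKHRPROC create_sync =
            (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
        PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fd =
            (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)
                eglGetProcAddress("eglDupNativeFenceFDANDROID");
        PFNEGLDESTROYSYNCKHRPROC destroy_sync =
            (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");

        /* The sync object signals when all GL commands issued so far on
         * the current context have completed on the GPU. */
        EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
        if (sync == EGL_NO_SYNC_KHR)
            return -1;

        glFlush(); /* make sure the fence is actually submitted */

        /* Duplicate the fence as a file descriptor: an explicit handle any
         * other process or driver can wait on, no implicit dma-buf fences
         * required. Returns -1 (EGL_NO_NATIVE_FENCE_FD_ANDROID) on failure. */
        int fd = dup_fd(dpy, sync);
        destroy_sync(dpy, sync);
        return fd;
    }
    ```

    The fd you get back is an ordinary sync_file, so nothing about GBM or dma-buf itself forces the implicit model.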

    Originally posted by mdedetrich View Post
    And we are now at the point where the Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that Nvidia had been saying for a while but was constantly ignored because they were "evil".

    I expand on this point here https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't take it seriously, and no one was going to listen to them anyway.
    He commented on this as well.

    "Yes, we could theoretically attach/consume implicit fences to buffers from userspace to mimic OSS driver behavior to some extent. I followed Jason's patch series to this effect, and we do some of this for synchronization in PRIME situations now, as it's vastly simpler when the only implicit synchronization boundary we care about is SwapBuffers() for consumption on a 3rd-party driver. It gets much harder to achieve correct implicit sync with arbitrary modern API usage (direct read/write of pixels in images in compute shaders, sparse APIs, etc.), and this has been a big pain point with Vulkan implementations in OSS drivers from my understanding."

    I'm not gonna act like I know better than him about implicit and explicit fencing either. I'm not well-read on the subject at all. He does say something else though, so I'll continue.

    "I don't know what the current state is, but I know it limited the featureset exposed in OSS Vulkan drivers when they first came out. I assume it's technically achievable, but I'd prefer not to add that complexity to our driver stack, nor do something like downgrade functionality of dmabuf-based surfaces to account for such limitations just for something everyone agrees is outdated in a world where rendering isn't as simple as read-only textures and write-only render buffers occupying the entirety of a single kernel-side allocation in a given command buffer, which was a pretty accurate mental model of GPU usage when implicit sync was developed."

    To me, this reads as him saying that he feels implicit synchronization is holding back the functionality of dmabuf-based surfaces, not that dmabuf is preventing explicit synchronization.
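
    To make "implicit" concrete: when a buffer is shared as a dma-buf, there is no fence anywhere in the userspace code; the kernel attaches ordering to the buffer behind your back. A rough sketch of that sharing path (helper name is mine; assumes an initialized gbm_device and an EGLDisplay with EGL_EXT_image_dma_buf_import; error handling elided):

    ```c
    /* Sketch of the implicit-sync sharing path, not any project's actual
     * code: allocate with GBM, export as a dma-buf, import into EGL.
     * Notice there is no fence object anywhere in this code. */
    #include <gbm.h>
    #include <drm_fourcc.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLImageKHR share_buffer(struct gbm_device *gbm, EGLDisplay dpy)
    {
        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_RENDERING);

        int dmabuf_fd = gbm_bo_get_fd(bo); /* producer side: export */

        /* Consumer side: import via EGL_EXT_image_dma_buf_import. */
        const EGLint attrs[] = {
            EGL_WIDTH,                     1920,
            EGL_HEIGHT,                    1080,
            EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_XRGB8888,
            EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)gbm_bo_get_stride(bo),
            EGL_NONE
        };
        PFNEGLCREATEIMAGEKHRPROC create_image =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        return create_image(dpy, EGL_NO_CONTEXT,
                            EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
    }
    ```

    Everything synchronization-related happens behind those two calls, which is exactly the behavior the quotes say Nvidia's driver would need workarounds to mimic.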



  • Myownfriend
    replied
    Originally posted by mdedetrich View Post
    I am aware of what Weston is, but it doesn't count because it was a joke of a reference implementation. When I am talking about an actual implementation, I mean something along the lines of Gnome/KDE running the Wayland protocol, not something like Weston, which is just a bare implementation of a protocol (and this is in light of the fact that the Wayland protocol at the time didn't even support basic functionality).
    Why should a reference implementation require having more than what's in the spec when its purpose is to test the spec? Not every reference implementation is supposed to strive to be the only implementation. The purpose of a reference implementation is to work as, get this, a reference: when Gnome, KDE, or Wlroots implement a protocol and aren't sure they're getting the right behavior, they can test it against Weston. If they get the same results, they've implemented it correctly.

    It makes no sense to say that it "doesn't count".
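
    That workflow is concrete, too: the same tiny client runs unchanged against Weston or any other compositor, so you can diff what each one advertises. A minimal sketch using libwayland-client (build with -lwayland-client):

    ```c
    /* A complete client: connect to whatever compositor owns
     * WAYLAND_DISPLAY and print every global it advertises. Run it under
     * Weston, then under your own compositor, and diff the output. */
    #include <stdio.h>
    #include <wayland-client.h>

    static void on_global(void *data, struct wl_registry *registry,
                          uint32_t name, const char *interface, uint32_t version)
    {
        printf("%-40s v%u\n", interface, version);
    }

    static void on_global_remove(void *data, struct wl_registry *registry,
                                 uint32_t name)
    {
    }

    static const struct wl_registry_listener listener = {
        .global        = on_global,
        .global_remove = on_global_remove,
    };

    int main(void)
    {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "no Wayland compositor found\n");
            return 1;
        }
        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &listener, NULL);
        wl_display_roundtrip(display); /* block until the initial globals arrive */
        wl_display_disconnect(display);
        return 0;
    }
    ```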

    Originally posted by mdedetrich View Post
    But you don't have to take my word for it; the fact that Wayland has already taken over a decade and it's still not ready is evidence of that. No software project in their right mind would adopt a protocol which is completely unproven (and by unproven I mean the example I gave above).
    You're showing your ass. Check the Wayland governance rules.



    "Protocols in the "xdg" and "wp" namespaces must have at least 3 open-source implementations (either 1 client + 2 servers, or 2 clients + 1 server) to be eligible for inclusion."

    And again, it's not an issue of Wayland not being ready; it's a matter of adoption. It's been just under 10 years since the core client and server protocols were stabilized, 6 since the first non-Weston compositors shipped with support for it, and less than 1 year since any of them have been usable day-to-day on hardware from the desktop GPU vendor with the largest market share: Nvidia.

    Of the handful of Wayland compositors that exist, only two supported the EGLStreams backend that was required to run on Nvidia hardware at all: Gnome's and KDE's. On KDE, the backend was written entirely by Nvidia themselves and was so buggy that it was dropped immediately after Nvidia started to support GBM. Hardware acceleration for XWayland applications was not available on either compositor because Nvidia refused to support DMA-BUF, a kernel feature.

    Nvidia's DMA-BUF support was only added in June 2021: 10 months ago.

    GBM support was added in October 2021: 6 months ago.
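
    For anyone wondering what "GBM support" actually entails: a Wayland compositor allocates its buffers through GBM and bootstraps EGL on top of it, roughly like the sketch below (not any project's actual code; assumes a DRM render node and EGL_KHR_platform_gbm; the device path is hypothetical and error handling is elided):

    ```c
    /* Sketch of the GBM bring-up a compositor does: open a DRM node,
     * wrap it in a gbm_device, and put an EGLDisplay on top. This is
     * the path Nvidia's 495 driver series had to start supporting. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        int drm_fd = open("/dev/dri/renderD128", O_RDWR); /* path varies */
        struct gbm_device *gbm = gbm_create_device(drm_fd);

        /* EGL_EXT_platform_base + EGL_KHR_platform_gbm */
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_display =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        EGLDisplay dpy = get_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        eglInitialize(dpy, NULL, NULL);

        /* Output buffers come from a gbm_surface rather than a window;
         * after rendering, gbm_surface_lock_front_buffer() hands the bo
         * to KMS for scanout. */
        struct gbm_surface *surf = gbm_surface_create(
            gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
            GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        gbm_surface_destroy(surf);
        eglTerminate(dpy);
        gbm_device_destroy(gbm);
        close(drm_fd);
        return 0;
    }
    ```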



  • mdedetrich
    replied
    Originally posted by ezst036 View Post

    Oh, now I understand. You are Artem S. Tashkinov. Right? I've run into your authorship online at times. You are definitely colorful and passionate in what you believe. I don't think you and I have had a discussion before. Nice to make your acquaintance.

    Unfortunately, not correct in the details though. Today is not a new day where everything just started right now in this instant. All of the things that bother you about Wayland could have been resolved four or more years ago had Nvidia not tried to strong-arm the Wayland devs with EGLStreams. This problem is overwhelmingly a timeline problem. SDL devs can't code for an Nvidia thing that isn't supported in conjunction with a Wayland thing that isn't supported because of the Nvidia dependency. These are just simple coding problems; all developers run into dependency woes at some point or another, right?

    https://www.phoronix.com/scan.php?pa...ice-Memory-API

    In the end, the brave David-esque Mesa developers stared down Goliath Huang and ultimately were triumphant. Nvidia folded and now supports GBM, though only partially. It would be myopic to claim that Nvidia waging a war with the developers, insisting on EGLStreams as their own personal preference when they don't even contribute on the open-source side anyway, wasn't going to have some sort of repercussions. It does have repercussions.

    https://www.phoronix.com/scan.php?pa...sa-Backend-Alt

    I'm only kidding with you about the David and Goliath thing though. :-) In all seriousness, you should note that in both the 2016 article and the 2021 article, it is widely acknowledged that the GBM/EGLStreams fight was (and is) detrimental on the Wayland front.



    Now, we can all be nice and civil and have a decent conversation around here, right?
    While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is the serious technical limitations of both the Linux graphics ecosystem and Wayland itself.

    The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (this is mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.
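
    For readers who have never seen the API: EGLStreams models the producer/consumer handoff as an opaque stream object whose fences the driver manages internally, rather than attaching them to a shared buffer. A rough sketch of the consumer side (helper name is mine; entry points are from EGL_KHR_stream and EGL_KHR_stream_consumer_gltexture; assumes an initialized display and a current GLES context, error handling elided):

    ```c
    /* Sketch of an EGLStreams consumer, not NVIDIA's actual compositor
     * code: the stream is a driver-managed pipe, and each acquire
     * latches the producer's latest frame, fences and all, into an
     * external texture. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    EGLStreamKHR make_consumer_stream(EGLDisplay dpy)
    {
        PFNEGLCREATESTREAMKHRPROC create_stream =
            (PFNEGLCREATESTREAMKHRPROC)eglGetProcAddress("eglCreateStreamKHR");
        PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC attach_consumer =
            (PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC)
                eglGetProcAddress("eglStreamConsumerGLTextureExternalKHR");

        EGLStreamKHR stream = create_stream(dpy, NULL);

        /* The texture currently bound to GL_TEXTURE_EXTERNAL_OES becomes
         * the stream's consumer endpoint. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        attach_consumer(dpy, stream);

        /* Frames are then pulled with eglStreamConsumerAcquireKHR();
         * the consumer never sees the buffer or its fences directly. */
        return stream;
    }
    ```

    That opacity is both the appeal of the design and the reason compositor developers pushed back on it.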

    And we are now at the point where the Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that Nvidia had been saying for a while but was constantly ignored because they were "evil".

    I expand on this point here https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't take it seriously, and no one was going to listen to them anyway.



  • mdedetrich
    replied
    Originally posted by Myownfriend View Post

    That's not true. Wayland was and always has been developed hand-in-hand with an implementation: Weston. Wayland's and Weston's first releases came out on the same day, and that was before the client and server APIs stabilized.
    I am aware of what Weston is, but it doesn't count because it was a joke of a reference implementation. When I am talking about an actual implementation, I mean something along the lines of Gnome/KDE running the Wayland protocol, not something like Weston, which is just a bare implementation of a protocol (and this is in light of the fact that the Wayland protocol at the time didn't even support basic functionality).

    But you don't have to take my word for it; the fact that Wayland has already taken over a decade and it's still not ready is evidence of that. No software project in their right mind would adopt a protocol which is completely unproven (and by unproven I mean the example I gave above).



  • ezst036
    replied
    Originally posted by birdie View Post
    This is a fucking lie. Wayland is held back by its asinine design decisions and asinine implementations.
    Oh, now I understand. You are Artem S. Tashkinov. Right? I've run into your authorship online at times. You are definitely colorful and passionate in what you believe. I don't think you and I have had a discussion before. Nice to make your acquaintance.

    Unfortunately, not correct in the details though. Today is not a new day where everything just started right now in this instant. All of the things that bother you about Wayland could have been resolved four or more years ago had Nvidia not tried to strong-arm the Wayland devs with EGLStreams. This problem is overwhelmingly a timeline problem. SDL devs can't code for an Nvidia thing that isn't supported in conjunction with a Wayland thing that isn't supported because of the Nvidia dependency. These are just simple coding problems; all developers run into dependency woes at some point or another, right?

    https://www.phoronix.com/scan.php?pa...ice-Memory-API
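
    (As a practical aside on the SDL point: even with the X11 default restored, a user or app can still opt into the Wayland backend. A sketch, assuming the standard SDL_VIDEODRIVER environment override; in my understanding, if the Wayland driver can't initialize, SDL_Init() fails here rather than silently falling back, so check the result:)

    ```c
    /* Sketch: request SDL2's Wayland backend despite the X11 default. */
    #include <stdlib.h>
    #include <SDL2/SDL.h>

    int main(void)
    {
        /* Same effect as launching with: SDL_VIDEODRIVER=wayland ./app */
        setenv("SDL_VIDEODRIVER", "wayland", 1);

        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        SDL_Log("using video driver: %s", SDL_GetCurrentVideoDriver());
        SDL_Quit();
        return 0;
    }
    ```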


    In the end, the brave David-esque Mesa developers stared down Goliath Huang and ultimately were triumphant. Nvidia folded and now supports GBM, though only partially. It would be myopic to claim that Nvidia waging a war with the developers, insisting on EGLStreams as their own personal preference when they don't even contribute on the open-source side anyway, wasn't going to have some sort of repercussions. It does have repercussions.

    https://www.phoronix.com/scan.php?pa...sa-Backend-Alt


    I'm only kidding with you about the David and Goliath thing though. :-) In all seriousness, you should note that in both the 2016 article and the 2021 article, it is widely acknowledged that the GBM/EGLStreams fight was (and is) detrimental on the Wayland front.

    Originally posted by birdie View Post
    Yeah, he strikes straight into the brains of Linux users who suddenly lose dozens of IQ points when they see the word NVIDIA.
    Now, we can all be nice and civil and have a decent conversation around here, right?



  • tildearrow
    replied
    Oh wait... forgot about one thing.

    Originally posted by birdie View Post
    Where's KWin using wlroots?
    KWinFT

    https://www.phoronix.com/scan.php?pa...OOTS-Continues

    Last edited by tildearrow; 20 April 2022, 06:56 PM.



  • Myownfriend
    replied
    Originally posted by Weasel
    Imagine blaming people for ignoring you and calling them names when you're the quote war asshat who quotes every single word and nobody wants to bother with.

    Your problem.
    This means nothing, lol. Yeah dude, imagine backing up what you say by citing stuff. Couldn't be you. So cringe 😛



  • Myownfriend
    replied
    Originally posted by mdedetrich View Post

    The core problem is the very fact that Wayland was released as a protocol with no real implementation backing it. Do note that this process is historically rare and not that successful. Typically an interface/protocol is derived from a known, well-working implementation and not the other way around (because a protocol with no real implementation is just pure theory).
    That's not true. Wayland was and always has been developed hand-in-hand with an implementation: Weston. Wayland's and Weston's first releases came out on the same day, and that was before the client and server APIs stabilized.

    Fun fact: "The X11 protocol was designed with little idea of how it would be implemented and was fully specified before the implementation began."

    https://www.google.com/url?q=https:/...w2zep7lhFMYq-X

    That's on page 3, though the full quote is:

    "The X11 protocol was designed with little idea of how it would be implemented and was fully specified before the implementation began. It is of course true, however, that if we did not un- derstand how to implement something in a reasonable amount of time and effort (since timeli- ness was critical) we did not add it to the design; for example, non-rectangular windows have been added as an extension since the original release. We did not understand at the time how easy they would be to implement, and therefore explicitly rejected them during the design meet- ing. The specification was changed during alpha and beta test as we learned from the imple- mentation; often errors in the specification or design flaws were uncovered as the implementa- tion proceeded. We are very skeptical of systems that have never been implemented before widespread adoption; similarly, systems that have not been carefully specified before imple- mentation begins are also suspect."
    Last edited by Myownfriend; 20 April 2022, 12:06 PM.



  • Sevard
    replied
    Originally posted by birdie View Post
    I can use almost any WM under any DE under Xorg.

    XFWM4 (from XFCE) running KDE? No problems.
    KWin running XFCE? No problems.
    Mutter running KDE? No problems.
    All of them have some serious issues even when you use XFWM4 with XFCE, KWin with KDE and Mutter with Gnome, so it's not true.
    I tried to run KDE with Mutter some time ago. It was by accident (I hadn't noticed that I had switched it), and no, it doesn't work at all.
    But there are more serious issues with all WMs:
    - Multidisplay support on Mutter in X11 has been broken forever. The only X11 window manager where it kind of works for me is KWin, but there are still issues when you connect and disconnect displays: sometimes you need to turn the display off and on with xrandr. It works perfectly fine on the Mutter and KWin Wayland display servers, though.
    - The situation is even funnier with fractional scaling. It doesn't work in X11 at all. You can use xrandr for it, and then it sometimes works, but with many issues, and performance is rather bad. On the other hand, it works flawlessly with the KWin and Mutter Wayland compositors.
    Originally posted by birdie View Post
    None of them implement the display protocol, none of them reimplement screen settings, keyboard/mouse, locale, systray, drag and drop, screen sharing and casting, and a metric ton of features the Xorg server provides out of the box.
    Some of those features shouldn't be implemented by the display server at all. They're in X11 because many years ago somebody put them there and nobody cared to fix it. But thanks for the reminder: screen sharing in X11 is also broken. It works on Wayland, unless you use Nvidia drivers.
    Originally posted by birdie View Post
    I've got one fucking configuration for all Xorg/X11 DEs.
    I have none for Wayland. Don't need it - default config just works.
    Originally posted by birdie View Post
    Each fucking compositor under Wayland has its own configuration file and format.
    Don't really care about this - I've never needed to modify them manually.

