SDL2 Reverts Its Wayland Preference - Goes Back To X11 Default
Originally posted by Myownfriend: The story I've always heard is that the OSS community wanted Nvidia to be part of the conversation when it came to determining a buffer-sharing API, and they didn't show. The OSS community decided on GBM, Nvidia came in afterwards to express their issues, then came back with EGLStreams, which was made by them and only them. They even apologized for how they handled the EGLStreams proposal. Nvidia had no say in GBM, and the OSSC had no say in EGLStreams.
Originally posted by Myownfriend: "They even apologized for how they handled the EGLStreams proposal."
There are probably better links to the old mailing-list archives than this one, however:
It does appear that if we follow the timeline as things actually happened, GBM and associated development was well underway and Nvidia popped in and said SURPRISE! For the life of me, I can't understand why that would be considered controversial. /sarc
One thing I could not find was whether the OSS community wanted to have Nvidia as part of the conversation prior to the 2014 EGLStreams debacle. There's what I believe, and there's what I can demonstrate. However, looking at the discussion in that devel archive, the conversations read as though there was already some existing familiarity between the two camps.
Originally posted by Myownfriend: I also found this PDF from Nvidia about explicit synchronization which echoes that statement about DMA-buf, but it doesn't mention GBM either.
In a presentation in 2014, Nvidia said that GBM is fine, that they can use it, and pointed out that they could just use the Mesa implementation just like they wound up doing 7 years later. They just thought EGLStreams was better.
Clearly they have some issues with GBM though because they were working on something that could replace or kind of extend GBM but they haven't worked on it in 5 years.
NVIDIA To Issue An Update On Their Support Of Mir & Wayland - 29 September 2014
NVIDIA 364.12 Arrives With Wayland & Mir Support - 21 March 2016
NVIDIA Publishes Patches For Its Driver To Work With Wayland's Weston - 21 March 2016
NVIDIA Continues Discussing Their Controversial Wayland Plans With Developers - 3 April 2016
Streams vs. GBM: The Fight Continues Over NVIDIA's Proposed Wayland Route - 12 May 2016
GNOME Lands Mainline NVIDIA Wayland Support Using EGLStreams - 17 November 2016
X.Org Server 1.20 RC4 Released, EGLStreams For XWayland Might Still Land - 11 April 2018
In reality, it looks like at the end of the day it was Fedora who forced Nvidia's hand into supporting GBM. When X.Org was officially pushed off into the realm of abandonware, it's clear what that meant: Nvidia's driver went along with it. Nvidia can't have its own driver being abandonware. They now have no choice; they have to support GBM. Fedora did this. Red Hat triumphed and Nvidia lost.
It's Time To Admit It: The X.Org Server Is Abandonware - 25 October 2020
Fedora Looks To Provide Standalone XWayland Package Tracking X.Org Server Git - 30 November 2020
Within a year, we had Nvidia GBM support in the form we should have had back in 2014. That's 8 years lost. 8 years of this!
NVIDIA 495 Linux Beta Driver Released With GBM Support - 14 October 2021
There is probably much that I could not find, meaning much that is missing. But in closing, I'll say it again: Nvidia deserves blame because they did this. Let's keep in mind, though, that users like mdedetrich and birdie are also correct. Wayland is still inferior to X.Org. Whatever the reasons, SDL2 just reverted back to X11. That alone is proof enough.
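Since the thread is about which backend SDL2 picks by default, a minimal sketch of how an application (or a user, via the environment) can still opt into the Wayland backend may help. It assumes an SDL2 build with Wayland support; SDL_VIDEODRIVER is the long-standing selection mechanism, and everything else here is ordinary SDL usage.

```c
/* Minimal sketch: requesting SDL2's Wayland backend even though X11 is the
 * default again. Assumes an SDL2 build with Wayland support; equivalent to
 * launching the program as `SDL_VIDEODRIVER=wayland ./app`. */
#include <SDL.h>
#include <stdio.h>

int main(void)
{
    /* Must be set before SDL_Init() so the video subsystem picks it up. */
    SDL_setenv("SDL_VIDEODRIVER", "wayland", 1);

    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Reports "wayland" or "x11" depending on what was actually loaded. */
    printf("video driver in use: %s\n", SDL_GetCurrentVideoDriver());

    SDL_Quit();
    return 0;
}
```

If the Wayland driver can't be initialized, SDL_Init() fails here rather than quietly falling back to X11, which is part of why the shipped default matters so much.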
Originally posted by arQon: As predictable as ever... SDL makes Wayland the default, fanboys rejoice like they were actually involved in the project at all, like football fans thinking their team only won because they wore their lucky shirt that day or whatever. Then crow about how X is "dead" and Wayland is awesome etc etc.
Originally posted by arQon: SDL reverts to X because Wayland still isn't ready after 15 years, fanboys "defend" it saying it's only been 10 years, and besides, it's all the compositor's fault. Or Mesa's, or nvidia's, or a specific distro, or literally *anything* except where the blame actually belongs: with the team that keeps failing to deliver something adequately functional, over and over again.
But you know what, believe what you want, QAnon.
Originally posted by ezst036: However, it's impossible to miss that this stunt of holding up GBM really accomplished only one thing for Nvidia, and that was additional scorn.
Originally posted by mdedetrich: That is actually the main reason why NVidia pushed EGLStreams; you can go back and read the mailing lists. The main fundamental difference with EGLStreams is that it has explicit synchronization baked into the API, and that's why it wasn't really compatible with GBM at the time (precisely because GBM had a different synchronization model, i.e. it was implicit).
Read https://lwn.net/Articles/814587/
It's a combination: either it isn't supported, or it's supported with hacks/workarounds that often hurt performance (although in some cases the performance hit isn't an issue). There are, for example, issues with Vulkan integration, which has the same fundamental problem (Vulkan's API is explicit-sync only).
I also found this PDF from Nvidia about explicit synchronization which echoes that statement about DMA-buf, but it doesn't mention GBM either.
In a presentation in 2014, Nvidia said that GBM is fine, that they can use it, and pointed out that they could just use the Mesa implementation just like they wound up doing 7 years later. They just thought EGLStreams was better.
Clearly they have some issues with GBM though because they were working on something that could replace or kind of extend GBM but they haven't worked on it in 5 years.
Originally posted by mdedetrich: You have the wrong summary, he said it quite well here
Originally posted by mdedetrich: In other words, NVidia could support implicit sync now, but because it's a massive hack/workaround the performance penalty would be massive.
And one thing to note is that this is NOW. Back half a decade ago (or even more), NVidia's driver was the same in design (designed only for explicit sync), but the Linux community behind the graphics stack hadn't even gotten to the point of contemplating support for explicit sync, and scoffed at essentially being told by NVidia, "you are using an inferior design from the pre-2000s era, maybe you should consider changing it?"
"Yes, it certainly sounds like a sync issue. Our driver has no way to implement implicit sync, so it doesn't. For the most part, our kernel driver is blissfully unaware of what work is in flight and which buffers that work uses. It doesn't care beyond ensuring clients don't interfere with buffers they haven't allocated themselves or been granted access to, which the HW itself takes care of for the most part. We've been evaluating various ways to work around this issue for Xwayland specifically without tanking perf, but none of them have panned out so far. They either break X protocol guarantees, or don't work. Regardless, these mechanisms would only work for GL/Vulkan-based applications. Native X rendering in glamor itself still wouldn't sync properly unless using the EGLStream backend, where EGLStream handles synchronization internally from my understanding."
Originally posted by mdedetrich: I am only going to make the following remark, which is that NVidia cares about their brand, which also means their performance, more than anything else. This means that compelling them to work on a solution that is technically inferior and will negatively harm their image is not going to work, regardless of whether it's open source or not. It's highly arrogant for the Linux community to expect to be able to compel NVidia to "work with them" when the same community is stubbornly refusing changes.
If you actually go through the mailing lists you will see the intention is very clear: the main point of contention was that the Linux side was telling NVidia, "if you want to work with us, you have to do things our way and use GBM and other technologies that were inferior at the time"; the Linux community was not at all open back then to changing the design of its stack. So don't be surprised that, in NVidia's position, they wouldn't take the Linux community seriously or in good faith, and they didn't.
I would highly recommend you read this article: https://lwn.net/Articles/814587/ ; it's very clear that we got to this place because the Linux graphics stack community was very stubborn about sticking to implicit sync, and they are only changing now because their hand is being forced. This is also demonstrated in comments like this one: https://gitlab.freedesktop.org/xorg/...7#note_1273350.
I don't think a single comment from Michel Dänzer is proof that Nvidia isn't welcome to help make the transition to explicit sync.
Originally posted by mdedetrich: Ultimately, in the end, you should be thanking NVidia: because of their persistence in not compromising on technically inferior solutions, the Linux graphics community has finally realized that they need to change things, and NVidia was a primary driver of that (along with things like Vulkan existing).
But just to be extra clear: I do think the stack should go explicit-sync. That doesn't change the fact that Nvidia could have been more collaborative, and they definitely hindered Wayland adoption a lot.
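To make "explicit synchronization" a bit more concrete than the quotes above, here is a hedged sketch of the producer side using the EGL_KHR_fence_sync and EGL_ANDROID_native_fence_sync extensions: after queuing its rendering, the producer exports a fence file descriptor and hands it to the consumer along with the buffer, instead of relying on fences the kernel attaches implicitly to a dma-buf. The helper name is made up and the buffer-passing itself is omitted.

```c
/* Hedged sketch of explicit synchronization on the producer side. Assumes an
 * EGL driver exposing EGL_KHR_fence_sync and EGL_ANDROID_native_fence_sync;
 * error handling and the actual buffer-sharing protocol are omitted. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

/* Illustrative helper: returns a sync-file fd the consumer can wait on,
 * or -1 on failure. */
int export_render_fence(EGLDisplay dpy)
{
    PFNEGLCREATESYNCKHRPROC createSync =
        (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
    PFNEGLDESTROYSYNCKHRPROC destroySync =
        (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");
    PFNEGLDUPNATIVEFENCEFDANDROIDPROC dupFenceFd =
        (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)eglGetProcAddress("eglDupNativeFenceFDANDROID");
    if (!createSync || !destroySync || !dupFenceFd)
        return -1;

    /* Insert a fence after the producer's rendering commands... */
    EGLSyncKHR sync = createSync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
    if (sync == EGL_NO_SYNC_KHR)
        return -1;

    /* ...flush so the fence is actually submitted to the GPU... */
    glFlush();

    /* ...and extract a file descriptor representing it. The fd keeps the
     * fence alive, so the EGL sync object can be destroyed right away. */
    int fence_fd = dupFenceFd(dpy, sync);
    destroySync(dpy, sync);

    /* Pass fence_fd to the consumer together with the shared buffer, rather
     * than relying on implicit fences attached to the dma-buf. */
    return fence_fd;
}
```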
As predictable as ever...
SDL makes Wayland the default, fanboys rejoice like they were actually involved in the project at all, like football fans thinking their team only won because they wore their lucky shirt that day or whatever. Then crow about how X is "dead" and Wayland is awesome etc etc.
SDL reverts to X because Wayland still isn't ready after 15 years, fanboys "defend" it saying it's only been 10 years, and besides, it's all the compositor's fault. Or Mesa's, or nvidia's, or a specific distro, or literally *anything* except where the blame actually belongs: with the team that keeps failing to deliver something adequately functional, over and over again.
All the nuance and honesty of modern political arguments. Every. Goddamn. Time.
Originally posted by ezst036:
I had an idea in general about some of the limitations, but your post and additional links helped clarify some things. This isn't an area I have studied intensely.
Knowing this, I think I probably do agree that, long term, Mesa developers will likely have to re-do this in the future. But the pure technical merits, as explained, took a back seat. And when it does get fixed, it will have to come under the leadership of an entity that has earned trust, such as Valve.
Regardless of that, Nvidia has locked themselves out. There's a trust gap here, and it's as big as the moon. The open-source community has for years pleaded with Nvidia to play nicer in a lot of areas, and those pleas have gone unheard. Past Phoronix articles are littered with them. When Linus gave his famous Nvidia speech, I think there were multiple reasons for it. I don't think the lack of an OSS driver was even on the list, but it may have been. They have fostered huge amounts of bad will.
And even if the technical facts prove them correct in the end, it only causes additional heartburn that Nvidia chose not to take a conciliatory attitude from day one and say, "OK, let's do both then, so we don't hold you up." We would all have seen Wayland a decade ahead of where it is today. Heck, it might very well be (in an alternate universe) that EGLStreams would have become the norm in 2022, after watching the inferior way fail for a decade. Instead, we won't see EGLStreams (or whatever might be the better/more correct way forward) adopted until 2032. [I'm only using EGLStreams here as an example, for conversational purposes.]
Had Nvidia chosen a conciliatory tone a decade ago, they could've come back more gracefully and said "see, told you so. It's been a decade. This isn't technically sound. Can we please implement EGLStreams now that play time is over?"
Nvidia does receive undue hate at times; let's remember they were gracious enough to create a well-supported video driver long before AMD or Intel went down the OSS driver route, back when Linux usage was a quarter of a quarter of a percent. However, it's impossible to miss that this stunt of holding up GBM really accomplished only one thing for Nvidia, and that was additional scorn.
It is going to take many years for Nvidia to fix this PR quagmire.
If you actually go through the mailing lists you will see the intention is very clear: the main point of contention was that the Linux side was telling NVidia, "if you want to work with us, you have to do things our way and use GBM and other technologies that were inferior at the time"; the Linux community was not at all open back then to changing the design of its stack. So don't be surprised that, in NVidia's position, they wouldn't take the Linux community seriously or in good faith, and they didn't.
I would highly recommend you read this article: https://lwn.net/Articles/814587/ ; it's very clear that we got to this place because the Linux graphics stack community was very stubborn about sticking to implicit sync, and they are only changing now because their hand is being forced. This is also demonstrated in comments like this one: https://gitlab.freedesktop.org/xorg/...7#note_1273350.
Ultimately, in the end, you should be thanking NVidia: because of their persistence in not compromising on technically inferior solutions, the Linux graphics community has finally realized that they need to change things, and NVidia was a primary driver of that (along with things like Vulkan existing).
Originally posted by Myownfriend: I don't feel it's correct to say that Nvidia didn't support Wayland because of the Linux graphics stack using implicit synchronization when James himself said that X is the only major component that requires it. That feels like it would be a reason for Nvidia to be at the forefront of Wayland support.
Originally posted by Myownfriend: I also can't find anything saying that GBM and dma-buf don't support explicit synchronization either.
Read https://lwn.net/Articles/814587/
It's a combination: either it isn't supported, or it's supported with hacks/workarounds that often hurt performance (although in some cases the performance hit isn't an issue). There are, for example, issues with Vulkan integration, which has the same fundamental problem (Vulkan's API is explicit-sync only).
Originally posted by Myownfriend: He commented on this as well.
"Yes, we could theoretically attach/consume implicit fences to buffers from userspace to mimic OSS driver behavior to some extent. I followed Jason's patch series to this effect, and we do some of this for synchronization in PRIME situations now, as it's vastly simpler when the only implicit synchronization boundary we care about is SwapBuffers() for consumption on a 3rd-party driver. It gets much harder to achieve correct implicit sync with arbitrary modern API usage (direct read/write of pixels in images in compute shaders, sparse APIs, etc.), and this has been a big pain point with Vulkan implementations in OSS drivers from my understanding."
I'm not gonna act like I know better than him about implicit and explicit fencing either. I'm not well-read on the subject at all. He does say something else though, so I'll continue.
"I don't know what the current state is, but I know it limited the featureset exposed in OSS Vulkan drivers when they first came out. I assume it's technically achievable, but I'd prefer not to add that complexity to our driver stack, nor do something like downgrade functionality of dmabuf-based surfaces to account for such limitations just for something everyone agrees is outdated in a world where rendering isn't as simple as read-only textures and write-only render buffers occupying the entirety of a single kernel-side allocation in a given command buffer, which was a pretty accurate mental model of GPU usage when implicit sync was developed."
To me, this reads as him saying that he feels implicit synchronization is holding back the functionality of dmabuf-based surfaces, not that dmabuf is preventing explicit synchronization.
- Explicit sync everywhere. Of course, it would help if our driver supported sync FD first. Working on that one. Then, X devs would need to relent and let the present extension support sync FD or similar. I'm not clear why there has been so much pushback there. Present was always designed to support explicit sync, it just unfortunately predated sync FD by a few months. glamor would also need to use explicit sync for internal rendering. I believe it has some code for this, but it uses shmfence IIRC, which in turn relies on implicit sync.
- Ensure all work is finished before submitting frames from GL/Vulkan/etc. to Xwayland or wayland. Without hacky/protocol-breaking changes (Or the shmfence thing Erik mentions, though it's specific to Xwayland) to defer sending the updates, this means doing a hard CPU stall until the GPU has idled, which is what I mean by tanking perf. We've measured ~30% perf drops for one game using this solution, but impact could vary from 0-50% depending on the workload. Also, this solution alone doesn't fix glamor rendering in X, nor any composition rendering the Wayland compositor does.
- Implement implicit sync in the NV kernel driver. This would also have unacceptable perf impact, though we haven't measured it explicitly in a long time. Regardless, it's essentially at odds with our software architecture, and I don't view it as a forward-looking solution.
And one thing to note is that this is NOW. Back half a decade ago (or even more), NVidia's driver was the same in design (designed only for explicit sync), but the Linux community behind the graphics stack hadn't even gotten to the point of contemplating support for explicit sync, and scoffed at essentially being told by NVidia, "you are using an inferior design from the pre-2000s era, maybe you should consider changing it?"
(Note that I am not saying that EGLStreams was perfect; it evidently didn't support the full range of features that was necessary for desktop compositing, but what it did support properly it did at much better performance because of its explicit synchronization model.)
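Since the quoted discussion keeps coming back to "sync FD" support and the fact that Vulkan's API is explicit-sync only, a hedged Vulkan sketch may be useful here. It is not taken from the thread or from any driver; the helper name is invented, and it assumes a device created with VK_KHR_external_semaphore_fd enabled and a semaphore created with SYNC_FD export requested via VkExportSemaphoreCreateInfo.

```c
/* Hedged sketch: exporting a Vulkan semaphore as a sync-file fd. Assumes a
 * VkDevice created with VK_KHR_external_semaphore_fd enabled and a semaphore
 * created with VkExportSemaphoreCreateInfo requesting SYNC_FD export. */
#include <vulkan/vulkan.h>

/* Illustrative helper: returns a sync-file fd, or -1 on failure. */
int export_render_done_fd(VkDevice device, VkSemaphore render_done)
{
    /* vkGetSemaphoreFdKHR is an extension entry point, so load it first. */
    PFN_vkGetSemaphoreFdKHR getSemaphoreFd =
        (PFN_vkGetSemaphoreFdKHR)vkGetDeviceProcAddr(device, "vkGetSemaphoreFdKHR");
    if (!getSemaphoreFd)
        return -1;

    /* With SYNC_FD handle types, the export is only valid after a signal
     * operation on render_done has been queued (e.g. in vkQueueSubmit),
     * because sync files carry a single point in time ("copy" transference). */
    VkSemaphoreGetFdInfoKHR info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR,
        .semaphore = render_done,
        .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
    };

    int fd = -1;
    if (getSemaphoreFd(device, &info, &fd) != VK_SUCCESS)
        return -1;

    /* The fd can now be handed to a compositor or another driver, which
     * waits on it explicitly instead of on implicit dma-buf fences. */
    return fd;
}
```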
Originally posted by Myownfriend: Could you explain this? I assumed that their GBM backend must be incomplete or immature just because of how new the support is but I don't know the details.
You're ahead of me, I had in mind your post (at #20) where you wrote:
The reason that OBS doesn't work properly on Wayland on Nvidia hardware is that the Nvidia drivers don't support EGL_NATIVE_RENDERABLE. The reason why GNOME night light doesn't work on Nvidia hardware in Wayland is that the driver doesn't support GAMMA_LUT, and according to Nvidia that's part of the reason why Gamescope has issues running on Nvidia hardware as well.
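For readers wondering what "the driver doesn't support GAMMA_LUT" refers to: GAMMA_LUT is a KMS property on a CRTC, and a compositor implementing night light can simply probe for it with libdrm. A hedged sketch follows; the device path is illustrative and error handling is minimal.

```c
/* Hedged sketch: checking whether a KMS driver exposes the GAMMA_LUT property
 * on its CRTCs, which is what Wayland compositors use for night light. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int crtc_has_gamma_lut(int fd, uint32_t crtc_id)
{
    int found = 0;
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
    if (!props)
        return 0;

    for (uint32_t i = 0; i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
        if (prop) {
            if (strcmp(prop->name, "GAMMA_LUT") == 0)
                found = 1;
            drmModeFreeProperty(prop);
        }
    }
    drmModeFreeObjectProperties(props);
    return found;
}

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* illustrative device node */
    if (fd < 0)
        return 1;

    /* Some properties are only exposed to atomic-aware clients. */
    drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

    drmModeRes *res = drmModeGetResources(fd);
    if (res) {
        for (int i = 0; i < res->count_crtcs; i++)
            printf("CRTC %u: GAMMA_LUT %s\n", res->crtcs[i],
                   crtc_has_gamma_lut(fd, res->crtcs[i]) ? "present" : "missing");
        drmModeFreeResources(res);
    }
    close(fd);
    return 0;
}
```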
Originally posted by mdedetrich: While technically correct, you are also only stating half the truth, which paints a different picture than reality. The main reason why NVidia didn't take Wayland seriously is because of serious technical limitations, both with the Linux graphics ecosystem and with Wayland.
The primary problem currently is the fact that almost everything in the Linux graphics stack uses implicit synchronization (mainly a result of dogmatically sticking to the "everything is a file with simple read/write buffers" mantra). The problem is that this is both extremely outdated and inefficient, and it is the main reason why NVidia came up with EGLStreams: EGLStreams uses explicit synchronization. Herein lies the dilemma: the concept of implicit synchronization is so outdated that NVidia's driver doesn't even support it without a lot of workarounds/hacks.
And we are now at the point where Mesa developers have their foot in their mouth, because they have painfully realized this point about implicit synchronization, something that Nvidia has been saying for a while, but they were constantly ignored because they were "evil".
I expand on this point here: https://www.phoronix.com/forums/foru...e2#post1317670 but the tl;dr is that the Linux graphics stack (including Wayland) is in some ways so technically outdated that NVidia didn't treat it seriously, and no one was going to listen to them anyway.
Knowing this, I think I probably do agree that, long term, Mesa developers will likely have to re-do this in the future. But the pure technical merits, as explained, took a back seat. And when it does get fixed, it will have to come under the leadership of an entity that has earned trust, such as Valve.
Regardless of that, Nvidia has locked themselves out. There's a trust gap here, and it's as big as the moon. The open-source community has for years pleaded with Nvidia to play nicer in a lot of areas, and those pleas have gone unheard. Past Phoronix articles are littered with them. When Linus gave his famous Nvidia speech, I think there were multiple reasons for it. I don't think the lack of an OSS driver was even on the list, but it may have been. They have fostered huge amounts of bad will.
And even if the technical facts prove them correct in the end, it only causes additional heartburn that Nvidia chose not to take a conciliatory attitude from day one and say, "OK, let's do both then, so we don't hold you up." We would all have seen Wayland a decade ahead of where it is today. Heck, it might very well be (in an alternate universe) that EGLStreams would have become the norm in 2022, after watching the inferior way fail for a decade. Instead, we won't see EGLStreams (or whatever might be the better/more correct way forward) adopted until 2032. [I'm only using EGLStreams here as an example, for conversational purposes.]
Had Nvidia chosen a conciliatory tone a decade ago, they could've come back more gracefully and said "see, told you so. It's been a decade. This isn't technically sound. Can we please implement EGLStreams now that play time is over?"
Nvidia does receive undue hate at times; let's remember they were gracious enough to create a well-supported video driver long before AMD or Intel went down the OSS driver route, back when Linux usage was a quarter of a quarter of a percent. However, it's impossible to miss that this stunt of holding up GBM really accomplished only one thing for Nvidia, and that was additional scorn.
It is going to take many years for Nvidia to fix this PR quagmire.
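To ground the GBM half of the argument quoted above, here is a hedged sketch of what "using GBM for buffer allocation" looks like in practice: allocate a buffer object on the render device and export it as a dma-buf fd that another process can import. The device path is illustrative and a real client does considerably more.

```c
/* Hedged sketch: allocating a buffer with GBM and exporting it as a dma-buf
 * fd that can be shared with a compositor. Under the implicit-sync model
 * discussed above, ordering against GPU work that touches this buffer is
 * tracked by the kernel on the dma-buf itself rather than passed around as
 * explicit fence fds. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <gbm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);   /* illustrative node */
    if (fd < 0)
        return 1;

    struct gbm_device *gbm = gbm_create_device(fd);
    if (!gbm) {
        close(fd);
        return 1;
    }

    struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_RENDERING | GBM_BO_USE_SCANOUT);
    if (bo) {
        /* The dma-buf fd refers to the same memory and can be sent to
         * another process (e.g. over a Wayland protocol). */
        int dmabuf_fd = gbm_bo_get_fd(bo);
        printf("1920x1080 buffer allocated, dma-buf fd = %d\n", dmabuf_fd);
        if (dmabuf_fd >= 0)
            close(dmabuf_fd);
        gbm_bo_destroy(bo);
    }

    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}
```

EGLStreams, by contrast, keeps the individual buffers opaque and models the exchange as a producer/consumer stream, which is where the "handles synchronization internally" behavior mentioned in the driver-developer quote earlier comes from.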