Fedora 40 Eyes Dropping GNOME X11 Session Support

  • #91
    Originally posted by kpedersen View Post
    back in 2012 things were still user-mode display drivers. It has since modernized to make Wayland fairly redundant.
    KMS starts with DRI2 in 2008, with implementations from ATI and Intel in 2009. DRI2 has the problem that all graphical buffers go through a single device file, so there is no permission split between graphical applications.

    DRI2 drivers are kernel-space drivers. Yes, there are user-mode X11 drivers for the X11 server on top of DRI2 that attempt to patch over the security mess that DRI2 is, and they don't succeed.

    The idea in DRI2 of shoving all graphical buffers under one single file is a super stupid idea that the Nvidia closed-source drivers still follow. Yes, the big reason Nvidia was pushing for EGLStreams is that they wanted to keep doing this insecure stupidity.

    It was Wayland development work that brought about DRI3, which from a security point of view is somewhat right.
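    For anyone who wants to poke at what that per-buffer model looks like, here is a rough sketch (assumptions: an AMD/Intel render node at /dev/dri/renderD128, libgbm installed, build with -lgbm) of exporting one buffer as its own DMA-BUF file descriptor. The point is that each buffer becomes an ordinary fd you can hand to exactly one other process, instead of everything hiding behind the single shared device file of DRI2.

        /* Sketch: allocate a GPU buffer with GBM and export it as a DMA-BUF fd. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <gbm.h>

        int main(void)
        {
            int drm_fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
            if (drm_fd < 0) { perror("open render node"); return 1; }

            struct gbm_device *gbm = gbm_create_device(drm_fd);
            if (!gbm) { fprintf(stderr, "gbm_create_device failed\n"); return 1; }

            struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                              GBM_FORMAT_XRGB8888,
                                              GBM_BO_USE_RENDERING);
            if (!bo) { fprintf(stderr, "gbm_bo_create failed\n"); return 1; }

            /* The buffer is now an ordinary file descriptor that can be passed
               over a socket to a compositor or, via DRI3, to the X server. */
            int dmabuf_fd = gbm_bo_get_fd(bo);
            printf("exported DMA-BUF fd: %d\n", dmabuf_fd);

            close(dmabuf_fd);
            gbm_bo_destroy(bo);
            gbm_device_destroy(gbm);
            close(drm_fd);
            return 0;
        }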

    Wayland is not redundant, because there is a problem: the X11 protocol is not designed to take advantage of what DRI3 offers.

    Look at the swap image/swap chain count and you see the problem.

    At best, starting an X11 server with no compositor, you have a count of 2. A Wayland compositor on bare metal, at best, again gives you a count of 2.
    An X11 server where you then start an X11 compositor gives you at best a swap image/swap chain count of 4. Remember, as this number increases so does the latency.

    What happens to the swap chain count when Xwayland runs on top of a Wayland compositor? Nothing, it does not increase. What happens to the swap chain count when you run a Wayland compositor on top of a Wayland compositor? Again, nothing. This is taking full advantage of the DMA-BUF support in DRI3.

    Yes, there have been driver issues that have resulted in Wayland compositors ending up with counts of 4, and those same issues have resulted in counts of 6 and 8 under X11 when running an X11 compositor. Yes, people complain about X11 compositor desktops being slow, because they are.
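    To make the latency point concrete, a back-of-envelope sketch (the rough counts above plugged into a simplified worst-case model where every buffer in the chain can hold a frame for one 60 Hz refresh; real pipelines vary):

        /* Rough worst-case latency model: N buffers in the chain, each able to
           hold a frame for one refresh interval. Illustrative only. */
        #include <stdio.h>

        int main(void)
        {
            const double frame_ms = 1000.0 / 60.0;   /* one 60 Hz refresh */

            struct { const char *setup; int buffers; } cases[] = {
                { "X11 server, no compositor",        2 },
                { "Wayland compositor on bare metal", 2 },
                { "X11 server + X11 compositor",      4 },
                { "Xwayland on a Wayland compositor", 2 },  /* DMA-BUFs shared, no extra copies */
            };

            for (int i = 0; i < (int)(sizeof cases / sizeof cases[0]); i++)
                printf("%-34s ~%d buffers -> up to ~%.1f ms\n",
                       cases[i].setup, cases[i].buffers,
                       cases[i].buffers * frame_ms);
            return 0;
        }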



    • #92
      Originally posted by Mez' View Post
      That's just wishful thinking.
      It wouldn't have been, it would have freed more pragmatic and user-oriented solutions.
      Red Hat has the means to influence people into believing in their crappily designed apps, but most of what they design is developer-oriented utopia forced onto users.
      In corporations, users would reject anything they are pushing onto them. In corporations, users pull the design; it's called user requirements. Developers have to implement things according to them. Ubuntu has understood that, and it's why it's the most popular distro, even now with snap forced onto people. Still, they let users decide how to fulfill their workflow for most things.

      Only in the Red Hat world do they push their own geeky workflow onto practical users. What is happening is that you have keyboard-warrior geeks deciding how others should do their workflow. That's why Gnome is a failure in the eyes of most people.
      That is basically why nobody is really happy with anything Red Hat produced, and why they are criticized 10x more than any other Linux IT company.
      It sounds like you simply don't like Gnome and its app ecosystem. That's fine, switch to KDE, or to Cosmic when it's released in the near future. But we were talking specifically about the technological advancements Red Hat brought to Linux, both in the kernel and in userspace (systemd, D-Bus, PulseAudio, PipeWire, etc). For example, do you know how painful the audio experience on Linux was before PulseAudio existed (when you only had plain ALSA)? And these days, who's bringing HDR support to Linux? That's right, it's Red Hat. That's what I'm talking about. These are the advancements that brought the desktop experience at least on par with Windows and MacOS.


      Originally posted by Mez' View Post
      I don't like it, I'm "forced" to use these, but wherever I can I remove things designed by these anti-user ayatollahs. And little by little, my life and workflow get better.
      Again, switch to KDE, Cosmic, or any other DE; I don't get your problem. Is someone forcing you to use Gnome apps or something?
      Last edited by user1; 22 September 2023, 05:16 AM.



      • #93
        Originally posted by oiaohm View Post

        Wayland is not redundant, because there is a problem: the X11 protocol is not designed to take advantage of what DRI3 offers.
        Wow, yet another "benefit" of Wayland I didn't know about.

        Breaking anything that relies on GLX.

        Great sales pitch; you're really selling the benefits of Wayland there.



        • #94
          Originally posted by mSparks View Post
          Breaking anything that relies on GLX.
          Good question: how do video card vendors in fact use GLX? AMD/Intel... they are DRI3. Nvidia is really the only one that actually implements the X11 protocol GLX path on the driver side. So you are talking about a 1992 interface in the X11 protocol that was superseded by DRI 1.0 in 1998.

          I really do wonder when Nvidia will have a driver for X11 that does not use interfaces that have been marked deprecated for over two decades now. How many times are you going to point out that the Nvidia driver is a steaming pile of garbage?
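          If you want to check what your own X server exposes, a quick sketch (nothing official, just libxcb; build with -lxcb -lxcb-dri3) that asks the server whether DRI3 is present at all; the AMD/Intel stacks normally report it:

              /* Sketch: query the running X server for the DRI3 extension. */
              #include <stdio.h>
              #include <stdlib.h>
              #include <xcb/xcb.h>
              #include <xcb/dri3.h>

              int main(void)
              {
                  xcb_connection_t *conn = xcb_connect(NULL, NULL);
                  if (xcb_connection_has_error(conn)) {
                      fprintf(stderr, "cannot open display\n");
                      return 1;
                  }

                  const xcb_query_extension_reply_t *ext =
                      xcb_get_extension_data(conn, &xcb_dri3_id);
                  if (!ext || !ext->present) {
                      printf("DRI3 not present (GLX/DRI2-era server or driver)\n");
                  } else {
                      xcb_dri3_query_version_reply_t *ver = xcb_dri3_query_version_reply(
                          conn, xcb_dri3_query_version(conn, 1, 2), NULL);
                      if (ver) {
                          printf("DRI3 %u.%u supported\n",
                                 (unsigned)ver->major_version, (unsigned)ver->minor_version);
                          free(ver);
                      }
                  }
                  xcb_disconnect(conn);
                  return 0;
              }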



          • #95
            Originally posted by oiaohm View Post

            I really do wonder when Nvidia will have a driver for X11 that does not use interfaces that have been marked deprecated for over two decades now. How many times are you going to point out that the Nvidia driver is a steaming pile of garbage?
            Considering that one of Nvidia's main customer segments is the GPU render farm market, worth tens of billions of dollars to them alone, and whose business success depends on getting the best performance out of GLX...

            I think I can guess.



            • #96
              Originally posted by mSparks View Post
              Considering that one of Nvidia's main customer segments is the GPU render farm market, worth tens of billions of dollars to them alone, and whose business success depends on getting the best performance out of GLX...
              Interesting point, but there is a problem.
              In an earlier article I covered how I got the NVIDIA Tesla K10 GPU cards working under Ubuntu 20.04 with CUDA. In this article I will cover how I got Blender working to allow me to use them.

              Render farms using Nvidia cards learn very quickly to start using CUDA, because Nvidia's GLX implementation is total garbage. That it is based on interfaces known to be broken back in 1998 of course does not help. Sucks to have Nvidia consumer cards that are not set up to use the CUDA path well.
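              For the curious, this is roughly what the compute path on a farm node looks like: plain C against the CUDA runtime C API, no DISPLAY, no GLX context anywhere. A minimal sketch, assuming the CUDA toolkit is installed (build with nvcc, or gcc plus -lcudart):

                  /* Sketch: enumerate CUDA devices with no X server involved. */
                  #include <stdio.h>
                  #include <cuda_runtime.h>

                  int main(void)
                  {
                      int count = 0;
                      if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                          fprintf(stderr, "no CUDA driver/devices available\n");
                          return 1;
                      }
                      for (int i = 0; i < count; i++) {
                          struct cudaDeviceProp prop;
                          cudaGetDeviceProperties(&prop, i);
                          /* No DISPLAY, no GLX context, no X11 connection needed. */
                          printf("GPU %d: %s, %zu MiB\n", i, prop.name,
                                 prop.totalGlobalMem / (1024 * 1024));
                      }
                      return 0;
                  }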



              • #97
                Originally posted by oiaohm View Post

                Interesting point, but there is a problem.
                In an earlier article I covered how I got the NVIDIA Tesla K10 GPU cards working under Ubuntu 20.04 with CUDA. In this article I will cover how I got Blender working to allow me to use them.

                Render farms using Nvidia cards learn very quickly to start using CUDA, because Nvidia's GLX implementation is total garbage. That it is based on interfaces known to be broken back in 1998 of course does not help.
                CUDA doesn't replace GLX, it complements it.
                Vulkan replaces GLX.
                And Vulkan/CUDA/X11 is absolutely the stack they adopted.

                Not least because the nodes are all running custom, closed-source X11 clients.

                If you want to sell Wayland to them, you are going to need to get Wayland forwarding working at a minimum, and offer more than just "we broke the old stuff to justify our jobs".

                "ooopsy, IBM execs that dont code or use their own products made another whoopsy"
                Last edited by mSparks; 22 September 2023, 07:55 AM.



                • #98
                  Originally posted by mSparks View Post
                  CUDA doesn't replace GLX, it complements it.



                  Vulkan/CUDA/X11 is absolutely the stack they adopted. << You are badly wrong.
                  GPUDirect and CUDA in server farms are killing off X11 and GLX. Of course Nvidia is going to leave X11 and GLX support as broken as they can get away with, so they can sell other solutions in the HPC space.

                  mSparks, X11 is not secure and HPC operators are stopping using it. Yes, like it or not, Nvidia has GPUDirect features that work with Windows workstations and not with X11 or Wayland ones.

                  So expect your future Nvidia solution to be Windows workstations in front of Linux-based HPC running Nvidia's own designed stack, with no X11 or Wayland at all.

                  The writing on the wall says those developing Linux desktops should worry about AMD and Intel and not give a rat's about Nvidia support, because long term Nvidia is not going to support Linux desktops anyhow.



                  • #99
                    It needs to be done. It should have been done long ago, but then Wayland should have been done long ago too.



                    • Originally posted by oiaohm View Post
                      GPUDirect and CUDA in server farms are killing off X11 and GLX.
                      Absolute nonsense.
                      You cannot administer, schedule and monitor graphics jobs going out to 20,000 servers with GPUDirect and CUDA.

                      Those tasks are done with X11 clients that report their status, from something as simple as the current FPS to something as complex as displaying exactly what is being rendered.
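                      That only works because any X11 client becomes a remote status window just by pointing DISPLAY at the admin box. A bare-bones sketch of such a client (the node name and FPS figure here are made up; build with -lxcb):

                          /* Sketch: a trivial X11 status client; run with DISPLAY=adminhost:0. */
                          #include <stdio.h>
                          #include <stdlib.h>
                          #include <string.h>
                          #include <xcb/xcb.h>

                          int main(void)
                          {
                              xcb_connection_t *c = xcb_connect(NULL, NULL);  /* honours $DISPLAY */
                              if (xcb_connection_has_error(c)) {
                                  fprintf(stderr, "cannot reach the display\n");
                                  return 1;
                              }

                              xcb_screen_t *screen =
                                  xcb_setup_roots_iterator(xcb_get_setup(c)).data;

                              xcb_window_t win = xcb_generate_id(c);
                              uint32_t values[] = { screen->white_pixel, XCB_EVENT_MASK_EXPOSURE };
                              xcb_create_window(c, XCB_COPY_FROM_PARENT, win, screen->root,
                                                0, 0, 320, 60, 1, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                                                screen->root_visual,
                                                XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK, values);

                              /* Report status in the window title (hypothetical values). */
                              const char *status = "node-042: 57.3 fps";
                              xcb_change_property(c, XCB_PROP_MODE_REPLACE, win, XCB_ATOM_WM_NAME,
                                                  XCB_ATOM_STRING, 8, strlen(status), status);

                              xcb_map_window(c, win);
                              xcb_flush(c);

                              xcb_generic_event_t *ev;
                              while ((ev = xcb_wait_for_event(c)))   /* keep the window alive */
                                  free(ev);

                              xcb_disconnect(c);
                              return 0;
                          }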

                      And not just render farms; pretty much any custom cluster has been doing that for decades. It REALLY kicked off when the US Air Force proved you could do it with PS3s running Linux circa 2010.

                      Even with more publicly available stuff, the first thing anyone does in, say, a Kubernetes setup is remove any trace of Wayland, because BY DESIGN Wayland is not suitable for a distributed network setup, aka every Linux machine deployed in production.

                      But yeah, mostly working on the 20% of desktop Linux PCs running AMD or Intel is the only hurdle Wayland needed to overcome to see widespread adoption...

                      proclaimed the IBM exec who thought they were buying a monopoly on Linux distribution when they paid $34 billion for the Red Hat logo.

                      You have got to appreciate the hilarity of it all, surely.

