NVIDIA Continues Discussing Their Controversial Wayland Plans With Developers


  • #41
    Originally posted by rabcor View Post
    Ehhem, what is wrong with using EGL? And if this won't be mainlined, wouldn't it be easy to implement an extension/plugin package named something like "wayland-nvidia" or "wayland-EGL" that simply adds support for Nvidia's chosen approach? Also, wasn't open source software supposed to be about more "developer and user freedom" to begin with? Nvidia clearly thinks this is the way to do things; who are we to tell them "No, you're not allowed to do it this way" when we constantly blather about "freedom" in Linux and the open source world? Are we hypocrites now?

    Also, what's wrong with having an alternative way of using EGL for buffer management rather than GBM? Couldn't it simply perform faster when you are on hardware that supports it?

    I don't see the problem, I just don't. I'm probably going to buy AMD next because I generally don't like Nvidia's way of doing things over the past couple of years, but this is not one of the things I find "controversial" about them... I don't see any problem here. Unless there are some serious downsides to using EGL for buffer management, I think Nvidia should stand their ground.
    Errrr... because the conclusion of the multi-vendor joint developer debate was a consensus to implement GBM?
    Ummmm... because Nvidia's EGL approach is additional software that's not in the plans, that Nvidia won't pay for, and that causes fragmentation for no benefit to anyone but Nvidia. There isn't a surplus of under-employed skilled developers in FLOSS looking to support alternative methods.

    Apart from that.. I agree totally with your point!

    Comment


    • #42
      Originally posted by rob11311 View Post
      No, this is silly... NO developer goes to work for Nvidia without being very aware of their stance. When you go for a SW job with a high-profile company, you BET you do your homework. It's not a new thing.

      Just because end-users who run games like Nvidia's drivers doesn't make it the best thing for the FLOSS world to accept Nvidia's suggestions, when there was a multi-vendor discussion which decided on another framework. Nothing stopped Nvidia from being involved in the creation of that API except Nvidia's management & long-running policies.
      You seem to have ignored my point of view entirely. You didn't even disagree with me.
      You seem to be doing a lot of post padding...

      Comment


      • #43
        As one of the compositor projects that now have to go write a fairly decent-sized blob of code as a separate code path from the open drivers (gbm/drm/etc.), just for the only driver that does it this way (Nvidia's closed drivers), it's an undesirable thing. We now have to maintain two completely separate code paths for the same thing, one of which can only be tested under Nvidia's closed drivers (and the other on everything else). This means work for us (Nvidia isn't sending us patches; they only send them to Weston, as an example), and they're not doing the maintenance. So it's an unhappy state we're in with this. (The two init paths are sketched at the end of this post.)

        On the other hand, I can see why Nvidia does this. They can't use certain GPL interfaces in the kernel (dmabuf, from memory) and end up having to do all their own thing, hiding it at the EGL layer inside the driver. I'm not so sure that their approach is really any better, but it is more cost to maintain. If all drivers worked this way, well, then I guess it would have been OK. But they don't. That's not how the world is set up now, and thus there is a time, effort and maintenance cost for people other than Nvidia. This is probably the major objection the compositor guys have. Hell, for us we have to support this in a toolkit too, not just a compositor. We have to rev toolkit versions too to fix bugs and support this.

        I think it'd just be so much better if Nvidia switched to supporting gbm/drm etc. for buffers and thus had to open their kernel code. They can keep userspace closed, but use the same interface as everyone else, please. That'd be fantastic! Keep your secret sauce up in userspace in your OpenGL/GLES/GLX/EGL bits. You won't get optimal results with EGLStreams anyway, because the compositor can switch on the fly, and the solution is not EGLStreams etc. but a higher-level protocol IMHO.

        So if Nvidia "budges a bit" on the kernel side, then everyone can be about as happy as can be. If not, then either all compositors have to implement two paths for buffer handling OR... the open drivers need to support EGLStreams and everyone moves once to this. Admittedly that's work for everyone now, but no ongoing separate-code-path maintenance work.
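
        For illustration only (this is not code from Weston, Enlightenment or anything else in this thread), here is a rough sketch in C of what the two init paths look like side by side, assuming the public GBM API and the EGL extension entry points the Nvidia proposal relies on; error handling, EGLConfig/context setup and extension checks are left out:

        #include <gbm.h>
        #include <EGL/egl.h>
        #include <EGL/eglext.h>

        /* Path A: the GBM route that every open (Mesa) driver already supports.
         * The caller would then eglInitialize() the display, pick an EGLConfig
         * and create a window surface on *out_surf. */
        static EGLDisplay init_with_gbm(int drm_fd, struct gbm_surface **out_surf)
        {
            struct gbm_device *gbm = gbm_create_device(drm_fd);

            *out_surf = gbm_surface_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                           GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
            /* EGL 1.5 core; older stacks use eglGetPlatformDisplayEXT instead. */
            return eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        }

        /* Path B: the EGLDevice/EGLStream route the Nvidia driver proposes.
         * Every call is an EGL extension, so it has to be fetched at runtime. */
        static EGLDisplay init_with_eglstreams(EGLStreamKHR *out_stream)
        {
            PFNEGLQUERYDEVICESEXTPROC query_devices = (PFNEGLQUERYDEVICESEXTPROC)
                eglGetProcAddress("eglQueryDevicesEXT");
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_display = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
            PFNEGLCREATESTREAMKHRPROC create_stream = (PFNEGLCREATESTREAMKHRPROC)
                eglGetProcAddress("eglCreateStreamKHR");

            EGLDeviceEXT dev;
            EGLint n = 0;

            query_devices(1, &dev, &n);              /* pick a GPU device        */
            EGLDisplay dpy = get_display(EGL_PLATFORM_DEVICE_EXT, dev, NULL);
            eglInitialize(dpy, NULL, NULL);
            *out_stream = create_stream(dpy, NULL);  /* producer/consumer stream */
            /* ...followed by eglCreateStreamProducerSurfaceKHR() for rendering
             * and eglStreamConsumerOutputEXT() to attach the stream to a KMS
             * output layer; none of which has a GBM-side equivalent. */
            return dpy;
        }

        Multiply that by every compositor and toolkit that wants to work on both stacks, and that is the maintenance cost being complained about.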

        Comment


        • #44
          Originally posted by jmcharron View Post
          if you buy a Nvidia card and run their drivers you can expect to see 95%-105% OpenGL performance parity between Windows and Linux.
          That's because they use their Windows driver on Linux,
          so it has the same performance on some arbitrary subset of Linux, while in general it does not work at all, for example on Wayland.
          Last edited by pal666; 03 April 2016, 11:15 PM.

          Comment


          • #45
            I run NVIDIA on my primary rig because I do some gaming on it. Nothing super strenuous, but I generally run a GTX x60 level card or better. For games, it's the way to go to be sure you get the best performance. However, the 2D performance has always felt sluggish when compared to the Mesa drivers. I've often had freezes and suspend problems when running the NVIDIA drivers that simply aren't there on my Intel and AMD GPUs.

            While I certainly respect the work that NVIDIA puts into making their drivers perform as they do, I also don't like some of the headaches that I have to deal with to run them. Coming from that and now seeing this doesn't make me feel very warm and fuzzy about where things are headed. With Wayland it seemed like the promise of unified graphics was finally coming to fruition. No longer would I have to worry about which vendor's GPU I had so I could apply the right xorg or compositor tweaks to work around tearing and other nonsense.

            I don't want to have to deal with two ways of doing things in the Wayland world. Maybe NVIDIA's way has merit, but if it is, as some are saying here, "a better, more efficient way", then why didn't NVIDIA lay that out during the planning phases and trumpet the technical merits of their approach so that it might have become the standard going forward?

            We tend to get a little crazy about things in the OSS world because we are passionate people. This could amount to nothing in a few months, but then again it might not.

            What I know is that right now on Fedora 23 I can run Wayland on my AMD and Intel GPUs. Fedora 24 looks to be even better, and even if Wayland doesn't make default-session status, it sure looks like it will be pretty much usable as a daily driver, more so than F23 was (it really is pretty good, with only a few rough edges). I don't want to have to wait another 6 months or more to run Wayland on NVIDIA when I can already do it on AMD and Intel.

            I try to pick the OSS stuff where possible, and I don't consider myself a purist, as I have used NVIDIA for pragmatic, performance-minded needs, but I will switch to AMD in a second if it looks like this is going to be a long, drawn-out situation between NVIDIA and the Wayland devs. That is by no means a threat. Like I said, I appreciate what NVIDIA does from a performance standpoint. But I'm also watching the AMD side of things. My Radeon cards already run desktop stuff just fine, have access to VDPAU decoding, and meet all the basic needs right now, without even talking about the new hybrid drivers. If I can get solid performance for when I want to run my games, plus Wayland compatibility, by switching to a Radeon card, I would make that switch, and I think a lot of people in a similar position would probably say the same thing.

            I'm not going to lose my mind just yet, at least not until we get a clearer picture of what exactly getting NVIDIA support for Wayland in GNOME, KDE, etc. is actually going to entail.

            Comment


            • #46
              Originally posted by microcode View Post



              The problem is that to get any support for NVIDIA's driver, they would need to implement two codepaths. Weston, and all of the other compositors, already have the GBM codepath; to make use of NVIDIA's driver they would need to write and maintain both. On top of that, NVIDIA seems to be the only vendor suggesting this approach. I seriously hope they have a good reason for it. I don't think NVIDIA folks have said specifically why they want information that is available at commit time but not until after allocation; then again, I'm perhaps not qualified to say that there aren't good reasons for it. Just from a glance I'm not seeing hard facts.
              Ohh, I get it now. I can think of a solution too, though I guess Nvidia wouldn't go with it: Nvidia could also support GBM like everyone else, but provide this approach as well, which developers would be free to try and use, and other vendors would be free to support if it's any better than GBM. I guess that would be the most logical approach here (that way Nvidia wouldn't have to abandon any of the work they've done, and GBM vs. EGL buffer management could be tested in an arena of their own making). If EGL is noticeably faster, people will use it over GBM; that's simply how things work... Nvidia either has to prove that to create the incentive, or do basically what I just said, which would probably be easier to begin with. (A rough sketch of how a compositor might pick between the two at runtime is below.)
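
              Purely as an illustration of what "free to try and use" could look like, here is a tiny sketch of a compositor probing at runtime which of the two paths its libEGL offers. The log strings and decision logic are made up for the example; only the extension names and the eglQueryString() call are real:

              #include <stdio.h>
              #include <string.h>
              #include <EGL/egl.h>

              /* Probe which buffer-sharing path this libEGL can offer before any
               * display exists, using the EGL client extension string. */
              int main(void)
              {
                  const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
                  if (!exts)
                      exts = "";

                  if (strstr(exts, "EGL_MESA_platform_gbm") ||
                      strstr(exts, "EGL_KHR_platform_gbm")) {
                      puts("GBM platform available: use the common gbm/drm path");
                  } else if (strstr(exts, "EGL_EXT_platform_device")) {
                      /* EGLStream support itself (EGL_KHR_stream and friends) is a
                       * display extension, checked only after eglInitialize(). */
                      puts("No GBM, but an EGLDevice platform: try the EGLStream path");
                  } else {
                      puts("Neither path advertised by this EGL implementation");
                  }
                  return 0;
              }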

              Comment


              • #47
                Originally posted by jmcharron View Post
                It's really neat to see what AMD are doing with the new amdgpu stack but if you buy a Nvidia card and run their drivers you can expect to see 95%-105% OpenGL performance parity between Windows and Linux. AMD and Intel can't say the same thing.
                If you want a comparison equivalent to the NVidia link you provided (proprietary-to-proprietary OpenGL-to-OpenGL comparison) you should use this one instead:

                https://www.phoronix.com/scan.php?pa...nlin2014&num=1

                Comment


                • #48
                  Originally posted by bridgman View Post

                  If you want a comparison equivalent to the NVidia link you provided (proprietary-to-proprietary OpenGL-to-OpenGL comparison) you should use this one instead:

                  https://www.phoronix.com/scan.php?pa...nlin2014&num=1

                  Not bad. It's a bit old though (Michael, isn't it about time you did that again, but with Windows 10 vs. amdgpu and gpu-pro?).

                  Also, the OpenGL Linux vs. OpenGL Windows performance comparison is all well and good, but OpenGL on an Nvidia card vs. OpenGL on an AMD card with comparable performance levels on Windows (in either) would be more interesting to see; it's the only way to really tell whether Nvidia is particularly far ahead of AMD in OpenGL performance. It would never give anything more than a rough idea, but it could settle the score if done thoroughly.
                  Last edited by rabcor; 04 April 2016, 12:05 AM.

                  Comment


                  • #49
                    Originally posted by stqn View Post
                    Why does wayland require some special hackery to display a few rectangular windows on the screen?

                    Displaying a few rectangular windows on the correct screen, at the exact vblank time, using the right GPU, and accurately making use of hardware overlays/planes ain't that easy. (A sketch of just the page-flip timing part is at the end of this post.)

                    On topic: it's good the discussion is being civil and both camps acknowledge their respective issues. Too bad it didn't happen earlier, as NVIDIA's talk about EGLStreams occurred years ago already.
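
                    To give a feel for just one piece of that, here is a minimal, illustrative sketch (not from any compositor in this thread) of queueing a vblank-synced page flip through KMS with plain libdrm and waiting for the completion event; CRTC/mode setup, framebuffer creation and error handling are all omitted:

                    #include <stdbool.h>
                    #include <stdint.h>
                    #include <xf86drm.h>
                    #include <xf86drmMode.h>

                    static bool flip_pending;

                    /* Called by drmHandleEvent() once the flip has really happened,
                     * i.e. at the vblank where the new framebuffer was latched. */
                    static void page_flip_done(int fd, unsigned int seq,
                                               unsigned int tv_sec, unsigned int tv_usec,
                                               void *user_data)
                    {
                        (void)fd; (void)seq; (void)tv_sec; (void)tv_usec; (void)user_data;
                        flip_pending = false;
                    }

                    /* Ask KMS to show framebuffer fb_id on crtc_id at the next vblank,
                     * then wait for the kernel to confirm it has been scanned out. */
                    static void present_and_wait(int drm_fd, uint32_t crtc_id, uint32_t fb_id)
                    {
                        drmEventContext ev = {
                            .version = DRM_EVENT_CONTEXT_VERSION,
                            .page_flip_handler = page_flip_done,
                        };

                        flip_pending = true;
                        drmModePageFlip(drm_fd, crtc_id, fb_id,
                                        DRM_MODE_PAGE_FLIP_EVENT, NULL);

                        /* A real compositor poll()s the DRM fd in its main loop rather
                         * than blocking here, and must keep the previous buffer alive
                         * and untouched until this event arrives, or it tears/corrupts. */
                        while (flip_pending)
                            drmHandleEvent(drm_fd, &ev);
                    }

                    And that is just one output with one framebuffer; multiply by several CRTCs, hardware planes and hotplug, and the "special hackery" starts to add up.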

                    Comment


                    • #50
                      Originally posted by pal666 View Post
                      That's because they use their Windows driver on Linux,
                      so it has the same performance on some arbitrary subset of Linux, while in general it does not work at all, for example on Wayland.
                      'cept it has nothing to do with anything Windows.
                      fglrx and other proprietary drivers don't support it either.

                      There are actual technical reasons for this. Which, I have no doubt, you'll ignore in favour of bashing a company you seem to have an irrational hatred for.

                      Comment
