Nouveau Developer Working On OpenGL Extension To Help With Reverse-Engineering


  • #11
    Originally posted by The_Analyst View Post
    Will this be an optional extension for now, with hopes that it is later introduced into the full OpenGL spec?

    Will this be included in WebGL? If you're able to poke random registers on the GPU and see the results, it sounds like you're increasing your attack surface quite a lot.
    It's not going to be in the GL spec. No one except Mesa will be interested in it, because the other drivers are all provided by their own manufacturers - there aren't many reasons for them to add reverse-engineering support to their drivers.

    It sounds like they might not even register it at Khronos, and just keep it completely internal within Mesa.

    Comment


    • #12
      Originally posted by smitty3268 View Post
      No one except Mesa will be interested in it, because the other drivers are all provided by their own manufacturers - there aren't many reasons for them to add reverse-engineering support to their drivers.
      On the other hand, among the big desktop/workstation/server/HPC players, Intel and AMD have nothing to lose, given that they are currently pushing for open-source drivers and even contributing dev time to that effort.
      So they don't have any incentive to veto the extension if the open-source crowd pushes it aggressively into an official OpenGL version.

      Only Nvidia (and the countless embedded GPU manufacturers) have something to lose from it.

      But then, even if it miraculously gets accepted into 4.7, it won't necessarily find its way into the next Nvidia driver blobs:
      who will pay attention to whether OpenGL 4.7 is supported, now that all the attention has shifted to Vulkan?

      Comment


      • #13
        This extension would really only be useful to enable a driver developer to pass tokens from GLSL to the driver's shader compiler, so that the shader compiler can then emit arbitrary instructions in place of that token. This is not something that is usable in a released version of any software -- it relies on a hacked-up shader compiler to operate. This just facilitates providing arguments to the operation being debugged, and saving off its return value.

        Hopefully the driver developer can then more easily construct GLSL shaders that help figure out or verify what a particular operation does.
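
        As a rough illustration of the idea (the token name, signature, and buffer layout below are entirely hypothetical, invented here to make the mechanism concrete -- this is not the actual Mesa extension's syntax):

        ```glsl
        #version 430
        layout(local_size_x = 1) in;

        // Result buffer the developer reads back on the CPU side.
        layout(std430, binding = 0) buffer Result { uint result; };

        void main() {
            // Arguments fed into the operation being probed.
            uint a = 0x12345678u;
            uint b = 0x0000ffffu;

            // Hypothetical token: a hacked-up shader compiler would
            // recognize __probe_op and emit the raw hardware instruction
            // under study in its place, wiring a and b in as operands.
            result = __probe_op(a, b);
        }
        ```

        The developer would then run the shader with varying inputs and compare the read-back value against guesses about what the instruction computes.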

        Comment


        • #14
          Originally posted by duby229 View Post
          That too, but specifically twinview is buggy. The most common bug is that Ctrl+Alt+FKey to switch to a VT results in a blank screen with just the amber light; it only happens with twinview and is decades old.
          I don't remember ever encountering that (I've been using nVidia since around 2002) but then I only ever Ctrl+Alt+FKey when my KWin borks from being left open for too many weeks on end and needs a forced restart, so I'm probably using it the way their testers are.

          Originally posted by duby229 View Post
          Another common bug is that most games see both monitors as one screen and stretch across both of them; that's also a decades-old bug.
          Now that I do know but I never realized it was nVidia-specific. Given that it only happens in certain games, I just assumed that some games weren't tested on multi-monitor desktops and would break on any of them... I then worked around it by combining windowed mode at the desired monitor's native resolution with a KWin preset to invent that "borderless windowed" fullscreen mode before it became a standard option.

          Originally posted by duby229 View Post
          And another common bug is that it can only sync to vblank on the first monitor; a work-around exists, but it's also a decades-old bug.
          Not sure if I'm familiar with that. How is "first monitor" defined and what's the workaround?

          Originally posted by duby229 View Post
          EDIT: Oh yeah, and the xrender acceleration is so bad, how could I forget that. It's terrible, so, sooo bad. nVidia needed to spend time learning how to accelerate xrender better a long time ago.
          Fair enough.

          Comment


          • #15
            Originally posted by ssokolow View Post
            Not sure if I'm familiar with that. How is "first monitor" defined and what's the workaround?
            The work-around is to not use twinview; use zaphod mode instead. Not sure if that's still an option, though.... Last time I tried zaphod it didn't work with the OSS drivers....

            EDIT: The first monitor is usually defined as "Screen0", but really that's just a variable name which you can change to whatever you want.
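
            For anyone wanting to try it, a Zaphod-style layout is configured in xorg.conf with one Device/Screen pair per head on the same card (a minimal sketch only -- the BusID, driver name, and identifiers here are placeholders for your own setup):

            ```
            Section "Device"
                Identifier "Card0"
                Driver     "nvidia"
                BusID      "PCI:1:0:0"
                Screen     0
            EndSection

            Section "Device"
                Identifier "Card1"
                Driver     "nvidia"
                BusID      "PCI:1:0:0"
                Screen     1
            EndSection

            Section "Screen"
                Identifier "Screen0"
                Device     "Card0"
            EndSection

            Section "Screen"
                Identifier "Screen1"
                Device     "Card1"
            EndSection

            Section "ServerLayout"
                Identifier "Layout0"
                Screen 0 "Screen0"
                Screen 1 "Screen1" RightOf "Screen0"
            EndSection
            ```

            Each head then gets its own X screen (:0.0, :0.1), which is what makes per-screen vblank sync possible, at the cost of not being able to drag windows between monitors.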
            Last edited by duby229; 04-18-2019, 10:12 AM.

            Comment
