GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance


  • #11
    Originally posted by eydee View Post
    Sometimes it's mind blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop. At whatever resolution, with whatever bugs. The question shouldn't be being able to reach 60 fps but whether it's 5000 or 6000.
    We are talking Intel integrated graphics @4K. I think one reason that more people don't notice is that a discrete card _doesn't_ struggle with it at all.



    • #12
      Originally posted by kpedersen View Post

      I wonder if it is because GPUs have an architecture mainly intended for games and CAD (i.e. where data can be retained) these days?
      Correct me if I'm wrong, but can't 2D be considered a special case of 3D, where any affine transformation keeps one of the (x, y, z) coordinates at a constant value (even zero)?



      • #13
        Originally posted by TemplarGR View Post

        It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them ray traced.
        Where are you getting all this from? And how is ray tracing meant to improve 2D composition (in the sense of desktop UI)?

        There are plenty of techniques to handle this more efficiently on the GPU; it's not as you describe.

        Your models/meshes are 2D quads: four verts per window, with a render texture that gets composited with the rest of the desktop. Pixel shaders can be used by compositors for effects; I'm pretty sure you'll find that in some of KWin's.
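
        To make that concrete, here's a minimal sketch in plain C (no real GPU API; all names are mine, invented for illustration) of what "four verts per window" means. The compositor would build these once and only touch them when the window moves or resizes:

            #include <stdio.h>

            /* One vertex of a window quad: screen position plus the texture
               coordinate used to sample the window's render texture. */
            typedef struct { float x, y; float u, v; } Vertex;

            /* Build the four corners of a window quad in screen coordinates. */
            static void window_quad(float x, float y, float w, float h, Vertex out[4])
            {
                out[0] = (Vertex){ x,     y,     0.0f, 0.0f };  /* top-left     */
                out[1] = (Vertex){ x + w, y,     1.0f, 0.0f };  /* top-right    */
                out[2] = (Vertex){ x + w, y + h, 1.0f, 1.0f };  /* bottom-right */
                out[3] = (Vertex){ x,     y + h, 0.0f, 1.0f };  /* bottom-left  */
            }

            int main(void)
            {
                Vertex quad[4];
                window_quad(100.0f, 50.0f, 1280.0f, 720.0f, quad);
                for (int i = 0; i < 4; i++)
                    printf("v%d: pos=(%.0f, %.0f) uv=(%.0f, %.0f)\n",
                           i, quad[i].x, quad[i].y, quad[i].u, quad[i].v);
                return 0;
            }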

        Originally posted by kpedersen View Post
        With windows displaying complex software (like a web browser), this data often needs to keep being sent (i.e. as pixels) because it changes very often (the copy on the GPU is already out of date).
        Browsers often optimize the content into tiles to render, so it's a lot less intensive than what you describe. They can use the structure of the DOM and CSS to layer and composite their own data internally. The desktop compositor can then take that surface and composite it with the rest of the desktop.

        I don't know how Wayland handles it, but X11, IIRC, treats all displays as one big display image and crops from that (at least that's what I saw when I used XShm or whatever it was to capture the screen data, like most X11 capture software does). It's unfortunate, as those frames were in CPU memory IIRC (system RAM), rather than on the GPU like Windows handles it (no idea about macOS).

        So I guess it can be a performance issue in that sense, but it's the opposite of what you've described: the buffer from the GPU ends up coming back to the CPU, from VRAM to RAM, AFAIK. I don't know the internals that well myself with how it's done on Linux/X11; perhaps someone else can chime in. I would assume the GPU sends the final frames back, but it could be prior to compositing the full frame too. Perhaps it only sends back the dirty regions.

        Splitting the display contents into tiles is useful for dirty-region updates, similar to how that was done on the CPU, by only needing to update that portion (on the GPU it'd be the minimum set of tiles covering the updated region). In a browser, as you scroll, some tiles can already be rendered in advance, so it's just updating a render texture with those separate textured tiles, like blitting via CPU. If you're familiar with 3D, you can just translate the quads and take the viewport output as the texture for a window.
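
        A rough sketch of the dirty-region/tile idea (plain C again; the 256-pixel tile size and all names are assumptions for illustration). Only the tiles intersecting the dirty rectangle need to be redrawn or re-uploaded; everything else stays cached on the GPU:

            #include <stdio.h>

            #define TILE 256  /* hypothetical tile size in pixels */

            typedef struct { int x, y, w, h; } Rect;

            /* Compute the inclusive range of TILE x TILE tiles that a
               dirty rectangle touches. */
            static void dirty_tiles(Rect d, int *tx0, int *ty0, int *tx1, int *ty1)
            {
                *tx0 = d.x / TILE;
                *ty0 = d.y / TILE;
                *tx1 = (d.x + d.w - 1) / TILE;
                *ty1 = (d.y + d.h - 1) / TILE;
            }

            int main(void)
            {
                Rect dirty = { 300, 40, 500, 900 };  /* e.g. a scrolled region */
                int tx0, ty0, tx1, ty1;
                dirty_tiles(dirty, &tx0, &ty0, &tx1, &ty1);
                printf("redraw tiles x:%d..%d y:%d..%d\n", tx0, tx1, ty0, ty1);
                return 0;
            }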

        Text is another one: each glyph can be a sub-texture (within a texture atlas, a spritesheet), and each of those is rendered to its own quad/rectangle that gets laid out in a similar manner. A blinking cursor for text input can be on another layer or z-depth and toggle its visibility; you don't have to wastefully update a large texture pixel by pixel.
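
        For instance, a fixed-grid glyph atlas lookup could look like this (a toy sketch; real atlases pack variable-sized glyphs, and the sizes here are invented). Each glyph's quad just samples its slot in the atlas texture:

            #include <stdio.h>

            #define ATLAS 1024           /* hypothetical atlas size (square)   */
            #define CELL  32             /* each glyph gets a CELL x CELL slot */
            #define COLS  (ATLAS / CELL)

            /* Map an ASCII code to normalized UV coordinates of its atlas slot. */
            static void glyph_uv(unsigned char c, float *u0, float *v0,
                                 float *u1, float *v1)
            {
                int col = c % COLS;
                int row = c / COLS;
                *u0 = (float)(col * CELL) / ATLAS;
                *v0 = (float)(row * CELL) / ATLAS;
                *u1 = *u0 + (float)CELL / ATLAS;
                *v1 = *v0 + (float)CELL / ATLAS;
            }

            int main(void)
            {
                float u0, v0, u1, v1;
                glyph_uv('A', &u0, &v0, &u1, &v1);
                printf("'A' -> uv (%.4f, %.4f) .. (%.4f, %.4f)\n", u0, v0, u1, v1);
                return 0;
            }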

        The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do something similar, especially with decorations.



        • #14
          Originally posted by CochainComplex View Post

          Correct me if I'm wrong, but can't 2D be considered a special case of 3D, where any affine transformation keeps one of the (x, y, z) coordinates at a constant value (even zero)?
          Absolutely. And I am pretty sure a lot of the calculations can be simplified. However, there is still potential for some waste when plugging 2D values through a pipeline really intended for 3D.
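
          To spell that out (my notation, just illustrating the point above): a 2D affine transform is a 3D one whose matrix never mixes z in, so z stays constant:

              \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} =
              \begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{pmatrix}
              \begin{pmatrix} x \\ y \\ z \end{pmatrix} +
              \begin{pmatrix} t_x \\ t_y \\ 0 \end{pmatrix},
              \qquad z' = z = \text{const.}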

          Originally posted by polarathene View Post

          The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do something similar, especially with decorations.
          Yes, I do see that. I.e. data can be batched and reused, and draw instructions reduced (just like it would be done in games, I suppose). My guess is that there is still not quite enough of this going on.
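
          A minimal sketch of that batching idea, assuming nothing about any particular compositor (plain C, invented names): all window quads go into one vertex array, so the whole desktop can be drawn with a single draw call instead of one per window:

              #include <stdio.h>

              typedef struct { float x, y, u, v; } Vertex;
              typedef struct { float x, y, w, h; } WinRect;

              /* Append each window as two triangles (6 vertices) to one buffer. */
              static int batch(const WinRect *wins, int n, Vertex *out)
              {
                  int v = 0;
                  for (int i = 0; i < n; i++) {
                      WinRect r = wins[i];
                      Vertex tl = { r.x,       r.y,       0, 0 };
                      Vertex tr = { r.x + r.w, r.y,       1, 0 };
                      Vertex br = { r.x + r.w, r.y + r.h, 1, 1 };
                      Vertex bl = { r.x,       r.y + r.h, 0, 1 };
                      out[v++] = tl; out[v++] = tr; out[v++] = br;  /* triangle 1 */
                      out[v++] = br; out[v++] = bl; out[v++] = tl;  /* triangle 2 */
                  }
                  return v;  /* vertex count for one draw call */
              }

              int main(void)
              {
                  WinRect wins[2] = { { 0, 0, 800, 600 }, { 100, 100, 640, 480 } };
                  Vertex buf[2 * 6];
                  printf("batched %d vertices for 2 windows\n", batch(wins, 2, buf));
                  return 0;
              }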
          Last edited by kpedersen; 22 June 2020, 09:01 AM.



          • #15
            Originally posted by TemplarGR View Post
            It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them ray traced.
            You may want to explore this blog's archive; there is a pretty cool ongoing series about GPU-accelerated vector graphics with a design focus on UI rendering: https://raphlinus.github.io/



            • #16
              Great timing! I just bought a 4K display, and I had to disable GNOME animations because they were choppy.



              • #17
                Originally posted by polarathene View Post

                Where are you getting all this from? And how is ray tracing meant to improve 2D composition (in the sense of desktop UI)?

                There are plenty of techniques to handle this more efficiently on the GPU; it's not as you describe.

                Your models/meshes are 2D quads: four verts per window, with a render texture that gets composited with the rest of the desktop. Pixel shaders can be used by compositors for effects; I'm pretty sure you'll find that in some of KWin's.
                You are mistaking compositor effects for the whole rendering; this is not the case. Yes, modern compositors use 2D meshes and a texture in order to handle windows and add pixel effects, but this is AFTER the window is drawn. There is no premade texture of, say, your GTK window to be sent to the compositor; it has to be drawn in real time per frame. Then, after it is drawn, it becomes a texture and can be handled like any 3D model. Unless this has changed and I am not aware of it.

                What I wanted to say is that we should be using pixel shaders to actually draw the window and create the texture itself. That would remove some bottlenecks, I think.
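
                In the spirit of that suggestion, here's a toy per-pixel function (plain C on the CPU, purely illustrative; a real pixel shader would run this per fragment on the GPU): instead of blitting premade pixels, each pixel of the window is computed procedurally:

                    #include <stdio.h>

                    typedef struct { unsigned char r, g, b; } Pixel;

                    /* "Pixel shader": compute one pixel of a window procedurally,
                       here a flat interior with a 1-pixel frame. */
                    static Pixel shade(int x, int y, int w, int h)
                    {
                        if (x == 0 || y == 0 || x == w - 1 || y == h - 1)
                            return (Pixel){ 40, 40, 48 };    /* dark frame     */
                        return (Pixel){ 230, 230, 235 };     /* light interior */
                    }

                    int main(void)
                    {
                        /* "Render" an 8x4 window and dump it as characters. */
                        for (int y = 0; y < 4; y++) {
                            for (int x = 0; x < 8; x++)
                                putchar(shade(x, y, 8, 4).r < 128 ? '#' : '.');
                            putchar('\n');
                        }
                        return 0;
                    }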



                • #18
                  Looks like we need to go back to the Voodoo + Matrox era.



                  • #19
                    Originally posted by eydee View Post
                    Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop, at whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
                    Basically, if you have ten maximized windows and they all get painted for each frame, the renderer is in fact rendering ten "frames" for each actual frame. Most of the work is probably computed on the CPU, and it has to push out each frame at a flat rate or frames will be discarded due to vsync.
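
                    That's exactly the waste occlusion culling is supposed to avoid. A toy version of the check (plain C, invented names; a real compositor would accumulate the covered region instead of testing against single windows):

                        #include <stdio.h>

                        typedef struct { int x, y, w, h; int opaque; } Win;

                        /* Returns 1 if rectangle a lies entirely inside rectangle b. */
                        static int contained(Win a, Win b)
                        {
                            return a.x >= b.x && a.y >= b.y &&
                                   a.x + a.w <= b.x + b.w &&
                                   a.y + a.h <= b.y + b.h;
                        }

                        /* wins[0] is the topmost window; skip painting any window
                           fully hidden behind an opaque window above it. */
                        static void cull(const Win *wins, int n)
                        {
                            for (int i = 1; i < n; i++) {
                                int hidden = 0;
                                for (int j = 0; j < i && !hidden; j++)
                                    if (wins[j].opaque && contained(wins[i], wins[j]))
                                        hidden = 1;
                                printf("window %d: %s\n", i,
                                       hidden ? "culled, not painted" : "painted");
                            }
                        }

                        int main(void)
                        {
                            Win wins[3] = {
                                { 0, 0, 3840, 2160, 1 },    /* maximized, opaque, on top */
                                { 100, 100, 1280, 720, 1 },
                                { 500, 300, 800, 600, 0 },
                            };
                            cull(wins, 3);
                            return 0;
                        }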

                    I have actually been wondering about a jumpy mouse cursor for a few days now. It seems to occur at random, but persistently. This bug could very well explain that as well.



                    • #20
                      Like I said in one of the previous discussions, a modern Intel GPU should definitely be capable of 4K@60 FPS. That fix makes it more achievable.

