GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance

  • bdog (Junior Member) replied:
    Great timing! I just bought a 4k display and I had to disable GNOME animations because they were choppy.


  • HadrienG (Phoronix Member) replied:
    Originally posted by TemplarGR View Post
    It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them ray traced.
    You may want to explore this blog's archive, there is a pretty cool ongoing series about GPU accelerated vector graphics with a design focus on UI rendering : https://raphlinus.github.io/ .


  • kpedersen (Senior Member) replied:
    Originally posted by CochainComplex View Post

    Correct me if I'm wrong, but can 2D be considered a special case of 3D, where any affine transformation keeps one of the (x, y, z) coordinates at a constant value (even zero)?
    Absolutely. And I am pretty sure a lot of the calculations can be simplified. However, there is still potential for some waste in plugging 2D values through a pipeline really intended for 3D.
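As a concrete illustration of that point, here is a minimal sketch (plain numpy, nothing GNOME-specific; the function name is made up) of a 2D translate-and-rotate embedded in the standard 4x4 matrix a 3D pipeline uses, with the z row left as identity so z stays constant:

```python
import math
import numpy as np

def affine2d_as_3d(tx, ty, theta):
    """Embed a 2D rotation + translation into a standard 4x4
    3D transform matrix; the z row is identity, so z never changes."""
    c, s = math.cos(theta), math.sin(theta)
    return np.array([
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, 0.0],   # z passes through untouched
        [0.0, 0.0, 0.0, 1.0],
    ])

# A window-corner vertex at z = 0 stays at z = 0 after the transform.
v = np.array([100.0, 50.0, 0.0, 1.0])        # homogeneous coordinates
moved = affine2d_as_3d(10.0, 20.0, 0.0) @ v  # translate by (10, 20)
```

The whole fourth row/column exists only to make translation a matrix multiply; that is the kind of overhead a dedicated 2D path could skip.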

    Originally posted by polarathene View Post

    The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do similar, especially with decorations.
    Yes, I do see that. I.e. data can be batched and reused, and draw instructions reduced (just as it would be done in games, I suppose). My guess is that there is still not quite enough of this going on.
    Last edited by kpedersen; 22 June 2020, 09:01 AM.


  • polarathene (Senior Member) replied:
    Originally posted by TemplarGR View Post

    It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them ray traced.
    Where are you getting all this from? And how is ray tracing meant to improve 2D composition (in the sense of desktop UI)?

    There are plenty of techniques to handle this more efficiently on the GPU; it's not as you describe.

    Your models/meshes are 2D quads: 4 vertices per window, with a render texture to composite with the rest of the desktop. Pixel shaders can be used by compositors for effects; I'm pretty sure you'll find that in some of KWin's effects.
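To make the "window as a textured quad" idea concrete, here is a minimal CPU-side sketch (numpy; the function name, coordinates, and shapes are illustrative) of the standard alpha blend a compositor performs when drawing one window's render texture over the desktop:

```python
import numpy as np

def composite_window(framebuffer, window_rgba, x, y):
    """Blend a window's RGBA render texture onto the desktop
    framebuffer at (x, y) -- the CPU equivalent of drawing one
    textured quad with src-alpha / one-minus-src-alpha blending."""
    h, w = window_rgba.shape[:2]
    dst = framebuffer[y:y + h, x:x + w]
    alpha = window_rgba[..., 3:4]
    dst[...] = window_rgba[..., :3] * alpha + dst * (1.0 - alpha)
    return framebuffer

desktop = np.zeros((100, 100, 3))      # black desktop, float RGB
win = np.ones((10, 10, 4)) * 0.5       # translucent grey window
composite_window(desktop, win, 20, 30)
```

On a real GPU the same math runs in the blend stage per fragment; the compositor only issues one draw call per window, it never touches individual pixels.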

    Originally posted by kpedersen View Post
    With windows displaying complex software (like a web browser) this data often needs to keep being sent (i.e as pixels) because it changes very often (the copy on the GPU is already out of date).
    Browsers often optimize the content into tiles to render, so it's a lot less intensive than what you describe. They can utilize the structure of the DOM and CSS to layer and composite their own data internally. The desktop compositor can then take that surface and composite it with the rest of the desktop.

    I don't know how Wayland handles it, but X11, IIRC, treats all displays as one big display image and crops from that (at least when I used XShm or whatever it was to capture the screen data, like most X11 capture software does). It's unfortunate, as those frames were in CPU memory IIRC (system RAM), rather than on the GPU like Windows handles it (no idea about macOS).

    So I guess it can be a performance issue in that sense, but it's the opposite of what you've described. The buffer from the GPU ends up coming back to the CPU, from VRAM to RAM, AFAIK. I don't know the internals that well myself with how it's done on Linux/X11; perhaps someone else can chime in. I would assume the GPU sends the final frames back, but it could be before compositing the full frame too? Perhaps it only sends back the dirty regions.

    Splitting the display contents into tiles is useful for dirty-region updates, similar to how that was done on the CPU, by only needing to update that portion (on the GPU it'd be the minimum set of tiles covering the updated region). In a browser, as you scroll, some tiles can already be rendered in advance, so it's just updating a render texture with those separate textured tiles, like blitting via CPU. If you're familiar with 3D, you can just translate the quads and take the viewport output as the texture for a window.
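A minimal sketch of the tile bookkeeping described above (the tile size and function name are made up for illustration): a small dirty rectangle maps to a handful of tiles rather than the whole surface, and only those tiles need re-rasterising or re-uploading.

```python
TILE = 64  # tile edge in pixels; real compositors often use 256 or 512

def dirty_tiles(x0, y0, x1, y1, tile=TILE):
    """Return the set of (tx, ty) tile coordinates touched by the
    dirty rectangle [x0, x1) x [y0, y1)."""
    return {
        (tx, ty)
        for tx in range(x0 // tile, (x1 - 1) // tile + 1)
        for ty in range(y0 // tile, (y1 - 1) // tile + 1)
    }

# A 10x10 px change straddling one tile boundary touches just 2 tiles.
touched = dirty_tiles(60, 10, 70, 20)
```

Everything outside `touched` keeps its previously rendered tile texture, which is exactly why an unchanged window costs almost nothing to recomposite.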

    Text is another one: each glyph can be a sub-texture (within a texture atlas, or spritesheet), and each of those is rendered to its own quad/rectangle that gets laid out in a similar manner. A blinking cursor for text input can be on another layer or z-depth and toggle its visibility; you don't have to wastefully update a large texture pixel by pixel.
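A rough sketch of that glyph-atlas layout (the atlas format and function name here are invented for illustration): each glyph contributes one quad whose UV region points into the shared atlas texture, so rendering a string is just laying out quads, with no per-pixel uploads.

```python
def layout_text(text, atlas, pen_x=0, pen_y=0):
    """Produce one quad per glyph: a screen rectangle plus the glyph's
    sub-rectangle (UV region) in a shared texture atlas. The GPU then
    draws the whole string as textured quads in one batch."""
    quads = []
    for ch in text:
        u, v, w, h, advance = atlas[ch]
        quads.append({"screen": (pen_x, pen_y, w, h), "uv": (u, v, w, h)})
        pen_x += advance  # move the pen right for the next glyph
    return quads

# Hypothetical 2-glyph atlas: ch -> (u, v, width, height, advance)
atlas = {"h": (0, 0, 8, 12, 9), "i": (8, 0, 4, 12, 5)}
quads = layout_text("hi", atlas)
```

Moving or restyling text only rewrites this tiny quad list; the glyph pixels themselves stay resident on the GPU in the atlas.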

    The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do similar, especially with decorations.


  • CochainComplex (Senior Member) replied:
    Originally posted by kpedersen View Post

    I wonder if it is because GPUs have an architecture mainly intended for games and CAD (i.e where data can be retained) these days?
    Correct me if I'm wrong, but can 2D be considered a special case of 3D, where any affine transformation keeps one of the (x, y, z) coordinates at a constant value (even zero)?


  • ernstp (Senior Member) replied:
    Originally posted by eydee View Post
    Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop. At whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
    We are talking about Intel integrated graphics at 4K. I think one reason more people don't notice is that a discrete card _doesn't_ struggle with it at all.


  • kpedersen (Senior Member) replied:
    Originally posted by eydee View Post
    Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop.
    I wonder if it is because GPUs have an architecture mainly intended for games and CAD (i.e where data can be retained) these days?

    In many ways it all comes down to the transfer of data from main memory to GPU memory.
    For example, a 3D scene is uploaded to GPU memory (i.e. a large number of vertices). You do this once, and it is there to be referenced as needed.

    With windows displaying complex software (like a web browser), this data often needs to keep being sent (i.e. as pixels) because it changes very often (the copy on the GPU is already out of date). This unfortunately results in blocking the pipeline, especially now that people have (wastefully, IMO) extremely high-resolution displays requiring massive numbers of pixels to be sent.

    If people had simpler UI designs (think boxy Motif) that could be drawn mainly with instructions (i.e. "draw 20x20 box") rather than as a raster image, this could be much faster. But people in 2020 want their fancy "bling". This trend does seem to cycle; perhaps in 2030 we will have less wasteful desktops? Who knows?
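A back-of-the-envelope sketch of that instruction-vs-raster tradeoff (all sizes and function names are illustrative assumptions, not measurements): a command list for a boxy panel is a few hundred bytes, while the same panel as raw pixels is tens of kilobytes, and it only gets worse at 4K.

```python
def command_list_bytes(n_rects):
    """A boxy UI drawn with instructions: each 'draw WxH box at (x, y)
    in colour c' command is 5 small values, assumed 4 bytes each."""
    return n_rects * 5 * 4

def raster_bytes(width, height):
    """The same panel shipped as raw RGBA pixels, 4 bytes per pixel."""
    return width * height * 4

# A flat 200x200 toolbar built from 10 boxes:
as_commands = command_list_bytes(10)  # 200 bytes per update
as_pixels = raster_bytes(200, 200)    # 160,000 bytes per update
```

This is essentially the old remote-X/display-list argument: shipping drawing intent instead of finished pixels is what made retained, instruction-based UIs translate so well to remote display systems.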

    As it stands, many things can be retained on the GPU, but unfortunately not enough.
    This ability to retain data also made it translate exceptionally well to remote UI systems. Unfortunately, these are being neglected in these days of consumer electronics.
    Last edited by kpedersen; 22 June 2020, 08:12 AM.


  • CochainComplex (Senior Member) replied:
    Originally posted by eydee View Post
    Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop. At whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
    I guess the difference from coherent 3D output is that the WM/compositor does not always know what the software is doing or when it has finished pushing its result, while still trying to maintain fluent/smooth output. By contrast, a game engine syncs its tasks so that the output is pleasant and presented all at once rather than chopped... though if the devs are in a hurry it happens there as well; some might remember the AC: Unity horror show.

    Last edited by CochainComplex; 22 June 2020, 08:06 AM.


  • sykobee (Senior Member) replied:
    Well I hope that there are tests in place for that culling functionality now.

    /me mutters darkly about untested code

    (Aside: surely even a low-end integrated GPU can render hundreds of UI windows per frame these days? It's mostly a matter of memory bandwidth; the scene should be fairly simple compared to, e.g., a game, and the contents of each window should be a texture -- window contents that don't change shouldn't need re-rendering. Obviously you might break a window down into its component panels to improve granularity and reduce the re-rendering hit on a change, but the principle still applies. There's no way you re-render everything within a window, every time, unless you're a video app or a game.)
    Last edited by sykobee; 22 June 2020, 08:04 AM.


  • TemplarGR (Senior Member) replied:
    Originally posted by eydee View Post
    Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop. At whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
    It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them ray traced.

