GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance
Originally posted by kpedersen View Post
I wonder if it is because GPUs have an architecture mainly intended for games and CAD (i.e. where data can be retained) these days?
Originally posted by TemplarGR View Post
It's because they have to draw them pixel by pixel in real time; there are no premade textures or models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them raytraced.
There are plenty of techniques to handle this more efficiently on the GPU; it's not as you describe.
Your models/meshes are 2D quads: four vertices for a window, with a render texture to composite with the rest of the desktop. Pixel shaders can be used by compositors for effects; I'm pretty sure you'll find that with some of KWin's.
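To make that concrete, here's a rough sketch in plain Python (no GPU API; the function name and vertex layout are just illustrative, not any compositor's actual code) of the four vertices a compositor would generate for one window quad, with UVs mapping the window's render texture onto it:

```python
# Hypothetical sketch: a compositor window as a single textured quad.
# Each window needs only 4 vertices; its content lives in a GPU texture
# that the compositor samples when drawing the quad.

def window_quad(x, y, w, h):
    """Return 4 vertices (px, py, u, v) for a window at (x, y) of size
    w x h. The UVs map the full render texture onto the quad."""
    return [
        (x,     y,     0.0, 0.0),  # top-left
        (x + w, y,     1.0, 0.0),  # top-right
        (x,     y + h, 0.0, 1.0),  # bottom-left
        (x + w, y + h, 1.0, 1.0),  # bottom-right
    ]

# An 800x600 window at (100, 50): four vertices, one texture, done.
quad = window_quad(100, 50, 800, 600)
```

Compositing the desktop is then just drawing one such quad per window, back to front (or front to back with depth testing), sampling each window's texture.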
Originally posted by kpedersen View Post
With windows displaying complex software (like a web browser), this data often needs to keep being sent (i.e. as pixels) because it changes very often (the copy on the GPU is already out of date).
I don't know how Wayland handles it, but X11, IIRC, treats all displays as one big display image and crops from that (at least when I used XShm, or whatever it was, to capture the screen data, like most X11 capture software does). It's unfortunate, as those frames were in CPU memory (system RAM) IIRC, rather than on the GPU like Windows handles it (no idea about macOS).
So I guess it can be a performance issue in that sense, but it's the opposite of what you've described: the buffer from the GPU ends up coming back to the CPU, from VRAM to RAM, AFAIK. I don't know the internals that well myself with how it's done on Linux/X11; perhaps someone else can chime in. I would assume the GPU sends the final frames back, but it could be before compositing the full frame too. Perhaps it only sends back the dirty regions.
Splitting the display contents into tiles is useful for dirty-region updates, similar to how that was done on the CPU, by only needing to update that portion (on the GPU it'd be the minimum set of tiles covering the updated region). In a browser, as you scroll, some tiles can already be rendered in advance, so it's just updating a render texture with those separate textured tiles, like blitting via the CPU. If you're familiar with 3D, you can just translate the quads and take the viewport output as the texture for a window.
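The dirty-region-to-tiles step is simple integer math. A minimal sketch (the 256px tile size and function name are assumptions for illustration, not what any particular browser or compositor uses):

```python
# Hypothetical sketch: map a dirty rectangle to the set of tiles that
# must be re-uploaded, so only those tiles of the window texture change
# instead of the whole surface.

TILE = 256  # assumed tile size in pixels

def dirty_tiles(x, y, w, h, tile=TILE):
    """Return (col, row) indices of every tile the dirty rect touches."""
    first_col, first_row = x // tile, y // tile
    last_col, last_row = (x + w - 1) // tile, (y + h - 1) // tile
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 300x40 dirty strip at (200, 500) straddles tile boundaries in both
# axes, so it touches 4 tiles out of the whole surface.
tiles = dirty_tiles(200, 500, 300, 40)
```

Only those tiles get a texture sub-update; every other tile of the window's texture stays untouched on the GPU.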
Text is another one: each glyph can be a sub-texture (within a texture atlas, or spritesheet), and each of those is rendered to its own quad/rectangle that gets laid out in a similar manner. A blinking cursor for text input can be on another layer or z-depth and toggle its visibility; you don't have to wastefully update a large texture pixel by pixel.
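A toy sketch of the atlas idea (the atlas size, glyph metrics, and naive advance are all made up for illustration; real text rendering goes through a shaper and accounts for kerning, hinting, etc.):

```python
# Hypothetical sketch: glyphs packed into a texture atlas. Laying out a
# string just emits one quad per glyph, each sampling its atlas
# sub-rectangle -- no per-pixel texture uploads per frame.

ATLAS_SIZE = 1024  # assumed atlas dimensions in pixels

# assumed packing: glyph -> (x, y, width, height) in atlas pixels
GLYPHS = {
    "H": (0, 0, 20, 32),
    "i": (20, 0, 8, 32),
}

def glyph_uv(ch):
    """Normalized (u0, v0, u1, v1) rectangle for a glyph in the atlas."""
    x, y, w, h = GLYPHS[ch]
    s = ATLAS_SIZE
    return (x / s, y / s, (x + w) / s, (y + h) / s)

def layout(text, pen_x=0, pen_y=0):
    """Emit (x, y, w, h, uv) quads for a string, advancing a pen."""
    quads = []
    for ch in text:
        _, _, w, h = GLYPHS[ch]
        quads.append((pen_x, pen_y, w, h, glyph_uv(ch)))
        pen_x += w  # naive advance; real shaping adds kerning etc.
    return quads

quads = layout("Hi")  # two static quads sampling one shared atlas
```

The atlas texture is uploaded once; redrawing text is then just re-issuing cheap quads, which is exactly the "many static primitives" point below.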
The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do similar, especially with decorations.
- Likes 4
Originally posted by CochainComplex View Post
Correct me if I'm wrong, but can 2D be considered a special case of 3D, where any affine transformation always has a constant value (even zero) for one of the coordinates (x, y, z)?
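That's essentially how compositors treat it: a 2D affine transform embeds into a 4x4 homogeneous matrix that leaves one coordinate (here z) unchanged. A small illustrative sketch with plain Python lists (the function names are mine, not from any library):

```python
# Sketch: a 2D translation written as a 4x4 homogeneous matrix.
# The z row is the identity, so z passes through constant -- 2D is
# just the 3D pipeline with one coordinate pinned.

def translate2d_as_4x4(tx, ty):
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, 0],   # z unchanged: the "constant coordinate"
        [0, 0, 0, 1],
    ]

def apply(m, p):
    """Multiply 4x4 matrix m by homogeneous point p = (x, y, z, 1)."""
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(4))

# Move a point 10 right and 5 down; z stays 0 throughout.
moved = apply(translate2d_as_4x4(10, 5), (1, 2, 0, 1))
```

Rotation and scale in the plane embed the same way, touching only the x and y rows.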
Originally posted by polarathene View Post
The point I'm trying to communicate is that you get many static primitives that can composite the window content, and windows themselves can do similar, especially with decorations.
Last edited by kpedersen; 22 June 2020, 09:01 AM.
- Likes 1
Originally posted by TemplarGR View Post
It's because they have to draw them pixel by pixel in real time; there are no premade textures or models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them raytraced.
- Likes 1
Originally posted by polarathene View Post
Where are you getting all this from? And how is ray tracing meant to improve 2D composition? (in the sense of desktop UI)
There are plenty of techniques to handle this more efficiently on the GPU; it's not as you describe.
Your models/meshes are 2D quads: four vertices for a window, with a render texture to composite with the rest of the desktop. Pixel shaders can be used by compositors for effects; I'm pretty sure you'll find that with some of KWin's.
What I wanted to say is that we should be using pixel shaders to actually draw the window and create the texture itself. That would remove some bottlenecks, I think.
- Likes 1
Originally posted by eydee View Post
Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop, at whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
I have actually been wondering about a jumpy mouse cursor for a few days now. It seems to occur at random, but persistently. This bug could very well explain that as well.
- Likes 1