2D Rendering On X11 Remains Barely Faster Than CPU Rendering
Originally posted by curaga View Post
It's been years since I used xfce, but I recall that if you use such a custom size, it caches the resized icons once on disk, and then every subsequent start uses the cached icons. Could be wrong.
The choice of algorithms is also not limited to nearest-neighbour or bilinear. GPUs these days can be programmed to run any algorithm. Nor does it make much sense to use a computer's main memory for caching graphics when graphics cards have 256 MB to several GB of video RAM.
I believe the point here is that the current state of X11 is poor and not that CPU rendering should be favoured.
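To make the point about resampling choices concrete, here is a small Python/Pillow sketch that performs the same downscale with four different filters. This is only a CPU-side stand-in for illustration (the post above is about GPU shaders, which could implement any of these as well); the file names and the 66x66 target are made up.

from PIL import Image

# Illustration only: the same downscale done with four different resampling
# filters. Pillow is just a convenient CPU-side stand-in; "icon-128.png"
# is a placeholder input.
src = Image.open("icon-128.png")
for name, flt in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("bicubic", Image.BICUBIC),
                  ("lanczos", Image.LANCZOS)]:
    src.resize((66, 66), flt).save(f"icon-66-{name}.png")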
It's been years since I used xfce, but I recall that if you use such a custom size, it caches the resized icons once on disk, and then every subsequent start uses the cached icons. Could be wrong.
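A rough sketch of the behaviour described here, in Python with Pillow. The cache location, the PNG-only handling and the fixed 66x66 size are assumptions made purely for illustration, not how xfce actually does it.

from pathlib import Path
from PIL import Image

# Hypothetical cache directory, not xfce's real one.
CACHE_DIR = Path.home() / ".cache" / "icon-scale-demo"

def scaled_icon(src: Path, size: int) -> Path:
    """Return a cached copy of src resized to size x size,
    creating it on the first request and reusing it afterwards."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / f"{src.stem}-{size}.png"
    if not cached.exists():  # the first start pays the resize cost once
        Image.open(src).convert("RGBA").resize((size, size), Image.LANCZOS).save(cached)
    return cached  # later starts just read the cached file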
Originally posted by caligula View Post
The fastest complexity class is always O(0). For instance in this case if the scaling of icons is a problem, I'd render them at install time and use the persistent cached versions. For instance this is the space usage for gnome icons:
200K /usr/share/icons/gnome/128x128
1.7M /usr/share/icons/gnome/16x16
1.6M /usr/share/icons/gnome/22x22
1.6M /usr/share/icons/gnome/24x24
8.0M /usr/share/icons/gnome/256x256
1.7M /usr/share/icons/gnome/32x32
1.9M /usr/share/icons/gnome/48x48
632K /usr/share/icons/gnome/512x512
32K /usr/share/icons/gnome/8x8
112K /usr/share/icons/gnome/icon-theme.cache
12K /usr/share/icons/gnome/index.theme
17M /usr/share/icons/gnome/scalable
It's pretty clear that you could easily cache all icons once more for a custom size. Maybe some animations could need unlimited scaling, but for me a simple solution would be sufficient.
So what about variable scaling? You want to scale a window, or only a part of a layout, as fluidly as possible and without any flickering or delays. Even a delay as short as 50 ms feels like pulling on rubber rather than being instantaneous. To appear instantaneous it needs to happen within 16 ms or less, meaning all images have to be resized within one refresh interval (16.7 ms at 60 Hz). On top of this you have 4K displays and refresh rates going up to 100 Hz (some gaming monitors even do 144 Hz these days).
Or look towards smartphones, which have display resolutions of 2560x1440 (at merely 5-6 inches). You want these to render their graphics as fluidly as possible, or it will hinder finger gestures and feel like losing touch with the display.
Graphics is the domain of GPUs and not CPUs, strictly speaking. So there shouldn't be a need for CPUs to compete with GPUs by increasing the complexity of software renderers. Instead, the sensible way is to improve GPU rendering. It's not about whether a CPU can or cannot do graphics, but about letting the right silicon do the job it was meant to do, leaving the CPU free not only for rudimentary tasks but also for new and yet unknown developments.
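For what it's worth, the frame-budget argument can be checked with a few lines of Python/Pillow. The 1920x1080 stand-in image and the bilinear/Lanczos pairing are assumptions, and a real compositor would of course do this on the GPU rather than in Pillow.

import time
from PIL import Image

# Time available per frame at common refresh rates.
for hz in (60, 100, 144):
    print(f"{hz} Hz -> {1000 / hz:.1f} ms per frame")

# How long does one CPU-side resize of window-sized content take in comparison?
frame = Image.new("RGBA", (1920, 1080), (40, 40, 40, 255))  # synthetic stand-in
for name, flt in [("bilinear", Image.BILINEAR), ("lanczos", Image.LANCZOS)]:
    t0 = time.perf_counter()
    frame.resize((1280, 720), flt)
    print(f"{name}: {(time.perf_counter() - t0) * 1000:.1f} ms")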
Originally posted by sdack View Post
I don't know yet. I don't just expect a pure speed-up from it, but also an increase in image quality. The tricks focus mainly on speed, sometimes lowering image quality by using the fastest algorithms over those with the best image quality (i.e. linear vs. Lanczos or super-sampling), require additional memory, and are becoming increasingly complex. In short, I see more than one possible gain here. I don't really have many expectations as of now, but would like to see where this development leads before I make any judgements. One won't know for sure before then. I'm talking about Intel's FastUIDraw, by the way. I'm also thinking about SVG rendering in hardware. The discussion of whether it should be done in software or by X11 is merely the epilogue of what is about to come. If X11 can't do it any faster, then it needs to be improved, or we have to look to Wayland to do it.
The fastest complexity class is always O(0). For instance in this case if the scaling of icons is a problem, I'd render them at install time and use the persistent cached versions. For instance this is the space usage for gnome icons:
200K /usr/share/icons/gnome/128x128
1.7M /usr/share/icons/gnome/16x16
1.6M /usr/share/icons/gnome/22x22
1.6M /usr/share/icons/gnome/24x24
8.0M /usr/share/icons/gnome/256x256
1.7M /usr/share/icons/gnome/32x32
1.9M /usr/share/icons/gnome/48x48
632K /usr/share/icons/gnome/512x512
32K /usr/share/icons/gnome/8x8
112K /usr/share/icons/gnome/icon-theme.cache
12K /usr/share/icons/gnome/index.theme
17M /usr/share/icons/gnome/scalable
It's pretty clear that you could easily cache all icons once more for a custom size. Maybe some animations could need unlimited scaling, but for me a simple solution would be sufficient.
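The install-time idea above could look roughly like this in Python with Pillow. The theme path, the hypothetical 66x66 output directory and the PNG-only handling are assumptions (the SVGs under scalable/ would need a separate rasteriser), and writing into /usr/share obviously needs the appropriate permissions.

from pathlib import Path
from PIL import Image

SRC = Path("/usr/share/icons/gnome/128x128")  # largest fixed-size set in the listing
DST = Path("/usr/share/icons/gnome/66x66")    # hypothetical custom size

# Pre-render every PNG once at install time; applications then load the
# custom size directly instead of rescaling at every startup.
for png in SRC.rglob("*.png"):
    out = DST / png.relative_to(SRC)
    out.parent.mkdir(parents=True, exist_ok=True)
    Image.open(png).convert("RGBA").resize((66, 66), Image.LANCZOS).save(out)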
Originally posted by caligula View Post
Just out of curiosity, have you ever benchmarked this? How much speedup are you expecting? Let's say your icons are displayed at 66x66 resolution and the original is 128x128 or SVG. Your application uses on average something like 20-50 icons. How much speedup are you expecting, really? On my system the image libraries can downscale even 20-megapixel images in 0.2 seconds (per core). These images contain a huge amount of data compared to icons. I'd guess you'd save less than 50 milliseconds on program startup. You save more by switching from glibc to musl.
The hardware manufacturers keep looking for new ground to explore, and advances in chip design are no longer used only for faster 3D graphics. Nvidia, for example, provides hardware video decoding and encoding for H.264 and H.265 at amazing speeds (and can render 3D graphics and run CUDA computations at the same time). I see Intel's attempt as a "going back to basics", and I still remember a time when we thought S3 chips were the best, because they could move a window around with its content and in 24-bit depth (that was before GPUs could do 3D graphics...).
If Intel has managed to include all the recent developments in 2D graphics, then this could definitely be a success, I believe, and I would certainly want it with my next GPU. It also means thin clients can become even thinner and more powerful, which means we get to conserve more energy, reduce CO2, etc.
Originally posted by sdack View Post
The buttons, icons and thumbnails you see just come in a few fixed sizes. These then get constantly up- or downscaled wherever they are used and before they can get cached. This adds to the start times of applications and the opening times of windows and menus. Having a bit more speed here would be nice.
Beyond this, faster 2D will help with HTML rendering and bring more web content. I'm guessing advertisements and flashy web pages will profit from accelerated 2D the most. Whether this is a good or a bad thing is another question, but it shouldn't be a reason not to have faster 2D.
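To put a number on the start-up cost being argued about here (and on the 20-50 icons estimate quoted in the previous post), a quick Python/Pillow timing sketch; the icon directory and the 66x66 target are assumptions, and results will vary by machine.

import time
from pathlib import Path
from PIL import Image

# Take up to 50 icons, as in the estimate above; the path is an assumption.
icons = sorted(Path("/usr/share/icons/gnome/128x128/apps").glob("*.png"))[:50]

t0 = time.perf_counter()
for p in icons:
    Image.open(p).convert("RGBA").resize((66, 66), Image.LANCZOS)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"scaled {len(icons)} icons in {elapsed_ms:.0f} ms")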
Originally posted by starshipeleven View Post
He means that the system takes the icons from a folder and just swaps them with bigger/smaller ones when "scaling" them.
"Which is why every sane program, WM and DE uses the supplied power-of-two icon sizes. No scaling anywhere."
He believes the reason for the power-of-two icons is to avoid scaling. We obviously do have variable scaling and not fixed-size scaling. I assume the power-of-two sizes help to speed up scaling - i.e. to scale to a 20x20 size you load a 32x32 image, to scale to a 100x100 size you load a 128x128 image, and so on. It could also be that they get upscaled. I haven't looked into it for a while, so I can't be sure.
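A tiny sketch of that guess (only the guess from the post, not how any toolkit is known to behave): pick the next-larger size from a power-of-two ladder, so every request becomes a downscale from a nearby source.

# Hypothetical power-of-two ladder, matching the guess above.
POW2_SIZES = [16, 32, 64, 128, 256, 512]

def pick_source_size(target: int) -> int:
    """Return the smallest shipped size that is >= target,
    falling back to the largest one."""
    for size in POW2_SIZES:
        if size >= target:
            return size
    return POW2_SIZES[-1]

print(pick_source_size(20))   # -> 32, as in the example above
print(pick_source_size(100))  # -> 128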
Originally posted by sdack View Post
"No scaling anywhere" you say?