GNOME Dynamic Triple Buffering Can 2x The Desktop Performance For Intel Graphics, Raspberry Pi


  • Originally posted by drake23 View Post
    Finally... How does Plasma do this? On my old X200s, Plasma is super smooth while gnome stutters a lot. I hope this development fixes this.
    By not sucking. This is only going to cause excessive stutter as the window manager bounces between double and triple buffering, and it will do the triple buffering at the worst times, e.g. when the GPU is falling behind, thus incurring the worst possible latency; then it will speed back up after a few ms, and so on...
    Last edited by cb88; 17 February 2022, 02:44 PM.



    • Originally posted by fulalas View Post
      I wish. If you read the article carefully, there's a reference to what I claimed:

      'Much of the code of the shell is written in Javascript'

      [Image: tech-components-diagram.png]

      https://wiki.gnome.org/Projects/GnomeShell/Technology
      The JS portion of Gnome Shell is doing layout. You see how it is a tiny slice at the top? The parts that actually matter for performance are written in C. If you dispute that, then tell me what is written in JS that is causing performance problems.

      Don't believe me? You don't need to. Here is Canonical's main Gnome contributor Daniel van Vugt saying exactly the same thing. I'll quote him:

      Originally posted by Daniel van Vugt
      Let’s get this right because many people don’t and it leads to unfair criticism of Gnome Shell. It is a desktop environment written in C and JavaScript on top of the Mutter compositor/window manager.

      The majority of the logic is in the Mutter project, which includes the Clutter and Cogl graphics toolkits. Mutter is comprised of 1389 C source files (at time of writing). While most of the Mutter project is used as libraries by Gnome Shell, a mutter command also exists as a standalone compositor if all you need is a mouse pointer and wallpaper.

      The Gnome Shell project is the smaller of the two projects, adding bling like the desktop panel and launcher, similar to how Unity 7 sits on top of Compiz. Gnome Shell is made up of 199 C source files and 157 JavaScript source files (at the time of writing).

      The important thing to note here is that most of the source code is in the Mutter project, not Gnome Shell. So overall only around 10% of Gnome Shell is written in JavaScript when you consider Mutter, and around 90% written in C.
      You are spreading nonsense here about Gnome's performance. Blame the C code if you don't like it, but it has nothing to do with JavaScript. Look at your graphic again. You see all that stuff below the JS part? Mutter, clutter, etc? Those are the parts responsible for the performance of Gnome.

      Originally posted by fulalas
      Also, all extensions have to be written in JavaScript. And the more extensions you have, the more instances of the JavaScript engine will have to run.
      This is totally false; where are you getting that from? Extensions don't each spawn their own engine. They all run inside the shell's single JavaScript process, and all they do is patch the same objects. That's why they can conflict with one another.
      Last edited by krzyzowiec; 17 February 2022, 02:56 PM.



      • Originally posted by binarybanana View Post
        They should've gone with Lua at least. JS is a horrible language, and you get to pick either slow, snappy, and lean or fast, sluggish, and fat, depending on the runtime. Meanwhile, with Lua you can get fast, snappy, and lean all at once with LuaJIT, or still reasonable performance with the stock Lua interpreter if LuaJIT isn't available on the CPU arch. IIRC Arcan does exactly that. Looking forward to that.
        More nonsense. JavaScript has great performance compared to Lua.



        • Originally posted by cb88 View Post

          By not sucking. This is only going to cause excessive stutter as the window manager bounces between double and triple buffering, and it will do the triple buffering at the worst times, e.g. when the GPU is falling behind, thus incurring the worst possible latency; then it will speed back up after a few ms, and so on...
          That's wrong on both counts. Stutter is what you get when the frame rate is either too low or just inconsistent. Switching from double to triple buffering should be imperceptible, or at least it can and should be.
          Also, triple buffering reduces input latency when the GPU falls behind. Think about it: if you miss vblank with double buffering, the GPU does nothing until the next vblank after it finishes the current frame, so both rendering and input processing are essentially blocked from reacting to events in the meantime. The best case is then roughly 16 × 2 = 32 ms of latency, whereas if you render at 55 FPS, input latency will be closer to 16 ms (roughly 16 × 60/55 ≈ 17.5 ms on average), because input is handled sooner and a fresher frame is pushed to the screen more often. I say "roughly" because the moment a human provides input is essentially random and could fall anywhere between frames, but that doesn't really matter for this discussion.
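          To make that arithmetic concrete, here is a back-of-the-envelope sketch in C, using exact 60 Hz frame times rather than the rounded 16 ms above (the FPS figures are the same ones from the post):

          /* Rough input-latency comparison on a 60 Hz display. */
          #include <stdio.h>

          int main(void) {
              const double vblank_ms = 1000.0 / 60.0;   /* ~16.7 ms per refresh */

              /* Double buffering that misses vblank: the finished frame waits
                 for the next vblank, and a new one can only start after that,
                 so events wait roughly two refresh periods. */
              double double_buf_ms = 2.0 * vblank_ms;   /* ~33 ms */

              /* Triple buffering rendering at 55 FPS: a fresher frame is ready
                 at (almost) every vblank, so events wait about one 55 FPS
                 frame period. */
              double triple_buf_ms = 1000.0 / 55.0;     /* ~18 ms */

              printf("double buffering, missed vblank: ~%.1f ms\n", double_buf_ms);
              printf("triple buffering at 55 FPS:      ~%.1f ms\n", triple_buf_ms);
              return 0;
          }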

          The only time triple buffering increases latency over double buffering is when the GPU renders in excess of the refresh rate until it is blocked, similar to the double-buffered case, i.e. because there are two rendered frames waiting for scanout. But some triple-buffer schemes don't even have this form of backpressure, because they don't treat the buffers as a queue: the oldest frame just gets overwritten, and in that case there is no latency cost even if the GPU renders above the refresh rate. This is effectively what FastSync, TearFree, and ForceCompositionPipeline with vsync off give you.
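          Here is a sketch of that non-queue ("mailbox") scheme, with hypothetical names for illustration (this is not any compositor's actual code): three slots, the renderer never blocks, and scanout always flips the newest finished frame, silently dropping older ones:

          /* Hypothetical mailbox-style triple buffering. Slots rotate through
             three roles: being drawn into, ready for display, being scanned out. */
          static int drawing = 0;   /* slot the GPU is rendering into    */
          static int ready   = -1;  /* newest finished frame, -1 if none */
          static int scanout = 1;   /* slot the display is reading from  */

          /* GPU finished a frame: publish it and grab a free slot. Never blocks. */
          static void frame_done(void) {
              int stale = ready;             /* an unshown frame gets dropped */
              ready = drawing;               /* newest frame wins             */
              /* reuse the dropped frame's slot, else the one nobody is using
                 (slot indices sum to 0+1+2 = 3, so the free one is the rest) */
              drawing = (stale >= 0) ? stale : 3 - ready - scanout;
          }

          /* Vblank: flip to the newest finished frame, if any. */
          static void vblank(void) {
              if (ready >= 0) {
                  scanout = ready;           /* old scanout slot becomes free */
                  ready = -1;
              }                              /* nothing new? keep showing the
                                                current frame: no tearing     */
          }

          int main(void) {
              frame_done();   /* render faster than refresh:        */
              frame_done();   /* second frame replaces the first    */
              vblank();       /* display flips to the newest frame  */
              return 0;
          }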
          If it is a queue, what you can do is limit the frame rate so that the GPU never renders more than a single frame in advance. That way you can still render at arbitrary frame rates with vsync on (instead of just x, x/2, x/3, etc., where x is the refresh rate), but you don't pay for it when the GPU races ahead. Gamers have been doing this for decades to keep latency low. This is basically the same as this dynamic triple-buffer scheme and might actually be the way it's implemented, but that's just a guess.
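          For completeness, a sketch of such a limiter, assuming POSIX clock_nanosleep and a fixed 60 Hz refresh (render_frame and present are placeholder hooks, not real API calls): the render loop is paced so that at most one finished frame ever waits in the queue:

          #define _POSIX_C_SOURCE 200112L
          #include <time.h>

          #define REFRESH_HZ   60
          #define NSEC_PER_SEC 1000000000L

          /* Placeholders standing in for real render/present calls. */
          static void render_frame(void) {}
          static void present(void)      {}

          /* Sleep until the next refresh-period boundary, so the GPU never
             runs more than one frame ahead of scanout. */
          static void limit_to_refresh(struct timespec *next) {
              next->tv_nsec += NSEC_PER_SEC / REFRESH_HZ;
              if (next->tv_nsec >= NSEC_PER_SEC) {
                  next->tv_nsec -= NSEC_PER_SEC;
                  next->tv_sec  += 1;
              }
              /* returns immediately if rendering already took a full period */
              clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, next, NULL);
          }

          int main(void) {
              struct timespec next;
              clock_gettime(CLOCK_MONOTONIC, &next);
              for (;;) {
                  render_frame();
                  present();
                  limit_to_refresh(&next);
              }
          }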
          Last edited by binarybanana; 17 February 2022, 03:21 PM. Reason: typo



          • Originally posted by binarybanana View Post
            That's wrong on both counts.
            Except it's not... Triple buffering that inadvertently increases the frame rate by forcing the GPU into a higher power state is (a) idiotic, (b) wasteful of power, and (c) not actually solving a problem but causing others. Triple buffering AT THE SAME FRAME RATE has higher latency; arguing that it has lower latency because you increased GPU power use by a huge amount is brain damaged.

            Your triple buffering arguments also make no sense... if it isn't a queue, that means you potentially throw frames away and are inherently introducing stutter.



            • Yes, it is a queue in most implementations common today, but it wasn't always; I only mentioned it to point that out. As I said, triple buffering increases latency only when the buffers fill up. If you don't let that happen, it is equal to or better than double buffering. That is obviously true when more frames get painted on the screen, carrying input that wouldn't have been processed yet at all if rendering were blocked until vsync. With double buffering you can't render at 50 FPS: the next step down is 30 FPS, then 20 FPS, and so on (made explicit in the loop below). Naturally, input latency will be worse at 30 FPS than at 50 FPS.
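              Those "steps down" are just the refresh rate divided by how many vblanks each frame stays on screen; a trivial loop shows the rates double buffering can actually deliver at 60 Hz:

              #include <stdio.h>

              int main(void) {
                  /* strict double-buffered vsync shows each frame for a whole
                     number of refresh periods, so only 60/n rates are possible */
                  for (int n = 1; n <= 4; n++)
                      printf("%d vblank(s) per frame -> %.1f FPS\n", n, 60.0 / n);
                  return 0;   /* prints 60.0, 30.0, 20.0, 15.0 */
              }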

              And you're totally mischaracterizing what's happening here. Running your desktop at 60 FPS instead of 30 FPS when the display refresh rate is 60 Hz is neither idiotic nor wasteful. WTF? It's what any sane person would expect. If you don't like it, the sensible thing would be to set your refresh rate to 30 Hz or use a limiter. Why waste power on scanning out memory more than once for the same frame? /s
              I could somewhat understand if you said it's wasteful that GNOME needs this at all to perform well enough, but that's not what you're saying.



              • The only thing triple buffering does is implement two back buffers for vsync; whether it is queue-based or not is a separate matter. Triple buffering does not force your GPU to work harder, it merely gives it the ability to. Most modern implementations create two side-by-side buffers that alternately get filled and discarded; when the display refreshes, the newest frame is picked to flip. So while it does introduce input lag, it is usually a single frame, which will certainly be an issue for some people, but the vast majority of users won't notice.

                Whether or not GNOME is using this I have no idea (though I would hope so). And merely implementing this doesn't mean your GPU will go wild: you can still frame-cap, or target a certain performance level. This will not introduce stutter and, if done even remotely right, will not cause the GPU to go wild.

                EDIT: As an addendum, Vulkan for instance can independently set the buffer count and the flip type, meaning you can have traditional vsync or triple-buffered vsync, and you can set whether or not they are in queue mode (I think NVIDIA only supports queue mode on Linux, thanks NVIDIA). See the sketch below.
                Last edited by Quackdoc; 17 February 2022, 07:33 PM.
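
                A minimal sketch of the Vulkan knobs being described, assuming the device, surface, surface format, and extent have already been chosen and omitting error handling: minImageCount requests the buffer count, while presentMode picks queued vsync (FIFO) versus newest-frame-wins (MAILBOX):

                #include <vulkan/vulkan.h>

                /* Triple buffering with newest-frame-wins presentation. */
                static VkSwapchainKHR make_swapchain(VkDevice device,
                                                     VkSurfaceKHR surface,
                                                     VkSurfaceFormatKHR fmt,
                                                     VkExtent2D extent) {
                    VkSwapchainCreateInfoKHR info = {
                        .sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
                        .surface          = surface,
                        .minImageCount    = 3,   /* the buffer-count knob */
                        .imageFormat      = fmt.format,
                        .imageColorSpace  = fmt.colorSpace,
                        .imageExtent      = extent,
                        .imageArrayLayers = 1,
                        .imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
                        .imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
                        .preTransform     = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
                        .compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
                        /* the flip-type knob: MAILBOX drops stale frames,
                           VK_PRESENT_MODE_FIFO_KHR is classic queued vsync */
                        .presentMode      = VK_PRESENT_MODE_MAILBOX_KHR,
                        .clipped          = VK_TRUE,
                    };
                    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
                    vkCreateSwapchainKHR(device, &info, NULL, &swapchain);
                    return swapchain;
                }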



                • Originally posted by Mario Junior View Post
                  Linux will never be a relevant thing on desktops because we can't have a fucking decent desktop environment. This is the most basic element of computer usability.
                  True, true.
                  Among the many annoying things (totally non-exhaustive list)...
                  - Inability to properly remove some USB drives, or at least warn people that it's still actually writing to them (flushing) for a few seconds.
                  - Impossibility of telling which program / window is locking a mount, thus preventing ejection of an external drive.
                  - Extreme difficulty changing audio configuration and muxing on the fly.
                  - Near-total impossibility of customizing desktop layout, appearance, behaviour and shortcuts to optimize daily routines.
                  - Incapability of installing core updates without restarting the OS several times, also preventing use of the computer during updates.
                  - Extreme complexity in configuring some environment things because of a mix of interfaces.
                  - Unreliability of program installs/updates because only a few apps are actually available from a central repository.

                  ...
                  ...
                  Oh, wait, my bad, was looking at Windows list. xd



                  • Originally posted by Citan View Post

                    True, true.
                    Among the many annoying things (totally non-exhaustive list)...
                    - Inability to properly remove some USB drives, or at least warn people that it's still actually writing to them (flushing) for a few seconds.
                    - Impossibility of telling which program / window is locking a mount, thus preventing ejection of an external drive.
                    - Extreme difficulty changing audio configuration and muxing on the fly.
                    - Near-total impossibility of customizing desktop layout, appearance, behaviour and shortcuts to optimize daily routines.
                    - Incapability of installing core updates without restarting the OS several times, also preventing use of the computer during updates.
                    - Extreme complexity in configuring some environment things because of a mix of interfaces.
                    - Unreliability of program installs/updates because only a few apps are actually available from a central repository.

                    ...
                    ...
                    Oh, wait, my bad, was looking at Windows list. xd
                    All of these are small issues to the majority of people. Here are some big issues with Linux DEs:

                    File transfers on Linux are abysmal. So many times I have been using an exFAT drive, gone to eject it, and nothing: it sits for 5-10 minutes even for a small transfer, and of course if you rip out the drive all the data is gone. Only the terminal is good for transferring files, and you need to use pv or rsync to get a bloody progress bar. This has happened on GNOME, KDE, and Xfce, and with many more GUI file browsers. You can blame the filesystem all you like, and blame Microsoft too, but this has almost never happened to me on Windows.

                    Also, I don't want to customize my desktop. Being able to customize a desktop doesn't mean the user should have to do so to get a semi-decent-looking desktop. KDE is buggy as hell, GNOME requires tweaking just to see your bloody background tasks, and so on and so forth.

                    We still don't have a semi-decent way of simply installing applications from the web. Flatpaks are thankfully getting there, but until all DEs can properly install them from a double click...

                    And these are just a few of the issues that the majority of people will have, and I limited them to DE-related ones.

