GNOME Dynamic Triple Buffering Can 2x The Desktop Performance For Intel Graphics, Raspberry Pi


  • #81
    Originally posted by Mez' View Post
    while Gnome is mostly developed by corporation employees with little to no community around.
    I missed this the first time around. Who do you think develops gnome?



    • #82
      Originally posted by oiaohm View Post
That is part of it but not quite. Yes, Intel reclocking heuristics do cause some problems, but GPUs like the Raspberry Pi's, which don't have the reclocking problem, are also helped by this.
      Oh, right.
Totally missed RPi's VideoCore horror show and similar low-end GPUs. Yes, if those GPUs can't even hit the target frame rate reliably, triple buffering will prevent stalling and even improve input latency.
In that case it's actually not even a hack or a workaround (other than for underpowered hw). The whole point of triple buffering is decoupling rendering from monitor refresh, after all. As any gamer who's had to deal with massive sudden frame rate drops to half the refresh rate with vsync on would know. What do you do? Turn on triple buffering (and maybe cap the frame rate to prevent buffer bloat...). In fact most games just use triple buffering by default nowadays because no one likes super jumpy fluctuations like that.
So if anything, the sensible thing wouldn't be to bash GNOME for adding this at all, but to bash them for taking so long to do it. Let's be reasonable about the hate instead of blindly attacking everything without objective evaluation :3.
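To make the frame-pacing point concrete, here is a rough back-of-the-envelope sketch in Python (my own toy model, not GNOME/mutter code; the 20 ms render time is an invented example of a GPU that can't keep up with a 60 Hz display):

Code:
import math

REFRESH_MS = 1000 / 60   # 60 Hz vsync interval, about 16.7 ms
RENDER_MS = 20.0         # hypothetical GPU time per frame (too slow for 60 fps)

def displayed_fps(buffers):
    """Very simplified model of the displayed frame rate with vsync on."""
    if buffers == 2:
        # Double buffering: after finishing a frame the renderer must wait for
        # the next vblank before it can reuse the back buffer, so a 20 ms frame
        # ends up occupying two 16.7 ms intervals -> 30 fps.
        frame_time = math.ceil(RENDER_MS / REFRESH_MS) * REFRESH_MS
    else:
        # Triple buffering: a spare buffer lets rendering start again
        # immediately, so output is only limited by the GPU (or the refresh
        # rate, whichever is slower).
        frame_time = max(RENDER_MS, REFRESH_MS)
    return 1000 / frame_time

print(f"double buffered: {displayed_fps(2):.0f} fps")  # ~30 fps
print(f"triple buffered: {displayed_fps(3):.0f} fps")  # ~50 fps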



      • #83
        Originally posted by drake23 View Post
        Finally... How does Plasma do this? On my old X200s, Plasma is super smooth while gnome stutters a lot. I hope this development fixes this.
        Plasma has been using triple buffering for a long time.



        • #84
          Originally posted by gens View Post

Nah. Computers are binary, so I declare a kilobyte is 8 bytes. (edit: wrote 64, but that is for a megabyte)

In reality, it's stupid. kB being 1000 B is because of hard drive manufacturers, at least in ISO. It makes absolutely no sense for it to be 1000, as computers just don't work that way and it doesn't matter for users.
That JEDEC defines it as 1024 shows that's how computers work (CPU cache is also base 2, so my CPU would have 2097.152 kB of L3 cache...).
The more ways I can think of looking at it, the more stupid it is. The only reason to have it be 1000 that I can think of is so you can more precisely figure out the size of a file when looking at it in units of bytes, and even then a ~60 MB file looks like 61733496 bytes, so it's easier to let the computer turn it into something human-readable than to count how many digits it has.

          And "kibibyte" just sounds stupid and smaller then "kilobyte".
No gens, I am sorry, lots of people make the JEDEC argument and it's wrong.

          The definitions of kilo, giga, and mega based on powers of two are included only to reflect common usage.
This line out of the JEDEC dictionary is lethal to any attempt to use JEDEC to justify 1024-based KB/MB/GB. It turns out that every time a JEDEC document declared terms with the 1024 base, it also mentioned the ISO size or the network size that is base 10. So 1024-based KB/MB/GB was technically just common usage by RAM makers and users, not an agreed standard, since the documents constantly state incompatible sizes.

It's important to open up the JEDEC dictionary. Do note that the JEDEC dictionary today declares the 1024-based KB/MB... deprecated, as in you are not meant to use it any more. JEDEC is about RAM, so the numbers should only ever be powers of 2. Yes, by modern JEDEC document-writing rules KB/MB... is effectively a typo, but they have to keep the old definition in their dictionary in case you pick up old JEDEC documents.

          IEEE/ASTM SI 10‑1997 states "This practice frequently leads to confusion and is deprecated."
Yes, this here is a JEDEC-backed standard. IEEE is your electrical standards body. Yes, JEDEC deprecated KB/MB... and stopped using it in new documents in 1997, other than the dictionary entry that has to stay valid for reading old documents.

Yes, for network transfer speeds, by standard KB/s is 1000 bytes per second, MB/s is 1,000,000 bytes per second, and so on. So yes, there is an incompatibility between network transfer speeds and the legacy JEDEC usage of MB. Yes, network transfer speed standards predate RAM standards.



Please note the byte being 8 bits only became standard in 1993, by the way. Before 1993 a byte could be 1 to 48 bits by convention. So that network figure of 1,000,000 bytes per second could, prior to 1993, mean 48-bit bytes, i.e. 6,000,000 modern-day bytes, so 1 MB/s (48-bit bytes) can equal 6 MB/s (current-day standard 8-bit bytes). If you are reading network documents made prior to 1993 it's a total mess working out what the transfer speed is, and the worst part is you have documents changing the byte size between chapters, so it's not even constant inside one document at times.

The 1990s were a very active time for finally defining what lots of computer terms in fact meant, instead of them being hardware- and use-case-specific values. Most of the sort-out went fairly smoothly; MB/KB and so on has been the only major dispute.
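For what it's worth, the numeric difference is easy to see with a quick throwaway Python snippet (my own example; the 2 MiB cache size is just an illustration):

Code:
cache_bytes = 2 * 1024 * 1024                 # hypothetical 2 MiB L3 cache

print(cache_bytes / 1000, "kB (SI, base 10)")       # 2097.152 kB
print(cache_bytes / 1024, "KiB (IEC, base 2)")      # 2048.0 KiB
print(cache_bytes / 1_000_000, "MB")                # 2.097152 MB
print(cache_bytes / (1024 * 1024), "MiB")           # 2.0 MiB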



          • #85
            Originally posted by Mez' View Post
            It's not for you to decide what would be useful for others.
            Never said I was, genius. I'm just stating the logistics of it. Things would be changing pages more often because newer applications would be inserted somewhere in the middle of the list. If you put applications in a group then that group would be placed on the grid alphabetically, and the applications would be placed alphabetically within it. So if you're looking for Krita but it's in a group named Art, then the name of the application isn't going to help you on the initial pass looking through the app grid.

From a usability standpoint, if you're going to have one type of sorting, I'd argue that Gnome's is more useful for applications. Alphabetical sorting makes sense for things like a physical dictionary, where your only option is to search manually, but that doesn't mean it always makes sense for a user interface. Case in point: online dictionaries, which may not have lists of words at all, just a search bar.

UIs can be sorted by more useful metrics, especially if they don't have to be the same for every user and the user's own actions are populating and depopulating the list. Sorting by most recently launched, most often used, last added, or custom ordering can all be useful to a user, and probably more useful than alphabetical order.

Are you using Gnome but you like KDE's software? They all begin with K, so if the app grid is alphabetical, those probably aren't going to be quickly accessible to you. If you use them often and the grid is sorted by last used, then they will often be on the first page. If the list is custom-ordered, then you can place them on the first page and they'll ALWAYS be there.
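To illustrate the point about sorting metrics, here is a small hypothetical sketch (nothing to do with GNOME Shell's actual code; the App fields are made up for illustration):

Code:
from dataclasses import dataclass
from datetime import datetime

@dataclass
class App:
    name: str
    installed: datetime   # when it was added
    last_used: datetime   # last launch time
    launches: int         # how often it has been launched

apps = [
    App("Krita",    datetime(2021, 3, 1), datetime(2021, 6, 9), 120),
    App("Kdenlive", datetime(2021, 5, 4), datetime(2021, 6, 1),  15),
    App("Firefox",  datetime(2020, 1, 2), datetime(2021, 6, 10), 800),
]

# The same list, ordered by different keys:
alphabetical  = sorted(apps, key=lambda a: a.name.lower())
recently_used = sorted(apps, key=lambda a: a.last_used, reverse=True)
most_used     = sorted(apps, key=lambda a: a.launches,  reverse=True)
last_added    = sorted(apps, key=lambda a: a.installed, reverse=True)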

I'm shocked that I have to explain this in this much detail, but if you're not gonna think about the logistics of these things then I'm gonna have to think for you.

And let's be clear: if the conversation were that Gnome and other DEs should give you the option to sort things differently, then I wouldn't be having this conversation. But it's not. Instead the conversation is that all these DEs only offer one way to sort their applications, but Gnome's way stands out, so it must be a flaw. That's idiotic.

            Originally posted by Mez' View Post
            The thing is at least you'd know where to find it approximately.
I don't have any extension activating alphabetical order, and I have literally no clue where my app will be positioned in the grid, which makes it very hard to find within my eight scrolls' worth of apps.
            If you just installed it, it's at the end. Very simple.



            • #86
              Originally posted by sinepgib View Post
              What's so special about those applications that makes Fluxbox a bad choice to use with them? What did people do before GPU accelerated desktops?
Note I _do_ think a compositor using basic GPU features is the way to go nowadays. GPUs are so commonplace that it's practically impossible to find anything newer than 2005 without at least an iGPU, and properly implemented it could save memory, CPU and energy compared to non-compositing managers for the same number of frame buffers. But I fail to see how programs that predate compositing WMs would become problematic under non-compositing WMs.
              It's not so much that it needs compositing. My point is that users of creative applications want and need a real GUI desktop, not a minimalistic tiling WM. Before GPU accelerated desktops everyone doing 3D, graphics and/or audio was using the Mac (and once the Atari ST for MIDI audio), not twm or anything where the actual user interface was xterm and the way to choose your desktop wallpaper was through "vi .obscurerc".



              • #87
                Originally posted by oiaohm View Post

No gens, I am sorry, lots of people make the JEDEC argument and it's wrong.
                https://www.jedec.org/standards-docu...orage-capacity

This line out of the JEDEC dictionary is lethal to any attempt to use JEDEC to justify 1024-based KB/MB/GB. It turns out that every time a JEDEC document declared terms with the 1024 base, it also mentioned the ISO size or the network size that is base 10. So 1024-based KB/MB/GB was technically just common usage by RAM makers and users, not an agreed standard, since the documents constantly state incompatible sizes.

It's important to open up the JEDEC dictionary. Do note that the JEDEC dictionary today declares the 1024-based KB/MB... deprecated, as in you are not meant to use it any more. JEDEC is about RAM, so the numbers should only ever be powers of 2. Yes, by modern JEDEC document-writing rules KB/MB... is effectively a typo, but they have to keep the old definition in their dictionary in case you pick up old JEDEC documents.

                IEEE/ASTM SI 10‑1997 states "This practice frequently leads to confusion and is deprecated."
Yes, this here is a JEDEC-backed standard. IEEE is your electrical standards body. Yes, JEDEC deprecated KB/MB... and stopped using it in new documents in 1997, other than the dictionary entry that has to stay valid for reading old documents.

Yes, for network transfer speeds, by standard KB/s is 1000 bytes per second, MB/s is 1,000,000 bytes per second, and so on. So yes, there is an incompatibility between network transfer speeds and the legacy JEDEC usage of MB. Yes, network transfer speed standards predate RAM standards.



Please note the byte being 8 bits only became standard in 1993, by the way. Before 1993 a byte could be 1 to 48 bits by convention. So that network figure of 1,000,000 bytes per second could, prior to 1993, mean 48-bit bytes, i.e. 6,000,000 modern-day bytes, so 1 MB/s (48-bit bytes) can equal 6 MB/s (current-day standard 8-bit bytes). If you are reading network documents made prior to 1993 it's a total mess working out what the transfer speed is, and the worst part is you have documents changing the byte size between chapters, so it's not even constant inside one document at times.

The 1990s were a very active time for finally defining what lots of computer terms in fact meant, instead of them being hardware- and use-case-specific values. Most of the sort-out went fairly smoothly; MB/KB and so on has been the only major dispute.
                Didn't think you would know about networks.
And I know data transfer has, and always has had, funky rates. Even USB has, I think, 12-byte packets. It's even worse than storage, in the difference between baud and actual data rate.
                But that doesn't matter because the computer has base 2 addressing, and that is much more important.
It has to be base 2, and has to be rounded to the biggest base-2 value it can be, because anything else would increase complexity and hurt hardware performance. That only doesn't matter in spinning hard drives and (somewhat) in networking. But in programming base 2 is very important if you care about performance. In storage it completely and utterly doesn't make any difference, mainly because you eyeball it anyway: 100 1 kB files don't take up the same space as one 100 kB file, and a 4 GB file doesn't take up 4 GB on any storage medium, because of filesystems (technically you can use disks raw, and some do). A computer is not decimal, no matter how much it pretends to be.
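The "files don't take up their own size on disk" point is easy to demonstrate with a quick sketch, assuming a POSIX system (the exact allocated size depends on the filesystem block size):

Code:
import os

path = "tiny_example.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 100)            # a 100-byte file

st = os.stat(path)
print("apparent size :", st.st_size, "bytes")          # 100
print("allocated size:", st.st_blocks * 512, "bytes")  # typically 4096 on ext4
os.remove(path)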

It was always 1024, until Seagate and co. paid ISO. Actually they started doing it before ISO-ing it, because all big corporations (especially USA ones) are complete assholes.
                Used to be that engineers would engineer, before corporations would politicize. It's a shame.

                "..leads to confusion.."
Then why did they change it at all, when everybody knew it was 1024 and nobody had problems with it?

It's stupid. All of it. Honestly, I even doubt the people who say it's better now; I don't see where the trust in companies like ISO comes from.



                • #88
                  Originally posted by sinepgib View Post
I get it that this is a reasonable fix to improve utilization (it's been done by games for ages), but wouldn't it be better to fix the Intel driver's heuristics for that case? It's open source, so that's a real option. Not that they're mutually exclusive or anyone owes me fixing that, of course, but if the two biggest desktops and probably some others had to work around it, maybe it'd be a good idea to finally fix it at the source.
The issue that the buffer has to be locked before it can be sent to the monitor, to prevent tearing, cannot be fixed; that's just the way it has to be. Completely fixing up the Intel driver's heuristics would not remove the need for triple buffering at times. Please note the developer adding dynamic triple buffering notes it helps AMD and Nvidia GPUs when they have load spikes as well.

Please note I am not saying that Intel's heuristics should not be fixed; they should be. The issue here is that dynamic triple buffering deals with real problems caused by doing tear-free output. Double buffering has a serious flaw when it comes to utilisation, and it's in the nature of double buffering. Allowing tearing has serious risks to users; that has not stopped gamers from doing it for an advantage over their competitors, but you cannot do it to your general office workers, as health and safety laws around the world will sooner or later come after your tail. So from my point of view those implementing Wayland and X11 compositors aimed at general users really have no choice but to implement triple buffering of some form. Now for power usage we really do want dynamic triple buffering, where the compositor makes smart choices to save power: rendering double buffered most of the time and triple buffered only when it's needed. Yes, dynamic triple buffering costs a little more power than double buffering, but less than full-blown all-the-time triple buffering, and way less than rendering like a bat out of hell with no vsync and tearing all over the place.

Dynamic triple buffering is the best compromise for power usage while maintaining quality of output.
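For illustration only, the idea behind "dynamic" can be sketched like this (my own simplification in Python, not mutter's actual heuristic):

Code:
REFRESH_MS = 1000 / 60    # 60 Hz vsync budget

def choose_buffer_count(recent_frame_times_ms):
    """Return 2 (double) or 3 (triple) buffering for upcoming frames."""
    if not recent_frame_times_ms:
        return 2
    # If recent frames run close to (or over) the vsync budget, allocate a
    # spare buffer so rendering never stalls waiting for the page flip.
    slowest = max(recent_frame_times_ms)
    return 3 if slowest > 0.9 * REFRESH_MS else 2

print(choose_buffer_count([5.0, 6.2, 5.8]))     # light load     -> 2
print(choose_buffer_count([14.9, 17.3, 16.0]))  # under pressure -> 3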



                  • #89
                    Originally posted by gens View Post
It has to be base 2, and has to be rounded to the biggest base-2 value it can be, because anything else would increase complexity and hurt hardware performance. That only doesn't matter in spinning hard drives and (somewhat) in networking. But in programming base 2 is very important if you care about performance. In storage it completely and utterly doesn't make any difference, mainly because you eyeball it anyway: 100 1 kB files don't take up the same space as one 100 kB file, and a 4 GB file doesn't take up 4 GB on any storage medium, because of filesystems (technically you can use disks raw, and some do). A computer is not decimal, no matter how much it pretends to be.
This contains an assumption that does not 100 percent apply to the 1980s. IBM made a computer in the 1980s with BCD memory addressing. Yes, that really did make the RAM modules for it horribly complex. There were also a few prototype ternary logic systems in the 1970s and 1980s as well, so base 3.


So basically, before the 1990s we truly did have a wild west of custom RAM modules. JEDEC is the one that mandated base-2 addressing in 1958; yes, IBM and others did not always toe the line. Yes, IBM was a JEDEC member when they made a BCD memory-addressed computer.

                    Originally posted by gens View Post
It was always 1024, until Seagate and co. paid ISO. Actually they started doing it before ISO-ing it, because all big corporations (especially USA ones) are complete assholes.
I am sorry to give you bad news: it was not always 1024. BCD addressing gives 100,000,000 memory locations on a 32-bit address bus. And of course ternary logic gives you 3^10, which is 59,049 states, instead of 2^10, which is 1024 states. Both of these counted as a "KB" before the 1990s, and both kinds of machine were made in the 1980s by different companies.
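The arithmetic behind that, for anyone who wants to check it (my own back-of-the-envelope numbers, not from any 1980s datasheet):

Code:
# A 32-bit bus interpreted as binary vs as 8 BCD digits (4 bits each):
print(2 ** 32)          # 4294967296 addressable locations, pure binary
print(10 ** (32 // 4))  # 100000000 locations with BCD addressing

# Ten "digits" worth of states in binary vs ternary logic:
print(2 ** 10)   # 1024  (the familiar binary "K")
print(3 ** 10)   # 59049 (the ternary equivalent)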

gens, there is a rabbit hole before the 1990s in the computer world that you really don't want to jump into. In that rabbit hole a lot of the current-day assumptions are not valid, like power-of-2 addressing.

Seagate and the other storage vendors pushing ISO to make a ruling was double-sided: yes, they wanted kilo=1000 locked in for money reasons, but they also did not want to deal with BCD or ternary-logic sizes on their devices either. That is why Seagate and the other storage vendors also pushed for the KiB definition in ISO. So there is more here than you are presuming, and more of a fragmented mess than you are presuming.

In some ways we have to thank Seagate and the storage vendors for getting the ISO ruling that locked down KB... and KiB..., so they do in fact have a defined meaning instead of hardware makers having a creative idea and redefining them however they saw fit at the time.



                    • #90
                      Originally posted by You- View Post

You just press a single key and suddenly the app list is much smaller. Have you used gnome-shell? It almost seems like you haven't.

Gnome-shell is used predominantly (or pretty much exclusively) on PCs and laptops that have full keyboards. The first time you log in it goes through, or at least used to go through, an animation telling you how to find apps. You press the Windows key and type the first letter of the app. It's fast, it's easy, it's convenient.

Even if you wanted to scroll to find your app, it is convenient: your distro will come with around one page of apps pre-installed, and the rest appear pretty much earliest-installed first. If you still find the placement inconvenient, you can move the icon to whichever panel you want.

                      You make it sound like its not a system you have used at all.
I'm on Gnome 3-40+ and have been since 2017. I was on Gnome 2 until 2011 before that. To be complete, Unity happened between the two.
I don't use the keyboard to trigger simple things in my workflow. This isn't 1985, and it is arm-straining to constantly switch between your touchpad/mouse and the keyboard, to contort for key combos or to call back your resting arm. It's pure inefficiency and it is slow, and I avoid it as much as possible.
These days I mostly use the touchpad (I got an external one for my desktop) to steer Gnome with swiping gestures, and as a mouse for the rest. I got rid of any actual mouse.
I mostly use dash-to-dock to start and manage instances of an app (which you can't manage properly with the vanilla dock), and to be completely honest I don't go into the app grid that often, because I don't find it practical (compared to the Budgie app menu it's not good).

                      Originally posted by You- View Post
                      Extension support is built into gnome-shell. It is officially supported. Extensions are reviewed and available through gnome infrastructure. They may not be part of the gnome core, but they are part of gnome.
It's not true. There wasn't a single explicit mention of extensions within Gnome until 3.36 and the Gnome Extensions app, which clearly says where to get them. Before that, it was up to you to discover that they existed.
The thing is, Gnome devs didn't expect extensions would take off so well (just as they didn't expect users' disappointment with the lack of features and customization), so they found themselves with their backs to the wall, and that's when they started to half-ass some support. Plus, you can't say anything is official or properly supported if you break it every other release and make it complicated for third parties to follow suit.

