GNOME 40 Mutter Moves Input Work To A Separate Thread


  • #61
    Originally posted by oiaohm View Post
    The mainline Linux kernel is closer to a real-time kernel than the Windows or macOS kernels. The problem is that "real-time" is a vague term.
    So Linux is theoretically closer to a real-time kernel than both Windows and macOS, but in the real world both Windows and macOS are much more suitable than Linux for audio production and low-latency audio.

    Originally posted by oiaohm View Post
    Migrations are a slow and painful process.
    Yeah, Wayland has taken forever. I think macOS has had Quartz, which is similar to Wayland, since forever. Apple quickly migrated from x86 to ARM.

    Originally posted by oiaohm View Post
    That's not exactly true. There are permission systems like Android's present, but they are not used to their fullest. You will find that services on Linux cannot in fact do just anything, because they have SELinux, AppArmor, or some other LSM around them. It turns out the LSMs on Linux can enforce almost all the restrictions Android can (the missing restrictions are X11 limitations). Has an LSM ever been integrated properly into the Linux desktop? The answer is no. Does that mean the system to do it is not there? The answer is a solid no. Cgroups are also able to enforce almost all of the Android restrictions. So there are in fact two systems for Android-like restrictions: the older LSM system and the newer cgroups system.
    But permissions are something that works on Android today, fully.
    On Linux it's just: oh, this one is Flatpak, this one is Sandbox, and the rest are .debs and executables in /bin/. Sure, there is infrastructure, plumbing, and technology like cgroups in place that could theoretically be used, but it is not used to sandbox everything. It's like "we have all this, and we have this and this and this", but none of it is used. Or "you have this, which we could use to achieve that", but it's not achieved. Android achieved it in the real world long ago.



    • #62
      Originally posted by S.Pam View Post
      I was using multi-threaded applications back in the Amiga days... What happened? Why aren't important and critical things in separate worker threads already?
      Because software developers aren't taught how to do important things like multi-threading. And the desktop world still lives in "run everything on a single thread".

      Whereas on mobile OSes like Android and iOS you are forced to push obviously stalling work (like network calls) to background threads. And the best practices espoused there always tell developers to do heavy work on a background thread and not stall the rendering/input-handling thread.
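
      A minimal plain-Java sketch of that pattern (mine, not tied to any particular toolkit): a single-threaded executor stands in for the main/render thread, a worker pool takes the blocking call, and only the cheap "render" step ever touches the main thread. slowNetworkCall() is just a placeholder for illustration.

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      public class BackgroundWork {
          // "ui" stands in for the single main/render thread, "io" for background workers.
          static final ExecutorService ui = Executors.newSingleThreadExecutor();
          static final ExecutorService io = Executors.newFixedThreadPool(4);

          public static void main(String[] args) throws InterruptedException {
              io.execute(() -> {
                  String result = slowNetworkCall();                         // blocks a worker, not the UI
                  ui.execute(() -> System.out.println("render: " + result)); // hop back to the "main" thread
              });

              // Demo-only teardown; in a real app the toolkit owns the main loop's lifetime.
              io.shutdown();
              io.awaitTermination(5, TimeUnit.SECONDS);
              ui.shutdown();
          }

          // Placeholder for a slow, blocking operation such as a network request.
          static String slowNetworkCall() {
              try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
              return "payload";
          }
      }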



      • #63
        Originally posted by sandy8925 View Post

        Because software developers aren't taught how to do important things like multi-threading. And the desktop world still lives in "run everything on a single thread".

        Whereas on mobile OSes like Android and iOS you are forced to push obviously stalling work (like network calls) to background threads. And the best practices espoused there always tell developers to do heavy work on a background thread and not stall the rendering/input-handling thread.
        That's not entirely true. Developers do study multithreading, but it's still hard to get right and even harder to debug.
        Node.js does a pretty good job of using threads under the hood while hiding them from the developer.



        • #64
          Originally posted by sandy8925 View Post

          Because software developers aren't taught how to do important things like multi-threading. And the desktop world still lives in "run everything on a single thread".
          A lot of developers are taught how important multi-threading is. But the employer doesn't pay for implementing it in the software.



          • #65
            Originally posted by JackLilhammers View Post

            That's not entirely true. Developers do study multithreading, but it's still hard to get right and even harder to debug.
            Node.js does a pretty good job of using threads under the hood while hiding them from the developer.
            It's not really THAT hard. Read the book "Java Concurrency in Practice". The very first chapter demystifies it and boils it down to the essentials: concurrency problems are all about access to the same piece of data from multiple threads. So your job is to protect data accesses and plan for this in advance, when designing and writing the software. If you do that, most of your problems will vanish.

            If you have a "do whatever, QA will catch the concurrency bugs" attitude, then you're sorely mistaken. These bugs are insidious and hard to reproduce and debug. Designing the software properly is the only good way to deal with concurrency problems.

            Edit: There are, of course, additional problems when you're dealing with hardware, like a kernel does. Places where you won't get an interrupt but have to poll instead, or wait for a specific time to get an answer. Those are additional complications. But in userspace, when you're not dealing with hardware, it's way easier.
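
            A tiny Java sketch of that rule (my example, not from the book): the counter is the only piece of data touched from multiple threads, and every access to it is guarded by the same lock.

            import java.util.concurrent.ExecutorService;
            import java.util.concurrent.Executors;
            import java.util.concurrent.TimeUnit;

            public class Counters {
                // The one piece of shared mutable state, with every access guarded
                // by the object's monitor.
                static class SafeCounter {
                    private long value;
                    synchronized void increment() { value++; }
                    synchronized long get() { return value; }
                }

                public static void main(String[] args) throws InterruptedException {
                    SafeCounter counter = new SafeCounter();
                    ExecutorService pool = Executors.newFixedThreadPool(4);
                    for (int i = 0; i < 100_000; i++) {
                        pool.execute(counter::increment);
                    }
                    pool.shutdown();
                    pool.awaitTermination(10, TimeUnit.SECONDS);
                    // Prints 100000 every run. Drop the synchronized keywords and it
                    // comes up short only sometimes -- the kind of bug QA rarely catches.
                    System.out.println(counter.get());
                }
            }
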
            Last edited by Guest; 29 November 2020, 01:12 PM.



            • #66
              Originally posted by JackLilhammers View Post
              That's not entirely true. Developers do study multithreading, but it's still hard to get right and even harder to debug.
              Node.js does a pretty good job of using threads under the hood while hiding them from the developer.
              Nah, running simple tasks in threads is trivial. Reading a file from a disk or waiting for a network request to finish is an isolated task and therefore trivial to parallelize -- as long as the libraries used don't do something fucking stupid and mess it up.

              I've fiddled with reading audio file metadata in parallel (via multithreading) with TagLib, but the library is designed in such a stupid way that even the read-only functions are non-thread-safe / non-reentrant, leading to crashes.
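
              For what it's worth, one generic workaround, sketched in Java rather than C++/TagLib and with a hypothetical readTags() standing in for the non-thread-safe library call (this is not TagLib's real API), is to keep the per-file tasks parallel but serialize just the unsafe call behind a single lock:

              import java.nio.file.Path;
              import java.util.List;
              import java.util.concurrent.ExecutorService;
              import java.util.concurrent.Executors;

              public class TagScanner {
                  // Single lock serializing every call into the non-thread-safe library.
                  private static final Object tagLibLock = new Object();

                  // Hypothetical stand-in for the library's non-reentrant metadata read.
                  static String readTags(Path file) {
                      synchronized (tagLibLock) {
                          return "tags for " + file;
                      }
                  }

                  public static void main(String[] args) {
                      List<Path> files = List.of(Path.of("a.flac"), Path.of("b.flac"), Path.of("c.flac"));
                      ExecutorService pool = Executors.newFixedThreadPool(4);
                      for (Path f : files) {
                          // Each file is an isolated task; only the unsafe call is serialized,
                          // so this only pays off if the rest of the per-file work is non-trivial.
                          pool.execute(() -> System.out.println(readTags(f)));
                      }
                      pool.shutdown();
                  }
              }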



              • #67
                Originally posted by uid313 View Post
                So Linux is theoretically closer to a real-time kernel than both Windows and macOS, but in the real world, both Windows and macOS are much more suitable than Linux for audio production and low-latency audio.
                This is in fact wrong. There are a lot of users of JACK audio on Linux because they really do need low-latency audio for production. I will accept that getting somewhere near low-latency audio is far faster and simpler on Windows and macOS; this is why there is a lot of interest in PipeWire. Please note this has been a 35-year journey of reducing the number of sound servers on Linux. Yes, the sound server problem starts before Linux even existed.

                Originally posted by uid313 View Post
                Yeah, Wayland has taken forever. I think macOS has had Quartz, which is similar to Wayland, since forever. Apple quickly migrated from x86 to ARM.
                Migrating between CPUs is faster than migrating complete software interfaces. Quartz arrived in 2000 with the migration from Mac OS 9 to Mac OS X, and there was a huge stack of Mac OS 9 applications that had to run in emulation for 10+ years. By the way, Quartz is older than that: it was prototyped on the NeXTSTEP platform with Display PostScript, which started in 1987, so there were 13 years of development before Quartz was production ready, followed by about another 10 years after that to migrate all the applications.

                Core display interface API/ABI development, like it or not, is a multi-decade process, and hardware was a lot more complex by the time Wayland started. Android's interface was one of the fastest, but it was still a decade before the design was locked down.

                Wayland's development speed is what you should expect for a core display interface API/ABI. The progress has not been exactly fast, but compared to the speed this stuff was developed at on the other operating systems, it is going quite quickly. There have been a lot of unrealistic expectations about how fast Wayland should develop, given the histories of how long the other platforms took to develop the same things.

                Originally posted by uid313 View Post
                But permissions are something that works on Android today, fully.
                On Linux it's just: oh, this one is Flatpak, this one is Sandbox, and the rest are .debs and executables in /bin/. Sure, there is infrastructure, plumbing, and technology like cgroups in place that could theoretically be used, but it is not used to sandbox everything. It's like "we have all this, and we have this and this and this", but none of it is used. Or "you have this, which we could use to achieve that", but it's not achieved. Android achieved it in the real world long ago.
                Except Android has not achieved it for desktop-class use cases, so the claim of "fully" is wrong. For example, you don't run Android Studio on Android to develop Android applications. So there is an unsolved problem in the Android solution as well.

                Cgroups are going through the process of being deployed. There is a weakness in the Android permission model being application-based: what do you do when you have two users on a system who want the same application to have two different permissions? The Flatpak sandbox has been specifically designed to allow a single user or multiple users to run a single application with different permissions. The cgroup model being developed jointly by GNOME and KDE also includes this property of running a single application multiple times with different permissions.

                How to secure applications for every use case in a universal way is not in fact a solved problem yet.



                • #68
                  Originally posted by Mez' View Post
                  In Gnome on Wayland, there's also the issue of dealing with out-of-control memory use over time.

                  When using Gnome, every few days you need to refresh your session (on Xorg via Alt + F2, r) or RAM goes berserk (3-4x the clean usage), but this trick doesn't work on Wayland, leaving you with RAM usage going wild every. single. time. after a while of uptime. I don't want to go through the hassle of saving everything and logging out and back in to solve that, especially when I have an ongoing workflow.
                  I just don't understand why they didn't implement this refresh feature on Wayland. It's kind of dumb since it's a recurring Gnome issue.
                  That's not a feature, it's a bug.



                  • #69
                    Originally posted by 144Hz View Post
                    The review process and the CI are really good now. Adahl is super picky and thorough. CI happens at the commit level, GnomeOS level, and distributor level.

                    Just like upstreams are meant to be.
                    What is CI? Code input? Commit input? Sorry, I'm a newb.



                    • #70
                      Originally posted by Xsidm View Post

                      What is CI? Code input? Commit input? Sorry, I'm a newb.
                      Continuous Integration.

                      Automated build and testing systems.

