Wine Developers Are Working On A New Linux Kernel Sync API To Succeed ESYNC/FSYNC


  • #11
    I'd be surprised if this even gets a reply from any of the kernel developers; the chances of it materializing any time soon are even slimmer...

    Anyway, for anyone wondering about what is wrong with ESYNC or FSYNC (futex2), here's a quote from Zebediah Figura:
    However, "esync" has its problems. There are some areas where eventfd just doesn't provide the necessary interfaces for NT kernel APIs, and we have to badly emulate them. As a result there are some applications that simply don't work with it. It also relies on shared memory being mapped read/write into every application; as a result object state can easily be corrupted by a misbehaving process. These problems have led to its remaining out of tree. There are also some operations that need more than one system call and hence could theoretically be improved performance-wise.

    I later developed a second out-of-tree patch set, the awfully named "fsync", which uses futexes instead of eventfds. It was developed by request, on the grounds that futexes can be faster than eventfds (since an uncontended object needs no syscalls). In practice, there is sometimes a positive performance difference, sometimes a negative one, and often no measurable difference at all; it also sometimes makes performance much less consistent. It shares all of the same problems as "esync", including (to a degree) some inefficiencies when executing contended waits.
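    To make the esync design concrete: each NT synchronization object is backed by one kernel file description that waits can block on. Below is a toy Python sketch of that idea, using a pipe where real esync uses a Linux eventfd; the class and method names are invented for illustration and this is not Wine's actual code.

```python
import os
import threading

class AutoResetEvent:
    """Toy model of esync's approach: one NT auto-reset event backed by
    one kernel object.  Real esync uses an eventfd; a pipe shows the
    same idea -- signalling writes a token, waiting consumes one."""

    def __init__(self):
        self._r, self._w = os.pipe()

    def set(self):                 # SetEvent: wake exactly one waiter
        os.write(self._w, b"\x01")

    def wait(self):                # WaitForSingleObject: block, then auto-reset
        os.read(self._r, 1)        # the read consumes the signal

    def fileno(self):              # lets select/poll wait on this object
        return self._r

ev = AutoResetEvent()
results = []
t = threading.Thread(target=lambda: (ev.wait(), results.append("woken")))
t.start()
ev.set()
t.join()
print(results)                     # ['woken']
```

    The fileno() accessor is what makes the approach attractive: a pollable kernel object can be multiplexed alongside sockets and files, which plain pthread primitives cannot.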



    • #12
      Originally posted by skeevy420
      If E and F are no good I suppose they could go with GSYNC

      Wait: I got it. A name that's both a joke and works:

      WiNSYNC
      ESYNC sounds like a Gentoo utility to synchronize packages.
      FSYNC sounds like the system call to synchronize the disks.

      Wi... NSYNC? Are you for real? The pop band from the early 2000's?



      • #13
        I don't get why they're relying on some kernel API for synchronisation. Is this some syscall made by Windows applications? Is this some kind of inter-process synchronisation that needs to take place?



        • #14
          Originally posted by sandy8925
          I don't get why they're relying on some kernel API for synchronisation. Is this some syscall made by Windows applications? Is this some kind of inter-process synchronisation that needs to take place?
          Same here. I don't understand why the first step would be to come up with new syscalls, be it for fsync or ntsync. In the case of interprocess communication, what's wrong with atomic access to shared memory? Is there some problem with the Linux implementation of shared memory, or shared memory itself?



          • #15
            Originally posted by sandy8925
            I don't get why they're relying on some kernel API for synchronisation. Is this some syscall made by Windows applications? Is this some kind of inter-process synchronisation that needs to take place?
            Yes, these are (several) syscalls called by Windows applications, several of which can be used for inter-process synchronisation. I think the major one is the WaitForSingleObjects syscall that Windows applications use to wait for up to 64 objects at the same time, where objects can be (among others) mutexes, semaphores, file descriptors and sockets, so unfortunately not something that can be 1:1 replaced by select/poll/epoll. The mail linked in the article contains info on why the current syscalls in Linux do not really fit the bill here.
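            To illustrate what WaitForMultipleObjects has to do (wait-any or wait-all across up to 64 heterogeneous handles), here is a toy Python sketch over threading.Event objects only; the function name mirrors the NT call, but the implementation is invented for illustration.

```python
import threading

MAXIMUM_WAIT_OBJECTS = 64  # NT's documented limit for WaitForMultipleObjects

def wait_for_multiple_objects(events, wait_all=False, timeout=None):
    """Toy model of NT wait-any/wait-all semantics over threading.Event
    objects only.  Real NT handles mix mutexes, semaphores, processes
    and more, which are not file descriptors, so select/poll/epoll
    cannot wait on them directly -- that mismatch is Wine's problem."""
    if len(events) > MAXIMUM_WAIT_OBJECTS:
        raise ValueError("too many objects")
    if wait_all:
        for ev in events:                # simplified: timeout applies per object
            if not ev.wait(timeout):
                return None              # WAIT_TIMEOUT
        return 0                         # like WAIT_OBJECT_0
    # wait-any: one watcher per object; the first signalled index wins
    first, fired, lock = [], threading.Event(), threading.Lock()
    def watch(i, ev):
        if ev.wait(timeout):
            with lock:
                if not first:
                    first.append(i)
            fired.set()
    for i, ev in enumerate(events):
        threading.Thread(target=watch, args=(i, ev), daemon=True).start()
    fired.wait(timeout)
    return first[0] if first else None   # index of the signalled object

a, b = threading.Event(), threading.Event()
b.set()
print(wait_for_multiple_objects([a, b]))  # 1
```

            A real implementation has to block in the kernel on all the objects at once rather than burning a watcher thread per handle, which is exactly why Wine wants kernel support.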



            • #16
              Originally posted by F.Ultra

              Yes, these are (several) syscalls called by Windows applications, several of which can be used for inter-process synchronisation. I think the major one is the WaitForSingleObjects syscall that Windows applications use to wait for up to 64 objects at the same time, where objects can be (among others) mutexes, semaphores, file descriptors and sockets, so unfortunately not something that can be 1:1 replaced by select/poll/epoll.
              That would be WaitForMultipleObjects.

              Originally posted by F.Ultra
              The mail linked in the article contains info on why the current syscalls in Linux do not really fit the bill here.
              I'd think what's needed is a library like pthreads, but custom tailored. Not syscalls that get upstreamed, generally speaking.

              If the risk of corruption is a real problem for shared memory, maybe what's needed is a way to limit (write) access to those library functions, for example. That's what the kernel would have to do anyway, internally. Maybe setting that up would require a new syscall, but it would be one that other libraries can then use as well. Just my 2 cents.

              EDIT: For example, that might be a syscall which takes a shared mem id, a function pointer, and a context pointer as parameters. It would then execute that function with (write) access enabled, and disable (write) access before returning. Plus a clean-up function pointer in case the function crashes. (Not that I would exactly know what it takes to implement such a syscall.) Then any library implementing IPC functionality can use this facility.
              Last edited by indepe; 18 January 2021, 09:54 PM.
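              A rough pure-Python model of the syscall indepe is suggesting (every name here is invented): the "kernel" keeps the only writable reference private, lends the callback a writable view, and revokes it afterwards even if the callback raises, standing in for flipping page protections and for the proposed clean-up hook.

```python
class ProtectedSharedMem:
    """Toy model of the hypothetical protect-around-a-call syscall
    suggested above (all names invented for illustration)."""

    def __init__(self, size):
        self._buf = bytearray(size)          # stand-in for shared pages

    def read_view(self):
        # what a normal mapping would look like: read-only
        return memoryview(self._buf).toreadonly()

    def call_with_write_access(self, func, ctx):
        # "enable write access", run the function, then revoke it --
        # the finally clause plays the role of the crash clean-up hook
        view = memoryview(self._buf)
        try:
            return func(view, ctx)
        finally:
            view.release()                   # "disable write access"

shm = ProtectedSharedMem(4)

def writer(view, value):
    view[0] = value                          # allowed only inside the call

shm.call_with_write_access(writer, 42)
print(shm.read_view()[0])                    # 42

try:
    shm.read_view()[0] = 1                   # outside the call: rejected
except TypeError:
    print("read-only outside the call")
```

              Note this only sketches the access-toggling shape of the idea; it does not address the scheduler-awareness problem raised later in the thread.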



              • #17
                Originally posted by Cattus_D

                Codeweavers rely on Mac sales for a substantial part of their income. It is not odd, therefore, that they also work on retaining and expanding Mac compatibility.
                Not anymore; their main income now is Valve. That is why they have shifted their focus to gaming instead of office applications.



                • #18
                  I started using FSYNC over ESYNC, but performance-wise they seem the same to a layman. Now they are looking at something new; I hope this is the last time they have to work on the issue.



                  • #19
                    Originally posted by indepe
                    I'd think what's needed is a library like pthreads, but custom tailored. Not syscalls that get upstreamed, generally speaking.

                    If the risk of corruption is a real problem for shared memory, maybe what's needed is a way to limit (write) access to those library functions, for example. That's what the kernel would have to do anyway, internally. Maybe setting that up would require a new syscall, but it would be one that other libraries can then use as well. Just my 2 cents.

                    EDIT: For example, that might be a syscall which takes a shared mem id, a function pointer, and a context pointer as parameters. It would then execute that function with (write) access enabled, and disable (write) access before returning. Plus a clean-up function pointer in case the function crashes. (Not that I would exactly know what it takes to implement such a syscall.) Then any library implementing IPC functionality can use this facility.
                    Really, no. pthreads under Linux is not just a library on most Linux distributions: NPTL (Native POSIX Threads Library) requires a Linux 2.6 kernel or newer to provide the required syscalls so it can function. The shared memory Wine is using is very much like what pthreads used before NPTL, and that was not a good workaround for pthreads either.

                    For thread locking you really do need the kernel scheduler to know about it.

                    There is a nice fun little problem called priority inversion, and its cure, priority inheritance.
                    https://www.opensourceforu.com/2019/...l-programming/

                    Yes, there are particular behaviours in the Windows kernel where, if a thread/process is blocked on a lock, its time slice goes to the process/thread holding the lock, and this causes some major performance differences. A userspace software library does not fix these issues, because you need the kernel scheduler to be aware of the lock states and to allocate CPU resources according to that information. And once the scheduler has that locking information, most of it no longer needs to be shared via the workaround memory mappings.

                    The trap here is that the things you are forced to do when the kernel lacks the features you need are horrible memory maps that just cause more problems. Thread-locking libraries done purely in userspace always end up using horrible memory mappings that cause trouble, because the kernel scheduler is not getting the information it needs to make correct choices, and the memory has to be read/write between the processes, which opens up exploits. Kernel-based thread locking is vastly superior to the userspace stuff because the need for read/write memory shared between processes goes away and the scheduler has access to information it can use to make better choices.

                    Of course, these new forms of locking Wine wants could be useful to native Linux programs in the future due to their different behaviours. There are other areas in Linux that are a mix of the way BSD does things and the ways other parties have done things.
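                    The priority-inversion/inheritance point above can be sketched in userspace with a toy lock that records its owner and boosts the owner's priority number when a higher-priority waiter arrives. All names here are invented, and nothing in this sketch actually changes scheduling; acting on these numbers is exactly what needs the kernel scheduler.

```python
import threading

class PIMutex:
    """Toy bookkeeping model of priority inheritance: a waiter boosts
    the lock owner's priority so a medium-priority task cannot starve
    it.  Only a simulation -- a PI futex does this in the kernel,
    where the scheduler can act on it."""

    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None                     # task currently holding the lock

    def acquire(self, task):
        while not self._lock.acquire(timeout=0.01):
            owner = self.owner
            if owner and owner["prio"] < task["prio"]:
                owner["prio"] = task["prio"]  # inherit the waiter's priority
        self.owner = task

    def release(self, task):
        task["prio"] = task["base_prio"]      # drop any inherited boost
        self.owner = None
        self._lock.release()

low  = {"name": "low",  "prio": 1, "base_prio": 1}
high = {"name": "high", "prio": 9, "base_prio": 9}

m = PIMutex()
m.acquire(low)                                # low-priority task takes the lock
t = threading.Thread(target=lambda: (m.acquire(high), m.release(high)))
t.start()
while low["prio"] == 1:                       # spin until the boost is visible
    pass
boosted = low["prio"]                         # low now runs at high's priority
print(boosted)                                # 9
m.release(low)
t.join()
```

                    In a real kernel the boost changes which task actually gets the CPU; here it is just a number, which is the whole argument for doing this below the syscall boundary.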



                    • #20
                      So... I'm still not really grasping the difference between the fsync kernel patchset and the fsync2 kernel patchset. Are they interchangeable? Can I run Proton on a kernel patched with fsync2 (and NOT the "old" fsync patchset)? (And actually USE fsync, of course, not just "run it", since I suppose someone would nitpick the wording here...)

                      And... this is shaping up to be the 3rd round of fsync? (Or winsync/ntsync/winesync/supersync... whatever.)

