Google's Gasket Driver Framework Landing For Linux 4.19


  • #61
    Originally posted by oiaohm:
    There is more than one communication path to the kernel.

    Circular buffers are kind of like the stunt futex on Linux pulls.

    You need a syscall to allocate shared memory between kernel space and userspace, but once that is set up you can put many yield messages into the same allocation. It's the same reason the Linux kernel keeps its logs in a circular buffer: if a logging program needed a syscall every time it fetched log entries, you would have way too many context switches.
    Implying getting logs is such a common operation and such a performance bottleneck that you do it billions of times per second?

    Again, you don't know what you're talking about. If a thread goes to sleep, you need a fucking context switch. Threads don't "exist" on the CPU, so all their state needs to be saved before another thread can run anyway; you're not going to save anything in terms of overhead by avoiding the kernel context switch.
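
    (For concreteness, the shared-memory scheme quoted above would look something like the sketch below. Everything here is illustrative: the struct yield_msg layout, the field names, and the notion of the kernel consuming such a ring are assumptions, not a real Linux interface. Note what it saves is the syscall per message, not the context switch once a thread actually sleeps.)

    Code:
    /* Minimal single-producer/single-consumer ring in shared memory.
     * Hypothetical layout -- not a real kernel ABI. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SLOTS 64              /* power of two: masking replaces modulo */

    struct yield_msg { uint32_t tid; uint32_t lock_id; };

    struct ring {
        _Atomic uint32_t head;         /* advanced by the producer (userspace) */
        _Atomic uint32_t tail;         /* advanced by the consumer (kernel side) */
        struct yield_msg slot[RING_SLOTS];
    };

    /* Post a message with no syscall; returns false when the ring is full,
     * at which point the caller falls back to a real syscall. */
    static bool ring_push(struct ring *r, struct yield_msg m)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SLOTS)
            return false;
        r->slot[head & (RING_SLOTS - 1)] = m;
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }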

    Originally posted by oiaohm:
    This is wrong. You might need to do a context switch. So you mark that the thread needs to yield, then wait in a spinlock; if the lock becomes acquirable, you cancel the yield message.
    What if the lock becomes free and THEN the kernel yields you while you're trying to "cancel" the yield message? Such an operation cannot be atomic.

    Your idea stinks of race conditions, and in terms of overhead it's no different from just spinlocking for a fixed time and then making the syscall.

    If you have to yield, a context switch is needed anyway, since your thread will have to be replaced by another thread.
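
    (For what it's worth, "spinlocking for a fixed time and then syscalling" is exactly the adaptive shape glibc's PTHREAD_MUTEX_ADAPTIVE_NP mutexes and Drepper's classic futex example take. A rough sketch; SPIN_LIMIT and the 0/1/2 state encoding are illustrative choices:)

    Code:
    /* Bounded spin, then sleep in the kernel: the standard adaptive shape. */
    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SPIN_LIMIT 100             /* arbitrary illustrative bound */

    /* futex_word: 0 = unlocked, 1 = locked, 2 = locked with waiters */
    static void adaptive_lock(_Atomic int *futex_word)
    {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            int expected = 0;
            if (atomic_compare_exchange_weak(futex_word, &expected, 1))
                return;                /* acquired without entering the kernel */
        }
        /* Still contended: sleep until the holder wakes us. This is the
         * context switch, unavoidable once the thread really has to wait.
         * (Unlock stores 0 and does FUTEX_WAKE if the old value was 2.) */
        while (atomic_exchange(futex_word, 2) != 0)
            syscall(SYS_futex, futex_word, FUTEX_WAIT, 2, NULL, NULL, 0);
    }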

    Originally posted by oiaohm:
    Calling a syscall to yield has already cost you a context switch, and there is no way to back out of it if the lock becomes acquirable before the allocated time slice is up.
    You have an obsession with the time slice.

    Why should a thread that has to wait (on a mutex) block for its entire time slice when it could immediately yield so another thread can do useful work during that time? ffs, there's NO reason a thread has to finish its time slice, especially if it has to wait. It's wasted work and wasted energy.

    You just don't get it: a spinlock wastes time because other threads have to wait until the spinning thread's time slice is up. Waiting is lost performance, which in many cases is worse than a context switch. It's only useful if you KNOW with high probability that the lock will become free very soon.

    Furthermore, we started this whole argument about ZERO SPINLOCKS, because I am already WELL aware of designs with spinlocks. YOU claimed you can avoid the syscall without spinlocks. YOU did. So the fact that you resort to one when I CLEARLY said no spinlocks should be involved shows me you're grasping at straws right now.



    • #62
      Originally posted by Weasel:
      What if the lock becomes free and THEN the kernel yields you while you're trying to "cancel" the yield message? Such an operation cannot be atomic.
      Does that matter? If the scheduler knows about the locks, then when it resumes the task that now holds the correct lock, a cancel-yield left over from waiting on lock X is simply a no-op. So the cancel never needs to be atomic; it only needs to become a no-op once it is pointless.
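
      (One way that can be made to hang together, purely as a sketch: give each yield message a state word that both sides compare-and-swap on, so a cancel that loses the race simply fails and nothing happens. The message layout, the names, and the kernel-side consumer here are invented for illustration; no such interface exists.)

      Code:
      /* Hypothetical cancel-as-no-op: losing the race is harmless. */
      #include <stdatomic.h>
      #include <stdbool.h>

      enum msg_state { MSG_PENDING, MSG_CANCELLED, MSG_CONSUMED };

      struct yield_msg { _Atomic int state; };

      /* Userspace: the lock came free, try to withdraw the yield request.
       * If the kernel consumed it first, the CAS fails and the thread just
       * gets scheduled out and resumes later: a no-op, not corruption. */
      static void cancel_yield(struct yield_msg *m)
      {
          int expected = MSG_PENDING;
          atomic_compare_exchange_strong(&m->state, &expected, MSG_CANCELLED);
      }

      /* Kernel side (conceptually): act only on still-pending messages. */
      static bool consume_yield(struct yield_msg *m)
      {
          int expected = MSG_PENDING;
          return atomic_compare_exchange_strong(&m->state, &expected, MSG_CONSUMED);
      }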

      Originally posted by Weasel:
      Why should a thread that has to wait (on a mutex) block for its entire time slice when it could immediately yield so another thread can do useful work during that time? ffs, there's NO reason a thread has to finish its time slice, especially if it has to wait. It's wasted work and wasted energy.
      How much energy do you waste storing and reloading registers and refilling the caches to switch to another task? Again, this is another example of you being clueless. Yielding straight away every time a contended lock is hit can end up using even more power and burning far more cycles in context-switch code that does nothing productive for the end user.

      There are cases where the power wasted waiting is less than the cost of switching away. Depending on how much of the time slice is left, yielding immediately instead of waiting for the time slice to end while crossing your fingers that the lock comes free can cost more power than letting the time slice run to its natural end.

      There is another interesting factor: heat. An increased context-switch rate actually pushes CPU heat generation up, because more state is changing.

      Waiting is not always lost performance. Sometimes waiting is the smarter choice.
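
      (The trade-off both sides are circling fits in one rule of thumb: spin only while the expected remaining wait is cheaper than switching away and back. A sketch; SWITCH_COST_NS is an assumed round number, real costs vary by CPU, and estimating the remaining wait is the genuinely hard part:)

      Code:
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <time.h>

      /* Assumed cost of switching away and back, including cache refill. */
      #define SWITCH_COST_NS 3000

      static uint64_t now_ns(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
      }

      /* Spin only while spinning is cheaper than a context switch would be;
       * past the deadline the caller should yield (e.g. futex-wait). */
      static bool spin_try_lock(_Atomic int *word)
      {
          uint64_t deadline = now_ns() + SWITCH_COST_NS;
          do {
              int expected = 0;
              if (atomic_compare_exchange_weak(word, &expected, 1))
                  return true;         /* acquired without yielding */
          } while (now_ns() < deadline);
          return false;
      }

      On those assumed numbers, a lock held for a couple hundred nanoseconds favours spinning, while anything in the microsecond range favours yielding.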
