Haiku Operating System Gets Moving With Clang, Driver Fixes

  • #11
    Originally posted by cb88

    Actually, under full disk load Haiku still has issues; it's more noticeable with slow disks, though. The Haiku IO scheduler could probably use some improvements in that area.
    Every OS has problems with disk IO. It's unavoidable when you are using memory mapping and on-demand paging for your executables and libraries. Not to mention the data files.

    I suppose that you could page-in and memory lock everything for the foreground application, which would isolate it from disk IO. But most apps don't use the majority of their libraries. They don't even use all of their executable because of rarely used exception and error handlers. Loading those in is a waste of RAM.

    Data files are even worse. There's no way for the OS to predict the usage.

    Disk access latency is the killer. Even if you gave the foreground application immediate access to the next disk queue slot, that's as much as 15 ms to finish the current operation on a slow spinning disk. Then what, are you going to leave the disk completely idle so that unknown future operations have minimum latency? For how long?

    I think we just upgrade everyone to NVMe Flash. Even SATA solid-state is good. Latency is in microseconds, not milliseconds.
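
    For what it's worth, the "page-in and memory lock everything" approach is basically what mlockall() does on POSIX systems. A minimal sketch, assuming a generic POSIX/Linux environment (whether Haiku's POSIX layer fully implements it is a separate question, and on Linux it needs CAP_IPC_LOCK or a generous RLIMIT_MEMLOCK):

    /* Sketch: pin a latency-sensitive process's entire address space so it
     * never takes a demand-paging fault to disk.  Generic POSIX, not anything
     * Haiku does by default. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* MCL_CURRENT faults in and locks everything already mapped (text,
         * libraries, heap, stack); MCL_FUTURE extends that to later mappings.
         * This is also why it wastes RAM: rarely used library code and error
         * handlers get pinned along with everything else. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            fprintf(stderr, "mlockall failed: %s\n", strerror(errno));
            return 1;
        }

        /* ... run the latency-sensitive foreground work here ... */

        munlockall();   /* release the pinned pages when done */
        return 0;
    }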



    • #12
      Originally posted by Zan Lynx

      Every OS has problems with disk IO. It's unavoidable when you are using memory mapping and on-demand paging for your executables and libraries. Not to mention the data files.

      I suppose that you could page-in and memory lock everything for the foreground application, which would isolate it from disk IO. But most apps don't use the majority of their libraries. They don't even use all of their executable because of rarely used exception and error handlers. Loading those in is a waste of RAM.

      Data files are even worse. There's no way for the OS to predict the usage.

      Disk access latency is the killer. Even if you gave the foreground application immediate access to the next disk queue slot, that's as much as 15 ms to finish the current operation on a slow spinning disk. Then what, are you going to leave the disk completely idle so that unknown future operations have minimum latency? For how long?

      I think we just upgrade everyone to NVMe Flash. Even SATA solid-state is good. Latency is in microseconds, not milliseconds.
      No, what I meant was more like what happens with bufferbloat mitigations, where no application gets 100% of the disk usage quota... which keeps latency in user interactions at reasonable levels but means you don't get peak performance in all cases. Modern OSes can be wasteful of IOPS, which is what kills non-SSD users. Haiku actually already combats a little of this by having a lot of the file metadata located alongside the data on disk as extents.
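
      To make the quota idea concrete, here is a minimal token-bucket sketch of a per-application IOPS cap (purely illustrative; the struct and function names are made up and this is not Haiku's or Linux's actual scheduler code):

      /* Idea: each app gets its own bucket of I/O tokens, so no single app
       * can consume 100% of the disk's IOPS budget.  Costs a bit of peak
       * throughput, keeps latency for everyone else bounded. */
      #include <stdio.h>
      #include <stdint.h>
      #include <stdbool.h>
      #include <time.h>

      struct io_bucket {
          double   tokens;       /* I/O operations currently available */
          double   max_tokens;   /* burst ceiling */
          double   refill_rate;  /* IOPS granted to this app per second */
          uint64_t last_ns;      /* timestamp of the last refill */
      };

      /* Refill based on elapsed time, then try to take one I/O slot. */
      static bool io_bucket_try_dispatch(struct io_bucket *b, uint64_t now)
      {
          double elapsed = (now - b->last_ns) / 1e9;
          b->last_ns = now;

          b->tokens += elapsed * b->refill_rate;
          if (b->tokens > b->max_tokens)
              b->tokens = b->max_tokens;

          if (b->tokens < 1.0)
              return false;      /* over quota: request stays queued */

          b->tokens -= 1.0;
          return true;           /* under quota: dispatch to the disk now */
      }

      static uint64_t now_ns(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
      }

      int main(void)
      {
          /* One app capped at 200 IOPS with a burst allowance of 32 requests. */
          struct io_bucket app = { .tokens = 32, .max_tokens = 32,
                                   .refill_rate = 200, .last_ns = now_ns() };

          int granted = 0;
          for (int i = 0; i < 1000; i++)
              if (io_bucket_try_dispatch(&app, now_ns()))
                  granted++;

          /* Back-to-back requests: only the burst (plus a handful of refills)
           * gets through immediately; the rest would have to wait. */
          printf("granted %d of 1000 immediate requests\n", granted);
          return 0;
      }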



      • #13
        Linux is doing that about as well as it can be done right now, I think.

        The BFQ multiqueue scheduler does a good job. Even during a BTRFS array scrub I can use my NAS. I can still feel the annoying little pause-pause-act delay, though.
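
        For anyone curious which scheduler their disks are actually running, Linux exposes the active one in sysfs; a tiny sketch that just prints that line for one device (the "sda" name is only an example):

        /* Print the I/O scheduler line for one Linux block device.  The active
         * scheduler is shown in [brackets], e.g. "mq-deadline [bfq] none". */
        #include <stdio.h>

        int main(void)
        {
            char line[256];
            FILE *f = fopen("/sys/block/sda/queue/scheduler", "r");

            if (f == NULL) {
                perror("open scheduler file");
                return 1;
            }
            if (fgets(line, sizeof line, f) != NULL)
                fputs(line, stdout);
            fclose(f);
            return 0;
        }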

