FUSE Introducing Per-File DAX Option With Linux 5.17

  • #1

    Phoronix: FUSE Introducing Per-File DAX Option With Linux 5.17

    Last year with Linux 5.10, FUSE added DAX support for use with VirtIO-FS. As with DAX on other file systems, enabling this direct access mode bypasses the page cache. For use cases running on persistent-memory-like devices or VirtIO, direct access to the storage device can be beneficial for performance. With Linux 5.17, FUSE is expanding the DAX support to allow per-inode control as well...
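
    On file systems that already ship per-file DAX (ext4 and XFS), the per-inode knob is the FS_XFLAG_DAX flag, toggled through the FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR ioctls. Whether the FUSE/virtiofs work exposes exactly this interface isn't confirmed by the article excerpt, so take the following as a minimal sketch with a hypothetical mount point:

    ```c
    /* Sketch: the per-file DAX toggle as ext4/XFS expose it, using the
     * FS_IOC_FSGETXATTR / FS_IOC_FSSETXATTR ioctls and FS_XFLAG_DAX.
     * Assumption: that a FUSE/virtiofs mount honors these same ioctls;
     * the path below is hypothetical. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(void)
    {
        int fd = open("/mnt/virtiofs/data.bin", O_RDONLY); /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }

        struct fsxattr fsx;
        if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) { perror("FSGETXATTR"); return 1; }

        fsx.fsx_xflags |= FS_XFLAG_DAX;  /* request DAX for this inode only */
        if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) { perror("FSSETXATTR"); return 1; }

        close(fd);
        return 0;
    }
    ```

    A change to this flag generally takes effect only when the inode is next instantiated, e.g. after the file is reopened; on file systems whose tooling supports it, chattr +x flips the same bit from the shell.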

  • #2
    FUSE is such a blessing in the Linux world. It is sorely missed in the *BSD world, where the individual UFS/FFS implementations aren't compatible with each other and FAT32 or ext2 are your best multi-OS file systems... Would love to see ext4 become a first-class citizen on all OSes, or even exFAT!

  • #3
    Is FUSE performance still a dog's breakfast? A couple of years ago I made a quick-and-dirty connection to a Samba share on my local NAS using the built-in gvfs-fuse integration in Nautilus, and it couldn't even saturate a gigabit network link.

  • #4
    Originally posted by nranger View Post
    Is FUSE performance still a dog's breakfast? A couple of years ago I made a quick-and-dirty connection to a Samba share on my local NAS using the built-in gvfs-fuse integration in Nautilus, and it couldn't even saturate a gigabit network link.
    Not my experience. I sometimes copy big files to an NTFS volume on a SATA3 SSD, and it's always been several hundred MB/s. I want to say in the 300s, but I didn't pay that much attention. That's going back probably 3-4 years, I think.

    If you're dealing with many smaller files, maybe there are significant per-file overheads holding you back.

  • #5
    > Is FUSE performance still a dog's breakfast?

    Apropos whether the kernel cache should be disabled or not: I recently discovered that SSHFS is a thousand times faster (for my purposes) if you enable its "kernel_cache" option.

    My use case: such is embedded development in corona times that your compiler installation is on NFS over SSHFS over VPN. The latency was killing me: page faults in the hundreds of milliseconds. The kernel page cache is disabled by default with SSHFS, so, as it turned out, I was downloading niosII-g++ a thousand times instead of once.
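
    (Aside: what sshfs's kernel_cache option flips globally corresponds to a per-open flag that any FUSE filesystem can set for itself. A minimal libfuse 3 sketch of just that knob; only the open handler is filled in, so this compiles but isn't a useful filesystem:)

    ```c
    /* keep_cache tells the kernel to retain a file's cached pages across
     * open() calls instead of invalidating them on every open; that
     * invalidate-on-open default is why an uncached SSHFS re-fetches the
     * same binaries over and over. Skeleton only: data callbacks omitted. */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>

    static int sketch_open(const char *path, struct fuse_file_info *fi)
    {
        (void)path;
        fi->keep_cache = 1;  /* in spirit, sshfs -o kernel_cache, per open */
        return 0;
    }

    static const struct fuse_operations sketch_ops = {
        .open = sketch_open,
    };

    int main(int argc, char *argv[])
    {
        /* build: gcc sketch.c $(pkg-config fuse3 --cflags --libs) */
        return fuse_main(argc, argv, &sketch_ops, NULL);
    }
    ```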

  • #6
    Originally posted by andreano View Post
    My use case: such is embedded development in corona times that your compiler installation is on NFS over SSHFS over VPN.
    Even on a LAN, straight NFS performance noticeably suffers without client-side caching.

    I've previously done builds on NFS-mounted filesystems, and client-side caching made the difference between fully usable and unbearable.

  • #7
    Originally posted by nranger View Post
    Is FUSE performance still a dog's breakfast? A couple of years ago I made a quick-and-dirty connection to a Samba share on my local NAS using the built-in gvfs-fuse integration in Nautilus, and it couldn't even saturate a gigabit network link.
    It depends massively on the filesystem's settings and usage pattern. I've seen FUSE filesystems easily saturate 10 Gbps+ networks, and others (including my own) fall apart, depending on the block size used for reads and writes, the latencies involved, and how the client software works.
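
    For a sense of where those block-size levers live, a sketch assuming libfuse 3: the init callback receives a fuse_conn_info whose max_write and max_readahead fields bound how large each FUSE request can be, and small requests multiply round trips on a high-latency link. The values below are illustrative, not tuned recommendations.

    ```c
    /* Skeleton only: negotiates larger FUSE request sizes at init time
     * and registers no data callbacks, so it compiles but does nothing
     * useful as a filesystem. */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>

    static void *sketch_init(struct fuse_conn_info *conn, struct fuse_config *cfg)
    {
        (void)cfg;
        if (conn->max_write < (1 << 20))
            conn->max_write = 1 << 20;  /* up to 1 MiB per write request */
        conn->max_readahead = 1 << 20;  /* allow 1 MiB of kernel readahead */
        return NULL;
    }

    static const struct fuse_operations sketch_ops = {
        .init = sketch_init,
    };

    int main(int argc, char *argv[])
    {
        /* build: gcc sketch.c $(pkg-config fuse3 --cflags --libs) */
        return fuse_main(argc, argv, &sketch_ops, NULL);
    }
    ```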
