XFS Improvement For Linux 5.20 Enhances Scalability For Large Core Count Systems

  • XFS Improvement For Linux 5.20 Enhances Scalability For Large Core Count Systems

    Phoronix: XFS Improvement For Linux 5.20 Enhances Scalability For Large Core Count Systems

    One of several improvements being prepared for the XFS file-system with the upcoming Linux 5.20 cycle is focused on improving the CIL scalability for systems with many CPU cores...

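    For context: the CIL is the XFS Committed Item List, the in-memory structure that accumulates logged metadata changes before they are flushed to the on-disk journal. As I understand the patch series, the gist is to replace contention on one globally locked list with per-CPU accumulation that only gets merged when the log is pushed. Below is a minimal user-space C sketch of that general pattern, with one private list per worker thread; it is an illustration of the technique only, not XFS code, and all names (percpu_cil, cil_push, NWORKERS) are made up.

    /* Sketch: per-thread lists avoid a global lock on the commit fast
     * path; cross-thread aggregation happens only at "push" time. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NWORKERS 8
    #define ITEMS_PER_WORKER 100000

    struct log_item {
        long seq;                     /* stand-in for a logged metadata change */
        struct log_item *next;
    };

    /* One list head per worker: commits touch only their own list,
     * so no shared lock is taken on the hot path. */
    static struct log_item *percpu_cil[NWORKERS];

    static void *commit_worker(void *arg)
    {
        long id = (long)arg;
        for (long i = 0; i < ITEMS_PER_WORKER; i++) {
            struct log_item *it = malloc(sizeof(*it));
            it->seq = id * ITEMS_PER_WORKER + i;
            it->next = percpu_cil[id];    /* private list: no contention */
            percpu_cil[id] = it;
        }
        return NULL;
    }

    /* The "push" phase: splice all per-worker lists into one chain,
     * the only point where cross-CPU aggregation happens. */
    static struct log_item *cil_push(void)
    {
        struct log_item *all = NULL;
        for (int i = 0; i < NWORKERS; i++) {
            struct log_item *it = percpu_cil[i];
            while (it) {
                struct log_item *next = it->next;
                it->next = all;
                all = it;
                it = next;
            }
            percpu_cil[i] = NULL;
        }
        return all;
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, commit_worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);

        long n = 0;
        for (struct log_item *it = cil_push(); it; n++) {
            struct log_item *next = it->next;
            free(it);
            it = next;
        }
        printf("aggregated %ld items at push time\n", n);
        return 0;
    }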

  • #2
    I'd be interested in benchmarks of the filesystems (Ext4, Btrfs, XFS, ZFS) on 64 or more cores with this new update.
    Developer of Ultracopier/CatchChallenger and CEO of Confiared

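    Apropos of the benchmark request above: the kind of workload that hammers the CIL is many processes committing small metadata transactions in parallel. Here is a minimal C sketch of such a stress loop (create, write, fsync, unlink per iteration); the process count, file names, and op counts are made up for illustration, and a real cross-filesystem comparison would more likely use fio or the Phoronix Test Suite.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROCS 64              /* e.g. one worker per core on a 64-core box */
    #define OPS_PER_PROC 1000

    static void worker(int id)
    {
        char path[64];
        for (int i = 0; i < OPS_PER_PROC; i++) {
            snprintf(path, sizeof(path), "cil-stress.%d.%d", id, i);
            int fd = open(path, O_CREAT | O_WRONLY, 0644);
            if (fd < 0) { perror("open"); exit(1); }
            if (write(fd, "x", 1) != 1) { perror("write"); exit(1); }
            fsync(fd);             /* force a journal commit per file */
            close(fd);
            unlink(path);          /* another metadata transaction */
        }
        exit(0);
    }

    int main(void)
    {
        for (int i = 0; i < NPROCS; i++)
            if (fork() == 0)
                worker(i);
        for (int i = 0; i < NPROCS; i++)
            wait(NULL);
        puts("done; time this run on each filesystem under test");
        return 0;
    }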


    • #3
      XFS has gotten a lot of attention lately. It has always been a low-resource-friendly filesystem and is now becoming more efficient for high-end usage. Hopefully it doesn't lose the low-end strengths along the way.



      • #4
        Originally posted by Anux View Post
        XFS has gotten a lot of attention lately. It has always been a low-resource-friendly filesystem and is now becoming more efficient for high-end usage. Hopefully it doesn't lose the low-end strengths along the way.
        SGI always had high-CPU-count machines, so it's not like they could afford not to be efficient in the high-end space.



        • #5
          Originally posted by alpha_one_x86 View Post
          I'd be interested in benchmarks of the filesystems (Ext4, Btrfs, XFS, ZFS) on 64 or more cores with this new update.
          Me too.



          • #6
            Originally posted by uxmkt View Post
            SGI always had high-CPU-count machines, so it's not like they could afford not to be efficient in the high-end space.
            Those machines were more I/O-bound than thread-bound, though. True high-performance throughput didn't become a thing until solid-state storage became reasonably affordable. Back when SGI was a prince, small solid-state rewritable drives meant for environments where rotational media was impractical started on the order of 10k USD. I know; I had to evaluate storage options for balloon-borne experiments around '96. SGI had the CPUs, but not the hardware I/O throughput; the scaling problems only started showing up once high-thread-count systems plus high-throughput storage I/O in a single package (a blade, workstation, desktop, etc., rather than a group of distributed systems) became more of the norm.
            Last edited by stormcrow; 18 July 2022, 02:25 PM.



            • #7
              The article is inaccurate: while Darrick Wong merged the patches, the CIL scalability improvements were developed by Dave Chinner over the past few months.



              • #8
                Originally posted by uxmkt View Post
                SGI always had high-CPU-count machines, so it's not like they could afford not to be efficient in the high-end space.
                Sure, but in the age of HDDs they probably didn't care about a performance cap of 1 million transactions per second.

