
FreeBSD Lands Important ZFS Performance Fix For Some Going From ~60MB/s To ~600MB/s


  • #21
    "So how does the ZFS performance compare between Solaris, Linux and FreeBSD"

    Solaris, i.e. Oracle Solaris with the latest SRU, leads by a wide margin. Its kernel-based SMB server, currently at v3.1, is neat in NAS usage.

    Comment


    • #22
      Originally posted by skeevy420 View Post

      You answered your own question.

      ZFS has its own internal LVM-like behavior, and running it on top of Linux LVM can cause performance hits. Also, set your disk scheduler to noop/none with ZFS, since ZFS has its own internal scheduler (it normally does this automatically if it detects that it's the only file system on the disk(s)).

      Without knowing your setup better, the best I can say is to use ZFS by itself: give LVM 20% of the disks and the remaining 80% to ZFS's raidz... better still, give one whole disk to not-ZFS and the rest of the disks to ZFS, which is what I'd do in your situation.

      If you're using encryption, consider compiling your own kernel with the NixOS patch so ZFS can use the CPU's AES instructions.
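
      On Linux that scheduler switch looks something like this; sda here is just a placeholder for whichever disk backs the pool:

      # the scheduler shown in [brackets] is the active one
      cat /sys/block/sda/queue/scheduler

      # switch to "none" (called "noop" on older kernels)
      echo none | sudo tee /sys/block/sda/queue/scheduler
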
      I am sorry, but your remarks are not helpful at all. You must have misread something. I already wrote that when I create a new pool on separate LVs on the same drives as the original, in an identical configuration (only smaller capacity), I get the expected performance. There is something wrong with the ZFS pool that sits on those original LVs, and it has nothing to do with LVM, schedulers and the like. My best guess is fragmentation, because even SSDs slow down when the access patterns diverge from sequential. However, ZFS itself reports the fragmentation as 2%, which doesn't look like it should be so consequential. And if it were, I would have expected the IO wait time reported by top to shoot up, which doesn't happen. Neither IO nor CPU seems to be saturated, yet the performance is crap!
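
      For reference, these are the kinds of checks I mean; tank stands in for my actual pool name:

      # fragmentation as reported by ZFS (the ~2% figure)
      zpool list -o name,size,capacity,fragmentation tank

      # per-disk latency and utilisation while the slow reads are running
      iostat -x 5

      # CPU IO wait is the "wa" field in top's %Cpu(s) line
      top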

      Comment


      • #23
        Originally posted by kobblestown View Post

        I am sorry, but your remarks are not helpful at all. You must have misread something. I already wrote that when I create a new pool on separate LVs on the same drives as the original, in an identical configuration (only smaller capacity), I get the expected performance. There is something wrong with the ZFS pool that sits on those original LVs, and it has nothing to do with LVM, schedulers and the like. My best guess is fragmentation, because even SSDs slow down when the access patterns diverge from sequential. However, ZFS itself reports the fragmentation as 2%, which doesn't look like it should be so consequential. And if it were, I would have expected the IO wait time reported by top to shoot up, which doesn't happen. Neither IO nor CPU seems to be saturated, yet the performance is crap!
        For the very few who will end up here in the future, I want to add that I solved my ZFS performance problem. The dataset I was testing was meant for movies, so I had the primarycache property set to metadata; I didn't want to cache the contents of movie files. However, since ZFS prefetches through the ARC, this setting also disables prefetching of file data for the dataset, so even big files are read one record after the other. You can see it in the output of zpool iostat -r, in the sync_read column. Setting primarycache back to all shifts the bulk of the operations into the async_read category, with the corresponding performance increase.
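
        In commands, the fix boils down to something like this (tank/movies is just an example dataset name):

        # see what the dataset is allowed to cache
        zfs get primarycache tank/movies

        # allow data back into the ARC so prefetch kicks in again
        zfs set primarycache=all tank/movies

        # watch the request mix shift from sync_read to async_read
        zpool iostat -r tank 5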

        And there's absolutely no slowdown from using LVM.

        Comment
