4-Disk Btrfs Native RAID Performance On Linux 4.10
Originally posted by starshipeleven:
Can you clarify? What do you mean?
-
As far as I am aware, ZFS on Linux was still a FUSE-based implementation, and would therefore not yield proper results compared to a native Solaris or at least BSD system.
And why has no one here mentioned the one critical point about RAID1? It's only as fast as the slowest drive. You can have 100 drives, but it still only reads at that slowest drive's speed. You are also limited to the smallest drive's capacity. But you'd have redundancy across 100 drives. You only start to get speed from RAID1 if you pair it with RAID0, and that speed scales with the number of RAID0 stripes, but it increases the risk of lost data. Say we turned the 100 drives into RAID10: 50 drives in one RAID1 set, the next fifty in another, then we put the two sets into a RAID0 array for twice the speed. Similarly with four arrays of 25 drives.
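The arithmetic above can be sketched quickly. This is a toy model of classic mirror/stripe semantics as the post describes them (RAID1 reads pinned to one member, RAID0 speeds adding up); the drive sizes and speeds are made-up numbers for illustration.

```python
# Toy model of the RAID1/RAID10 arithmetic described above.
# Each drive is a (capacity_gb, speed_mb_s) tuple; numbers are hypothetical.

def raid1(drives):
    """A RAID1 mirror: capacity and speed are bounded by the weakest member."""
    return (min(c for c, _ in drives), min(s for _, s in drives))

def raid0(members):
    """A RAID0 stripe: speeds add; capacity is stripe count times the
    smallest member (classic striping)."""
    caps = [c for c, _ in members]
    speeds = [s for _, s in members]
    return (len(members) * min(caps), sum(speeds))

# 100 identical 1 TB, 150 MB/s drives in one big mirror:
drives = [(1000, 150)] * 100
print(raid1(drives))  # still one drive's capacity and one drive's speed

# RAID10 as two 50-drive mirrors striped together: twice the speed.
mirror_a = raid1(drives[:50])
mirror_b = raid1(drives[50:])
print(raid0([mirror_a, mirror_b]))
```

With four mirrors of 25 drives each, the same model gives four times the single-drive speed, at the cost of losing the array if both halves of any one mirror fail.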
Going by my testing, the benefits of BTRFS simply weren't there compared to md on spinning rust (no SSDs). It was slow in RAID10, RAID5 was slower, and RAID6 was abysmal. Native md was far superior, with a choice of FS to boot, giving both speed and redundancy. If it takes less time on my drives to do the same thing, then I've won, as there's less wear and tear on the system's resources. Since the BTRFS RAID5/6 storage issue popped up, I had to stay well clear, as we're using several drive arrays for primary storage and didn't feel like risking losing any of them and testing all the archived data all the time to be sure things worked.
And I want BTRFS to be the FS it was promising to be. Just not yet for my needs.
-
Originally posted by pal666:
heavy caching is done by page cache, zfs uses much more ram due to obsolete design
I also wonder how much performance gain/loss (if any) there is between those.
I personally have been using btrfs only, as it seems quite user friendly, and btrfs does not care a bit about different disk sizes or how many disks you give it.
For example, I used a 5xSSD setup a while ago. I think there were three different-sized disks. btrfs managed to utilize around 90% of the space on RAID1 (and briefly on RAID5 and RAID6 too). I've heard that ZFS, on the other hand, is more picky (enterprise users don't really care about that)... but still flexible when compared to "regular" RAID.
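The mixed-disk-size flexibility described above can be approximated with a simple formula: with two copies per chunk on two different devices, usable btrfs RAID1 space is capped both by half the raw total and by how much the other devices can mirror against the largest one. This is a rough model (the approximation popularized by the btrfs disk usage calculator, not an exact allocator simulation), and the disk sizes below are hypothetical.

```python
# Rough model of btrfs RAID1 usable space on mixed-size devices.
# Every chunk is stored twice, on two different devices, so usable
# space is limited by (a) half the raw total and (b) what the
# remaining devices can pair against the largest device.

def btrfs_raid1_usable(sizes):
    total = sum(sizes)
    return min(total // 2, total - max(sizes))

# Three different-sized SSDs: btrfs can still pair chunks across them,
# so essentially all pairable space is usable.
print(btrfs_raid1_usable([480, 240, 240]))  # 480 (of 960 raw)

# A badly mismatched pair: the big disk's extra space goes unused.
print(btrfs_raid1_usable([1000, 250]))      # 250
```

This is what makes btrfs less picky than a conventional mirror, which would limit the whole array to the smallest member.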
-
Originally posted by jacob:
Let's say the FS wants to transfer a number of logically contiguous blocks (like an extent, for example). Normally it would occur as a single DMA operation, in burst mode. But if the physical blocks are scattered around, would that affect the transfer speed and/or max number of blocks transferred per request?
Hard drives are of course sequential-access devices; a hard drive is a very high-tech version of a gramophone, after all.
The main limitation of flash is read/write speed; a flash chip isn't terribly fast on its own.
SSD controllers give you far more speed than the average USB flash drive because they actively spread your writes across different flash chips, making a "RAID0" of sorts (some also have caches and other tricks on separate, faster chips).
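The "RAID0 of sorts" described above can be sketched as a toy striping loop: one logical write gets split into pages that are dealt out round-robin across several flash chips so they can be programmed in parallel. The chip count and page size here are made-up demonstration values, not real SSD geometry.

```python
# Toy sketch of an SSD controller striping one logical write across
# several flash chips round-robin, so the chips can work in parallel.
# PAGE_SIZE and NUM_CHIPS are hypothetical demonstration values.

PAGE_SIZE = 4   # bytes per "page" (tiny, for illustration)
NUM_CHIPS = 4

def stripe_write(data):
    """Split data into pages and deal them out across chips round-robin."""
    chips = [[] for _ in range(NUM_CHIPS)]
    pages = [data[i:i + PAGE_SIZE] for i in range(0, len(data), PAGE_SIZE)]
    for n, page in enumerate(pages):
        chips[n % NUM_CHIPS].append(page)
    return chips

chips = stripe_write(b"0123456789abcdef")
for i, c in enumerate(chips):
    print(f"chip {i}: {c}")
```

With four chips, the four pages land on four different chips and can be programmed concurrently, which is where the controller's speed multiple over a single flash chip comes from.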
-
Originally posted by stiiixy:
As far as I am aware, ZFS on Linux was still a FUSE-based implementation, and would therefore not yield proper results compared to a native Solaris or at least BSD system.
And why has no one here mentioned the one critical point about RAID1?
You are also limited to the smallest drive's capacity. But you'd have redundancy across 100 drives.
Also, btrfs RAID1 has no way to increase the amount of redundancy: every drive you add increases capacity, not redundancy.
Originally posted by stiiixy:
Going by my testing the benefits of BTRFS simply weren't there compared to md on spinning rust (no SSDs).
Posting claims like "the benefits of btrfs simply weren't there compared to md" shows you clearly don't need any of the many features btrfs offers over other filesystems, so the issue is on your side: you chose the wrong setup.
Just not yet for my needs.
-
Originally posted by Zucca:
Hm. Obsolete? Which "branch" of ZFS? OpenZFS, Oracle ZFS? (Are there more?)
ZFS was designed before the invention of CoW B-trees, so the ZFS designers sacrificed B-trees for CoW.
Originally posted by Zucca:
I've heard that ZFS on the other hand is more picky
Last edited by pal666; 31 January 2017, 10:08 AM.
-
Originally posted by Zucca:
Hm. Obsolete? Which "branch" of ZFS? OpenZFS, Oracle ZFS? (Are there more?)
I also wonder how much performance gain/loss (if any) there is between those.
...RAID6 too). I've heard that ZFS on the other hand is more picky (enterprise users don't really care about that)... But still flexible when compared to "regular" RAID.
Last edited by SystemCrasher; 31 January 2017, 10:15 AM.