FreeBSD ZFS vs. Linux EXT4/Btrfs RAID With Twenty SSDs

  • #31
    Originally posted by pegasus View Post
    Technically true. But with rotational media, algorithms have tens of milliseconds to do whatever they're doing, and that is why RAID pays off. With solid-state media you only have microseconds to do your magic, which makes it much more difficult for algorithms to add value to the whole setup. You can observe a similar situation with I/O schedulers ... those that are best for disks are typically not the best for flash.
    No, that's different. The entire reason most people recommend "none" or deadline is that flash has so much bandwidth that it can serve all requests regardless, for home use or a light server. Once you go into professional territory, where you have enough load to actually stress the drive, you see that schedulers matter again (e.g. Kyber, from Facebook) to avoid resource exhaustion.
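    For anyone who wants to check this on their own box, here is a minimal sketch of reading and switching the block I/O scheduler through sysfs (assuming Linux with sysfs mounted at /sys; the device name "nvme0n1" is only an example):

    Code:
    #!/usr/bin/env python3
    # Minimal sketch: inspect and switch the block I/O scheduler via sysfs.
    # Assumes Linux; "nvme0n1" below is only an example device name.
    from pathlib import Path

    def scheduler_path(dev: str) -> Path:
        return Path("/sys/block") / dev / "queue" / "scheduler"

    def current_scheduler(dev: str) -> str:
        # The active scheduler is shown in brackets, e.g. "[none] mq-deadline kyber bfq".
        text = scheduler_path(dev).read_text().strip()
        return text.split("[")[1].split("]")[0]

    def set_scheduler(dev: str, name: str) -> None:
        # Writing the scheduler name into the sysfs attribute switches it (needs root).
        scheduler_path(dev).write_text(name)

    if __name__ == "__main__":
        dev = "nvme0n1"
        print(f"{dev}: active scheduler is {current_scheduler(dev)}")
        # set_scheduler(dev, "kyber")  # uncomment to switch, run as root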

    Any algorithm that intercepts data and does something with it before writing to disk will carry a performance penalty over doing nothing, but if I want redundancy (more like uptime, since RAID is not a backup) I have to do something in any case, and that something isn't a dumb "cp /path/ /new/path" on a schedule. Which is why I want to see numbers showing that cluster filesystems actually have better performance than RAID.
    Last edited by starshipeleven; 16 December 2018, 11:38 AM.



    • #32
      I guess benchmarking Ceph is beyond the Phoronix Test Suite ... but you can check the Ceph channel on YouTube, where they post their regular performance meetings. They're somewhat hard to follow, but full of interesting numbers and "under the hood" facts.

      There's also now the IO500 list and its benchmark that you can run on clusters. Especially interesting is their 10-node challenge, where I got a score of 6.1 but was too late to submit for the November edition.
      Last edited by pegasus; 16 December 2018, 03:18 PM.



      • #33
        Originally posted by gadnet View Post
        I would love to see a test of ZFS on Linux vs. FreeBSD. Also, you should contact the writers of the ZFS books for the best optimisations, so you don't fall into a biased test.
        This thread is full of whining ZFS people. Btrfs would also do better with optimizations. For example, the SQLite file is not marked to be excluded from copy-on-write on Btrfs, which apparently isn't necessary with ZFS. Does ZFS autodetect that file and exclude it from CoW? Or does it cache the whole file and therefore need 5x as much RAM? It can't be faster than ext4 without any drawback.
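        (For reference, the usual way to do that on Btrfs is the NOCOW attribute, chattr +C, set on the directory before the database file is created so new files inherit it. A minimal sketch, where "/srv/data/db_dir" is just an illustrative path:)

        Code:
        #!/usr/bin/env python3
        # Minimal sketch: mark a directory NOCOW on Btrfs so files created inside it
        # (e.g. an SQLite database) are written in place instead of copy-on-write.
        # The attribute only takes full effect on files created empty inside the
        # flagged directory; "/srv/data/db_dir" is just an illustrative path.
        import subprocess
        from pathlib import Path

        def make_nocow_dir(path: str) -> None:
            d = Path(path)
            d.mkdir(parents=True, exist_ok=True)
            # chattr +C sets the No_COW attribute; requires a Btrfs mount and e2fsprogs.
            subprocess.run(["chattr", "+C", str(d)], check=True)

        if __name__ == "__main__":
            make_nocow_dir("/srv/data/db_dir")
            # Note: NOCOW also disables Btrfs checksumming and compression for those files.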

        Because it is not only at the same speed as ext4 but faster, there must be heavy caching involved, which isn't really an option, at least for home users who don't have thousands of extra dollars in the bank for a stupid file server.

        So whatever it is, it makes Btrfs look bad because the proper optimization is not done, which is realistic, at least for home users. Sure, home users also don't have 20-SSD RAIDs, but he also tested a single drive, so that part is very realistic for home users.

        I think people who get paid 2,000-10,000 dollars for running a big data center can do the tests they need themselves and maybe release them. But that's harder to do for home users who have a life and maybe a different job besides file server administration.

        So having an out-of-the-box comparison is very interesting for me, and very relevant, at least for the CoW filesystems. The comparison with ext4 is pretty meaningless anyway, because if you use a CoW fs it's not primarily for the speed but for the additional features.

        If you want the most efficient, fastest FS, a CoW fs is probably the wrong choice. Sure, XFS would be more interesting than ext4 because it wants to be a feature-wise competitor eventually, and it even has some CoW mode, as far as I read somewhere.
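        (The CoW mode being referred to is presumably XFS reflinks: a copy shares extents with the original, and data is only duplicated when one side is modified. A minimal sketch using the generic FICLONE ioctl, which works on reflink-enabled XFS and on Btrfs; the file names are only examples:)

        Code:
        #!/usr/bin/env python3
        # Minimal sketch: make a reflink (shared-extent, copy-on-write) copy of a file
        # with the FICLONE ioctl. Works on reflink-enabled XFS and on Btrfs.
        # The file names below are only examples.
        import fcntl

        FICLONE = 0x40049409  # _IOW(0x94, 9, int) on Linux

        def reflink_copy(src: str, dst: str) -> None:
            with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
                # Clone src's extents into dst; blocks stay shared until either
                # file is written to, at which point the filesystem copies on write.
                fcntl.ioctl(fdst.fileno(), FICLONE, fsrc.fileno())

        if __name__ == "__main__":
            reflink_copy("big_image.raw", "big_image.raw.clone")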



        • #34
          Originally posted by blackiwid View Post
          This thread is full of whining ZFS people. Btrfs would also do better with optimizations. For example, the SQLite file is not marked to be excluded from copy-on-write on Btrfs, which apparently isn't necessary with ZFS. Does ZFS autodetect that file and exclude it from CoW? Or does it cache the whole file and therefore need 5x as much RAM? It can't be faster than ext4 without any drawback.
          ..
          Well, it could be said that you are one of the "whining Btrfs people", lol. I think it's not so much about the file systems used as about the operating systems.
          Look at past benchmarks, for example this one, a bit more than a month old:
          [linked Phoronix article with the benchmark results]


          You'll see that even OpenBSD, with its literally ancient iteration of UFS, does better in the SQLite bench than any of the three Linux contenders. And two different FreeBSD versions using ZFS do only slightly worse there than Linux (12-Beta2 is actually pretty much on par with Fedora 29, which was using XFS).

          Yeah, BTRFS lost hugely in one particular test. Big deal. It's sufficiently good in others. Get over it.



          • #35
            Originally posted by aht0 View Post

            Yeah, BTRFS lost hugely in one particular test. Big deal. It's sufficiently good in others. Get over it.
            I have no problem with it. I just get pissed off when BSD/ZFS people think they are treated badly by the use of defaults. Like I said, I have no problem seeing the default speeds with Btrfs; I just wanted to demonstrate that ZFS is not the only one that suffers from a lack of optimization, so the test isn't biased.



            • #36
              Originally posted by blackiwid View Post

              I have no problem with it. I just get pissed off when BSD/ZFS people think they are treated badly by the use of defaults. Like I said, I have no problem seeing the default speeds with Btrfs; I just wanted to demonstrate that ZFS is not the only one that suffers from a lack of optimization, so the test isn't biased.
              Understood. In this case, it looks like the SQLite bench results have fairly little to do with the actual file systems used during testing.
              ---

              Grumbling about optimizations has some basis, for BSD people at least. Linux distros often do come pre-configured to a degree for one purpose or another (Fedora Server, Ubuntu Server and so forth), or you can select the intended use scenario from the installer (openSUSE, for example).
              BSD tends to come (with the exception of some FreeBSD derivatives) as a DIY OS. Once you are finished with the base install, you have to fully configure it yourself, up to and including fine-tuning the performance.

              So when Phoronix does its testing, it tends to be: optimized Linux distros vs. a BSD base install with some extra packages thrown on top, the ones needed for benching. That's all. I even understand that you cannot have completely fair benchmarking; there are too many subjective variables involved.

              But this grumbling is also often caused by Linux die-hards (like pal666, Pawlerson ...) who would then invariably take the same benchmark results, present them as "The Truth about Linux superiority", and start ranting about the necessity of killing off all the other OSes because Linux is "so much better". It gets tiring.

              Luckily, I haven't seen either troll here for some time. Hope it stays that way.
              Last edited by aht0; 17 December 2018, 08:50 PM.

