EXT4/Btrfs/XFS/F2FS Benchmarks On Linux 3.17

  • #21
    Originally posted by kenjitamura View Post
    You tried going into the Nvidia X Server Settings and changing the PowerMizer "Preferred Mode" settings from "auto" to "Prefer Maximum Performance"?

    Only thing that severely reduces/eliminates tearing for me with my GT 640. It's a night and day difference in tearing for me just changing that one setting when I watch videos in VLC or SMPlayer. It's a longstanding problem with the Nvidia blob, I'm using driver version 340.24.
    This didn't solve it for me. I need to have compositing enabled at all times, even when watching or playing full-screen; otherwise I get tearing under KDE.
    I have a GTX 660, driver 331.20.



    • #22
      Originally posted by pumrel View Post
      This didn't solve it for me. I need to have compositing enabled at all time, even when watching or playing full-screen. Otherwise I get tearing under KDE.
      I have GTX 660, driver 331.20
      Same, this didn't solve it for me. I've heard that using Compton works, or running E17 or something.

      Tearing is still an annoying issue on Linux. Games that have a vsync option usually avoid it, but many games don't offer one.



      • #23
        Originally posted by jrch2k8 View Post
        btrfs is not a speedster filesystem, and neither is ZFS, genius. The advantage of those filesystems is live RAID, CoW, snapshots, dedup, metadata checksums, on-the-fly compression, on-the-fly encryption, volumes, subvolumes, etc. When you use your computer for things other than watching porn and checking your Facebook status, BTRFS and ZFS provide real solutions to real problems.

        For watching porn and checking Facebook, ext3/4 or F2FS are way more than enough, in fact almost overkill, so you can stick with that and be just fine for the foreseeable future.

        For professionals/engineers/other kinds of smart people who need computers for stuff a bit more complex than playing Angry Birds, BTRFS and systemd offer fantastic solutions to real problems that would otherwise require very ugly, insecure hacks (as is done today with SysV and other filesystems, but at this point I don't expect you to understand it. It's big-boy stuff).
        Btrfs remains disappointing. Most people do not need compression and encryption for their drives, and even if they do, one can always use a different method than Btrfs. One can use a loopback device with the kernel's crypto modules to create encryption within an existing file system, without having to rely on a different file system altogether. The compression with Btrfs is also not ideal. Most people use specific compression algorithms for their needs: JPEG works best for images, H.264 for video and MP3 for music. Btrfs can only offer general compression with LZO and zlib, which are fast algorithms, but no professional will rely on these when they can get better compression through other means. Btrfs simply tries to offer too much. It should be possible to offer its features without falling behind, but it fails at doing so, and this is why it is disappointing.
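        For illustration, the loop-device-plus-dm-crypt approach mentioned above can be sketched like this (a minimal sketch assuming cryptsetup is installed; the paths and names are made up, and the commands need root):

```shell
# Create a 1 GiB backing file inside the existing filesystem
dd if=/dev/zero of=/srv/vault.img bs=1M count=1024

# Attach the file to a free loop device
losetup /dev/loop0 /srv/vault.img

# Initialise LUKS encryption on the loop device, then open it
cryptsetup luksFormat /dev/loop0
cryptsetup luksOpen /dev/loop0 vault

# Put an ordinary filesystem on the encrypted mapping and mount it
mkfs.ext4 /dev/mapper/vault
mount /dev/mapper/vault /mnt/vault
```

        The encrypted container lives as a regular file on the host filesystem, so no file system with built-in encryption is needed.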



        • #24
          Originally posted by sdack View Post
          Btrfs remains disappointing. ...
          No, that's true for regular desktop users, not for professionals (as stated in my post, not everything on your PC is Facebook and cute cat videos). Does it fail a lot? Yes, from an FS perspective it does (and you have a nasty misunderstanding here about compression), but feature-overloaded it is not.

          So as not to be an ass, I'll try to explain the difference between specialised user-space compression algorithms and filesystem-level compression.

          Userspace compression logic:
          * maximum size reduction, even if it requires uniquely specialised, ungodly complex algorithms (HEVC, for example)
          * priority: transmission over slow media (internet, etc.)
          * not intended for widespread use outside the specific target audience
          * very efficient in size reduction -- very high resource consumption

          Filesystem compression logic:
          * good-enough compression on flat files (already-compressed data is skipped; the point was never to save disk space)
          * priority: reduce I/O bandwidth and improve raw throughput in specialised (important keyword) cases (databases, giant file structures, massively wide texture data, etc.)
          * intended for widespread use
          * very efficient at saving I/O bandwidth -- extremely low resource usage

          ***The keyword 'specialised' means: if my 300 GB pgsql database with 5000 users is choking my preferred I/O subsystem to death on ext4 or XFS, and we assume 30% of the data is compressible, then I can assume btrfs/ZFS will reduce the load and avoid choking the I/O so often, because they handle that 30% compressible data efficiently on the fly: the compressible data is moved to RAM in compressed form (saving bandwidth), and the kernel transparently uncompresses the pages as they are requested from RAM (this is, among other things, why these filesystems use more RAM).

          On the other hand, this method won't be very effective for, say, an FTP server, because even if the data is highly compressible it must be transmitted uncompressed (or compressed with a secondary algorithm); for that case XFS is more beneficial due to its smaller seek latency. The right tool for the right job, and remember: no tool is efficient for every job or use case.
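          To make the filesystem-level side concrete, here is what enabling transparent compression looks like on btrfs (a sketch; the device and mount point are hypothetical, and the commands require root):

```shell
# Mount a btrfs volume with transparent LZO compression; compressible
# data is stored compressed, reducing I/O bandwidth, while
# incompressible files are left alone.
mount -o compress=lzo /dev/sdb1 /srv/data

# The same option in /etc/fstab for a persistent setup:
# /dev/sdb1  /srv/data  btrfs  defaults,compress=lzo  0  0

# Mark a directory so new files in it are compressed regardless
# of the mount option
chattr +c /srv/data/pgsql
```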



          • #25
            Originally posted by jrch2k8 View Post
            no, true for ...
            It does not matter whether you believe it or not. No matter how much you try to explain the purpose of compression, it will not help your argument, because btrfs is merely a feature-overloaded file system with moderate performance. The numbers presented in the article give a clear picture of its performance; your words do not. Btrfs wants to cater to everyone, but that is also why it has issues, why bugs keep popping up, and why it still has not reached a stable state, making it less attractive to professionals who seek a fast and stable solution. So btrfs ends up catering to no one in particular except the Facebook users and cat-video lovers who could be talked into using it. Other file systems win over btrfs because they have kept things simple and achieved their goals sooner and better. They end up being the better solutions while offering fewer features, because they do not try to compete with every other file system but only add to the already feature-rich Linux landscape of available file systems. So chances are that by the time btrfs has left its experimental state, the faster and younger F2FS will have reached a stable state too! No professional wants to put their trust in a file system that took years to become stable, when stability is the most important feature of any file system.

            By the way, there is nothing wrong with having a Facebook account or enjoying cat videos. Those who argue against it might want to check their own social awkwardness before pointing at others.



            • #26
              Originally posted by sdack View Post
              It does not matter if you believe it or not. ...
              Well, first, the Phoronix results (not Michael's fault, though) are useless here, because all HDD benchmarks are basically brute-force allocation attacks: they just force the filesystem to allocate huge files or thousands of small files and then read them back in parallel or in serial. These benchmarks only prove that ext4 and XFS use a lot less metadata and have speed-optimised search algorithms in their journals (helped by the smaller metadata), and that F2FS is very efficient (in raw speed) because of its SSD/flash-specific optimisations and smaller metadata (it is not stable yet, though; I don't know where you got that).
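              The "brute-force allocation" pattern described above boils down to something like this toy script (not Phoronix's actual test suite, just a sketch of the access pattern; the counts and sizes are arbitrary):

```shell
# Allocate a thousand small files, then read them all back --
# roughly the access pattern a simple disk benchmark exercises.
dir=$(mktemp -d)
for i in $(seq 1 1000); do
    head -c 4096 /dev/urandom > "$dir/file_$i"
done
echo "created $(ls "$dir" | wc -l) files"

# Read everything back serially and discard the data
cat "$dir"/file_* > /dev/null

rm -rf "$dir"
```

              Wrapping the read-back phase in `time` gives the kind of raw-throughput number these benchmarks report.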

              Wrong again: just check the LKML and see how many years it took for EXT4 to be accepted as the stable successor of EXT3, and EXT3 of EXT2. Sneak peek: it wasn't months.

              Wrong again: ZFS and BTRFS are already used in production at Google and in Oracle's DB and OS (btrfs without dedup, since that feature is still in development), and they are widely used in storage systems all over the place where it makes sense (the raw-speed king in professional environments is actually XFS; the feature-rich kings are ZFS and then BTRFS).

              The point here is that a regular user has no use for most BTRFS/ZFS features (openSUSE enables the few that make sense on user machines), and XFS is probably overkill too, because you can't take advantage of most of its advanced features (like delayed allocation) when your workloads cannot stress the filesystem enough.

              So, I stress my point again: not everything is cat videos and Facebook, and not all tools are efficient in the same way. An easy analogy: "Ext4 is a nice fast Camaro, XFS is a big pickup, F2FS is a nice electric car, and BTRFS/ZFS are a Caterpillar 797F." All benchmarks do is measure raw speed (big news: the Camaro always wins), but none of them show you which one can haul more cargo or needs less maintenance, etc.

              Here is a Phoronix article about Google's btrfs recommendations at the last LinuxCon: http://www.phoronix.com/scan.php?pag...tem&px=MTc2Njk . As you can see, fast benchies aren't their point.



              • #27
                Originally posted by jrch2k8 View Post
                well, first phoronix(not michael fault tho) results are useless ...
                You have your opinion, obviously.

