XFS & Btrfs For Linux 3.16 Bring New Features

  • XFS & Btrfs For Linux 3.16 Bring New Features

    Phoronix: XFS & Btrfs For Linux 3.16 Bring New Features

    While EXT4 didn't see any exciting changes for the Linux 3.16 merge window, the XFS and Btrfs file-systems are continuing to receive a great deal of upstream improvements...

  • #2
    Is Btrfs already reliable for network storage systems? I read somewhere that it was not yet reliable for RAID-based systems, and that ZFS was recommended instead (using *BSD in place of Linux).

    • #3
      Originally posted by newwen View Post
      Is Btrfs already reliable for network storage systems? I read somewhere that it was not yet reliable for RAID-based systems, and that ZFS was recommended instead (using *BSD in place of Linux).
      I've been using btrfs for about 3 years (a PC and a laptop in single-disk mode), and for the past half year or so in RAID 1 mode. I haven't had a single issue so far.
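
      For anyone wondering what that looks like in practice, here's a minimal sketch of a two-disk btrfs RAID 1 setup (device names and mount point are just examples, adjust for your system):

      Code:
      # Mirror both data (-d) and metadata (-m) across two example devices
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

      # Any member device can be mounted
      mount /dev/sdb /mnt

      # Periodically verify checksums and repair from the good copy
      btrfs scrub start /mnt
      btrfs scrub status /mnt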

      • #4
        Originally posted by newwen View Post
        Is Btrfs already reliable for network storage systems? I read somewhere that it was not yet reliable for RAID-based systems, and that ZFS was recommended instead (using *BSD in place of Linux).
        Using BSD isn't necessary, although it might work better there since it's been tested there longer. I've personally used it for storing games and other big files, but in hindsight I shouldn't have expected the transparent compression to do much, since almost everything there is already compressed in some way. I've switched back to ext4 because the openSUSE packages are unofficial and can be rather flaky (though once you get a version that works, it's fine).
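
        (For reference, transparent compression is just a mount option; a minimal sketch, with device and path as examples:)

        Code:
        # lzo is the fast option, zlib compresses harder; device/path are examples
        mount -o compress=lzo /dev/sda2 /mnt

        # or persistently via /etc/fstab:
        # /dev/sda2  /mnt  btrfs  defaults,compress=lzo  0  2

        With plain compress= (as opposed to compress-force=), btrfs gives up on files that don't compress well, which is exactly why already-compressed game data sees little benefit.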

        • #5
          I also switched to btrfs on my SSD recently; it has so many advantages that it should be pushed hard now.

          1. On an SSD, where each GB is very expensive, disk compression is worth real money, and it even pushes speed unless paired with a very weak CPU.
          2. Subvolumes share the whole free space, which also reduces disk usage: with fixed-size partitions, / can be full while /home still has 20 GB free, and that can't happen with btrfs subvolumes.
          3. Data safety: because btrfs checksums your data, you can't get bit rot.
          4. Snapshotting and send/receive allow cheap backups (see the sketch right after this list).
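
          A minimal sketch of point 4 (subvolume and backup paths are just examples):

          Code:
          # Read-only snapshot of the home subvolume
          btrfs subvolume snapshot -r /home /home/.snapshots/home-snap-1

          # Ship it to a backup disk; later snapshots can be sent
          # incrementally with  btrfs send -p <previous-snapshot>
          btrfs send /home/.snapshots/home-snap-1 | btrfs receive /mnt/backup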


          Those are some features that matter to me. Btrfs should be pushed hard to make it officially stable and usable on production machines, and it should become the Linux default filesystem, or at least distros should start making it the default one after the other.

          ext4 is not good enough anymore. We don't need to wait until Windows or macOS maybe switches to such a filesystem before we take the same step; we can be the first desktop OS to support this.

          We're trying to switch to Wayland as the default instead of the X server on most desktop distros in the next 1-2 years. Btrfs has been in development since 2007, Wayland only since 2012 or so, meaning Wayland has had 2-3 years of development while btrfs has had 7. So make it the default soon.

          I know filesystems are more problematic, but on the other hand ReiserFS 3 was the default on some distros for a while, and I heard far more stories about data loss from it than I hear about btrfs.

          The advantages are huge: it's far more professional and less problematic, for example with GRUB, than using software RAID + LVM.

          • #6
            Originally posted by blackiwid View Post
            1. On an SSD, where each GB is very expensive, disk compression is worth real money, and it even pushes speed unless paired with a very weak CPU.
            However, SSDs perform compression internally to reduce flash writes, so you may gain a bit more disk space, but you lose drive lifespan as it has to write more data to the flash.

            • #7
              In the worst case it will write the same amount of data to the disk, and sometimes less, if filesystem compression is more efficient than the internal one. The goal is to avoid writing 2+ blocks when the compressed data fits into one; it doesn't really matter whether the filesystem does that or the disk.
              Given the same amount of data there will be no difference. If filesystem compression actually saves you some space and you use it, sure, it might affect drive lifetime a bit, but that won't be a major blow.
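
              To make the block point concrete, some toy shell arithmetic (the 4 KiB block size and 50% ratio are made-up numbers):

              Code:
              # 10 KiB raw needs three 4-KiB blocks; at 50% compression it fits in two
              echo $(( (10*1024 + 4095) / 4096 ))   # raw:        3 blocks
              echo $(( ( 5*1024 + 4095) / 4096 ))   # compressed: 2 blocks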

              • #8
                Originally posted by movieman View Post
                However, SSDs perform compression internally to reduce flash writes, so you may gain a bit more disk space, but you lose drive lifespan as it has to write more data to the flash.
                I don't get how reducing disk space can lead to more flash writes? Ah, OK, I get it: if you have a 1 MB text file and you change one character and save it, the whole 1 MB has to be written again. That may be true, yes. But config-file writes won't be more than, say, 50 MB per day, so with a 60 GB SSD it would take about 1,200 days to write the full SSD once. So I guess that's not that big of a problem.
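
                (A quick sanity check on that estimate, taking the 50 MB/day figure as given:)

                Code:
                # Days for one full write of a 60 GB drive at 50 MB of writes per day;
                # illustrative only, real write amplification varies
                echo $(( 60 * 1000 / 50 ))   # = 1200 days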

                Btw, I don't know about internal disk compression, but the fact is that my filesystem got a few GB smaller, around 20%, after doing that. So the internal compression can't be very good.

                • #9
                  And let's say for a moment that your SSD only lasts 3 years instead of 5 because of that: after 3 years you'll most likely want a bigger SSD anyway. SSDs drop in price per MB faster than hard disks, so every day you can delay buying a bigger one is worth real money.

                  And another thing: SSDs get slower as they fill up, because write cycles can't be spread to other sectors as effectively when the disk is fuller.

                  So maybe without compression you have fewer write cycles, but they happen on fewer free sectors.

                  • #10
                    Originally posted by blackiwid View Post
                    I don't get how reducing disk space can lead to more flash writes? Ah, OK, I get it: if you have a 1 MB text file and you change one character and save it, the whole 1 MB has to be written again. That may be true, yes. But config-file writes won't be more than, say, 50 MB per day, so with a 60 GB SSD it would take about 1,200 days to write the full SSD once. So I guess that's not that big of a problem.

                    Btw, I don't know about internal disk compression, but the fact is that my filesystem got a few GB smaller, around 20%, after doing that. So the internal compression can't be very good.
                    He described it completely wrong; what he actually means is this kind of scenario:
                    You fill up the whole disk with data and have only, say, 10 free blocks left (just as an example).
                    Then you change a file. The SSD tries to use all available sectors evenly, so it keeps rotating writes over those few remaining sectors, and if there are many writes you'll end up hitting the wear limit on them and they fail. If you have 10 GB of compressible data and the SSD compresses it internally, it reports 10 GB written while only using part of that internally; this leaves more free blocks for the SSD to manage wear leveling, which helps lifetime.

                    In reality, filling up the whole disk is a terrible idea even on an HDD, and you should never have less than 10% free, because otherwise there will be performance issues not only with the disk but with the filesystem as well. Unless you desperately want to make your system slower, you should not let this happen, and 10% is more than enough for SSD wear management.
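
                    A toy calculation of why the size of the free pool matters for wear leveling (all numbers made up):

                    Code:
                    # 100000 block rewrites spread evenly over the free pool
                    echo $(( 100000 / 10 ))     # 10 free blocks   -> 10000 erases each
                    echo $(( 100000 / 5000 ))   # 5000 free blocks -> 20 erases each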
