Ubuntu 12.04 LTS - Benchmarking All The Linux File-Systems

  • #11
    Any way you could include nilfs2 in these?



    • #12
      These results mirror my experience with Btrfs from a couple of years ago: it seemed to get slower and slower, and eventually made starting Chromium painful. This was with compression switched on.



      • #13
        Stay tuned for more benchmark results, including when testing each of the Btrfs mount options on Ubuntu 12.04.
        YES! Do want! Could you also include different mount options for EXT4, for comparison with the different BTRFS mount options? I don't know what good ones there are, but I'm sure there are some optimization options.

        Please test the same on a traditional HDD
        Was thinking the same thing when reading the article

        best FS for SSD might not be the best for HDD as well
        Very good point! I plan to get an SSD in the future, which will work alongside a storage HDD. However, it should be kept in mind that there is no single, global answer to this question. It will depend on the functions primarily performed on the storage device.



        • #14
          Yes, I had similar experiences with btrfs, it getting slower and slower, and after I tried to update to the Ubuntu 12.04 beta I could not get it to boot anymore. Not sure, maybe it had something to do with EFI, but I think it was not btrfs related: it should have shown GRUB even if it could not access the btrfs partition, since /boot was on ext2, yet it just said "no device" or something at boot. Trying to fix GRUB did not help at all, so I backed up my home and just reinstalled. But OK, that was probably not btrfs's fault ^^.

          EFI sucks ^^.

          I don't mean EFI ^^ I mean the replacement of MBR, I don't know the name right now ^^. I think that was somehow the problem. I will only try that again if my system is unable to boot from good old solid 1000-year-old MBR.


          But back to btrfs: it also threw out some errors when I tried to delete the old apt-btrfs snapshots; sometimes it just refused to do that and complained about errors. Because there is no working fsck for it, I think I will wait a long time before going with it again. BTW, I even ran some benchmarks extracting a Linux kernel on the old and the new partition: on the old btrfs I ended up with 15 MB/s write performance (kernel extract), while with ext4 (both on LVM) I get 80 MB/s, so btrfs should mature a bit more before it's usable long term. But again, most importantly: it did not lose the data in the volume.
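
          If anyone wants to repeat that kind of quick kernel-extract test, roughly something like this works in Python (the tarball name and mount point below are just placeholders, not my actual paths):

          Code:
          # Quick-and-dirty "kernel extract" benchmark: untar a kernel source
          # tarball onto the file system under test and report an effective
          # write rate. Paths below are placeholders.
          import os, time, tarfile

          TARBALL = "linux-3.2.tar"     # an uncompressed kernel source tarball
          TARGET  = "/mnt/test"         # mount point of the file system under test

          size_mb = os.path.getsize(TARBALL) / 1e6

          start = time.time()
          with tarfile.open(TARBALL) as tar:
              tar.extractall(TARGET)
          os.sync()                     # flush caches so the rate reflects the disk
          elapsed = time.time() - start

          print(f"extracted {size_mb:.0f} MB in {elapsed:.1f} s "
                f"-> {size_mb / elapsed:.1f} MB/s")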


          Sorry for my English, I have not been awake long today.



          • #15
            Originally posted by FourDMusic View Post
            However, it should be kept in mind that there is no single, global answer to this question. It will depend on the functions primarily performed on the storage device.
            If you use an SSD as a boot drive and an HDD as storage, I would think the HDD should lean towards large, mostly sequential reads most of the time. The SSD has to cover almost the entire range of operations, from large sequential reads at startup to small, random writes for temporary files. Fortunately, an SSD has both low latency and very high transfer rates, but it is good to know whether the file system can help on top of that.
            In the end, that is all this I/O and transfer-rate testing is about: letting anyone infer the results for their own use cases.



            • #16
              RAID Request Clarification

              I guess my post above did not specify that my interest is in relative RAID performance using traditional hard disks. I suspect that software RAID 6 with a hot spare is a reasonable choice for a home archival data server where your hot-button issue is avoiding data loss caused by disk failure. We are getting closer to Xeon/Atom boards with 8 PCIe lanes that can be used for many SATA ports. It would be nice to know whether Btrfs RAID rebuild performance is comparable to mdadm if you are using a low-powered CPU from whoever.
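
              Rough capacity math for that kind of box, as a sketch in Python (the disk count and sizes below are made-up examples, not numbers from this thread):

              Code:
              # Usable capacity of a software RAID 6 array with one hot spare.
              # Disk count and size are made-up examples.
              def raid6_usable_tb(total_disks: int, disk_tb: float, hot_spares: int = 1) -> float:
                  """RAID 6 spends two disks' worth of space on parity, and a hot
                  spare sits idle until a rebuild, so neither adds usable capacity."""
                  data_disks = total_disks - hot_spares - 2
                  if data_disks < 2:
                      raise ValueError("RAID 6 needs at least 4 active disks")
                  return data_disks * disk_tb

              # Example: all 8 SATA ports filled with 2 TB drives, one kept as a hot spare.
              # 8 - 1 spare - 2 parity = 5 data disks -> 10 TB usable, survives two failures.
              print(raid6_usable_tb(8, 2.0))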



              • #17
                Originally posted by malkavian View Post
                Yes, there are "cheap" SSD disks nowadays, but they use MLC chips, which are slower and have a short life (a small number of writes before failure). In 2 or 3 years you could have problems with an MLC SSD. SLC has a much longer life (a large number of writes before failure) and is a lot quicker, but it is much more costly per GB. http://en.wikipedia.org/wiki/Multi-level_cell
                You know that even a standard MLC SSD has more than enough durability for any normal scenario these days?
                A standard number I've seen for older MLC technologies is 10,000 write cycles, and if anything the current number is higher. That means any single cell can be rewritten 10,000 times, and there's (say) 128 GB of cells available. At 10,000 cycles that's about 1.28 PB of total writes, or roughly 700 GB of writes every day for five years, if the wear levelling is perfect. If you use 90% of the drive for constant content, so the rewrites only touch the remainder, wearing it out in five years still takes about 70 GB/day, and even with a very pessimistic 10x safety margin you're at 7 GB of writes every day for five years. Realistically, it'll be fine.
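
                If you want to check that arithmetic, here it is as a few lines of Python; the 128 GB, 10,000-cycle and five-year figures are just the assumptions from the paragraph above:

                Code:
                # Back-of-the-envelope SSD endurance estimate, reproducing the figures above.
                # Assumptions: 128 GB of cells, 10,000 write cycles per cell, a five-year
                # lifetime, and perfect wear levelling.
                capacity_gb  = 128
                write_cycles = 10_000
                days         = 5 * 365

                total_writes_gb = capacity_gb * write_cycles      # 1,280,000 GB ~ 1.28 PB
                full_drive      = total_writes_gb / days          # ~700 GB/day for five years

                # If 90% of the drive holds static data, only 10% of the cells absorb rewrites.
                hot_10_percent  = 0.1 * total_writes_gb / days    # ~70 GB/day
                with_10x_margin = hot_10_percent / 10             # ~7 GB/day

                print(f"{total_writes_gb / 1e6:.2f} PB total writes")
                print(f"{full_drive:.0f}, {hot_10_percent:.0f}, {with_10x_margin:.0f} GB/day")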

                Alternatively, Intel claims a 1.2 million hour MTBF for their 120 GB SSD, which works out to over 130 years. I have no idea how they came up with that number.
                Last edited by dnebdal; 17 March 2012, 12:36 PM.



                • #18
                  If wear levelling is well implemented nowadays, I suppose you are right.



                  • #19
                    Originally posted by malkavian View Post
                    If wear levelling is well implemented nowadays, I suppose you are right.
                    It's supposed to be, though of course it works better the more free space it has to play with. Drives typically keep a few GB of spare space for that purpose, so even if the drive is 100% full it can spread writes over 4 or 8 GB of physical cells. I think that's why you often see drive sizes that are a bit less than a power of two - like the 120 GB Intel that I bet has 128 GB of physical chips.
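
                    A two-line sanity check of that guess (the 120 GB advertised vs. 128 GB physical figures are just the example above):

                    Code:
                    # Over-provisioning estimate: a drive sold as 120 GB that we
                    # guess carries 128 GB of physical flash.
                    advertised_gb = 120
                    physical_gb   = 128

                    spare_gb = physical_gb - advertised_gb
                    print(f"{spare_gb} GB ({spare_gb / physical_gb:.1%}) reserved for "
                          "wear levelling and bad-block replacement")
                    # -> 8 GB (6.2%) reserved for wear levelling and bad-block replacement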



                    • #20
                      Originally posted by blackiwid View Post
                      I don't mean EFI ^^ I mean the replacement of MBR, I don't know the name right now ^^. I think that was somehow the problem. I will only try that again if my system is unable to boot from good old solid 1000-year-old MBR.
                      You mean GPT, btw. It's nice enough as long as your BIOS can boot it; the only thing that annoys me with it is that Win7 and Win8 don't want to install onto GPT partitions. On the other hand it supports larger partitions, more partitions, more file system type codes, and it's a bit more resilient with the backup copy at the end of the disk.
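
                      For what it's worth, the "larger partitions" part falls straight out of the address widths; a quick check in Python, assuming the traditional 512-byte sector size:

                      Code:
                      # Why GPT handles bigger disks than MBR: MBR stores sector addresses
                      # (LBAs) as 32-bit values, GPT uses 64-bit values. 512-byte sectors assumed.
                      SECTOR = 512

                      mbr_limit = 2**32 * SECTOR    # largest disk MBR can address
                      gpt_limit = 2**64 * SECTOR    # largest disk GPT can address

                      print(f"MBR max: {mbr_limit / 2**40:.0f} TiB")   # 2 TiB
                      print(f"GPT max: {gpt_limit / 2**70:.0f} ZiB")   # 8 ZiB
                      # MBR also has only 4 primary partition slots; GPT defaults to 128 entries.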

                      Not that it really matters as long as you can get your file system onto the disk at the desired position and length, and boot from it.
                      Last edited by dnebdal; 17 March 2012, 01:06 PM.
