Linux 3.14 File-System HDD Benchmarks


  • #11
    Originally posted by Bucic View Post
    I'd happily switch to Btrfs or even ext4+LVM, but it would mean giving up my problem-free, beginner-friendly backup solution, which is CloneZilla. There are a few problems with LVM one can stumble upon even in a completely standard scenario. Does Btrfs bring similar difficulties for CloneZilla users?

    Or, alternatively, can Btrfs features replace CloneZilla as far as backups go? Full-disk backup capability is a must (including the GPT BIOS boot partition and the /boot partition).
    I'm not sure what CloneZilla does, exactly, but Btrfs has LVM features; yet they, naturally, work only in Btrfs partitions. For instance, the UEFI system partition must be FAT32, there is no way around that. And the swap partition must also be a real swap partition (Btrfs doesn't support swapfiles). There are no problems with /boot being a part of the Btrfs partition, though (that's how I have mine set up).

    But as for backups, there are several things Btrfs supports. First off are snapshots: cheap to make, yet complete subvolume copies thanks to copy-on-write. Snapper makes and cleans them automatically with a cron job. YaST and zypper also make pre/post snapshots automatically, to let you both check what was changed during the session and roll back changes. On Gentoo I made a script for myself that creates pre/post snapshots on every non-pretend invocation of emerge, for the same reason.
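As a rough sketch of what that snapshot workflow looks like (the mount point and snapshot names here are hypothetical, and the commands need root on a Btrfs file system):

```shell
# Take a cheap, read-only snapshot of the root subvolume before a risky change;
# copy-on-write means it is near-instant and initially occupies no extra space.
btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

# List subvolumes/snapshots to see what exists.
btrfs subvolume list /

# Once the change is verified, the snapshot can be dropped to reclaim space:
# btrfs subvolume delete /.snapshots/pre-upgrade
```

Snapper and the YaST/zypper hooks mentioned above automate exactly this pattern.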

    Next we have RAID, of course. Btrfs currently best supports simple mirroring (RAID5/6 are still being worked on, IIRC), so if you have two disks and one suddenly dies, you still have the other with the exact same data (and/or metadata; RAID applies to those independently, which is very useful in my case: with a small SSD and a large HDD, I couldn't mirror the whole HDD contents on the SSD, but I did it with the metadata and still have space for new data on the SSD).
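A mixed-profile setup like the one described (metadata mirrored, data kept as single copies) can be created at mkfs time; the device names below are examples:

```shell
# Mirror metadata (-m raid1) across both devices, keep data single (-d single).
mkfs.btrfs -m raid1 -d single /dev/sda /dev/sdb

# Profiles can also be converted later, on a mounted filesystem, via a balance:
# btrfs balance start -mconvert=raid1 -dconvert=single /mnt
```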

    And then you have btrfs send/receive, which can copy a subvolume/snapshot to another device and also synchronise it once the original changes (without recopying the entire thing), which means you can have manual backups on external storage that way. These copies are accessible like any other regular Btrfs file system (except they're read-only by default). So if you want to do something potentially dangerous and want a manual full backup first, this is the way to go.
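A minimal sketch of that send/receive workflow, assuming /home is a Btrfs subvolume and /mnt/backup is a mounted Btrfs file system on external storage:

```shell
# Initial full transfer: a snapshot must be read-only (-r) to be sent.
btrfs subvolume snapshot -r /home /home/.snap-base
btrfs send /home/.snap-base | btrfs receive /mnt/backup

# Later, an incremental transfer: only the delta against the base is sent.
btrfs subvolume snapshot -r /home /home/.snap-new
btrfs send -p /home/.snap-base /home/.snap-new | btrfs receive /mnt/backup
```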



    • #12
      I just wonder whether this benchmark is really about file-system performance, or about which file system works best with the caching algorithm of this hybrid drive. It would be nice to see numbers for plain HDDs and SSDs to rule out that possibility.



      • #13
        Originally posted by GreatEmerald View Post
        I'm not sure what CloneZilla does, exactly, but Btrfs has LVM features; yet they, naturally, work only in Btrfs partitions. For instance, the UEFI system partition must be FAT32, there is no way around that. And the swap partition must also be a real swap partition (Btrfs doesn't support swapfiles). There are no problems with /boot being a part of the Btrfs partition, though (that's how I have mine set up).

        But as for backups, there are several things Btrfs supports. First off are snapshots: cheap to make, yet complete subvolume copies thanks to copy-on-write. Snapper makes and cleans them automatically with a cron job. YaST and zypper also make pre/post snapshots automatically, to let you both check what was changed during the session and roll back changes. On Gentoo I made a script for myself that creates pre/post snapshots on every non-pretend invocation of emerge, for the same reason.

        Next we have RAID, of course. Btrfs currently best supports simple mirroring (RAID5/6 are still being worked on, IIRC), so if you have two disks and one suddenly dies, you still have the other with the exact same data (and/or metadata; RAID applies to those independently, which is very useful in my case: with a small SSD and a large HDD, I couldn't mirror the whole HDD contents on the SSD, but I did it with the metadata and still have space for new data on the SSD).

        And then you have btrfs send/receive, which can copy a subvolume/snapshot to another device and also synchronise it once the original changes (without recopying the entire thing), which means you can have manual backups on external storage that way. These copies are accessible like any other regular Btrfs file system (except they're read-only by default). So if you want to do something potentially dangerous and want a manual full backup first, this is the way to go.
        Nicely laid out, thanks.

        I know that Btrfs 'has LVM'. That's why I said 'ext4+LVM or Btrfs'.

        However, what puts me off as a beginner is this 'snapshot everything' approach. For me, a zero-to-zero restore, i.e. the ability to restore to a completely wiped disk from a backup, is what I need to establish first and foremost. So:
        1. Does Btrfs have this ability out of the box (no workarounds/hacks)?
        OR
        2. Is there a Clonezilla alternative that handles Btrfs in a hassle-free manner?



        • #14
          If you want to be able to make full-disk backups, then why not just use dd or a frontend to it? Maybe CloneZilla already does that?
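For reference, a bare-bones dd image looks like this (the device and paths are examples); the runnable part below uses a plain file as a stand-in for the disk, so it is safe to try:

```shell
# Imaging a whole disk (example device; point it at a real one with care):
# dd if=/dev/sda of=/mnt/backup/sda.img bs=4M
# Restoring it:
# dd if=/mnt/backup/sda.img of=/dev/sda bs=4M

# Safe demonstration using a file as a stand-in "disk":
dd if=/dev/urandom of=fake_disk.img bs=1M count=4 2>/dev/null
dd if=fake_disk.img of=restored.img bs=1M 2>/dev/null
cmp fake_disk.img restored.img && echo "bit-for-bit identical"
```

Note that dd copies every sector, used or not, which is exactly the image-size problem discussed further down the thread.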



          • #15
            The Postmark benchmark on this Solid State Hybrid disk shows no differences between 3.14 and previous kernels, but for the SSD you tested a couple of days ago it showed a large slowdown in performance: http://www.phoronix.com/scan.php?pag...ltrabook&num=2

            And I wonder what would happen with an HDD...?



            • #16
              Originally posted by siavashserver
              @GreatEmerald, Ericg

              Thank you both for sharing; I'm going to try it with a fresh install on one of my machines. For the main machine, is converting existing ext4 partitions to Btrfs a safe operation? Or should I back up the data and go with a fresh install?
              As far as whether it's safe... I haven't heard of any recent problems with it. But for the sake of cleanliness and neatness, I don't do it. The conversion abuses the fact that Btrfs really doesn't care where the metadata goes, so it just shoves it into the unused space of the ext4 layout. Does it work? Of course, but I don't like the frankenstein-filesystem lol.

              One important thing to note if you do the conversion: btrfs will create an ext4_saved subvolume under /. It's a snapshot of the drive from before the conversion took place. Since it's a snapshot, at first it takes up no extra space. But as you use Btrfs more and more, the differences between the two will grow, and you'll eventually take up more space than if you had just done a straight install (because it has to keep the old data AND the new data). The only way around this is to delete the snapshot, but then you can't go back to pre-conversion.

              More info here: https://btrfs.wiki.kernel.org/index....sion_from_Ext3

              Just a thing to note in case you have a small drive.
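For the record, the in-place conversion itself is a couple of commands (the device name is an example, and the saved-image subvolume may be called ext2_saved or ext4_saved depending on the btrfs-progs version):

```shell
# Convert an *unmounted* ext4 partition in place:
btrfs-convert /dev/sdb1

# After mounting and verifying the result, delete the saved image to reclaim
# space -- this permanently removes the ability to roll back:
# btrfs subvolume delete /mnt/ext4_saved

# Or roll back to ext4 instead (only while the saved subvolume still exists):
# btrfs-convert -r /dev/sdb1
```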
              All opinions are my own not those of my employer if you know who they are.



              • #17
                Originally posted by GreatEmerald View Post
                If you want to be able to make full disk backups, then why don't you just use dd or a frontend to it? Maybe CloneZilla already does that?
                The problem with dd and (AFAIK) CloneZilla is exactly why I made this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1065847

                Every Linux disk utility does full-disk backups via a sector-by-sector copy, which is great because it's filesystem-agnostic. But it sucks because you get an image the size of your drive, not the size of your used space, because none of them hook into the filesystem utilities and kernel features to figure out what exactly IS used space.



                • #18
                  SSD benchmarks are queued up for Monday or Tuesday of next week... HDD results (non-hybrid) may come later in the week, pending interest level. The reason for using the SSHD was that at the time of testing I forgot it was a hybrid drive (the model number doesn't reflect it); only when testing was done did I remember that this particular system was my sole hybrid-drive system.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #19
                    Originally posted by Ericg View Post
                    Every Linux disk utility does full-disk backups via a sector-by-sector copy, which is great because it's filesystem-agnostic. But it sucks because you get an image the size of your drive, not the size of your used space, because none of them hook into the filesystem utilities and kernel features to figure out what exactly IS used space.
                    Bah, you want to have your cake and eat it too. Doing a full-disk backup naturally creates a backup of the full disk, including "free space" (which is never really free). Anything else would be an inaccurate backup and prone to error. And anyway, the point is that you should never have free space. Just like with RAM, any free space on the disk is wasted space. It's much better to make use of it by creating snapshots, so all the free space is filled with backup data (except for a certain overhead of space, so you can delete the oldest snapshot when you need to write more to the disk). So if you want a space-efficient backup, you can use send/receive. And if you want a truly full backup of the disk that you can reliably restore, then use dd and make a full copy (you can even compress it afterwards and possibly save some extra space).



                    • #20
                      Originally posted by GreatEmerald View Post
                      Bah, you want to have your cake and eat it too.
                      Of course I want to have my cake and eat it too :P The problem is that I know it doesn't HAVE TO be this way. I'm typing this reply on my laptop, which currently has two 1TB external hard drives hooked up to it doing a 78GB copy... you know what's IN those 78GB? 11 disk images made by Acronis True Image. And after those 78GB will be another ~80 or so as I go through and back up tons of system images.

                      I can throw in the Acronis CD, select the "$Vendor - $Brand - $Model - $OS" image, and Acronis doesn't care that the original image was taken from a 250GB hard drive and the new one is a 120GB, or from a 120GB and the new one is a 1TB. It just makes sure the partition table roughly matches, resizing as needed. And that's just the automatic mode; in manual mode I could configure it myself.

                      Originally posted by GreatEmerald View Post
                      including "free space" (which is never really free).
                      You're right that free space is never really free; it's not zeroed out, it's just "not used anymore." Unfortunately, the current mechanism to zero out the free space is to:

                      1) pipe output from /dev/zero into a file until the filesystem returns ENOSPC,
                      2) which leaves you a file that takes up all your remaining drive space, filled with zeros,
                      3) then delete said file,
                      4) then dd the drive, piping THAT into tar,
                      5) and tell it to compress it.

                      Which ONLY works because all those zeros in the free space get compressed down to nothing. But that's not fast, reliable (seriously, torturing your disk and filesystem?), or easy.
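The steps above can be sketched as follows (the device and mount point are examples; the runnable part at the end just demonstrates on a plain file why the trick works):

```shell
# Steps 1-3: fill free space with zeros, then remove the fill file.
# dd if=/dev/zero of=/mnt/target/zerofill bs=4M || true   # stops at ENOSPC
# sync && rm /mnt/target/zerofill
# Steps 4-5: image the device and compress; zeroed free space shrinks away.
# dd if=/dev/sdb bs=4M | gzip > backup.img.gz

# Demonstration: 32 MiB of zeros compresses down to a few tens of KiB.
dd if=/dev/zero of=zeros.bin bs=1M count=32 2>/dev/null
gzip -kf zeros.bin
wc -c zeros.bin.gz
```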

                      I work in a computer repair shop; anytime we get a "just reinstall it" job on a computer we haven't seen before, the last step we do is to create an image of that computer's drive with Acronis, with all drivers and our 'standard' apps installed (Flash, Avast, Reader, Chrome, etc.). This way, if we ever get a similar one that is also a "just reinstall it", we don't have to go through all the hassle again. We can just restore the image we made from the last guy's clean drive and work from there.

                      Sector-by-sector is great in that it's filesystem-agnostic and will pretty much always work. But if you're taking an image of a 1TB hard drive... good luck finding a place to store it. Because you need either a blank 1TB drive, or a drive larger than 1TB that HAS 1TB free.

                      I just pulled up the Acronis image for a Dell Dimension system on one of the external hard drives. Full copy of the OS + updates + apps + drivers... 9.8GB. 9.8GB for an image that, for all I know, came from a 1TB hard drive or a 20GB hard drive. And if I restored it, Acronis wouldn't care either, as long as the system had at least a 10GB hard drive.

                      I get that sometimes you WANT to do dd because you need a PERFECT bit-by-bit copy with zero possible errors... But even compressing it, I really doubt you'll save much space unless you first zero out the drive, which means we need something better and smarter than just sector-by-sector.

