Linux 3.14 File-System HDD Benchmarks

  • #16
    If you want to be able to make full disk backups, then why don't you just use dd or a frontend to it? Maybe CloneZilla already does that?

    • #17
      The Postmark benchmark on this Solid State Hybrid disk shows no differences between 3.14 and previous kernels, but for the SSD you tested a couple of days ago it showed a large slowdown in performance: http://www.phoronix.com/scan.php?pag...ltrabook&num=2

      And what would happen with an HDD, I wonder...?

      • #18
        Originally posted by siavashserver View Post
        @GreatEmerald, Ericg

        Thank you both for sharing, I'm going to try it with a fresh install on one of my machines. For the main machine, is converting existing ext4 partitions to btrfs a safe operation? Or should I back up the data and go with a fresh install?
        As far as whether it's safe... I haven't heard of any recent problems with it. But for the sake of cleanliness and neatness, I don't do it. The conversion abuses the fact that Btrfs really doesn't care where the metadata goes, so it just shoves it into unused space of the ext4 layout. Does it work? Of course, but I don't like the frankenstein-filesystem lol.

        One important thing to note if you do the conversion: btrfs will create an ext4_saved subvolume under /. It's a snapshot of the drive before the conversion took place. Since it's a snapshot, at first it takes up no extra space. But as you use btrfs more and more, the differences between the two will grow, and you'll eventually take up more space than if you had just done a straight install (because it has to keep the old data AND the new data). The only way around this is to delete the snapshot, but then you can't go back to pre-conversion.

        More info here: https://btrfs.wiki.kernel.org/index....sion_from_Ext3

        Just a thing to note in case you have a small drive.
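
        In case it helps, the convert-and-clean-up steps look roughly like this (a sketch based on the wiki page above; the device name and mount point are placeholders):

            # Convert an unmounted ext4 partition in place
            btrfs-convert /dev/sdXn

            # Mount the result; once you're sure you won't roll back,
            # delete the ext4_saved snapshot to reclaim its space
            mount /dev/sdXn /mnt
            btrfs subvolume delete /mnt/ext4_saved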

        • #19
          Originally posted by GreatEmerald View Post
          If you want to be able to make full disk backups, then why don't you just use dd or a frontend to it? Maybe CloneZilla already does that?
          The problem with dd and (AFAIK) CloneZilla is exactly why I made this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1065847

          Every Linux disk utility does full disk backups via a sector-by-sector copy, which is great because it's filesystem agnostic. But it sucks because you get an image the size of your drive, not the size of your used space, since none of them hook into the filesystem utilities and kernel features to figure out what exactly IS used space.
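
          For contrast, the naive sector-by-sector approach is essentially just this (a minimal sketch; /dev/sda and the output path are example names):

              # Copies every sector, used or not, so the image is as big as the whole disk
              dd if=/dev/sda of=/mnt/backup/disk.img bs=1M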

          • #20
            SSD benchmarks are queued up for Monday or Tuesday of next week... HDD results (non-hybrid) may come later in the week depending on interest level. The reason for using the SSHD was that at the time of testing I forgot it was a hybrid drive (the model number doesn't reflect it); only when testing was done did I remember that this particular system was my sole hybrid-drive system.
            Michael Larabel
            http://www.michaellarabel.com/

            • #21
              Originally posted by Ericg View Post
              Every Linux disk utility does full disk backups via a sector-by-sector copy, which is great because it's filesystem agnostic. But it sucks because you get an image the size of your drive, not the size of your used space, since none of them hook into the filesystem utilities and kernel features to figure out what exactly IS used space.
              Bah, you want to have the cake and eat it too. Doing a full disk backup naturally creates a backup of the full disk, including "free space" (which is never really free). Anything else would be an inaccurate backup and prone to error. And anyway, the point is that you should never have free space. Just like with RAM, any free space on the disk is wasted space. It's much better to make use of it by creating snapshots, so all the free space is filled with backup data (except for certain overhead space so you could delete the oldest snapshot when you need to write more to the disk). So if you want to have a space-efficient backup, then you can use send/receive. And if you want a truly full backup of the disk that you can reliably restore, then use dd and make a full copy (and you can even compress it afterwards and possibly save some extra space).
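
              Roughly, the two options look like this (a sketch; the device, paths, and snapshot names are examples, and btrfs send needs a read-only snapshot):

                  # Space-efficient: send a read-only snapshot to a btrfs-formatted backup disk
                  btrfs subvolume snapshot -r / /root-backup
                  btrfs send /root-backup | btrfs receive /mnt/backupdisk

                  # Truly full: dd the whole device and compress the stream afterwards
                  dd if=/dev/sda bs=1M | gzip -c > /mnt/backupdisk/disk.img.gz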

              • #22
                Originally posted by GreatEmerald View Post
                Bah, you want to have the cake and eat it too.
                Of course I want to have my cake and eat it too :P The problem is that I know it doesn't HAVE TO be this way. I'm typing this reply on my laptop, which currently has two 1TB external hard drives hooked up to it doing a 78GB copy... you know what's IN those 78GB? 11 disk images made by Acronis True Image. And after those 78GB will be another ~80 or so as I go through and back up tons of system images.

                I can throw in the Acronis CD, select the "$Vendor - $Brand - $Model - $OS" image, and Acronis doesn't care that the original image was taken from a 250GB hard drive and the new one is a 120GB, or from a 120GB and the new one is a 1TB. It just makes sure the partition table roughly matches, resizing as needed. And that's just the automatic mode; in manual mode I could configure it myself.

                Originally posted by GreatEmerald View Post
                including "free space" (which is never really free).
                You're right that free space is never really free; it's not zeroed out, it's just "not used anymore." Unfortunately, the current mechanism TO zero out the free space is to:

                1) pipe output from /dev/zero into a file until the filesystem returns ENOSPC
                2) then you have a file that takes up your remaining drive space, filled with zeros
                3) then delete said file
                4) then dd the drive, piping THAT into tar
                5) and tell it to compress it.

                Which ONLY works because all those zeros in the free space get compressed down to nothing. But that's not fast, reliable (seriously, torturing your disk and filesystem?), or easy.
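
                Spelled out as commands, the whole dance is something like this (a sketch; the mount point and device are example names, and gzip stands in for whichever compressor you prefer):

                    # 1-3) fill the free space with zeros until ENOSPC, then delete the file
                    dd if=/dev/zero of=/mnt/target/zerofill bs=1M
                    rm /mnt/target/zerofill
                    sync && umount /mnt/target

                    # 4-5) image the whole drive, compressing the now-zeroed free space away
                    dd if=/dev/sdb bs=1M | gzip -c > backup.img.gz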

                I work in a computer repair shop; any time we get a "Just reinstall it" job on a computer we haven't seen before, the last step we do is create an image of that computer's drive with Acronis, with all drivers and our 'standard' apps installed (Flash, Avast, Reader, Chrome, etc.). This way, if we ever get a similar one that is also a "Just reinstall it", we don't have to go through all the hassle again. We can just restore the image we made from the last guy's clean drive and then work from there.

                Sector-by-sector is great in that it's filesystem agnostic and it will pretty much always work. But if you're taking an image of a 1TB hard drive... good luck finding a place to store that, because you need either a blank 1TB drive, or a drive larger than 1TB that HAS 1TB free.

                I just pulled up the Acronis image for a Dell Dimension system on one of the external hard drives. Full copy of the OS + updates + apps + drivers... 9.8GB. 9.8GB for an image that, for all I know, came from a 1TB hard drive or from a 20GB hard drive. And if I restored it, Acronis wouldn't care either, as long as the system had at least a 10GB hard drive.

                I get that sometimes you WANT to do dd because you need a PERFECT bit-by-bit copy with zero possible errors... But even compressing it, I really doubt that you'll save much space unless you first zero out the drive, which means we need something better and smarter than just sector-by-sector.

                • #23
                  Originally posted by Ericg View Post
                  4) then dd the drive, piping THAT into tar
                  5) and tell it to compress it.
                  I know this is off-topic, but is there any reason for tar? AFAIK you could pipe the dd output directly to the compressing program instead of using the (useless, as it's just one file) container.
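
                  (For what it's worth, the direct pipe would just be something like this, with example names:)

                      # No container needed for a single raw stream
                      dd if=/dev/sdb bs=1M | xz -c > backup.img.xz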

                  • #24
                    Originally posted by Ericg View Post
                    I get that sometimes you WANT to do dd because you need a PERFECT bit-by-bit copy with zero possible errors... But even compressing it, I really doubt that you'll save much space unless you first zero out the drive, which means we need something better and smarter than just sector-by-sector.
                    I assume that Acronis does all that by checking NTFS metadata to see what space is occupied and what space isn't? Because, again, the same thing is possible with btrfs send/receive: you'll get a small snapshot with only the data you need. The other partitions, well, you'd have to use some similar filesystem tools on those. I can see how this would be doable, but I can also see how it could fail horribly (you'd need to handle the MBR, partition flags, UEFI partitions, suspend partitions, OEM partitions, MS hidden partitions, etc., and all of the filesystems on them).

                    As for simple reinstalls, we have deployment ISOs (see SUSE Studio) for that. You insert a CD/USB drive, boot from it, and the image gets deployed on the hard drive. Much safer than making a disk copy and praying that copying it back makes everything work as expected.

                    • #25
                      Originally posted by TAXI View Post
                      I know this is off-topic, but is there any reason for tar? AFAIK you could pipe the dd output directly to the compressing program instead of using the (useless, as it's just one file) container.
                      Most how-tos and such use tar, so I went with tar. But AFAIK there is nothing stopping you from piping it directly to something like lzop or 7z. I was just using the steps that I had seen most often over the last month or so of trying to find a Linux-native way of doing it.

                      • #26
                        Originally posted by GreatEmerald View Post
                        I assume that Acronis does all that by checking NTFS metadata to see what space is occupied and what space isn't? Because, again, the same thing is possible with btrfs send/receive: you'll get a small snapshot with only the data you need. The other partitions, well, you'd have to use some similar filesystem tools on those
                        AFAIK, yes, Acronis just uses NTFS utilities to query the metadata and pick what's needed. And yes, the same thing is possible with Btrfs... but that requires btrfs on both drives. Unfortunately, there are no utilities out there right now that actually DO use the filesystem utilities and APIs to figure out what's going on inside the filesystem. PartImage, PartClone, (AFAIK) CloneZilla, Gnome-Disks, Gparted: they all just take the easy route and do a sector-by-sector copy. Hell, they may just be frontends for dd anyway.

                        The situation at the shop is that we have one external hard drive that is an "image drive", with a directory layout of images, and we have an Acronis True Image live CD. We fire up the live CD, do everything in a limited Acronis environment, and then everything is in Acronis' secret format.

                        What I was hoping for was to install Fedora on my external hard drive, and whenever I needed to image a drive I would plug it in and boot into the installed, full Fedora system on the drive; all drive images would be in a ~/Drive Images folder and all client data backups under ~/Client Data. Images would (hopefully) be taken/restored with Linux-native utilities from within Fedora... but I can't do that if every image is hundreds of gigabytes.

                        So I'm back to the way we do it at the shop with the Acronis live CDs, because it's infinitely more space efficient.


                        Originally posted by GreatEmerald View Post
                        As for simple reinstalls, we have deployment ISOs (see SUSE Studio) for that. You insert a CD/USB drive, boot from it, and the image gets deployed on the hard drive. Much safer than to make a disk copy and pray that copying it back makes everything work as expected.
                        I'll check out SUSE Studio for SUSE installs, but that doesn't help me for XP or Win7 installs, where every model needs different drivers and may have different patch levels installed.

                        • #27
                          Originally posted by Ericg View Post
                          AFAIK, yes, Acronis just uses NTFS utilities to query the metadata and pick what's needed. And yes, the same thing is possible with Btrfs... but that requires btrfs on both drives.
                          I don't think so; I don't see why you couldn't loop-mount a Btrfs volume inside a file for the purposes of receive (though I've never tried it, so I'm not really sure if it works in practice).
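
                          In principle it would look something like this (an untested sketch, as said; the size and paths are examples):

                              # Create a file-backed btrfs volume and use it as a receive target
                              truncate -s 100G /backups/btrfs-pool.img
                              mkfs.btrfs /backups/btrfs-pool.img
                              mount -o loop /backups/btrfs-pool.img /mnt/pool
                              btrfs send /snapshots/root-snap | btrfs receive /mnt/pool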

                          Originally posted by Ericg View Post
                          Unfortunately, there are no utilities out there right now that actually DO use the filesystem utilities and APIs to figure out what's going on inside the filesystem. PartImage, PartClone, (AFAIK) CloneZilla, Gnome-Disks, Gparted: they all just take the easy route and do a sector-by-sector copy. Hell, they may just be frontends for dd anyway.
                          Yep. It would require quite a bit of effort to make such a utility, as you might expect (and it seems that for the most part people are fine with the whole zero-out/dd/bzip thing; at least that's the official method of compacting VirtualBox disk images, for instance). Of course, there is nothing stopping you from trying to code something of the sort on your own. That's the usual way new utilities that nobody thought about earlier come to be. That's also how I ended up packaging Snapper (which is openSUSE tech) for Gentoo.

                          Originally posted by Ericg View Post
                          So I'm back to the way we do it at the shop with the Acronis live CDs, because it's infinitely more space efficient.
                          Lies! It's probably around 5 times more efficient on average.

                          Originally posted by Ericg View Post
                          I'll check out SUSE Studio for SUSE installs, but that doesn't help me for XP or Win7 installs, where every model needs different drivers and may have different patch levels installed.
                          Well, with Windows you have unattended install CDs for pretty much the same purpose. You can use nLite for Windows XP for that. For Win7 there isn't a universally accepted good ISO customisation utility, but there are a few different ones with different pros and cons. All of these utilities allow you to customise which drivers are included in the ISO by editing a config file. I use them to set up ultra-minimal Windows images for use on virtual machines (for software that doesn't run under Wine), for instance.

                          • #28
                            Originally posted by GreatEmerald View Post
                            Yes. In fact, I have a LiveCD build on SUSE Studio just for that (it automatically pulls in the latest kernel and btrfs tools for performing complex offline tasks; last time I used it to set up a backup on a different drive using btrfs-send/btrfs-receive). The btrfs check tool itself I usually run regularly to see if it reports any problems. There sometimes are problems when I have to force-poweroff the machine (like when the GPU hangs), but they usually amount to some warnings and some data truncation. The former are mostly solved by running btrfs scrub. There was a case when that didn't help, and I needed to run btrfs check --repair to solve them. (There was also a time when btrfs check reported a lot of false positives, and even the warnings emitted now generally don't amount to much; the developers themselves can't tell what some of them are about without having direct access to the drive in question.) There was also one time some years ago when I had bad RAM, which caused corruption that btrfs check --repair couldn't solve either, but that's pretty much a given (and I was still able to retrieve all the data I needed).
                            Thanks.
                            I guess it's time to start playing with btrfs.

                            - Gilboa
                            DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                            SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412.
                            BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                            LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

                            • #29
                              Test of LVM/RAID + ext4 vs btrfs

                              hi,

                              Would it make sense to test btrfs on a bare disk device compared to ext4 on an LVM2 volume on top of mdadm RAID 1? Most of us use RAID and/or LVM2 to provide redundancy and snapshotting for ext4 partitions, so basically it would be fair to test a setup with:

                              Disk
                              Soft RAID 1
                              Lvm2
                              Ext4

                              vs

                              Disk
                              Btrfs in a RAID1-like setup


                              Both present similar features, so this could be very interesting to test.
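
                              Setting up the two stacks would go roughly like this (a sketch; device names, the size, and the volume names are examples):

                                  # Stack A: mdadm RAID 1 -> LVM2 -> ext4
                                  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
                                  pvcreate /dev/md0
                                  vgcreate vg0 /dev/md0
                                  lvcreate -L 500G -n bench vg0
                                  mkfs.ext4 /dev/vg0/bench

                                  # Stack B: btrfs with RAID1 for data and metadata, directly on the disks
                                  mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdd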


                              regards,
                              Ghislain.
