Btrfs On Ubuntu Is Running Well


  • movieman
    replied
    Originally posted by squirrl View Post
    RAID is old news with drives approaching or surpassing 4 TB. Having multiple storage devices on a network is safer. It's wasted resources packing multiple drives into a single point of failure!
    You appear to think that 4TB is a lot of data?



  • Brane215
    replied
    Originally posted by Ra698shida View Post
    Why are you trying to fix a logical problem with a visual aid?



  • liam
    replied
    What we really need in Linux is something like Amplidata's Distributed Storage System. That thing looks ideal for uptime, bit-rot protection, and data-loss prevention alike.
    Sort of like a next-gen ZFS.



  • Ra698shida
    replied
    And it is not clear how it can scale better than existing mdraid+LVM solutions.


    Last edited by Ra698shida; 11 September 2013, 11:55 PM.



  • squirrl
    replied
    Also

    jones_supa writes "The sudden death of a solid-state drive in Linus Torvalds' main workstation has led to the work on the 3.12 Linux kernel being temporarily suspended. Torvalds has not been able to recover anything from the drive. Subsystem maintainers who have outstanding pull requests may need to...


    Damn :P



  • squirrl
    replied
    Of Raids and Backups

    RAID is old news with drives approaching or surpassing 4 TB. Having multiple storage devices on a network is safer. It's wasted resources packing multiple drives into a single point of failure!

    What really matters is the performance virtual machines experience on these file-systems.

    I always figured JFS was a better solution because of its low performance overhead.
    This frees up CPU cycles for the VMs.

    That's one test I'd like to see; perhaps it's being tested, but I don't recall seeing the statistics.

    Just from my experience alone, I've noticed I can't tell much difference among any of them: ext4, JFS, XFS, Btrfs.
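    For what it's worth, one way such a VM-style workload test could be run is with fio; this is only a sketch, and the mount point, job name, and parameters here are hypothetical placeholders you'd point at each filesystem in turn:

```shell
# Random mixed 4k I/O with direct I/O and a deep queue, roughly what a
# busy VM disk image sees. Run once per filesystem under test and
# compare the reported bandwidth/latency figures.
fio --name=vmtest --filename=/mnt/test/fio.dat --size=2G \
    --rw=randrw --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based --group_reporting
```

    The same job file run against ext4, JFS, XFS, and Btrfs mounts would give comparable numbers, though results depend heavily on the mount options and hardware.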



  • Brane215
    replied
    Originally posted by benmoran View Post
    Your question was whether or not there exist any advantages. Resizing arrays is one. Many of us who deal with big storage arrays daily DO have to use these features. You seem to now be arguing that you don't need any more advanced features, and that's fine.

    Again, I use MDADM extensively. It has its limitations.
    Yes, but those affect the system administrator much more than the actual user, who might care more about array performance.

    When you buy a new car, is your sole criterion how elegantly your mechanic can replace the spark plugs, air filter, or oil?



  • benmoran
    replied
    Originally posted by Brane215 View Post
    Who cares about a few extra steps? Do you reshape your disk storage this way several times per day?
    I USE it every day, most of the time, so its performance during work is by far the primary concern for me. The amount of work in one-time maintenance, if it is not excessive or unreasonable, is totally irrelevant.

    I don't have RAID for fun.
    Your question was whether or not there exist any advantages. Resizing arrays is one. Many of us who deal with big storage arrays daily DO have to use these features. You seem to now be arguing that you don't need any more advanced features, and that's fine.

    Again, I use MDADM extensively. It has its limitations.
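    For reference, growing an md RAID-5 by one disk really is just a few commands; this is a sketch, and the device names (/dev/md0, /dev/sde) are placeholders for your own system, run as root on real hardware:

```shell
# Add the new disk as a spare to the array:
mdadm --add /dev/md0 /dev/sde
# Grow the array from 3 to 4 active devices (this starts a reshape):
mdadm --grow /dev/md0 --raid-devices=4
# Watch the reshape progress:
cat /proc/mdstat
# Once the reshape finishes, grow the filesystem into the new space
# (ext4 example; other filesystems have their own resize tools):
resize2fs /dev/md0
```

    The reshape runs online, but it can take many hours on multi-terabyte drives, which is part of the maintenance cost being debated here.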



  • Brane215
    replied
    Originally posted by kenneth13 View Post
    The real danger with mdadm RAID 5 arrays is tripping on an unrecoverable read error (URE) on a second drive while trying to re-silver the array. I've come across posts that made me worry: http://www.zdnet.com/blog/storage/wh...ng-in-2009/162 .

    People suggest backing up the data (if you have not done so already) BEFORE replacing the failed drive, to avoid the danger of hitting a URE. For arrays near or over 12 TB the danger is real.

    I've got a 4x2TB RAID 5 ext4 array and I've successfully replaced 2 bad drives over a period of time, but I'm looking forward to replacing it with btrfs when its RAID 5 support and btrfs-tools become mature enough.

    I hope you enjoy your large array, but please make sure you take regular backups.

    1. I know that RAID, as it stands now, has its deficiencies. Or maybe that is too strong a word. RAID does exactly what is possible within the redundancy allowed. If you need more, well, add more mechanisms with more redundancy.

    2. Even in that example, I don't think it's that bad. With a drive with one bad sector, you could force a rebuild around it and be prepared to lose a file or two.
    3. That is solvable with RAID-6.
    4. You always have the option of professional recovery, which shouldn't be too expensive or painful if we are talking about a simple sector copy of the drive.
    5. What makes you think that Btrfs's underlying solution will be superior in practice?
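    A rough back-of-envelope sketch of the math behind that URE worry, assuming the commonly quoted consumer-drive URE rate of 1 per 10^14 bits and the 4x2TB RAID-5 array mentioned above:

```shell
# Rebuilding one failed drive means reading the 3 surviving drives in
# full: 3 * 2e12 bytes * 8 bits/byte. With a URE rate of 1e-14 per bit,
# a Poisson approximation gives the chance of hitting at least one URE
# somewhere during the rebuild:
awk 'BEGIN {
    bits = 3 * 2e12 * 8          # bits read during the rebuild
    p = 1 - exp(-bits * 1e-14)   # P(at least one URE)
    printf "P(URE during rebuild) ~ %.2f\n", p
}'
# prints roughly 0.38, i.e. about a 1-in-3 chance per rebuild
```

    With RAID-6 a single URE during a one-disk rebuild is still correctable from the second parity, which is why point 3 above largely defuses this particular failure mode.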

    I have nothing against Btrfs, and I agree that at least its PR pamphlet addresses _some_ interesting problems. I just have a hard time believing it is time for the majority to jump on the Btrfs bandwagon.

    And, to be blunt, it seems too much of a tutti-frutti solution to me. Instead of being focused on one compatible set of objectives with maximum efficiency, they seem to be trying to cover everything they can, and the end result is less than stellar. A hydroplane is competitive neither as a plane nor as a boat; it "needs" the right customer profile. Btrfs feels a bit the same way, at least from afar...



  • ninez
    replied
    Originally posted by kernelOfTruth View Post
    unfortunately I always got issues when trying out the rt-kernel & ZFS doesn't work yet with it - so I'll settle for BFS + tweaking on an up-to-date kernel for now
    About ZFS - that sucks. I use ext4 (as I said before), so it's a non-issue for me. Is ZFS not thread-safe or something?

    Anyway, after further testing, those initial ipc/sem patches kick ass! ~ it's a definite performance improvement. Next, I'm gonna play around with the cfq-iosched patch, the "mutex: don't deal with unnecessary waiters" patch, the "sync: don't block the flusher thread" patch, and throw in the "optimize strlen using SIMD instructions" patch too - just adding them to the queue now.

    Originally posted by kernelOfTruth View Post
    most of the recent performance-related improvements & patches for btrfs can be found on http://marc.info/?l=linux-btrfs&r=1&b=201308&w=2 are from August until now

    glad it helps
    Maybe I will subscribe to the btrfs mailing list once it gets a little closer to me assembling my new machine... Also, yes, so far these patches have been useful <that I've tested, anyway>. It's funny, I've been slacking recently <as far as digging for patches goes> and on updating my kernel packages for Arch Linux - so you pointing out some of these patches is helpful. Even given that many are in linux-next - they won't be backported to linux-rt 3.10.x - so it's nice to go through and cherry-pick some of the more useful ones to backport...

    thx again, cheerz

