SUSE Enterprise Considers Btrfs Production Ready


  • Ericg
    replied
    Originally posted by GreatEmerald View Post
    It is when the FS itself has auto-repair capabilities...
    He's got a point, Curaga. This isn't Ext4, where the only real mechanism is journal + fsck. Btrfs was designed for data integrity, even if it has to cut performance in some areas to do it. Before you hit the point of needing an fsck, you'd have to bust through a few layers of integrity checking and data repair that btrfs does automatically behind the scenes. And yes, the fsck is released and can fix most issues. 100% of issues? No, but then again neither can ext4's fsck.

    Quite frankly, this needs to happen anyway. You can run all the simulated tests in the world, but at the end of the day even the best simulated test is still a simulated test. You need real-world usage at a larger scale, and hopefully SUSE doing this will get the developers some more real-world data points, so that any niche cases where Btrfs falls on its face can be fixed.
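    The layering described above (per-block checksums plus redundant copies that let the filesystem detect and silently repair corruption on read) can be sketched as a toy model. This is not btrfs's actual code or on-disk format; btrfs uses CRC32C per block in kernel C, and the `MirroredStore` class here is purely illustrative:

```python
import zlib

def checksum(data: bytes) -> int:
    # btrfs keeps a per-block checksum (CRC32C by default); plain
    # CRC32 stands in for it in this toy sketch.
    return zlib.crc32(data)

class MirroredStore:
    """Toy model: checksummed blocks with one mirror copy (hypothetical)."""
    def __init__(self, blocks):
        self.copies = [list(blocks), list(blocks)]   # two mirrors
        self.sums = [checksum(b) for b in blocks]    # trusted checksums

    def read(self, i):
        # Try each mirror; on checksum mismatch, fall through to the
        # other copy and rewrite the bad one in place (auto-repair).
        for c in range(2):
            if checksum(self.copies[c][i]) == self.sums[i]:
                good = self.copies[c][i]
                other = 1 - c
                if checksum(self.copies[other][i]) != self.sums[i]:
                    self.copies[other][i] = good     # silent repair
                return good
        raise IOError(f"block {i}: both copies corrupt")

store = MirroredStore([b"alpha", b"beta"])
store.copies[0][1] = b"XXXX"          # simulate a corrupt sector on mirror 0
assert store.read(1) == b"beta"       # served from the good mirror...
assert store.copies[0][1] == b"beta"  # ...and the bad copy was rewritten
```

    The point of the sketch is the ordering: the checksum catches the corruption and the mirror heals it on an ordinary read, long before anything like an offline fsck enters the picture.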



  • GreatEmerald
    replied
    Originally posted by curaga View Post
    "mostly" is not "enterprise ready" by any definition.
    It is when the FS itself has auto-repair capabilities...



  • curaga
    replied
    "mostly" is not "enterprise ready" by any definition.



  • GreatEmerald
    replied
    Originally posted by curaga View Post
    Is the fsck available, fully featured, and reliable? No? Then Suse is out of their minds.
    Yes, mostly and mostly.



  • curaga
    replied
    Is the fsck available, fully featured, and reliable? No? Then Suse is out of their minds.



  • ssam
    replied
    This news is from February.



  • jwilliams
    replied
    Originally posted by benmoran View Post
    Stop it. Learn to compose a solid argument.
    Right back at you.

    For something that has a relatively low failure rate, like say between 0.1 and 10%, anecdotes about it working are almost useless, since by definition, it works for the vast majority of cases. In contrast, anecdotes about it NOT working can be somewhat useful, since at least you can hear about some possible failure modes, and if there are enough failure reports then you may be able to conclude that the failure rate is higher than previously hypothesized.

    I think that btrfs definitely falls in this category. For a stable filesystem, I'd like to see the annual failure rate well below 0.1%. But from the reports that I have seen, and my guess at how many systems are using btrfs regularly, I think the failure rate is above 0.1%.
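    The asymmetry jwilliams describes can be made concrete: when the per-system failure rate is low, a handful of "works for me" anecdotes is expected even under an unacceptably high failure rate, so they barely discriminate between hypotheses, while failure reports carry far more information. A quick sketch (the rates and sample sizes below are illustrative, not measured btrfs numbers):

```python
def p_all_ok(p: float, k: int) -> float:
    """Chance that k independent anecdotes all report success,
    given a per-system annual failure probability p."""
    return (1.0 - p) ** k

# 20 success anecdotes are likely under BOTH hypotheses, so they
# barely distinguish a "stable" FS from one failing 10x as often:
assert p_all_ok(0.001, 20) > 0.97   # p = 0.1%: ~0.980
assert p_all_ok(0.01, 20) > 0.80    # p = 1%:  ~0.818

# But observing at least one failure among those 20 systems is
# roughly 9x likelier under the higher failure rate:
assert (1 - p_all_ok(0.01, 20)) > 5 * (1 - p_all_ok(0.001, 20))
```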



  • benmoran
    replied
    Stop it. Learn to compose a solid argument.

    Btrfs has worked fine for me as well, despite what a lot of people on the internet (who've never used it) like to say. Outside of the limitations listed on the kernel site, it's a solid FS already. Probably not -quite- production ready, but it's stable enough to be. The most important thing is having an up-to-date kernel, so it's not recommended on your average Ubuntu spin unless you update the kernel manually.

    I've tested it extensively on multiple servers, two personal workstations, and my home machine. It's still slow for certain types of file operations, but otherwise solid. One of the things I experimented with was the btrfs "RAID 1" implementation, which is not exactly RAID 1. I used a medium-sized array of failing hard disks, all years-old drives with dozens of bad sectors. My main conclusion is that the file-aware stripe duplication works amazingly well. The built-in scrub functionality is also really damn amazing, and pretty much negates the need for an fsck.

    One feature it doesn't have yet, and one that would keep me from using it in production, is automatically removing failing devices from an array. I believe this is on the todo list, but nobody has tackled it yet. When a hard disk hits some unrecoverable sectors, btrfs works amazingly well at recovering from duplicate stripes, and it does so seamlessly. The big issue is that if a disk is REALLY failing, like literally self-destructing, then btrfs will keep hitting bad sector after bad sector, recovering each one as it goes. This works great when just a few sectors die, but not when half a disk does. In my case it slowed the server down to a crawl, making it essentially unusable. It would be much better to have the disk dropped from the array at that point.
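    The policy benmoran is asking for amounts to an error-count threshold per device. As a hypothetical sketch (this is not btrfs behavior or code; the `Array` class, `DROP_THRESHOLD`, and device names are invented for illustration), it might look like this:

```python
class Device:
    def __init__(self, name, bad_sectors):
        self.name = name
        self.bad = set(bad_sectors)   # sectors that return garbage
        self.error_count = 0

class Array:
    """Hypothetical mirror policy: kick a device out once its
    unrecoverable-error count crosses a threshold, instead of
    retrying and repairing a dying disk sector by sector forever."""
    DROP_THRESHOLD = 3

    def __init__(self, devices):
        self.devices = list(devices)

    def read_sector(self, sector):
        for dev in list(self.devices):
            if sector in dev.bad:
                dev.error_count += 1          # the slow repair path ran
                if dev.error_count >= self.DROP_THRESHOLD:
                    self.devices.remove(dev)  # stop hammering the bad disk
            else:
                return dev.name               # served from a healthy copy
        raise IOError("no healthy copy left")

arr = Array([Device("sda", bad_sectors={0, 1, 2, 3}),
             Device("sdb", bad_sectors=set())])
for s in range(4):
    assert arr.read_sector(s) == "sdb"        # always recovered from sdb
assert len(arr.devices) == 1                  # sda dropped after 3 errors
```

    With the threshold in place, reads stop paying the per-sector recovery cost on the dying disk after the third error, which is exactly the slowdown described above.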



  • garegin
    replied
    if MS pulled s*** like this, the fanboys would've jumped on them like a rabid pack of wolves. but hey, we are already releasing alpha-grade turdballs like openSUSE, so we might as well have an alpha-grade filesystem to match. the fact that ubuntu already has a huge server market share proves my point that you don't need a rock-solid distro to power a LAMP stack.



  • LightDot
    replied
    Originally posted by mazumoto View Post
    I've been using btrfs for a few years now on all my systems (server, desktop, laptop); I use it on dm-crypted RAID 5, and I use it as my root FS. And I never had any data loss (whereas ext4, on an rc kernel but already marked stable, ate some of my data). Not even when btrfs was really experimental and really didn't have any out-of-disk handling, which I hit, of course :-)
    So yeah, it's not statistically valid, but for me it is production ready.
    Anecdotal experiences can be deceiving, that's true. That being said, when I tested btrfs out a while ago, I experienced catastrophic data loss. A combination of LVM and LUKS killed it, or at least that was my hypothesis at the time. It was perhaps too soon to expect it to survive such scenarios.

    I was running tests and the data was there to be shredded, so no actual loss. I plan to do another round of testing sometime in the 3.7 - 3.8 kernel period... Is it ready to be used in production? I couldn't tell right now. Some distros seem to think it is; I'm not that convinced. With good, frequently made, and actually restorable backups... and with enough time to do the restore... why not. I'm just afraid that people will start using it without backups. Then again, fsck and related tools could probably use more suck... ehm... testers.

