Linux 5.5 SSD RAID 0/1/5/6/10 Benchmarks Of Btrfs / EXT4 / F2FS / XFS


  • xinorom
    replied
    Originally posted by DrYak View Post
    In my very long experience with BTRFS I've never seen a filesystem corrupt itself "just because BTRFS". It was always either me playing with experimental options, or the medium breaking.
    This guy probably can't even tell the difference. He obviously has no patience whatsoever and just wants to blame the first thing that isn't himself. Probably caused by the same ADHD that prevents him from waiting for a technology to mature before going all-in, despite claiming to want "rock-solid stability".

    This cretin's reasoning is just all over the place -- no wonder he's trashed his data 3 times.

  • xinorom
    replied
    Originally posted by profoundWHALE View Post
    Well now I know that you don't know what you're talking about. Maybe you should go troll somewhere else

    https://git.kernel.org/pub/scm/linux...cb5c58097b918e
    Marking the disk format as "no longer unstable" is NOT the same thing as marking the entire filesystem stable. How f*cking retarded do you have to be to make that leap of logic? You're just an overly excitable Consoomer who wants extreme stability and up-to-the-minute cutting edge at the same time. You can't have both.

    I truly hope you get stung again and lose lots of important data. Perhaps eventually you'll realize how moronic you are and start making decisions like an adult.

  • DrYak
    replied
    Originally posted by profoundWHALE View Post
    then in 2016 I tried a RAID10 set-up with 24TB worth of drives and the only reason why I didn't lose everything was because I had everything important backed up on some old drives.
    Having backups is always a good idea, no matter what.

    Originally posted by profoundWHALE View Post
    I spent 2 weeks of what I can only describe as hell. I had so much data to scrub through and for tools to comb through that I would have to run them for a whole day, only to find that it failed at like 25%.
    If your scrubs are taking multiple days, then there's something wrong,
    e.g. some background task that takes way too much I/O,
    or e.g. smartctl kicking off full long self-tests (which also kill I/O on rotational media due to seeking),
    or you're stacking on top of a lower layer that has its own pitfalls (stacking above an mdadm RAID5/6, which brings in a lot of read-modify-write cycles, or using shingled drives in a way that manages to increase the r-m-w cycles despite btrfs being CoW).
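
    For what it's worth, a quick way to check whether a long SMART self-test is the thing eating your I/O (the /dev/sda device name below is just a placeholder):

        # Print the SMART capabilities section, which includes the current
        # self-test execution status (/dev/sda is a placeholder device):
        smartctl -c /dev/sda
        # Or the full report, which also includes the self-test log:
        smartctl -a /dev/sda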

    Normally a scrub should take a couple of hours max, and it's something that needs to be performed on a regular basis to guarantee data safety.
    (I tend to run it weekly; monthly is about the minimum recommendation.)
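
    As a rough sketch of what that looks like in practice (the /mnt/data mount point and the weekly schedule below are placeholders, adjust to your own setup):

        # Kick off a scrub in the background and poll its progress / error counters:
        btrfs scrub start /mnt/data
        btrfs scrub status /mnt/data

        # Or run it weekly from root's crontab in the foreground (-B), so the
        # exit status reflects whether any errors were found:
        # 0 3 * * 0  /usr/sbin/btrfs scrub start -B /mnt/data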

    If you have I/O problems, you might consider a (stable and mature) SSD caching layer between BTRFS and the drives.

    Originally posted by profoundWHALE View Post
    And the problem is that losing 75% of data means I lost more like 25% because the corruption was all over the place randomly. I had wedding videos that had chunks of missing audio and video.
    If you get corruption all over the place:
    - you were mistaken and actually ran one of the features not considered stable (like RAID5/6 instead of RAID0/1, or extref or skinny metadata on a too old kernel), or
    - you've got some massive hardware problem, the difference being that the BTRFS checksumming actually notices it. It would need to be very massive indeed for the RAID1 duplication to be insufficient for recovering the data.
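
    A quick way to double-check which profiles a filesystem is actually using (the /mnt/data path is a placeholder):

        # Shows the data/metadata/system block group profiles in use
        # (e.g. RAID1 vs RAID5/6); /mnt/data is a placeholder mount point.
        btrfs filesystem df /mnt/data
        # More detail, including per-device allocation:
        btrfs filesystem usage /mnt/data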

    In my very long experience with BTRFS I've never seen a filesystem corrupt itself "just because BTRFS". It was always either me playing with experimental options, or the medium breaking.

    Originally posted by profoundWHALE View Post
    Exactly, fsck is known to break the filesystem. It's a complaint I listed because if you have a serious problem and fsck is more likely to destroy your data than to recover it, maybe it's poorly designed.

    And no, I didn't just fsck it and type in whatever commands. I read up on the manuals and only did "safe" commands; the problem I had is that they kept failing at like 75%.
    {...}
    If the filesystem fails, which happens, I should be able to run a file system check that doesn't destroy the filesystem.
    *BTRFS SCRUB* is the standard check that you need to run periodically on BTRFS.

    FSCK is a big no-no on BTRFS - it's always documented as a last-resort procedure for when everything else fails, and usually there are better options to try first.
    It also doesn't make much sense on a CoW system - there should always be an older copy you can roll back to, and always checksums to tell you *which* copy is the last known good one. Don't try to reconstruct filesystem information when you can just fetch a still-good copy.

    So either the filesystem should work as-is in recovery mode, or you have too much corruption on your filesystem (note: writing random shit to random sectors of any filesystem could do this, it's not BTRFS-specific).
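
    As a hedged sketch, "recovery mode" here usually amounts to a read-only mount that falls back to an older tree root (the device and mount point below are placeholders):

        # Mount read-only, letting btrfs try older backup tree roots if the
        # current one is damaged (/dev/sdX1 and /mnt/rescue are placeholders):
        mount -o ro,usebackuproot /dev/sdX1 /mnt/rescue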

    At which point you should immediately consider using the well-documented 'btrfs restore' to extract any files (currently missing from your backup) before the drive actually dies (and watch the command's output for checksum failures - btrfs restore can be set to ignore checksum errors instead of aborting).
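
    Roughly, that looks like this (the device and destination paths are placeholders):

        # Do a dry run first to see what would be restored:
        btrfs restore -D -v /dev/sdX1 /mnt/rescue-copy
        # Then pull the files out; -i logs and skips errors instead of aborting:
        btrfs restore -v -i /dev/sdX1 /mnt/rescue-copy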

    At that point you've already recovered what you need, and you only need to run FSCK if you want to play around with the corrupted filesystem.
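
    And for completeness, the btrfs equivalent of fsck, which stays read-only unless you explicitly ask it to repair (the device is a placeholder; run it on an unmounted filesystem):

        # Read-only consistency check (the default mode); --repair is the
        # dangerous last-resort step being warned about above.
        btrfs check --readonly /dev/sdX1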


    Originally posted by profoundWHALE View Post
    and right now I'm testing bcachefs. No corruption issues, yet
    Originally posted by profoundWHALE View Post
    But guess what? Every time I've had an issue, either the filesystem can fix itself, or Kent pushes an update that day.
    Sorry, I can't follow you. Which is it? No corruption issues, or Kent pushing updates to fix corruption?

    Originally posted by profoundWHALE View Post
    I'm the one who posts the Mega download links for the Deb packages on Reddit.
    Mega download links? On Reddit? Okay, that kind of says it all. (Please, try to learn to use third-party repos and digital signatures.)
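
    As a hypothetical example of what that buys you - the file names and signature below are placeholders, the point is simply verifying what you downloaded:

        # Compare the package against a published checksum:
        sha256sum -c bcachefs-tools.deb.sha256
        # And/or verify a detached GPG signature against the maintainer's key:
        gpg --verify bcachefs-tools.deb.asc bcachefs-tools.deb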

  • profoundWHALE
    replied
    Originally posted by xinorom View Post

    Which "people"? Pretty sure no one you should have been listening to was calling it stable in 2013.
    Well now I know that you don't know what you're talking about. Maybe you should go troll somewhere else

    https://git.kernel.org/pub/scm/linux...cb5c58097b918e

    Originally posted by xinorom View Post

    That sounds like a problem I've heard several times before, where people were just mashing the keyboard and running random commands hoping it'd fix their system. There were explicit warnings about certain fsck options in the docs.
    Exactly, fsck is known to break the filesystem. It's a complaint I listed because if you have a serious problem and fsck is more likely to destroy your data than to recover it, maybe it's poorly designed.

    And no, I didn't just fsck it and type in whatever commands. I read up on the manuals and only did "safe" commands; the problem I had is that they kept failing at like 75%.

    Geez you need to learn to read.

    Originally posted by xinorom View Post

    In your case it sounds more like giving a mildly retarded child a hand grenade and telling him not to pull the pin out, but knowing he definitely will anyway.
    You've got a serious case of projection my friend.

    Originally posted by xinorom View Post

    Maybe when trying to replace a mature filesystem, you might want to actually use another mature filesystem instead of an early-stage, beta filesystem that's been clearly labelled as such by its developers?
    Again, you clearly don't know what you're talking about because it was "supposed" to be stable in 2013 and it sure as heck should have been stable by 2016.

    But that's not really my complaint. If the filesystem fails, which happens, I should be able to run a file system check that doesn't destroy the filesystem.

    Originally posted by xinorom View Post

    I think you're in for a surprise if you think bcachefs is going to be plain sailing...
    You're seriously retarded if you think that I don't know that. I build the package from source and install on root. I'm the one who posts the Mega download links for the Deb packages on Reddit.

    But guess what? Every time I've had an issue, either the filesystem can fix itself, or Kent pushes an update that day.

  • xinorom
    replied
    Originally posted by profoundWHALE View Post
    I tried it again in 2013 because people called it 'stable'. Nope.
    Which "people"? Pretty sure no one you should have been listening to was calling it stable in 2013.

    Originally posted by profoundWHALE View Post
    fsck doesn't work or sometimes is the cause of the problem.
    That sounds like a problem I've heard several times before, where people were just mashing the keyboard and running random commands hoping it'd fix their system. There were explicit warnings about certain fsck options in the docs.

    Originally posted by profoundWHALE View Post
    It's like having this super awesome railgun that you can shoot people with, but also sometimes it shoots backwards. Oops!
    In your case it sounds more like giving a mildly retarded child a hand grenade and telling him not to pull the pin out, but knowing he definitely will anyway.

    Originally posted by profoundWHALE View Post
    Maybe when trying to replace something like ZFS which is rock-solid stable, you might want to make sure your filesystem doesn't just eat your data.
    Maybe when trying to replace a mature filesystem, you might want to actually use another mature filesystem instead of an early-stage, beta filesystem that's been clearly labelled as such by its developers?

    Originally posted by profoundWHALE View Post
    Bcachefs hasn't eaten my data despite using the same drives in a similar configuration
    I think you're in for a surprise if you think bcachefs is going to be plain sailing...
    Last edited by xinorom; 02 February 2020, 12:16 AM.

  • profoundWHALE
    replied
    Originally posted by xinorom View Post

    Btrfs was still marked as experimental back then. So you decided to use beta software for a production use case and now you want us to believe the current state of Btrfs is "untrustworthy" because you used it way before it was ready? I've been using it since about 2010 too and have never had a single issue with it, although in the early days I wouldn't have blamed anyone except myself if I had.
    That's why in 2010 I never put anything on it I cared about. I was testing it to see how it panned out.

    I tried it again in 2013 because people called it 'stable'. Nope.

    The one that really bothered me was from 2016. By that time, again, people had been saying btrfs is awesome and doesn't have any of the problems! Nope.

    Originally posted by xinorom View Post
    Those are both great filesystems, but one is not in the same league as Btrfs w.r.t. features and the other seems further away from being mainlined than the author wants to believe. Also, you seem to be making the exact same mistake as you made before. I hope bcachefs doesn't eat your data and make you ragequit again.
    Features are useless if the filesystem corrupts stuff and the scrubbing or fsck doesn't work, or is sometimes itself the cause of the problem.

    It's like having this super awesome railgun that you can shoot people with, but also sometimes it shoots backwards. Oops!

    Maybe when trying to replace something like ZFS which is rock-solid stable, you might want to make sure your filesystem doesn't just eat your data.

    Bcachefs hasn't eaten my data despite using the same drives in a similar configuration. The only headache that I can run into is due to it not being mainlined yet.

  • xinorom
    replied
    Originally posted by profoundWHALE View Post

    Not a troll. I've tried btrfs 3 times and had terrible experiences each time. The first time was a long time ago, like 2010 and my system lasted a few weeks before I had to reinstall. The second time was in 2013 and I had issues with file corruption
    Btrfs was still marked as experimental back then. So you decided to use beta software for a production use case and now you want us to believe the current state of Btrfs is "untrustworthy" because you used it way before it was ready? I've been using it since about 2010 too and have never had a single issue with it, although in the early days I wouldn't have blamed anyone except myself if I had.

    Originally posted by profoundWHALE View Post
    If you're curious, I've been running those with XFS for a few years, and right now I'm testing bcachefs. No corruption issues, yet
    Those are both great filesystems, but one is not in the same league as Btrfs w.r.t. features and the other seems further away from being mainlined than the author wants to believe. Also, you seem to be making the exact same mistake as you made before. I hope bcachefs doesn't eat your data and make you ragequit again.
    Last edited by xinorom; 30 January 2020, 05:17 PM.

  • profoundWHALE
    replied
    Originally posted by xinorom View Post

    Obvious troll is obvious. You can do better than that...
    Not a troll. I've tried btrfs 3 times and had terrible experiences each time. The first time was a long time ago, like 2010 and my system lasted a few weeks before I had to reinstall. The second time was in 2013 and I had issues with file corruption, then in 2016 I tried a RAID10 set-up with 24TB worth of drives and the only reason why I didn't lose everything was because I had everything important backed up on some old drives. I spent 2 weeks of what I can only describe as hell. I had so much data to scrub through and for tools to comb through that I would have to run them for a whole day, only to find that it failed at like 25%. And the problem is that losing 75% of data means I lost more like 25% because the corruption was all over the place randomly. I had wedding videos that had chunks of missing audio and video.

    All you need is one serious data loss scare like that and you lose all trust in it.

    If you're curious, I've been running those with XFS for a few years, and right now I'm testing bcachefs. No corruption issues, yet
    Last edited by profoundWHALE; 29 January 2020, 09:24 PM.

  • Yoshi
    replied
    Michael
    Thanks for the test. Is it possible, in the future, to write an overview of Gluster and Ceph? I find it really hard to get into this stuff. Maybe it is something worth looking at.

  • oiaohm
    replied
    Originally posted by lsatenstein View Post
    Back to XFS and tests. Would data centers be using terabyte SSDs or spinners? RAID on disk may favour btrfs.
    How to choose drives for Azure Stack HCI and Windows Server clusters to meet performance and capacity requirements.


    This is not a Linux site, but the 7 layouts Microsoft writes up there are what you see in data centres. Data centres can be using something like 18TB spinning HDDs; your backups and cold storage don't need SSD speed.

    The issue with shrinking XFS is really developers having the time to implement the missing features in the Linux version of XFS. XFS got quite badly limited when it got ported to Linux.
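
    For context (a hedged sketch; /mnt/xfs is a placeholder mount point): XFS can be grown online, but there is currently no shrink counterpart to this on Linux.

        # Grow the data section of a mounted XFS filesystem to the maximum size;
        # there is no equivalent command to shrink it.
        xfs_growfs -d /mnt/xfs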
