Features That Didn't Make It For The Mainline Linux 4.18 Kernel

  • #11
    BUS1 would get my vote.

    • #12
      While I would also like to see Bus1 in the kernel, I would much prefer to have WireGuard finally mainlined. It already works amazingly well via DKMS, and just having it by default in any Linux kernel in a year or so would be awesome!

      For the bcachefs crowd:
      I never understood what is so great about it, especially in contrast to btrfs. Searching via Google only turns up tons of claims like "it's going to be faster with better code quality than btrfs", but its feature set seems to be much smaller than btrfs's.

      • #13
        Originally posted by Weasel:
        I know that inotify/fanotify or a cron job can affect performance, but remember that a full CoW filesystem also affects performance (also fragmentation, if you use an HDD), and the checksumming + parity will likely also be a performance killer compared to a "generic" filesystem. But this will not show up as CPU usage since it's in the kernel, so most people are deceived.
        Yeah, I know that a filesystem with checksumming and CoW will be slower (though as long as parity is on different drives, having parity shouldn't matter much). The main point here is that a filesystem does checksumming at the block level, so if you edit a file it will recompute the checksum and update the parity only for the edited blocks, and it does so as soon as the edit takes place, so it can coalesce the write operations.

        Whereas this tool, if I'm not mistaken, has to recompute parity for the whole file on each update, and unless you store the parity on a second drive you also get a write-speed penalty.

        Also, CoW provides 100% protection from corruption on unclean shutdowns, as the filesystem operations are atomic: a file is either modified or not modified, and it won't be corrupted if the PC is shut down abruptly while the edit is being written to disk.

        Here again, this tool may or may not protect you on a poweroff, depending on how large a parity percentage you choose for the files, which also affects the tool's performance.

        It will not auto-detect corrupted files on read and fix them automatically, so after an unclean shutdown you would have to run a verify check on the root filesystem to be sure any corruption gets fixed. And what if the PC was shut down while the tool was computing parity for some files? You probably end up with a partial parity file, which won't protect as well and has to be recomputed, assuming the original file isn't corrupted too.
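
        (For illustration, such a post-crash check could look roughly like the sketch below; this assumes the tool in question is par2, as it's named later in the thread, and that the parity files sit next to the data. The paths are only examples.)

          # Hypothetical post-crash pass: verify every par2 set under /data
          # and attempt a repair wherever verification fails.
          find /data -name '*.par2' -print0 | while IFS= read -r -d '' p; do
              par2 verify "$p" || par2 repair "$p"
          done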

        That's what I meant: it's not terribly well suited to protecting the root filesystem, but for 99.99% of the cases where people want to protect against bitrot in a home or even prosumer environment, it's good enough.

        In the rare case something actually fails so hard that you have to, you can reload the root filesystem from a partition image, and on Linux that takes little time since the root doesn't need to be huge to begin with; meanwhile, this tool will protect the home partition and the data drives from bitrot perfectly fine.

        As for embedded devices, in most cases you have ECC when reading/writing the embedded flash anyway, because the raw flash isn't terribly reliable to begin with; the root filesystem is protected already, and I only need something light to protect the data drives.

        Which is why I'm going to try compiling it for my OpenWrt NAS and see how it goes. Btrfs kills off any semblance of performance an embedded device can have.
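
        (A build with the OpenWrt SDK would look roughly like the sketch below; this assumes the tool is par2cmdline and that the packages feed carries it, otherwise a custom package Makefile would be needed.)

          # Inside an extracted OpenWrt SDK matching the device's target:
          ./scripts/feeds update -a
          ./scripts/feeds install par2cmdline   # assumes the feed provides it
          make package/par2cmdline/compile V=s  # the resulting .ipk lands under bin/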
        Last edited by starshipeleven; 18 June 2018, 04:17 AM.

        • #14
          Originally posted by ResponseWriter:
          A shame about bcachefs and freesync not being included.
          FreeSync is a real need; bcachefs brings nothing to the table, even if it were in a usable state.

          • #15
            Originally posted by Weasel:
            I know that inotify/fanotify or a cron job can affect performance, but remember that a full CoW filesystem also affects performance
            there will be an order-of-magnitude difference in the performance drop

            • #16
              Originally posted by starshipeleven:
              Yeah, I know that a filesystem with checksumming and CoW will be slower (though as long as parity is on different drives, having parity shouldn't matter much). The main point here is that a filesystem does checksumming at the block level, so if you edit a file it will recompute the checksum and update the parity only for the edited blocks, and it does so as soon as the edit takes place, so it can coalesce the write operations.

              Whereas this tool, if I'm not mistaken, has to recompute parity for the whole file on each update, and unless you store the parity on a second drive you also get a write-speed penalty.

              Also, CoW provides 100% protection from corruption on unclean shutdowns, as the filesystem operations are atomic: a file is either modified or not modified, and it won't be corrupted if the PC is shut down abruptly while the edit is being written to disk.

              Here again, this tool may or may not protect you on a poweroff, depending on how large a parity percentage you choose for the files, which also affects the tool's performance.

              It will not auto-detect corrupted files on read and fix them automatically, so after an unclean shutdown you would have to run a verify check on the root filesystem to be sure any corruption gets fixed. And what if the PC was shut down while the tool was computing parity for some files? You probably end up with a partial parity file, which won't protect as well and has to be recomputed, assuming the original file isn't corrupted too.

              That's what I meant: it's not terribly well suited to protecting the root filesystem, but for 99.99% of the cases where people want to protect against bitrot in a home or even prosumer environment, it's good enough.

              In the rare case something actually fails so hard that you have to, you can reload the root filesystem from a partition image, and on Linux that takes little time since the root doesn't need to be huge to begin with; meanwhile, this tool will protect the home partition and the data drives from bitrot perfectly fine.

              As for embedded devices, in most cases you have ECC when reading/writing the embedded flash anyway, because the raw flash isn't terribly reliable to begin with; the root filesystem is protected already, and I only need something light to protect the data drives.

              Which is why I'm going to try compiling it for my OpenWrt NAS and see how it goes. Btrfs kills off any semblance of performance an embedded device can have.
              Oh, I thought you just wanted to protect your data from bitrot. I mean, you don't need parity & checksumming for data-loss protection against unclean shutdowns; CoW is enough for that. (Also, journaling filesystems work fine in 99% of such cases, and if you are really paranoid about it, just get a UPS?) Kernel panics might be a problem, but they should be rare (even normal "crashes" can be handled by simply using Alt+SysRq+S, a forced sync).
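
              (Assuming magic SysRq support is compiled into the kernel, the same forced sync can also be triggered from a shell; a minimal sketch:)

                # Trigger the same emergency sync as Alt+SysRq+S
                # (requires the kernel's magic SysRq support).
                echo s | sudo tee /proc/sysrq-trigger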

              Anyway, for par2, don't forget to set -n1 (there's really no need for more than one recovery file), the redundancy you want via -r (e.g. -r4), and the number of blocks via -b (the options are a bit confusing, since there are a lot of "alternatives" that can't be used together). Remember to keep -b a multiple of 1/r% for optimal encoding, so you don't waste much space on the recovery file due to overhead.
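
              To make those flags concrete, a minimal sketch (the file names are just examples): with -r4, 1/r% is 100/4 = 25, so a block count like -b2000 divides evenly and no recovery blocks are lost to rounding.

                # One recovery file (-n1), 4% redundancy (-r4), 2000 blocks (-b2000);
                # 2000 is a multiple of 100/4 = 25, so the block count divides evenly.
                par2 create -n1 -r4 -b2000 photos.par2 photos/*.jpg

              Verifying or repairing later is then just "par2 verify photos.par2" or "par2 repair photos.par2".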

              But yes, that's mostly for bitrot, which I thought was the point of parity & checksumming. Obviously, you still need off-drive backups to protect from drive failure.

              You don't have to verify religiously either (or "scrub", as it's called in ZFS), since a 4% recovery (or whatever you set it to) is pretty large for bitrot. Even if you end up copying a damaged file to the backup, as long as you also copy the recovery file, the chances of an unrecoverable file are minuscule.
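
              (If you do want an occasional scrub, a low-frequency cron job over the parity sets is enough; the path and schedule below are just examples:)

                # Hypothetical monthly scrub: at 03:00 on the 1st, verify every
                # par2 set under /data and log the results.
                0 3 1 * * find /data -name '*.par2' -exec par2 verify {} \; >> /var/log/par2-scrub.log 2>&1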
              Last edited by Weasel; 18 June 2018, 07:56 AM.
