Features That Didn't Make It For The Mainline Linux 4.18 Kernel

  • #1

    Phoronix: Features That Didn't Make It For The Mainline Linux 4.18 Kernel

    There are many changes and new features for Linux 4.18 with the merge window having just closed on this next kernel version, but still there are some prominent features that have yet to work their way to the mainline tree...


  • #2
    A shame about bcachefs and freesync not being included. I'd also like to see zram supporting zstd if that hasn't already been merged.

    • #3
      I read the article just to get the update on bcachefs. I assume it will land in 4.19, as there didn't seem to be too many dedicated objections in the patchset discussion.

      • #4
        Freesync would be nice to see soon for the folks with a suitable display/GPU combination. Anything OpenChrome would also be a joy for me.
        Other than the usual GPU improvements I'd also welcome a maintenance release, just to make sure all the little quirky things get fixed.

        • #5
          Freesync would be great to have working.

          • #6
            richacls would have been cool.

            • #7
              Originally posted by ResponseWriter
              A shame about bcachefs and freesync not being included.
              Until bcachefs gets at least parity and checksumming working, it's not that hot for me.

              • #8
                Checksumming and parity on metadata are pretty crucial, since you can't do that easily on a generic filesystem.

                For data, at least, you can set up a cron job with a script using par2cmdline, and you get both parity (recovery ability) and checksumming. And that works on any filesystem.
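
                Roughly, I mean something like this (a rough sketch in Python; the par2 subcommands are real par2cmdline, but the file layout and redundancy level are just made up for illustration):

```python
#!/usr/bin/env python3
# Rough sketch of a cron-able par2 pass: create recovery data for new
# files, verify existing ones. Assumes the `par2` binary from
# par2cmdline is on PATH; redundancy level and layout are illustrative.
import subprocess
import sys
from pathlib import Path

REDUNDANCY = 10  # percent of each file's size spent on recovery blocks

def protect(root: Path) -> None:
    for f in sorted(root.rglob("*")):
        if not f.is_file() or f.name.endswith(".par2"):
            continue
        par2 = Path(str(f) + ".par2")
        if par2.exists():
            # Recovery data already exists: check the file against it.
            subprocess.run(["par2", "verify", "-q", str(par2)], check=False)
        else:
            # No recovery data yet: create it next to the file.
            subprocess.run(
                ["par2", "create", f"-r{REDUNDANCY}", "-q", str(par2), str(f)],
                check=True,
            )

if __name__ == "__main__":
    protect(Path(sys.argv[1]))
```

                Then it's just a crontab entry pointing at your data directory (path made up), e.g. `0 3 * * * protect.py /srv/data`, and every file gets a .par2 sitting next to it.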

                • #9
                  Originally posted by Weasel
                  Checksumming and parity on metadata are pretty crucial, since you can't do that easily on a generic filesystem.

                  For data, at least, you can set up a cron job with a script using par2cmdline, and you get both parity (recovery ability) and checksumming. And that works on any filesystem.
                  That's nifty indeed, and great for just correcting bitflips/bitrot on data drives where the data is mostly static. It's much better at that than SnapRAID, which forces you to dedicate 100% of a drive's size to parity (while here you can decide how much parity you want) and doesn't work on a per-file basis, so it has funky RAM requirements.

                  How much ram does it use? I need to try it out on my ARM NAS and see if it still hashes fast enough to be useful on low-power hardware too.

                  It won't be that good at protecting a system drive or a drive with frequently accessed data, though: when files change you'd have to recompute their parity files ASAP, and while I know that can be done (there are daemons that detect write activity on files), it would be a drag on performance.
                  Which is why CoW and checksumming filesystems are still a thing.
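
                  For the record, this is the kind of write-detecting daemon I mean (a quick sketch using the third-party Python watchdog package, which is just one way to do it; the watched path and redundancy are made up):

```python
#!/usr/bin/env python3
# Sketch of a write-detecting daemon: rebuild a file's par2 recovery
# data as soon as it changes. Assumes the third-party `watchdog`
# package (pip install watchdog) plus par2cmdline's `par2` binary;
# the watched path and 4% redundancy are made up.
import subprocess
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class Reprotect(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory or event.src_path.endswith(".par2"):
            return
        f = Path(event.src_path)
        # Clear stale recovery files first (par2 create won't overwrite
        # them), then recompute. Doing this on every write is exactly
        # the performance drag described above; a real daemon would
        # debounce and batch the work.
        for old in f.parent.glob(f.name + ".*par2"):
            old.unlink()
        subprocess.run(
            ["par2", "create", "-r4", "-q", str(f) + ".par2", str(f)],
            check=False,
        )

if __name__ == "__main__":
    obs = Observer()
    obs.schedule(Reprotect(), "/srv/data", recursive=True)
    obs.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        obs.stop()
    obs.join()
```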

                  • #10
                    Originally posted by starshipeleven
                    That's nifty indeed, and great for just correcting bitflips/bitrot on data drives where the data is mostly static. It's much better at that than SnapRAID, which forces you to dedicate 100% of a drive's size to parity (while here you can decide how much parity you want) and doesn't work on a per-file basis, so it has funky RAM requirements.

                    How much ram does it use? I need to try it out on my ARM NAS and see if it still hashes fast enough to be useful on low-power hardware too.

                    It won't be that good at protecting a system drive or a drive with frequently accessed data, though: when files change you'd have to recompute their parity files ASAP, and while I know that can be done (there are daemons that detect write activity on files), it would be a drag on performance.
                    Which is why CoW and checksumming filesystems are still a thing.
                    It seems it uses between 16MB and 32MB of RAM; the man page / readme says the default is 16MB. It used up to 8 cores in my case (it actually scales to all your cores if you wish). Note that it is pretty slow, relatively speaking, depending on the number of "blocks": increasing the block count slows it down quadratically, so it can be *very* slow if you use too many blocks. For example, with 8 cores and ~3k blocks it's about 50 MB/s, but I doubt a checksumming filesystem would be faster unless it stores full redundancy (like SnapRAID does).

                    From my experience you should set the block count relative to the redundancy percentage to keep overhead to a minimum: the number of blocks should be a multiple of 1/(redundancy fraction). With 4% redundancy, for example, the block count should be a multiple of 25 (1/4%). It's better to use more blocks for larger files, but again, don't go too crazy, since it gets quadratically slower...
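
                    To put that rule of thumb in code (a quick illustration; the function name is mine):

```python
# Rule of thumb from above: make the block count a multiple of
# 1/(redundancy fraction) so recovery blocks divide evenly.

def suggested_block_counts(redundancy_percent: float, multiples=(1, 2, 4)):
    base = round(100 / redundancy_percent)  # e.g. 4% -> 25
    return [base * m for m in multiples]

# 4% redundancy -> block counts of 25, 50, 100. Bigger files can take
# more blocks, but runtime grows roughly quadratically with the count.
print(suggested_block_counts(4))  # [25, 50, 100]
```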

                    I know that inotify/fanotify or a cron job can affect performance, but remember that a full CoW filesystem also affects performance (and fragmentation, if you use an HDD), and its checksumming + parity will likely also be a performance killer compared to a "generic" filesystem. It just won't show up as CPU usage, since it happens in the kernel, so most people are deceived.

                    If you don't have too many files (i.e. fewer than a million?) and have CPU cores to spare, you can use a cron job that scans your filesystem every 5 minutes or so for changes (based on modification date or whatever) and then automatically runs par2 on them; see the sketch below. Alternatively inotify/fanotify, but they have some quirks (the latter would be perfect except it doesn't fucking support delete/creation events yet, WTF?).
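
                    Something like this, a rough sketch of that cron approach (assumes par2cmdline again; the directory, stamp file and 4% redundancy are all arbitrary):

```python
#!/usr/bin/env python3
# Sketch of the cron approach: each run, re-protect files modified
# since the previous run. Assumes par2cmdline; the directory, stamp
# file and redundancy level are arbitrary. Schedule e.g. */5 in cron.
import subprocess
from pathlib import Path

WATCHED = Path("/srv/data")          # hypothetical data directory
STAMP = Path("/var/tmp/par2.stamp")  # mtime of this file = last run

def refresh_parity() -> None:
    last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0
    for f in WATCHED.rglob("*"):
        if not f.is_file() or f.name.endswith(".par2"):
            continue
        if f.stat().st_mtime <= last_run:
            continue
        # Clear stale recovery files (par2 create won't overwrite them),
        # then rebuild recovery data for the changed file.
        for old in f.parent.glob(f.name + ".*par2"):
            old.unlink()
        subprocess.run(
            ["par2", "create", "-r4", "-q", str(f) + ".par2", str(f)],
            check=False,
        )
    STAMP.touch()

if __name__ == "__main__":
    refresh_parity()
```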

                    Most of the time the new files will probably still be cached in RAM within those 5 minutes, so even if they got bitrotted on disk in the meantime (paranoia), par2cmdline will still compute the recovery data from the original contents. I'm still investigating how to make it a better experience for now.
