
Linux's NTFS Driver Drops "No Access Rules" Option, Adds Small Optimizations


  • #11
    Originally posted by ypnos View Post

    Why bother dissenting when you don't know? Never heard of fscrypt? It's quite nice IMHO.
    That is why I said "I don't think". The point of discussion is to learn things.



    • #12
      Originally posted by Old Grouch View Post

      The allowed character set for filenames stored using NTFS is a subset of the character set for ext4. That alone rules it out for me. Mixing both without caution leads to all sorts of interesting problems with obscurely-named files.
      At the same time, NTFS filenames can be up to 255 UTF-16 code units, while ext4 is limited to 255 bytes. If anything, NTFS is a lot more advanced than ext4 in this regard, and by default NTFS3 in POSIX mode (i.e. under Linux) allows pretty much any character except / and NUL.
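The difference is easy to see with a quick sketch (Python, purely illustrative): a name well under 255 characters can still blow past ext4's 255-byte limit once encoded.

```python
# ext4 limits a filename to 255 bytes of its on-disk encoding (typically
# UTF-8), while NTFS limits it to 255 UTF-16 code units. A 200-character
# accented name fits comfortably on NTFS but overflows ext4's limit.
name = "é" * 200

utf8_len = len(name.encode("utf-8"))              # 400 bytes: over ext4's 255
utf16_units = len(name.encode("utf-16-le")) // 2  # 200 code units: fine on NTFS

print(utf8_len, utf16_units)  # 400 200
```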

      Your knowledge of NTFS is extremely lacking.

      NTFS is an order of magnitude more advanced than ext4. The only thing going for ext4 is its performance when working with a ton of small files. NTFS can be slower, but that's the price you pay for its super-advanced ACLs and various logging/reporting/transaction/mirroring/compression/encryption features.

      Lastly, NTFS can be fully defragmented, while ext4 can only defragment individual files. Directories under ext4 can only be defragmented offline (i.e. a reboot is required).
      Last edited by avis; 28 April 2023, 10:38 AM.



      • #13
        Originally posted by Old Grouch View Post

        I suspect that Windows/NTFS inherited the extensive ACL capability from VMS, which is both good and bad.

        Quoting from a VMS Access Control List Editor manual:



        Access control list entries could grant or deny access to multiple system-manager-defined groups of users, and/or write entries to security logs, and/or propagate to files created in subsidiary directories (when applied to files in the filesystem). You ended up with a very flexible and powerful system that was challenging in its complexity to administer. I suspect you'd need to use SELinux to get the same flexibility and granularity on Linux.

        Access Control Lists, like firewall rule sets, were processed sequentially until the first match, and if you were not careful, could drag down performance. I've not used them in anger on Windows or Linux, and I'm happy not to have had to.
        Trying to compare NTFS ACLs with SELinux hurts my brain. I suspect there are non-overlapping use cases that literally no one encounters. As you mention, actually using NTFS ACLs in that manner is a challenge, and it probably provides more self-gratification to the sysadmin implementing it than value to the end user.

        SELinux's big benefit would be on classified systems (e.g. with MCS), but I suspect whatever uses it has there are waning due to that incredible complexity.

        The octal permissions system has occasionally felt limiting but I think it does adequately capture what access control looks like in 99% of actual use cases.
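The point about the octal model being "adequate" can be sketched in a few lines (Python on a POSIX system; the file and mode here are just an example): three bits each for owner, group, and other cover the common policies.

```python
import os
import stat
import tempfile

# 0o640 (owner read+write, group read, others nothing) is the kind of policy
# that covers the vast majority of real-world access-control needs.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

st = os.stat(path)
print(oct(stat.S_IMODE(st.st_mode)))  # 0o640
print(stat.filemode(st.st_mode))      # -rw-r-----
os.remove(path)
```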



        • #14
          Originally posted by ll1025 View Post

          That is why I said "I don't think". The point of discussion is to learn things.
          And what did we learn from the unsubstantiated "I don't think so"?



          • #15
            Originally posted by ll1025 View Post

            What AD support are you wanting that doesn't exist? Base SSSD supports AD fully: dynamic DNS registration, Kerberos-backed SSH logins, pubkeys pulled from LDAP, and sudoers files based on a computer's netgroup. There's even support for using GPOs to enforce HBAC, and if you're nuts enough to use Ubuntu, I understand the GPO support gets even deeper.

            Plus, Microsoft doesn't want you using on-prem AD. That's why the forest level hasn't been updated in 8 years, and the DC role hasn't been touched in 5. They want you on Azure AD.

            As for NTFS vs ext4: NTFS's big benefit is its very granular ACLs. AFAIK its performance is meh, and it has some really nasty corner cases that kneecap it (e.g. directories with thousands of small files). One might speculate that its sprawling feature set and metadata could be linked to those performance issues.

            There's supposedly some "self-healing" stuff with it but I've never found it to be more reliable than ext4 or xfs.
            Naw, the problem with NTFS isn't its huge feature set and metadata storage; it has more to do with 30 years of filesystem API cruft. Many programs, including some parts of Windows itself, still use antiquated serial filesystem APIs, which kills performance. You can't realistically compare a reverse-engineered filesystem driver (Linux, BSD, etc.) with native access on Windows, and even then you have to know whether the program you're using relies on an old blocking API or a newer threaded/asynchronous API. Otherwise all you're really measuring is API execution time, not the filesystem's performance capabilities.
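A toy illustration of that measurement problem (Python threads standing in for an overlapped/asynchronous file API; file names and sizes here are arbitrary): the same filesystem looks very different depending on whether the caller issues one blocking read at a time or overlaps them.

```python
import concurrent.futures
import os
import tempfile
import time

def read_serial(paths):
    """One blocking read at a time -- the antiquated serial-API pattern."""
    return [open(p, "rb").read() for p in paths]

def read_threaded(paths, workers=8):
    """Overlapping reads -- closer to a threaded/asynchronous file API."""
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        return list(pool.map(lambda p: open(p, "rb").read(), paths))

with tempfile.TemporaryDirectory() as d:
    paths = []
    for i in range(200):
        p = os.path.join(d, f"f{i}")
        with open(p, "wb") as f:
            f.write(b"x" * 4096)
        paths.append(p)

    for fn in (read_serial, read_threaded):
        t0 = time.perf_counter()
        fn(paths)
        print(f"{fn.__name__}: {time.perf_counter() - t0:.4f}s")
```

On a warm local cache the gap may be small or even reversed by thread overhead; against a slow disk or network mount the serial version falls behind, which is exactly the "API execution time vs filesystem capability" confusion described above.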



            • #16
              As for benefit, wouldn't Microsoft supporting NTFS on Linux theoretically allow them to properly share files with WSL2?



              • #17
                Originally posted by WiR3D View Post
                As for benefit, wouldn't Microsoft supporting NTFS on Linux theoretically allow them to properly share files with WSL2?
                I've never had issues accessing files in WSL2; what support have you found to be missing?



                • #18
                  Originally posted by stormcrow View Post

                  Naw, the problem with NTFS isn't its huge feature set and metadata storage; it has more to do with 30 years of filesystem API cruft. Many programs, including some parts of Windows itself, still use antiquated serial filesystem APIs, which kills performance. You can't realistically compare a reverse-engineered filesystem driver (Linux, BSD, etc.) with native access on Windows, and even then you have to know whether the program you're using relies on an old blocking API or a newer threaded/asynchronous API. Otherwise all you're really measuring is API execution time, not the filesystem's performance capabilities.
                  Not disputing that, but there are some O(N^2)-type performance issues once you go above a certain number of files in a directory. It's not a linear speed issue: with 50k files your speed might be fine, with 200k it might take 30 seconds to enumerate the directory, and with 500k you might be there for an hour.

                  That points to some kind of deep architectural issue.
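Whether enumeration scales linearly is easy to measure. A minimal sketch (Python; the file counts are arbitrary) that times a full directory listing at increasing sizes; point `parent` at a mount of the filesystem under test to check for superlinear growth:

```python
import os
import tempfile
import time

def time_enumeration(n, parent=None):
    """Create n empty files in a fresh directory and time a full listing."""
    with tempfile.TemporaryDirectory(dir=parent) as d:
        for i in range(n):
            open(os.path.join(d, f"f{i:07d}"), "w").close()
        t0 = time.perf_counter()
        count = sum(1 for _ in os.scandir(d))
        elapsed = time.perf_counter() - t0
    return count, elapsed

# On ext4 the times should grow roughly linearly with n; that is the
# baseline to compare an NTFS mount against.
for n in (1_000, 10_000, 50_000):
    count, secs = time_enumeration(n)
    print(f"{count:>6} files enumerated in {secs:.4f}s")
```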



                  • #19
                    Originally posted by ypnos View Post

                    And what did we learn from the unsubstantiated "I don't think so"?
                    I (re)learned about fscrypt and encryption modes I haven't seen in decades. Hopefully you're learning that crowing "YOU WERE WRONG" isn't the best way to interact with people.



                    • #20
                      Originally posted by avis View Post

                      At the same time, NTFS filenames can be up to 255 UTF-16 code units, while ext4 is limited to 255 bytes. If anything, NTFS is a lot more advanced than ext4 in this regard, and by default NTFS3 in POSIX mode (i.e. under Linux) allows pretty much any character except / and NUL.

                      Your knowledge of NTFS is extremely lacking.

                      NTFS is an order of magnitude more advanced than ext4. The only thing going for ext4 is its performance when working with a ton of small files. NTFS can be slower, but that's the price you pay for its super-advanced ACLs and various logging/reporting/transaction/mirroring/compression/encryption features.

                      Lastly, NTFS can be fully defragmented, while ext4 can only defragment individual files. Directories under ext4 can only be defragmented offline (i.e. a reboot is required).
                      I'll start by saying I think it's a dumb argument because most of this stuff doesn't matter outside of high-end storage arrays:
                      • Almost no one uses filesystem compression or dedup
                      • Most deployments I've seen use disk-level encryption like LUKS or BitLocker
                      • "Advanced ACLs" are mostly irrelevant and unused in actual production, being either insufficient or too complex most of the time
                      • If you have a use case for "super advanced ACLs", SELinux absolutely crushes NTFS
                      • Performance is usually decided far more by other factors (mdraid vs Storage Spaces; underlying hardware; kernel) than by the actual filesystem
                      • Fragmentation is a performance issue that only affects non-performance-focused hardware: no one with NVMe cares about defragmenting, and no one with HDDs cares about anything other than sequential performance
                      But "can become slower" is an understatement. I've seen Exchange Server badmail directories get clogged with hundreds of thousands of SMTP files, where even entering the directory on the command line can take minutes to hours, and actually clearing the files can take multiple hours to days. NTFS (or maybe Microsoft's driver implementation) has a deep problem with directories of many small files that I haven't encountered on other filesystems.

                      Any time you start looking at high-end performance, Linux tends to dominate Windows on IOPS. Dynamic disks are dead at this point, and Storage Spaces with parity is a joke that destroys your performance, whereas you can seriously talk about getting mdraid with dual parity up to a million IOPS on whitebox hardware.

                      You're also taking some serious license with things. Logging/reporting is not a function of NTFS, mirroring is a function of Storage Spaces rather than the filesystem, and last I checked there was no longer a manual defragmentation tool in Windows. When there was, there were some files it would not defragment without a reboot; the built-in utility would not tell you this, but the full version of Diskeeper (which the built-in utility was based on) would.

                      I'm pretty neutral on operating systems and filesystems these days (I think for the most part they're all awful in their own special ways), so I can say NTFS is "good enough" without pretending it doesn't have some very, very large warts.

                      And frankly, no dumb internet argument about filesystems is complete without noting that both NTFS and ext4 are rickety old filesystem designs that are inferior to ZFS in every way.

