Windows 11 vs. Ubuntu Linux Performance Is Very Close On The AMD Ryzen 9 7950X


  • #31
    Originally posted by Anux View Post
    Win 11 got lucky, if michael had tested large file copy ...
    Ubuntu (or just Linux...) got lucky; if Microsoft had tested copying files from one drive to another... (try it, it's atrociously slow: I tried copying an 80 GB game between SSDs, and it went so slow I might actually have been able to re-download it entirely faster. Just let me repeat that: SSDs.)

    The speed was so bad I'm thinking even a single-core PC with Windows 95 and PATA HDDs could have done it faster.
    Last edited by rabcor; 14 October 2022, 04:26 AM.



    • #32
      Originally posted by rabcor View Post
      tried copying an 80 GB game between SSDs, it went so slow
      That's strange; maybe you copied it to NTFS or something. I'm always capped by the hardware when copying. Sure, small files are slower, but that isn't any faster on Windows either.



      • #33
        Originally posted by Anux View Post
        Sure I'm already seeing all those Win users firing up their cmd and fiddling with robocopy.
        There is also a GUI for it, but this is a largely pointless exercise, because what you are arguing about has nothing to do with Windows being slow but rather with the default set of applications.

        The equivalent would be claiming that Linux is slow because some distribution ships some slow version of some program.



        • #34
          Originally posted by mdedetrich View Post
          because what you are arguing about has nothing to do with Windows being slow
          I didn't say Windows is slow; I just made a comment about the file copy bug. Windows is slow in many other areas that won't ever get caught by a benchmark.



          • #35
            Originally posted by jacob View Post

            Windows is not inherently slow, NTFS is. Linux will blow it out of the water in any benchmark that uses IO or filesystem operations, where NTFS would hold it back against Ext4, XFS or Btrfs. Which is ironic because for a while, NT (the predecessor of modern Windows) had better IO performance than then-Linux.
            There are also reasons for this. NTFS, for example, allows you to register hooks (called filters in Windows) that execute whenever certain IO operations happen, and applications like antivirus software typically use these hooks.

            Having such a design does slow down the filesystem (batching, for example, becomes really hard), and there are other reasons as well.
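
            For reference, this is roughly what that mechanism looks like from a driver's point of view: a kernel-mode minifilter registers pre/post-operation callbacks with the Filter Manager, and those callbacks then sit on the IO path for every operation they ask for. A minimal sketch in C (assumes the Windows Driver Kit; the callback and driver names are illustrative, and the INF/altitude registration, unload callback and real error handling that an actual minifilter needs are omitted):

            Code:
            #include <fltKernel.h>

            static PFLT_FILTER gFilterHandle;

            /* Pre-operation callback: runs before every IRP_MJ_CREATE (file open/create)
               that passes through this filter. An on-access scanner would inspect the
               target here and could fail the open. */
            static FLT_PREOP_CALLBACK_STATUS
            PreCreateCallback(PFLT_CALLBACK_DATA Data,
                              PCFLT_RELATED_OBJECTS FltObjects,
                              PVOID *CompletionContext)
            {
                UNREFERENCED_PARAMETER(Data);
                UNREFERENCED_PARAMETER(FltObjects);
                UNREFERENCED_PARAMETER(CompletionContext);
                return FLT_PREOP_SUCCESS_NO_CALLBACK;  /* let the IO continue, no post-op */
            }

            static const FLT_OPERATION_REGISTRATION Callbacks[] = {
                { IRP_MJ_CREATE, 0, PreCreateCallback, NULL },
                { IRP_MJ_OPERATION_END }
            };

            static const FLT_REGISTRATION FilterRegistration = {
                sizeof(FLT_REGISTRATION),   /* Size */
                FLT_REGISTRATION_VERSION,   /* Version */
                0,                          /* Flags */
                NULL,                       /* ContextRegistration */
                Callbacks,                  /* OperationRegistration */
                /* remaining callbacks (unload, instance setup, ...) left NULL */
            };

            NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
            {
                UNREFERENCED_PARAMETER(RegistryPath);
                NTSTATUS status = FltRegisterFilter(DriverObject, &FilterRegistration,
                                                    &gFilterHandle);
                if (NT_SUCCESS(status)) {
                    status = FltStartFiltering(gFilterHandle);
                    if (!NT_SUCCESS(status))
                        FltUnregisterFilter(gFilterHandle);
                }
                return status;
            }

            Windows Defender's real-time protection is implemented as a minifilter of this kind (WdFilter.sys), which is why this machinery sits on the IO path out of the box.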



            Originally posted by Anux View Post
            Windows is slow in many other areas that won't ever get caught by a benchmark.
            That's a non-statement that can be used to argue anything.



            • #36
              Originally posted by mdedetrich View Post

              There are also reasons for this. NTFS, for example, allows you to register hooks (called filters in Windows) that execute whenever certain IO operations happen, and applications like antivirus software typically use these hooks.

              Having such a design does slow down the filesystem (batching, for example, becomes really hard), and there are other reasons as well.
              But Linux has those as well (LSM), and I presume that no antivirus that uses them is installed when running the benchmarks.
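
              For comparison, this is roughly how an LSM wires itself in on the Linux side: a built-in security module registers a list of hooks, and the core calls them at fixed points such as file open. A minimal sketch in C (kernel code; the demo_* names are made up, the exact signatures of security_add_hooks and the file_open hook have shifted across kernel versions, and LSMs have to be built into the kernel rather than loaded at runtime):

              Code:
              #include <linux/lsm_hooks.h>
              #include <linux/kernel.h>
              #include <linux/fs.h>
              #include <linux/init.h>

              /* Called on every file open; returning 0 allows the operation,
                 a negative errno denies it. */
              static int demo_file_open(struct file *file)
              {
                  return 0;
              }

              static struct security_hook_list demo_hooks[] = {
                  LSM_HOOK_INIT(file_open, demo_file_open),
              };

              static int __init demo_lsm_init(void)
              {
                  security_add_hooks(demo_hooks, ARRAY_SIZE(demo_hooks), "demo_lsm");
                  return 0;
              }

              DEFINE_LSM(demo_lsm) = {
                  .name = "demo_lsm",
                  .init = demo_lsm_init,
              };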



              • #37
                Originally posted by jacob View Post

                But Linux has those as well (LSM), and I presume that no antivirus that uses them is installed when running the benchmarks.
                I think the problem is more in the design space. LSM path hooks (if that's what you are talking about) are an optional part of the kernel (or, more accurately, of the Linux system), whereas with NTFS the mechanism is baked into the filesystem itself and cannot be removed. That means that even if you don't have antivirus installed (which isn't really the case anymore due to Windows Defender; you would need to disable it manually), you still pay the performance cost of the feature.



                • #38
                  Originally posted by mdedetrich View Post

                  I think the problem is more in the design space. LSM path hooks (if that's what you are talking about) are an optional part of the kernel (or, more accurately, of the Linux system), whereas with NTFS the mechanism is baked into the filesystem itself and cannot be removed. That means that even if you don't have antivirus installed (which isn't really the case anymore due to Windows Defender; you would need to disable it manually), you still pay the performance cost of the feature.
                  I'm not sure I follow. A filesystem is a data structure; it can't have callback hooks baked into itself. It's the driver that implements it that has them. LSM may be "optional", but it's actually active by default in most distros (used, among others, by AppArmor on Ubuntu, SUSE and derivatives, and by SELinux on Red Hat, Fedora and derivatives). Either way, when not in use, which is the default case on Windows, the overhead of checking that a callback pointer is not set is negligible and the performance impact should be nil. I think there is something very wrong with NTFS itself, and the callback API has nothing to do with it.
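
                  To illustrate the point about unset callbacks, here is the general pattern in C (purely hypothetical names, not Windows or Linux source): when nothing is registered, the per-operation cost is a single predictable NULL check.

                  Code:
                  #include <stddef.h>

                  /* Hypothetical per-operation hook slot. */
                  typedef int (*io_hook_fn)(const char *path);

                  static io_hook_fn pre_open_hook;  /* stays NULL unless something registers */

                  static int do_open(const char *path)
                  {
                      /* With no hook registered, the extra cost is just this branch,
                         which is negligible next to the IO work itself. */
                      if (pre_open_hook) {
                          int rc = pre_open_hook(path);
                          if (rc != 0)
                              return rc;   /* a registered hook may veto the operation */
                      }
                      /* ... the actual open path would go here ... */
                      return 0;
                  }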

                  By the way, if IO benchmarks are run on out-of-the-box OS installations, then the Linux results will include LSM and whatever is built on top of it, whereas the Windows ones won't. But NTFS still ends up being dramatically slower. I don't know whether the design is fundamentally flawed or MS simply never bothered optimising the driver.



                  • #39
                    Originally posted by jacob View Post

                    I'm not sure I follow. A filesystem is a data structure; it can't have callback hooks baked into itself. It's the driver that implements it that has them.
                    So I managed to find the post I was referring to; read https://github.com/Microsoft/WSL/iss...ment-425272829.

                    Originally posted by jacob View Post
                    By the way, if IO benchmarks are run on out-of-the-box OS installations, then the Linux results will include LSM and whatever is built on top of it, whereas the Windows ones won't. But NTFS still ends up being dramatically slower. I don't know whether the design is fundamentally flawed or MS simply never bothered optimising the driver.
                    From what I recall, given the design constraints they optimized NTFS as much as they could without breaking backwards compatibility; the link I referenced above claims as much.



                    • #40
                      Originally posted by mdedetrich View Post
                      So I managed to find the post I was referring to; read https://github.com/Microsoft/WSL/iss...ment-425272829.
                      That's an interesting read, thanks for sharing. Still, it only seems to go half way towards explaining the issues. Without being familiar at all with the internals of the NT kernel, it makes me ask the following questions:
                      1. OK, so Windows doesn't have a top-level dentry cache; that raises the question of why, and why one can't be implemented. Besides, I'm not sure how much that affects benchmarks where stat() doesn't happen nearly as often; creating tens of thousands of new files and then overwriting them etc. wouldn't be terribly affected by it anyway, unless NT has to re-resolve the full path every single time, which I doubt it does because that would be a really dumb way to do it. The fact that NT doesn't parse paths at the VFS level like Linux does shouldn't make a difference either (they need to be parsed somewhere), but the optimisation of that particular routine probably does. I remember reading that in recent Linux kernels path resolution is done basically without any lock contention, which certainly is a huge performance win - but if so, and if NT doesn't have that, can't MS optimise it?
                      2. The description of the "filters" etc. seems to suggest (but I could be wrong) that the handlers run in user space. If so, then it's something Linux doesn't really have a direct equivalent of; the closest thing in Linux-land would likely be some kind of FUSE handler mounted on top of the actual filesystem as an overlay. That would indeed have a huge performance impact. Linux *does* use something vaguely like that too (ecryptfs, for example), but it's all in kernel space. But again, it raises one question: why do they have to do this on the system partition (C:) of all places?
                      3. The handle-oriented API sounds like a weird argument. A handle is just what Linux calls an fd, and it's used everywhere too. True, in some cases syscalls like unlink() take a path rather than an fd, but at the VFS level the unlink() operation expects an inode and a dentry, which is exactly what an fd ultimately points to. The only difference seems to be that in Windows they have to resolve the path (which is slow according to the description), return the handle and then invoke another syscall to delete the file, which means twice as many context switches; see the sketch after this list. Fair enough, but Linux has been adding syscalls over the years precisely to avoid this kind of multiple context switching, so why can't Windows add a DeleteFileByPath()?
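
                      To make point 3 concrete, here is roughly what the two models look like from user space, as a C sketch. This is only an illustration: Win32 does expose a path-based DeleteFileW(), but it is generally described as wrapping essentially this open / set-disposition / close sequence over the handle-oriented native API, whereas on Linux unlink() is a single path-based syscall.

                      Code:
                      #ifdef _WIN32
                      #include <windows.h>

                      /* Handle-oriented deletion: resolve the path to a handle, set the delete
                         disposition on the handle, then close it -- several kernel round trips
                         for one logical operation. */
                      static int delete_file(const wchar_t *path)
                      {
                          HANDLE h = CreateFileW(path, DELETE, FILE_SHARE_DELETE, NULL,
                                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
                          if (h == INVALID_HANDLE_VALUE)
                              return -1;

                          FILE_DISPOSITION_INFO info = { TRUE };   /* mark for deletion on close */
                          BOOL ok = SetFileInformationByHandle(h, FileDispositionInfo,
                                                               &info, sizeof(info));
                          CloseHandle(h);
                          return ok ? 0 : -1;
                      }
                      #else
                      #include <unistd.h>

                      /* Path-oriented deletion: one syscall; the kernel resolves the path and
                         drops the dentry/inode reference itself. */
                      static int delete_file(const char *path)
                      {
                          return unlink(path);
                      }
                      #endif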

