Windows 11 vs. Linux Performance For Intel Core i9 12900K In Mid-2022


  • #71
    Originally posted by coder View Post
    If the entire file is read twice, then one of two things will happen:
    1. If there's enough free RAM, the file will be sitting in the page cache and read-ahead will be eliminated from the picture on the second pass.
    2. If there's not enough free RAM to cache the entire file, then you'll take 2x the hit of reading it once. This would amplify the effect of insufficient read-ahead.


    Right, at least if you don't take the trouble to use fadvise. So, Linux will generally cache as much of the file as it can, using a page-based LRU (Least-Recently Used) eviction policy.


    I wasn't talking about sophistication, though that's another possibility. I was merely talking about the fixed amount of read-ahead, on Linux.


    Could be, but it's a big leap to take on the basis of not much data.
    Yep, I know that read-ahead and caching are limited by memory on Linux.

    Once you run out of memory, the OS will drop clean cache pages first, then write back and evict dirty pages, and finally it might swap anonymous (non-file-backed) pages to disk.

    Though I wonder if using mmap would make a difference: at the very least it would avoid some copying between userspace and kernelspace, and at best it might also avoid some syscalls, though page faults can be even more expensive.



    • #72
      Originally posted by NobodyXu View Post
      Yep, I know that read-ahead and caching are limited by memory on Linux.
      If you have enough open files, read-ahead might be memory-limited, but usually not. Last I looked into it (a while ago) it wasn't adaptive - just fixed-size.

      Originally posted by NobodyXu View Post
      Though I wonder if using mmap would make a difference: at the very least it would avoid some copying between userspace and kernelspace, and at best it might also avoid some syscalls, though page faults can be even more expensive.
      It's basically like swapping, except you're swapping in/out chunks of a specific file rather than having your normal process memory getting swapped in/out to a swap device or swapfile.

      As for whether it's more efficient, I'd say it depends on your access pattern. If you're just doing linear reads, then making a read call can operate on more than a page of data at a time. If you rely on mmap(), it's entirely possible that you get a page fault and have to block on each new page you access - I don't know if it tries to do any read-ahead. Even if it does read-ahead, you'll still be faulting every 128 kB or whatever the default value is, compared with being able to make explicit calls to read MBs at a time.
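
      Roughly, the two approaches look like this. Untested sketch; "data.bin" is a hypothetical file name and most error handling is elided:

      ```c
      /* read() vs mmap() for a linear scan of one file. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("data.bin", O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          struct stat st;
          fstat(fd, &st);

          /* Approach 1: plain read(). Each call copies into a user buffer,
           * but it can ask for many MBs at a time, so the per-call cost is
           * amortized and kernel read-ahead keeps the device busy. */
          char *buf = malloc(8 << 20);                /* 8 MiB per call */
          ssize_t n;
          while ((n = read(fd, buf, 8 << 20)) > 0) {
              /* ... process buf[0..n) ... */
          }
          free(buf);

          /* Approach 2: mmap(). No copy into a user buffer, but every first
           * touch of a non-resident page takes a page fault; the madvise()
           * hint asks the kernel to read ahead aggressively for us. */
          char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (map == MAP_FAILED) { perror("mmap"); return 1; }
          madvise(map, st.st_size, MADV_SEQUENTIAL);

          unsigned long sum = 0;
          for (off_t i = 0; i < st.st_size; i++)
              sum += (unsigned char)map[i];           /* pages fault in here */

          printf("%lu\n", sum);
          munmap(map, st.st_size);
          close(fd);
          return 0;
      }
      ```

      (MAP_POPULATE can prefault the whole mapping up front, trading the per-page faults for a longer mmap() call.)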



      • #73
        Originally posted by tildearrow View Post

        Must be an old, modified, or non-stock version of Windows, because the latest stock Windows 10 takes forever to start.
        Not really. On my machine, it takes 5 seconds to get to GRUB; from there to the login screen it takes 1 second on Linux and 10 seconds on Windows. That might seem like a big difference, but after the login screen Linux takes 10 seconds to get to the user desktop, whereas Windows takes 3 seconds.



        • #74
          Boot time is crucial for Linux users because they can't use sleep, since Linux hangs afterwards.
          Last edited by HEL88; 10 July 2022, 06:16 AM.



          • #75
            Originally posted by birdie View Post

            Windows overall is just 4% slower in benchmarks than Ubuntu 22.04 + Linux 5.18. Nothing to worry about for anyone.

            At the same time this comparison doesn't show that in Windows all web browsers and most video players support HW accelerated video decoding and proper power management out of the box which is what people will actually appreciate, not some surface-level improvements for various benchmarks most people never run.

            What do people actually care about? How fast their web browser works. How fast their system boots. How fast they can perform calculations in Excel. Wait, there's no Excel in Linux. Whether their hardware works without issues. Overall on the desktop this is far more important than performance in benchmarks.
            Birdie be like:
            - something good happens in Linux Desktop world
            Birdie: ohhh, your shitty linux cannot into Alder Lake unlike Windows 11!!!!
            - Phoronix provides benchmarks that show that Linux works just fine on Alder Lake
            Birdie: nooo, those benchmarks show nothing, your linux is still shit!!!!

            I can point out other messages where you deny facts about Windows behaving worse than Linux, waving them off with "Windows works flawlessly for me".

            I've not heard of a single person on Windows having troubles with it. It just works. What's more, it's worked with zero issues for over a decade for everyone I know. Bugged shit could be in your brain.
            Can't confirm. W10 boots in fewer than 20 seconds from HDD on my PC. Again, people love to perpetuate myths.
            You are repeating the mistakes of Linux zealots who claim that Linux is perfect because it works for them.

            It is sad to see that you make valid, sound points about Linux on your website, but act like an MS fan here.
            Last edited by Ermine; 10 July 2022, 10:54 AM.



            • #76
              Originally posted by coder View Post
              If you have enough open files, read-ahead might be memory-limited, but usually not. Last I looked into it (a while ago) it wasn't adaptive - just fixed-size.
              I am pretty sure it is adaptive.
              Last time I did a zstd compression, I could see with `free -hw` that my memory was filled with filesystem caches.

              Originally posted by coder View Post
              It's basically like swapping, except you're swapping in/out chunks of a specific file rather than having your normal process memory getting swapped in/out to a swap device or swapfile.

              As for whether it's more efficient, I'd say it depends on your access pattern. If you're just doing linear reads, then making a read call can operate on more than a page of data at a time. If you rely on mmap(), it's entirely possible that you get a page fault and have to block on each new page you access - I don't know if it tries to do any read-ahead. Even if it does read-ahead, you'll still be faulting every 128 kB or whatever the default value is, compared with being able to make explicit calls to read MBs at a time.
              While I haven't tried it, I am pretty sure that it would trigger read-ahead just like read() does, though I am not sure whether it will set up the page mapping in the background for you after the page is read in.

              As for the page size, you can use huge pages, which significantly reduce faulting.

              By default, pages on almost all OSes (including Windows) are 4 KiB, but with huge pages you can get pages that are 2 MiB or even 1 GiB large.
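
              Something like this (untested sketch; anonymous memory only, since huge-page support for file-backed mappings is much more limited on Linux):

              ```c
              /* Asking Linux for huge pages on an anonymous mapping.
               * MAP_HUGETLB needs pages reserved up front (e.g. via
               * /proc/sys/vm/nr_hugepages); MADV_HUGEPAGE merely hints
               * that transparent huge pages are welcome. */
              #define _GNU_SOURCE
              #include <stdio.h>
              #include <string.h>
              #include <sys/mman.h>

              int main(void)
              {
                  size_t len = 1UL << 30;             /* 1 GiB, illustrative */

                  /* Explicit 2 MiB huge pages; fails if none are reserved. */
                  void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                  if (p == MAP_FAILED) {
                      /* Fall back to 4 KiB pages and hint for transparent
                       * huge pages instead. */
                      p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                      if (p == MAP_FAILED) { perror("mmap"); return 1; }
                      madvise(p, len, MADV_HUGEPAGE);
                  }

                  /* Touching 1 GiB takes ~512 faults with 2 MiB pages
                   * versus ~262144 with 4 KiB pages. */
                  memset(p, 1, len);
                  munmap(p, len);
                  return 0;
              }
              ```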



              • #77
                Originally posted by carewolf View Post

                Not really. On my machine, it takes 5 seconds to get to GRUB; from there to the login screen it takes 1 second on Linux and 10 seconds on Windows. That might seem like a big difference, but after the login screen Linux takes 10 seconds to get to the user desktop, whereas Windows takes 3 seconds.
                Using an SSD? These numbers were for HDD



                • #78
                  Originally posted by tildearrow View Post

                  Using an SSD? These numbers were for HDD
                  Yes, an SSD. Just saying that without too much crap installed, Win10 is pretty similar to Linux, just with a difference in how much is loaded before and after login. Of course, if I used Windows more, I might have more crap installed slowing it down; I see that on corporate installations all the time, with stuff popping up for a minute after login.



                  • #79
                    Originally posted by coder View Post
                    Right, at least if you don't take the trouble to use fadvise.
                    For those watching: the Win32 approach to this is that you pass your expected use pattern at file open time. It's easy to think that "it doesn't matter", since caching is pretty lightweight stuff to implement, but by the time you're reading files significantly larger than the machine's available RAM, it adds up to a LOT of bookkeeping for what you already know is no gain, so regardless of how "easy" it is, it's still all wasted effort. Waste even a small amount of time a large *number* of times and it matters.
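
                    e.g. (untested, just to show the shape of the two APIs):

                    ```c
                    /* Win32 declares the access pattern when the handle is
                     * created; POSIX hints after open(). */
                    #ifdef _WIN32
                    #include <windows.h>

                    HANDLE open_for_linear_scan(const char *path)
                    {
                        /* The cache manager sees the hint from the very first read. */
                        return CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                           OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
                    }
                    #else
                    #include <fcntl.h>

                    int open_for_linear_scan(const char *path)
                    {
                        int fd = open(path, O_RDONLY);
                        if (fd >= 0) {
                            /* The hint arrives as a separate call after the fact;
                             * len == 0 means "to end of file". POSIX_FADV_DONTNEED
                             * later lets you drop pages you know you'll never
                             * revisit, skipping the useless LRU bookkeeping. */
                            posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
                        }
                        return fd;
                    }
                    #endif
                    ```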

                    What I was thinking of though was the *sequential* indicator - hence "readahead", though perhaps calling it *prefetch* would have made things clearer. In nearly all typical file processing, especially on multicore CPUs, you have at least some number of cycles sitting around waiting for something to do, and a medium that is similarly idle. On something with a big.LITTLE design in particular, the odds that the CPU can prefetch the next few blocks of input "for free" are *extremely* high, even if all the P cores are busy, because the E cores are only being used for background work.

                    You can argue that's a scheduling issue at heart, given the silicon design, and I think that might be what NobodyXu is trying to say: it's just that, with all the scheduling issues Linux has had over the years even on homogeneous cores, it's probably not really descriptive enough for scenarios like that one.



                    • #80
                      Originally posted by arQon View Post
                      You can argue that's a scheduling issue at heart, given the silicon design, and I think that might be what NobodyXu is trying to say: it's just that, with all the scheduling issues Linux has had over the years even on homogeneous cores, it's probably not really descriptive enough for scenarios like that one.
                      Shouldn't the scheduler be among the most optimized parts of the kernel by now, given that the Top500 relies on it exclusively?

