Intel i9-12900K Alder Lake Linux Performance In Different P/E Core Configurations


  • #91
    Originally posted by smitty3268 View Post

    A much better "sweet spot" would be to just buy a 12700K. Mostly the same performance with much better power use and cost.
    Not true, I have already debunked this.

    Originally posted by Sonadow View Post
    Fact remains that for all the benchmarks Michael has done about Linux having better performance over Windows, they simply don't carry forward to real-world computing. Till now nobody can provide a reasonable explanation as to why Windows boots, launches programs and generally responds to application input faster than Linux on the same hardware, especially on low-power hardware like Atoms.
    Do you have any data to back this up? Because my experience is the total opposite of yours.

    • #92
      Originally posted by Sonadow View Post
      Which is the most basic principle that nobody seems to get.
      What about this principle do you believe that people don't get?

      Originally posted by Sonadow View Post
      If an application is in focus, it means that the user intends to use it right there and then. There is no reason it should get shafted by assigning it to a lower-priority queue or an E core in Alder Lake's case.
      What if you are rendering a scene or encoding video in the background while working on your report in Word or reading something in the browser? What if your workload consists of multiple processes that exchange data and all require a lot of CPU power, like playing and streaming a video game?

      Originally posted by Sonadow View Post
      This has been the default behavior for Windows since Windows Vista, unlike Linux where every user application gets assigned a nice 0, regardless of the amount of resources and focus time it gets.
      Niceness has nothing to do with this. The scheduling problem is about deciding how to allocate CPU resources on the fly. The decision logic must be accurate, fast, and account for the cost of its own calculations. This is not an easy problem, and you often run into situations where you'd have to know the future in order to make a sane decision.
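
      To make the niceness point concrete, here is a minimal sketch (my own illustration, nothing Alder Lake specific): a Linux process can lower its own priority with setpriority(), which only shrinks its share of CPU time under the fair scheduler; it says nothing about which kind of core the task ends up on. The niceness value 10 is just an illustrative choice.

          #include <stdio.h>
          #include <string.h>
          #include <errno.h>
          #include <sys/resource.h>

          int main(void)
          {
              /* Raise our own niceness to 10: we now get a smaller share of
               * CPU time when competing with nice-0 tasks, but the kernel is
               * still free to place us on any core, P or E alike. */
              if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
                  fprintf(stderr, "setpriority: %s\n", strerror(errno));
                  return 1;
              }

              errno = 0;  /* getpriority() can legitimately return -1 */
              printf("current niceness: %d\n", getpriority(PRIO_PROCESS, 0));
              return 0;
          }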

      Originally posted by Sonadow View Post
      Right, a production-use scheduler that Intel and Microsoft worked on for Intel's own hardware on Windows, and which is currently in widespread deployment, is inferior to the Linux scheduler,
      Are you implying that the Linux CPU scheduler is not widely used in production?

      Originally posted by Sonadow View Post
      so much so that the inferior option actually works properly right now while the Intel developers have to keep the code disabled in the Linux scheduler for various reasons.
      What code are you talking about here?

      • #93
        Originally posted by Anux View Post

        Do you have any data to back this up? Because my experience is the total opposite of yours.
        I have been running Debian with a custom-built kernel and Mesa on my Apollo Lake Atom laptop for the last three years.
        Two weeks ago I threw Debian out and put Windows 11 on it. The difference in performance is immediately noticeable. Web browsers and other heavy applications like productivity suites no longer randomly stall for a minute when scrolling through >20 tabs or multiple pages in a docx file loaded with lots of images, photos and tables.

        • #94
          Originally posted by Sonadow View Post

          Two weeks ago I threw Debian out and put Windows 11 on it. The difference in performance is immediately noticeable.
          That's because you are a Linux newbie and have no idea what you are doing. Not a Linux problem. That's a Sonadow/birdie/whatever-your-other-usernames-are problem.

          • #95
            Originally posted by Sonadow View Post

            And yet Alder Lake performs much better on Windows than it does in Linux.
            This may be true, but it doesn't have anything to do with the scheduler, because Windows 10 (which doesn't have the new scheduler specifically for Alder Lake) also beats Linux; this seems to be more due to processor support.


            Originally posted by CardboardTable View Post

            Intel is not trying to provide an "automagic" solution.

            Developers can still choose to directly schedule workloads onto each processor type (or ideally affinity hint the OS as to what the workload type is), see this:
            "However, it may be more optimal to run background worker threads on the Efficient-cores. The API references in the next section lists many of the functions available, ranging from those providing OS level guidance through weak affinity hits, such as SetThreadIdealProcessor() and SetThreadPriority(), through stronger control like SetThreadInformation() and SetThreadSelectedCPUSets(), to the strongest control of affinity using SetThreadAffinityMask()."

            The idea is that both the software and the hardware (Thread Director) are providing hints to the OS (Windows in this case) and the Windows scheduler matches up the workload to the right core; Intel isn't forcing any sort of automagic scheduling (again, Thread Director only gives hints to the OS about the current state of the cores).

            See the diagram and description in this section: "Intel® Thread Director and Operating System Vendor (OSV) Optimizations for the Performance Hybrid Architecture"

            And again to reiterate, a developer can also still choose hard affinities if they want "through stronger control like SetThreadInformation() and SetThreadSelectedCPUSets(), to the strongest control of affinity using SetThreadAffinityMask()".

            All of the quotes are from this intel developer guide: https://www.intel.com/content/www/us...per-guide.html
            I actually don't disagree with this; my point is precisely that to get any real, measurable result you DO need to give affinity hints to the OS. The issue is that Intel has been pushing the idea that the scheduler will solve these issues without requiring processor affinity, which is what I am calling snake oil.

            It's also an obviously difficult problem because programs have to be coded specifically to take advantage of the affinity hints, so it's not surprising that Intel is pushing this; they don't want to admit it's a lot of work to get a tangible benefit out of it.
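
            To make that concrete, here is a minimal sketch (my own illustration, not code from the Intel guide) of the explicit control the quoted section describes. The processor numbers are arbitrary assumptions standing in for E-cores; a real program would have to discover them through topology enumeration, and background_worker is just a hypothetical placeholder.

                #include <windows.h>

                /* Hypothetical background worker: heavy but latency-insensitive work. */
                static DWORD WINAPI background_worker(LPVOID arg)
                {
                    (void)arg;
                    /* ... long-running encode/compress work would go here ... */
                    return 0;
                }

                int main(void)
                {
                    HANDLE worker = CreateThread(NULL, 0, background_worker, NULL,
                                                 CREATE_SUSPENDED, NULL);
                    if (!worker) return 1;

                    /* Weak hint: suggest a preferred ("ideal") processor; the
                     * scheduler may still run the thread elsewhere. */
                    SetThreadIdealProcessor(worker, 8);

                    /* Lower the thread's priority so foreground work wins contention. */
                    SetThreadPriority(worker, THREAD_PRIORITY_BELOW_NORMAL);

                    /* Strongest control: hard-pin the thread to processors 8-11
                     * (mask bits 8..11); the scheduler will not move it off them. */
                    SetThreadAffinityMask(worker, (DWORD_PTR)0xF << 8);

                    ResumeThread(worker);
                    WaitForSingleObject(worker, INFINITE);
                    CloseHandle(worker);
                    return 0;
                }

            The weak hints (ideal processor, priority) still leave the Windows scheduler and Thread Director free to override the placement; only the affinity mask at the end actually forces it.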

            • #96
              Originally posted by Sonadow View Post

              I have been running Debian with a custom-built kernel and Mesa on my Apollo Lake Atom laptop for the last three years.
              Two weeks ago I threw Debian out and put Windows 11 on it. The difference in performance is immediately noticeable. Web browsers and other heavy applications like productivity suites no longer randomly stall for a minute when scrolling through >20 tabs or multiple pages in a docx file loaded with lots of images, photos and tables.
              That sounds like an OOM problem that the multi-generational LRU patches might alleviate. Did you try that?
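
              For anyone wanting to try it: on a kernel built with the MGLRU series the feature is toggled through a sysfs knob. The path below follows the interface as it was later mainlined behind CONFIG_LRU_GEN and may differ on older revisions of the patch set; a minimal sketch, assuming root privileges.

                  #include <stdio.h>

                  int main(void)
                  {
                      /* "y" turns on all multi-generational LRU components. */
                      FILE *f = fopen("/sys/kernel/mm/lru_gen/enabled", "w");
                      if (!f) { perror("lru_gen not available"); return 1; }
                      fputs("y\n", f);
                      return fclose(f) ? 1 : 0;
                  }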

              • #97
                Originally posted by perpetually high View Post

                That's because you are a Linux newbie and have no idea what you are doing. Not a Linux problem. That's a Sonadow/birdie/whatever-your-other-usernames-are problem.
                Amazing that the person who said he never wanted to quote me again decided to quote and comment on my post. What credibility.

                And unlike a certain person who claims to be a power luser and an 'enthusiast' yet doesn't even know how to compile the X server, Mesa or a web browser and its dependencies and only has enough intelligence to use prebuilt binaries, I have been building my own kernels and drivers and recompiling the applications I use on Linux for the past 13 years for maximum optimization.

                • #98
                  Originally posted by MadCatX View Post

                  That sounds like an OOM problem that the multi-generational LRU patches might alleviate. Did you try that?
                  It's not an OOM problem at all. The laptop has access to 8GB of memory and 4GB of swap, and even when the applications were stalling, free never reported more than 5GB in use at any time. And it was a 5.15 kernel, not the dinosaur 5.10 kernel that got bundled with Bullseye.

                  Lastly, I never perform in-place upgrades. The upgrade from Buster to Bullseye was done with a full format and install with Debian's netinst image.
                  Last edited by Sonadow; 21 December 2021, 06:23 AM.

                  • #99
                    Originally posted by Sonadow View Post
                    And unlike a certain person who claims to be a power luser and an 'enthusiast' yet doesn't even know how to compile the X server, Mesa or a web browser and its dependencies and only has enough intelligence to use prebuilt binaries, I have been building my own kernels and drivers and recompiling the applications I use on Linux for the past 13 years for maximum optimization.
                    Because I don't *need* to compile the X server. LOL! This is what I meant: you guys are doing all that extra shit just to say you can do it, with no real performance impact to show for it. Bravo?

                    And what skill does it take to compile? Are you even writing the code? That's where the complexity is. Not figuring out how to compile it. I really just have no respect for you or your existence. Take the best of care. I decided to quote you to expose you, because you are a sham and a fraud. Having said that, take the best of care. And do way, way better.

                    • Originally posted by Sonadow View Post

                      Right, because you have nothing to argue?

                      Fact remains that for all the benchmarks Michael has done about Linux having better performance over Windows, they simply don't carry forward to real-world computing. Till now nobody can provide a reasonable explanation as to why Windows boots, launches programs and generally responds to application input faster than Linux on the same hardware, especially on low-power hardware like Atoms.
                      ????

                      They do not carry over... uhm, then why are you here commenting on and reading benchmark articles? Move on, there is nothing of interest for you here.
                      Go and buy whatever CPU you want or get hooked on.
