
Intel i9-12900K Alder Lake Linux Performance In Different P/E Core Configurations


  • Originally posted by Anux View Post
    Maybe that was a Debian problem or an I/O scheduler problem? Old hardware generally runs smoother on Linux (provided there is proper GPU support), but my Debian days are long gone and I can only speak for Arch Linux.
    If it were a Debian problem it would have gone away after building an up-to-date kernel, and up-to-date Mesa.



    • Sonadow

      I compile all the time. If you think that takes skill, it doesn't; it takes competence, that's all.

      Go put on your resume "Knows how to compile software" and let me know how that works out for you.



      • Originally posted by Sonadow View Post

        Typical remark of someone who has never done so and assumes it takes no skill. Go ahead, build something big like LibreOffice and see for yourself. You aren't even going to get past the first compiler error, because you don't have the knowledge to do so.

        And try telling Gentoo users that compiling their stuff is pointless. Go right ahead. You have shown nothing except your own ignorance and incompetence outside of building a kernel and running pretty benchmarks.
        Gentoo users redefine the compiler/linker flags to tune code to their hardware.
        If you compile from source on Debian without fine-tuning the flags, you achieve nothing in terms of performance -- though it surely remains an interesting thing to do from a knowledge POV.
        BTW, Debian is known for being conservative and favoring stability. You should try another distro for desktop usage (or heavily modify it with specific kernels).



        • Originally posted by Grinness View Post

          Gentoo users redefine the compiler/linker flags to tune code to their hardware.
          If you compile from source on Debian without fine-tuning the flags, you achieve nothing in terms of performance -- though it surely remains an interesting thing to do from a knowledge POV.
          BTW, Debian is known for being conservative and favoring stability. You should try another distro for desktop usage (or heavily modify it with specific kernels).
          -march=native not good enough for you?



          • Originally posted by Sonadow View Post

            -march=native not good enough for you?
            I am fine with Arch; I compile what I need (e.g. ROCm, PyTorch and related packages) with default flags.
            It is you who is complaining/showing off that being able to compile gives you superior understanding ...
            Do what you want, I don't care



            • Originally posted by Sonadow View Post

              Which was the point I was trying to make. Windows 10 does not have a scheduler specifically for Alder Lake, but Microsoft has experience with big.LITTLE architectures because of their work on Windows RT and the ARM64 versions of Windows 10, where all hardware uses big.LITTLE. It's practically a foregone conclusion that this experience factored into their continuous work on the scheduler, to the point where Windows 10 for x64 is able to handle Alder Lake as-is.
              Well, firstly, I don't think there is much crossover between ARM64 big.LITTLE and Alder Lake, or at least not enough for it to be useful.

              Secondly, the mere fact that Windows 10 beats Linux on Alder Lake probably demonstrates that we are looking at differences between the general schedulers and/or CPU support, rather than hybrid-core scheduling specifically.

              And it's not like Linux doesn't have experience with big.LITTLE; pretty much all Android phones follow a big/little design.



              • I'm also not sure there is much to gain by compiling the kernel yourself, apart from newer drivers; at least not if you don't heavily tailor it to a specific task. I bet one can find a benchmark on that here on Phoronix.
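
                For reference, "heavily tailor" usually means trimming the config to the machine; a sketch of the common recipe (assumes a kernel source tree is present; the version directory is hypothetical):

```shell
# Inside a kernel source checkout (version hypothetical):
cd linux-5.15
# Keep only the modules currently loaded on this machine:
make localmodconfig
# Drop debug info for a much smaller, faster build:
scripts/config --disable DEBUG_INFO
make -j"$(nproc)"
sudo make modules_install install
```

                Without steps like these (or tuned CFLAGS), a self-built kernel is mostly just a newer kernel.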



                • Originally posted by Sonadow View Post
                  It's not an OOM problem at all. The laptop has access to 8GB of memory and 4GB of swap, and even when the applications were stalling free never reports more than 5GB in use at any time. And it was a 5.15 kernel, not the dinosaur 5.10 kernel that got bundled with Bullseye.
                  Was it 5 GiB with or without the page cache? Swap is only helpful if you have a lot of pages that you really don't need for a long time. In most cases you might actually be better off without swap and having the system kill the process that runs out of memory. Managing to keep the interactive processes in RAM is one area where Windows does a lot better than Linux.
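
                  A quick way to see how much of a "used" figure is really reclaimable cache (a sketch; it reads /proc/meminfo, so Linux-only):

```shell
# MemFree ignores the page cache; MemAvailable is the kernel's estimate of
# what can be allocated without swapping, counting reclaimable cache.
awk '/^(MemTotal|MemFree|MemAvailable|Cached):/ { printf "%-14s %8.2f GiB\n", $1, $2/1048576 }' /proc/meminfo
```

                  If MemAvailable stayed large while the apps stalled, the stalls were likely not memory pressure at all.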

                  Originally posted by Sonadow
                  Which was the point I was trying to make. Windows 10 does not have a scheduler specifically for Alder Lake, but they have experience with big.LITTLE architectures because of their work on Windows RT and the ARM64 versions of Windows 10, where all hardware uses big.LITTLE.
                  I seriously doubt this is the case. The Win10 scheduler has had numerous issues with far simpler CPU topologies; take the first Ryzens and Threadrippers, for example. What I think is more likely is that the simple scheduling logic of Win10 is less likely to take a wrong guess and mess things up. With predictable workloads that spawn a fixed number of threads doing the same kind of work all the time, it might work out better than attempts at sophisticated guesswork.

                  Linux on ADL still beats Win11 in tasks like physics simulations, DNN stuff and 3D rendering, because the scheduler just assigns work to CPUs and probably doesn't move things around very much. Things get dicey when you have asymmetric tasks that require a lot of CPU-to-CPU synchronization, but then again even Win11 doesn't seem to be conclusively better in this area either.
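
                  When the guesswork does go wrong, pinning is the blunt instrument that sidesteps the scheduler entirely. A sketch using util-linux's taskset (the 0-15 P-core numbering is an assumption for a 12900K; verify via /sys/devices/system/cpu/cpu*/topology/):

```shell
# Demo on CPU 0 so it runs anywhere; prints the effective CPU affinity:
taskset -c 0 sh -c 'grep Cpus_allowed_list /proc/self/status'
# On a 12900K, the P-cores with HT usually enumerate as 0-15 (assumption):
#   taskset -c 0-15 ./my_benchmark     # my_benchmark is hypothetical
# Re-pin an already running process:
#   taskset -cp 0-15 "$PID"
```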



                  • Originally posted by davidbepo View Post
                    Michael, you say it gives the option to enable AVX-512, but you don't specify whether you did or not. Could you clarify?
                    None of the tests in this article used AVX-512. See the article linked from there if you want AVX-512 ADL data. I was simply mentioning that AVX-512 becomes possible when all E cores are disabled. AVX-512 was out of scope for this article, especially with many of the workloads not being relevant to AVX-512; this article was just about the core/thread comparison.
                    Michael Larabel
                    https://www.michaellarabel.com/



                    • Originally posted by mdedetrich View Post
                      I am getting the impression that the biggest issue is Intel trying to provide a solution for something that, at least from my OS studies back at uni, is not really solvable, i.e. automagic scheduling on a big.LITTLE design that generally works better than the alternative. ...
                      There is no unsolvable issue here. It is just a mess that still needs sorting out. An OS can certainly decide, depending on its energy settings, to prefer performance cores over efficiency cores and vice versa, and can further migrate workloads automatically depending on whether they are compute- or I/O-bound. It does not have to be perfect, because there will always be edge cases. What should not be necessary is for users to go into their BIOS settings to control it, possibly disabling all their efficiency cores; they should either not have to bother with it at all, or at least be able to control it from within the OS. That alone would please the majority of users, and one can improve further from there.
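
                      Linux does in fact already expose that control from within the OS through sysfs; a sketch (treating cpu16 as an E-core is an assumption for a 12900K, and the writes need root):

```shell
# Which logical CPUs are currently online (unprivileged read):
cat /sys/devices/system/cpu/online
# Take a presumed E-core offline without touching the BIOS (root required):
#   echo 0 | sudo tee /sys/devices/system/cpu/cpu16/online
# ...and bring it back:
#   echo 1 | sudo tee /sys/devices/system/cpu/cpu16/online
```

                      What is still missing is the friendly, energy-profile-level knob on top, not the mechanism itself.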

