Intel i9-12900K Alder Lake Linux Performance In Different P/E Core Configurations


  • Grinness
    replied
    Originally posted by Volta View Post

    This could easily be done by distributions, but I'm not sure about their reasons for not doing so. Look below:



    Maybe in your 'nobody ever heard of' distribution. In Fedora you can do this from the system monitor, but to set a higher priority you have to provide the root password. This is because of 'obvious' security reasons.
    This is doable in Gnome System Monitor (Gnome 41.2 -- Arch): a user can set affinity and lower or raise priority per process; the bird-man, as usual, lives under a rock and/or has no clue.
    Last edited by Grinness; 22 December 2021, 04:57 AM. Reason: Typo: Gnome System Settings -> Gnome System Monitor
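    A minimal sketch of roughly what such a GUI tool does under the hood, using Python's standard os module on Linux. The PID and the CPU numbering (P-core hardware threads enumerated before the E-cores on an i9-12900K) are assumptions for illustration, not taken from the article:

    import os

    # Hypothetical PID of the process to adjust (placeholder for illustration).
    pid = 12345

    # Pin the process to logical CPUs 0-15 -- on a stock i9-12900K these are
    # usually the P-core hardware threads, with 16-23 being the E-cores
    # (enumeration assumed; check lscpu on the actual system).
    os.sched_setaffinity(pid, range(16))

    # Make the process "nicer" (lower its priority). The owning user may always
    # raise the nice value; lowering it (raising priority) needs CAP_SYS_NICE or
    # root, which is why the system monitor asks for a password.
    os.setpriority(os.PRIO_PROCESS, pid, 10)

    print(os.sched_getaffinity(pid), os.getpriority(os.PRIO_PROCESS, pid))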



  • Volta
    replied
    Originally posted by birdie View Post
    In Linux, on the other hand, we have the kernel by itself, the Xorg/WM or Wayland compositor by themselves, and the running applications. None of them are aware of one another.
    This could easily be done by distributions, but I'm not sure about their reasons for not doing so. Look below:

    This is further exacerbated by the fact that in Linux you can increase [decrease] your process priority ("niceness"), say to 19, but you can never lower it back to the original value, e.g. 0. This sounds almost idiotic when you think about it. Why can't you renice it back to 0? In the end you can simply restart the process and circumvent this "restriction".
    Maybe in your 'nobody ever heard of' distribution. In Fedora you can do this from the system monitor, but to set a higher priority you have to provide a password. This is because of 'obvious' security reasons.
    Last edited by Volta; 22 December 2021, 06:09 AM. Reason: user password not root
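    To illustrate the asymmetry being argued about, a minimal sketch using Python's os module, assuming an unprivileged user and a process that starts at the default niceness of 0:

    import os

    print(os.nice(0))    # read the current niceness without changing it

    # Any unprivileged user can make their own process nicer (lower priority)...
    print(os.nice(19))   # now 19, assuming it started at 0

    # ...but renicing it back down needs CAP_SYS_NICE (or a suitable
    # RLIMIT_NICE), so for an ordinary user this raises PermissionError:
    try:
        os.setpriority(os.PRIO_PROCESS, 0, 0)   # who=0 means the calling process
    except PermissionError as err:
        print("cannot lower niceness back to 0:", err)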



  • Volta
    replied
    Originally posted by birdie View Post

    Ah, so there are "right" and "wrong" Linux distros and only you know which one to choose.
    You can choose one of the hundreds of Linux distributions (which nobody has ever heard of), but it won't make any sense. Don't cry when someone chooses a broken copy of Windows from a torrent for a benchmark, ok?
    Last edited by Volta; 21 December 2021, 06:09 PM.



  • davidbepo
    replied
    Originally posted by Michael View Post

    None of the tests in this article were AVX-512. See the article linked from there if you want AVX-512 ADL data. I was simply mentioning that when all E cores are disabled, AVX-512 is possible. AVX-512 was out of scope for this article, especially with many workloads not being relevant for AVX-512; this article was just about the core/thread comparison.
    thanks



  • mangeek
    replied
    Fantastic benchmarks; thanks for doing this.

    Outside of 'benchmarks', I can totally see myself being happy with a 1P/4E/96EU casual computing machine where the kernel only throws consistently demanding threads onto the P core.



  • atomsymbol
    replied
    Originally posted by Michael View Post
    Haven't been able to reproduce, but now that you indicate an OpenBenchmarking connectivity problem.... If you try now does it work? Just flushed some firewall blocks in case something like that happened...
    Articles are now loading OK. Thanks.



  • CardboardTable
    replied
    Originally posted by mdedetrich View Post
    Intel has been pushing the idea that the scheduler will solve these issues without requiring processor affinity, which is what I am calling snake oil.
    I'm not sure where you see Intel pushing this.



  • Sonadow
    replied
    Originally posted by birdie View Post
    The saddest thing about the whole drama about ADL and its Linux support is that in Windows, UI, Explorer.exe and other system components are tightly connected and integrated, so the Windows kernel knows or gets hints about what applications are running in the foreground and it can adjust the process CPU cores affinity accordingly.

    In Linux, on the other hand, we have the kernel by itself, the Xorg/WM or Wayland compositor by themselves, and the running applications. None of them are aware of one another.
    Exactly. The option to have Windows prioritize foreground or background applications has existed in Windows since the days of Windows 2000. And it's entirely dynamic; users do nothing more than go to Advanced Settings and select a radio button.





  • skeevy420
    replied
    Originally posted by atomsymbol View Post

    By the time of the last upgrade of the dual-socket machine: Was the older dual-socket machine faster in multi-threaded workloads than a single multi-core CPU of a similar price?
    Yes.

    The PC before the dual-socket one had a Q6600, a 65nm quad core with the FSB 3.0GHz mod and 2x4GB of DDR2-800 (it started as a Core 2 Duo with 4GB of RAM), and I upgraded to an HP Pavilion T5500 that came with two X5550s (45nm, 2.5GHz, 4c8t) and 2x4GB of DDR3-1033. A few days later I upgraded the memory to 6x8GB of DDR3-1333. Core for core, the X5550 at 2.5GHz ran my games about the same as the Q6600 at 3.0GHz. Compile times were two to four times as good thanks to twice the cores and the addition of hyperthreading, so that was a win. Gaming-wise, not much of a difference. My GPU back then was an R7 260X... workhorse of a GPU that took me from Catalyst to Radeon to AMDGPU. Those were some fun days.

    Lightning is why I upgraded to that PC. I woke up and half the electronics in the house didn't work, and a $200 used T5500 was all my budget could afford. Thank goodness my GPU and HDDs still worked.

    I upgraded the X5550 to an X5660 (6c12t, 2.5GHz, 32nm) and from there I went to the X5687 (4c8t, 3.6GHz, 32nm), which is the fastest-clocked CPU in that family outside of the rare X5698 (2c4t, 4.4GHz, 32nm). All that happened in the span of three months. Going from 8 to 12 cores and from 45nm to 32nm felt like diminishing returns, so I went with faster CPUs with fewer threads, which had the most noticeable effect on my gaming since upgrading from the C2Q era... that and finally upgrading my R7 260X to an RX 580.

    Compiling between the X5660 and X5687 was very similar for the most part, and which was better varied by workload -- optimized multi-threaded stuff like the kernel preferred the slower 12c24t setup, while single-threaded and less-optimized multi-threaded stuff like Half-Life 2 preferred the faster 8c16t setup. Trying to figure all that out is what led me towards Phoronix.

    Pairing the X5687 with an RX 580 had me playing most games at 1080p60 until late 2019, when I started being too CPU-bound with games like Cyberpunk and Stellaris (two completely different games, to stress the point) to fully enjoy them. I used that PC right up until I could no longer repair it and was forced to build a new system.

    Now I'm on a Ryzen 5 4650G Pro (6c12t, 4.3GHz) with 2x16GB of DDR4-3700. The one thing I really, really missed when using the dual Xeons was having an iGPU, so having an iGPU was the highest priority for me when building this system. They're a godsend if you want a hardware-accelerated VM and don't want to bother with two dGPUs. I went with AMD because AM4 leaves me with plenty of upgrade paths. As you can tell from my PC history, I'll upgrade what I can and use it until it's no longer usable.



  • Anux
    replied
    Originally posted by Michael View Post

    Haven't been able to reproduce, but now that you indicate an OpenBenchmarking connectivity problem.... If you try now does it work? Just flushed some firewall blocks in case something like that happened...
    Everything works nice and fast, that did the trick. THX

