
Intel i9-12900K Alder Lake Linux Performance In Different P/E Core Configurations


  • #51
    Originally posted by birdie View Post

    Except when seasoned IT pros cannot even get Linux to work:
    Calling these benchmark-EXE double-clickers "IT pros" is a real stretch...



    • #52
      Originally posted by Volta View Post

      Oh, the nvidia fanboy chose another company for his fetish. Maybe because of the blurry DLSS failure that he promoted as a miracle, the best in the world and so on... It seems birdie is just an AMD hater.
      I am sure you are one of those "geniuses" who crank sharpening to 100%, turn off TAA and say "behold, my ultra-jagged antialiased image is the best".

      None of the reviewers say DLSS is blurry. It can produce ghosting in some games, but newer revisions of DLSS have improved that a great deal and show no ghosting in most games at all, to the point that it is reasonable to use DLSS even in the worst-case scenario (the F1 games, where the ghosting originally was indeed horrible). DLSS doesn't produce blur; it can produce ghosting the same way TAA can, and most of the time the two simply trade blows: is the game's built-in TAA better in terms of ghosting, or does DLSS provide better temporal antialiasing? The answer is that it depends. In games like Marvel heroes DLSS is far better, in F1 the built-in TAA is better, while in most games both implementations are equally good.



      • #53
        Originally posted by jrch2k8 View Post

        ok, it's not AMD bias but more like the lesser evil, birdie.

        AMD/Intel/nVidia will charge as much as they can get for a product for as long as possible? Fuck yes, it's a business, not a charity, and if you expect different, well, prepare to be disappointed.

        Then why does AMD get less shit than Intel/nVidia? Because those two have pulled every dark, dirty, anti-competitive cheat in the book for decades now, and as informed consumers that pisses us off, while AMD is usually on the receiving end of it.

        Also, AMD's business decisions tend to be more pro-consumer (like their whole open approach to everything).

        I also think it's possible that if AMD stays on top for too long they may develop the same anti-competitive behavior Intel and nVidia have, but even when AMD crushed Intel decades ago they didn't stoop that low, so I kinda trust them a bit more.

        Also, as a Linux user I'm completely happy losing 5%-15% IPC and some ray-tracing power for a few months to support a business model that gives me an open software stack and stays out of the way to let it grow naturally. (I know RADV is not fully AMD's work, but they didn't try anything dirty to stop it either.)
        AMD did pull a ton of dirty tricks; it's just that they are aimed more at AIBs and are fairly recent.

        The first dirty trick was setting the MSRP of AMD's reference models at basically zero profit margin. That meant that on launch day there were literally no AIBs selling cards at AMD's MSRP. People used to compare the $650 RX 6800 XT vs the $700 RTX 3080 as if AMD were theoretically cheaper. The problem was that on launch day, in all online shops (at least in Poland), the Nvidia 3080 was cheaper than the 6800 XT, because AMD's $650 was purely fictional (it still sort of is, because of the supply issues). If you were lucky you could get an Nvidia 3080 from AIBs for $700, but there was no way you could do the same with AMD's 6800 XT or 6800.

        The second dirty trick was the 5600 XT BIOS situation and not disclosing to generalist AIBs like MSI, Gigabyte and Asus that you need to validate the memory for the higher speeds... yuck. Somehow Sapphire, XFX and PowerColor knew.

        The third is maybe not a dirty trick, but the fact that AMD barely seems to want to resolve its supply issues. If you look at the Steam hardware survey, the most popular AMD card from RDNA2 sits at only 0.19%. Nvidia has 12 Ampere models that are each more popular than that, four of them are above 1%, and the most popular model is at 1.75%. If I summed up all the Ampere models they would probably come pretty close to 10% of all Steam cards, while AMD's wouldn't reach even 1%. Nvidia at least tried to combat the supply issues: they used Samsung so as not to overwhelm TSMC, they released mining-only cards built from dies that are defective as normal GPUs, they introduced LHR and then improved it so it can no longer be bypassed, and they didn't put ridiculous prices on low-end models that even at MSRP are worse than previous-generation models. Of course the execution has its own issues and I have complaints about it too, but hey, at least they tried. From AMD: nothing, and no improvement in sight.

        Fourth: generally weaker quality-assurance standards. Issues with USB ports or PCIe Gen 4 on AMD motherboards are still popular topics, but not so much on the Intel side. Intel typically has really large reserves for undervolting/overclocking, while AMD often has close to none. I still remember a friend's 2600X that was unstable at stock on two different motherboards with two different sets of RAM and only became stable with higher voltage or lower clocks.

        I am not saying Intel and Nvidia don't sin; they do. But claiming AMD doesn't have anti-competitive practices is just a joke.

        While historically Intel was the dirtiest of them all, nowadays Intel is actually the most decent of the bunch. Due to the silicon shortage they allowed other companies to produce chips on their process node, which never happened before; they are investing in making their own GPUs, which could help us in the future; they finally provided a good alternative to DLSS; and with Alder Lake and their 10nm process (which is comparable to a 7nm node elsewhere) they got their act together and came back. And Intel CPUs never had supply issues. Imagine if Intel were like AMD and Nvidia: CPU prices would have skyrocketed just like GPU prices did.
        Last edited by piotrj3; 20 December 2021, 04:39 PM.



        • #54

          Originally posted by Markopolo View Post
          ... the first few posts were clearly trolls ...
          Sorry, but you are wrong.

          What users expect from a CPU is that it runs all of their software well, not just some of it. These benchmarks show that one has to look much more closely at Alder Lake than before, and that one may be forced to tweak BIOS settings on Alder Lake systems to get the most out of them.

          This is definitely not what users want. Geeks and nerds may enjoy experimenting with BIOS settings, but the majority of users and developers expect that it just works, and that it works well. So yes, it is a mess Intel has created here, and it is left to others to sort it out.



          • #55
            Originally posted by birdie View Post

            Another one with "the mess". Are you sure you're using the right word here? HW Unboxed actually praised Alder Lake and put it above the Ryzens in their "Best 2021 CPUs":

            Well, if you look at gaming performance with the E-cores enabled, it is a mess, since you are better off disabling them to get more performance...
            What do you call a processor that behaves like this? A beautiful idea? A wonderful solution?

            Again, it would have been great to see a direct comparison with the 5900X: 24 AMD threads vs 24 Intel threads (8P+HT+8E).

            Note: I am happy that a new CPU (Intel) is 'better' than an older CPU (AMD)... let's see when the 'new' becomes 'old'.



            • #56
              Originally posted by birdie View Post
              I see an impressive uArch which has the world's best x86 single- and multi-threaded performance (in terms of performance per core). The temps are a bit high, but that can easily be fixed by setting PL1/PL2 limits. A sweet spot for the 12900K is around 170-190W.
              A much better "sweet spot" would be to just buy a 12700K: mostly the same performance with much lower power use and cost.
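For anyone who keeps the 12900K and wants to try the power-limit route from the quoted post instead, PL1/PL2 can usually be capped at runtime through the kernel's intel_rapl powercap interface. A dry-run sketch that only prints the writes it would perform (the powercap path and constraint indices are assumptions: constraint 0 is typically the long-term PL1 and constraint 1 the short-term PL2; check the constraint_*_name files on your own board before applying anything):

```shell
# Dry-run: print the powercap writes that would cap PL1/PL2 at 190 W.
# The interface takes microwatts; pipe the output through "sudo sh"
# to actually apply it.
RAPL=/sys/class/powercap/intel-rapl:0
UW=$((190 * 1000000))
for c in 0 1; do
    echo "echo $UW > $RAPL/constraint_${c}_power_limit_uw"
done
```

These writes do not survive a reboot, so they are safe to experiment with; a BIOS-level PL1/PL2 setting is the persistent alternative.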



              • #57
                Originally posted by Grinness View Post

                Well, if you look at gaming performance with the E-cores enabled, it is a mess, since you are better off disabling them to get more performance...
                What do you call a processor that behaves like this? A beautiful idea? A wonderful solution?

                Again, it would have been great to see a direct comparison with the 5900X: 24 AMD threads vs 24 Intel threads (8P+HT+8E).

                Note: I am happy that a new CPU (Intel) is 'better' than an older CPU (AMD)... let's see when the 'new' becomes 'old'.
                I've seen tons of reviews, and disabling the E-cores may at most net you a few % of an FPS boost. I see no "mess" here at all; it's more that the W10/W11 schedulers could be a tad better.

                On the other hand, Linux cannot properly manage ADL CPUs at all, so I'm not sure what it is people are discussing here. Linux users currently should just forget about them altogether. So much for the "stellar" hardware support in Linux, except when it doesn't support something as basic as a 100% x86-64-compatible CPU.
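That said, if you want to run the E-cores-off experiment on Linux yourself, the cores can be taken offline at runtime through the CPU hotplug sysfs interface, no BIOS trip needed. A dry-run sketch that just prints the writes it would perform (the CPU numbering is an assumption: on a 12900K the sixteen P-core threads usually enumerate as CPUs 0-15 and the eight E-cores as 16-23; verify with `lscpu --extended` first):

```shell
# Dry-run: print the sysfs writes that would take the E-cores offline.
# Pipe the output through "sudo sh" to actually apply it, and write 1
# instead of 0 to bring the cores back after the benchmark run.
for cpu in $(seq 16 23); do
    echo "echo 0 > /sys/devices/system/cpu/cpu${cpu}/online"
done
```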

                Originally posted by smitty3268 View Post

                A much better "sweet spot" would be to just buy a 12700K. Mostly the same performance while much better power use and cost.
                I totally agree with that, and so do many reviewers. The 12900K is for ultimate overclockers and for people seeking performance no matter what. :-)
                Last edited by birdie; 20 December 2021, 07:22 PM.



                • #58
                  Originally posted by davidbepo View Post

                  they were very successful at showing that...
                  I am happy those lads made it; good for them. I got bored listening to their videos on the matter.



                  • #59
                    From what I can see, it looks like a hyper-threaded P-core has about the same performance as a P-core + E-core combo. While there might be some performance left on the table due to suboptimal scheduling, I wonder if a chip with, say, 12 P-cores wouldn't be a better design than this hybrid mess. It probably wouldn't use much more power than the 8P+8E chip, it wouldn't need all kinds of software trickery to work properly, and AVX-512 would work out of the box. And considering that Sapphire Rapids will have no E-cores, it is clearly doable. I get that Intel needs something to compete against ARM in the mobile segment, but this hybrid-core idea looks like a step in the Itanium direction. It *might* be more interesting in laptops.



                    • #60
                      Originally posted by birdie View Post

                      I've seen tons of reviews, and disabling the E-cores may at most net you a few % of an FPS boost. I see no "mess" here at all; it's more that the W10/W11 schedulers could be a tad better.

                      On the other hand, Linux cannot properly manage ADL CPUs at all, so I'm not sure what it is people are discussing here. Linux users currently should just forget about ADL CPUs altogether. So much for the "stellar" hardware support in Linux, except when it doesn't support something as basic as a 100% x86-64-compatible CPU.
                      Back into your rat hole.

