Linux 5.16's New Cluster Scheduling Is Causing Regression, Further Hurting Alder Lake


  • #51
    Originally posted by jaxa View Post

    P.S. You technically can't add a Threadripper to a Ryzen system, because Threadripper uses a different socket and different motherboards.
    That's why I wrote that I wanted to be able to.



    • #52
      Originally posted by mSparks View Post
      And I still don't get the point of all this messing around writing new software to support something that draws so much more power just to roughly equal AMD's performance from last year.
      A drowning man will grasp at straws, as they say.



      • #53
        Originally posted by mSparks View Post
        If what you said were true, they could get the same single-thread performance and nearly twice the multi-thread performance at a fraction of the power consumption with one P-core and 21 E-cores instead of 8 P-cores and 8 E-cores.

        But what you said isn't true, so the 12900K has 8 P-cores and 8 E-cores and still draws a whopping 250W at 100% utilisation.

        It also seems unlikely that any fractional performance lead they have over 12-month-old AMD parts will hold once AMD also moves to DDR5.

        And I still don't get the point of all this messing around writing new software to support something that draws so much more power just to roughly equal AMD's performance from last year.
        "Same single thread performance" is misleading. Putting in only a single P-core means you get to run 1-2 non-parallelizable threads at the highest possible performance. They have to add more than that in a gaming-oriented CPU. But it looks like Intel has no plans to add more than 8 P-cores in upcoming generations, while they will repeatedly double the E-core count.

        The 12900K flagship draws 250+ Watts in part so it can beat AMD in more benchmarks, and also because the kind of people who will buy the 12900K don't care about the high power consumption. AMD sees this and will be boosting power consumption on the AM5 socket (raising TDP from 105W to 170W), possibly only for the flagship CPU.

        This is a starting point for Intel. They are playing catch-up with AMD. But in a couple of years, AMD will also start using their own heterogeneous architectures. Within a few years, it will be in almost all new x86 PCs.
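
        (As an aside, on a hybrid-aware kernel you can see the P-core/E-core split for yourself. Below is a minimal sketch in C; it assumes the kernel exposes the hybrid PMU devices cpu_core and cpu_atom under sysfs, as recent kernels with Alder Lake support do, and the exact paths may differ on other systems.)

        /* Sketch: print which logical CPUs are P-cores vs E-cores.
           Assumes /sys/devices/cpu_core/cpus and /sys/devices/cpu_atom/cpus
           exist (hybrid-aware kernel); not guaranteed on every system. */
        #include <stdio.h>

        static void print_cpu_list(const char *label, const char *path)
        {
            char buf[256];
            FILE *f = fopen(path, "r");
            if (!f) {
                printf("%s: not exposed on this system\n", label);
                return;
            }
            if (fgets(buf, sizeof(buf), f))
                printf("%s: %s", label, buf);   /* e.g. "0-15" */
            fclose(f);
        }

        int main(void)
        {
            print_cpu_list("P-cores", "/sys/devices/cpu_core/cpus");
            print_cpu_list("E-cores", "/sys/devices/cpu_atom/cpus");
            return 0;
        }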

        Originally posted by geearf View Post
        That's why I wrote that I wanted to be able to.
        Oops, sorry.

        The AM4 socket isn't going past 16 cores, but you can expect to see at least 24-32 cores of various types on AM5.



        • #54
          Originally posted by jaxa View Post

          Oops, sorry.

          The AM4 socket isn't going past 16 cores, but you can expect to see at least 24-32 cores of various types on AM5.
          No worries.
          By the time AM5 gets that many, whichever socket Threadripper is on by then will get even more, so I'll still be jealous.
          Of course I have no need for that many cores, but still...
          Actually, maybe it would be good to be able to pin certain things to a core; that way, when software goes into an infinite loop or whatever, it doesn't cause any problem outside of that one tiny core. But maybe that's a stupid idea when we already have cgroups.
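
          (If you did want that, here is a minimal sketch in C using sched_setaffinity(2); the choice of CPU 3 is arbitrary. From the shell, taskset -c 3 <command> does the same thing, and cgroup cpusets generalize it to whole groups of processes.)

          /* Pin the current process to a single core so a runaway
             loop can only burn that one core. */
          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              cpu_set_t set;
              CPU_ZERO(&set);
              CPU_SET(3, &set);          /* allow CPU 3 only */

              if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                  perror("sched_setaffinity");
                  return 1;
              }
              printf("pinned to CPU 3, pid %d\n", (int)getpid());
              for (;;)
                  ;                      /* runaway loop stays on CPU 3 */
          }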



          • #55
            Originally posted by jaxa View Post

            "Same single thread performance" is misleading. Putting in only a single P-core means you get to run 1-2 non-parallelizable threads at the highest possible performance. They have to add more than that in a gaming-oriented CPU. But it looks like Intel has no plans to add more than 8 P-cores in upcoming generations, while they will repeatedly double the E-core count.

            The 12900K flagship draws 250+ Watts in part so it can beat AMD in more benchmarks, and also because the kind of people who will buy the 12900K don't care about the high power consumption. AMD sees this and will be boosting power consumption on the AM5 socket (raising TDP from 105W to 170W), possibly only for the flagship CPU.

            This is a starting point for Intel. They are playing catch-up with AMD. But in a couple of years, AMD will also start using their own heterogeneous architectures. Within a few years, it will be in almost all new x86 PCs.
            Except it doesn't really beat AMD in the game benchmarks:
            https://openbenchmarking.org/embed.p...b97cb9f057&p=2
            It breaks tons of games completely:
            https://www.tweaktown.com/news/82612...pus/index.html
            And it's considerably more expensive when you factor in the need to cool 250W and the additional cost of DDR5.

            If that's Intel's idea of targeting gamers, they are ruined.
            Last edited by mSparks; 15 November 2021, 08:26 PM.



            • #56
              Originally posted by mSparks View Post
              Except it doesn't really beat AMD in the game benchmarks:
              https://openbenchmarking.org/embed.p...b97cb9f057&p=2
              It breaks tons of games completely:
              https://www.tweaktown.com/news/82612...pus/index.html
              And it's considerably more expensive when you factor in the need to cool 250W and the additional cost of DDR5.

              If that's Intel's idea of targeting gamers, they are ruined.
              Except it does, when you don't count Linux, which currently doesn't properly support ADL. No idea why you're showing OpenBenchmarking results, which are irrelevant to 98% of gamers out there. As a reminder, less than 2% of gamers use Linux.

              Even the 12600K beats all AMD CPUs:

              [embedded benchmark chart not preserved]

              And don't get me started on higher resolutions, which are too often GPU-bound. If you want ultimate performance in games no matter what, ADL is the fastest CPU out there.
              Last edited by avem; 15 November 2021, 08:50 PM.



              • #57
                Originally posted by avem View Post

                Except it does, when you don't count Linux, which currently doesn't properly support ADL. No idea why you're showing OpenBenchmarking results, which are irrelevant to 98% of gamers out there. As a reminder, less than 2% of gamers use Linux.

                Even the 12600K beats all AMD CPUs:

                [embedded benchmark chart not preserved]

                And don't get me started on higher resolutions, which are too often GPU-bound. If you want ultimate performance in games no matter what, ADL is the fastest CPU out there.
                There's an easy way to tell when a benchmark graph is paid for by Intel or otherwise not telling the truth.
                It is well established that the 11900K performs significantly worse than the 10900K in almost every game; any graph that puts it ahead is at best deeply flawed and at worst mis-marketing (which is illegal, by the way).

                The second tell is any benchmark that doesn't scale linearly with the number of cores within a CPU generation (excluding the 5950X, which has a lower clock speed), which describes almost every AAA game released in the last 2 years.

                Also, re: your gamer hypothesis, AMD already has that market well and truly locked up with the 5600X, 5900X, PS5, Xbox Series X|S, and now the Steam Deck. Which gamer-focused device do you see Intel 12th gen in, and who are these gamers who don't care about battery life in their gaming laptops?
                Last edited by mSparks; 15 November 2021, 10:14 PM.



                • #58
                  I just want to point out one thing to everyone complaining about Intel CPUs' power consumption: since Sandy Bridge, Intel CPUs have had iGPUs that are always active, even when you use a discrete GPU. Consequently, comparing the power draw of a CPU with an iGPU against a CPU sans iGPU (AMD's non-APU offerings) is disingenuous at best.

                  A proper comparison is either a system using an Intel CPU with an iGPU against a system powered by an AMD APU, or an Intel CPU sans iGPU against an AMD CPU sans iGPU, in both cases measuring total system power draw with a Kill A Watt.
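
                  (For the CPU-package side of that measurement, here is a minimal sketch in C, assuming an Intel system with the intel_rapl powercap driver loaded; the sysfs path below is Intel-specific and not present everywhere, and it covers package energy only, so a Kill A Watt at the wall will always read higher.)

                  /* Sample package energy twice, one second apart;
                     microjoules per second are microwatts. Counter
                     wraparound is ignored for this sketch. */
                  #include <stdio.h>
                  #include <unistd.h>

                  static long long read_energy_uj(void)
                  {
                      long long uj = -1;
                      FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
                      if (f) {
                          if (fscanf(f, "%lld", &uj) != 1)
                              uj = -1;
                          fclose(f);
                      }
                      return uj;
                  }

                  int main(void)
                  {
                      long long a = read_energy_uj();
                      sleep(1);
                      long long b = read_energy_uj();
                      if (a < 0 || b < 0) {
                          fprintf(stderr, "RAPL not available on this system\n");
                          return 1;
                      }
                      printf("package power: %.2f W\n", (b - a) / 1e6);
                      return 0;
                  }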



                  • #59
                    Originally posted by mSparks View Post

                    There's an easy way to tell when a benchmark graph is paid for by Intel or otherwise not telling the truth.

                    [...]

                    The second tell is any benchmark that doesn't scale linearly with the number of cores within a CPU generation (excluding the 5950X, which has a lower clock speed), which describes almost every AAA game released in the last 2 years.
                    You actually believe that using benchmarks that aren't embarrassingly parallel means that a review is untruthful and sponsored by Intel?

                    Tell me you are an unserious partisan without telling me you are an unserious partisan.
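
                    (For reference, Amdahl's law is why linear scaling is the exception rather than the rule: if only a fraction p of the work in a frame can be parallelized, the speedup on n cores is bounded no matter how many cores you add.)

                        S(n) = \frac{1}{(1 - p) + p/n},
                        \qquad
                        \lim_{n \to \infty} S(n) = \frac{1}{1 - p}

                    (With p = 0.8, for example, sixteen cores give S(16) = 1/(0.2 + 0.05) = 4x, and no core count ever gets past 5x.)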



                    • #60
                      Originally posted by mSparks View Post

                      If this were true, there would be no need for more than one or two P-cores on desktop chips.

                      Pretty sure it's not true, or there wouldn't be a minimum of 6 on every chip they just released.
                      It is true, and there are plenty of benchmarks out there to prove it if you don't believe me. Just go look at some of them. Try AnandTech's, for a start.

