Intel Announces 13th Gen "Raptor Lake" - Linux Benchmarks To Come


  • #31
    Originally posted by piotrj3 View Post
    Now AMD, with a technically superior node, clocked its chips so high that the 7950X does significantly less work per watt than the 5950X (which is on an inferior node).
    Power-efficiency is a set of data points (which can be visualized in a chart as a curve) - not a single data point. Making judgements about 7950X vs 5950X power efficiency based on a single data point is absurd.

    All Ryzen 7000 CPUs are being sold without a cooling solution, which means that more people than before will have to learn how to properly configure their Ryzen 7000 systems for power-efficiency.
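    The "curve, not a point" argument can be sketched with a toy model (the constant and exponent below are made up for illustration, not measured Ryzen data):

```python
# Toy perf-vs-power model; k and alpha are invented for illustration,
# not measured 7950X/5950X figures.
def perf(power_w, k=1000.0, alpha=0.4):
    """Performance grows sublinearly with package power."""
    return k * power_w ** alpha

# Sweep a few package-power limits: efficiency (perf/W) falls as the
# limit rises, so a single operating point says little on its own.
for limit_w in (65, 105, 142, 230):
    print(f"{limit_w:>3} W -> {perf(limit_w) / limit_w:6.1f} perf/W")
```

    The same silicon looks "efficient" or "inefficient" depending purely on which point of that curve the configured power limit selects.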
    Last edited by atomsymbol; 28 September 2022, 05:28 AM.

    Comment


    • #32
      Originally posted by TemplarGR View Post

      I agree. The most useless tech youtuber. But you have to remember 99% of the people watching him (and many others like him) are clueless gamers, not tech industry professionals, so.... He has been so wrong so many times it is not even funny; he has no real insider info, he is just browsing for rumors and then parrots them from his house as facts....
      Holy shit, what did this guy do to you? Kill your family?
      I've been watching him for a year now and he has never been wrong. Can you give an example?

      Comment


      • #33
        Originally posted by birdie View Post

        He's got most of his stuff wrong. He deletes his previous videos and tweets. Oftentimes he just makes stuff up to get views on YouTube. AFAIK he's been banned from r/NVIDIA and r/AMD. Kinda says a lot how the largest communities treat his "revelations".
        He's got almost all of his leaks near 100% right. I have noticed no deleted videos; at least all of the main leak videos are still online. I have no idea WTF you are talking about. He is regularly critical of HW companies, so I guess he has hurt your Intel/NVIDIA fanboy feelings. Cry a river now.

        PS. Leaks should be interpreted. If you are unable to turn off your "autist" mode and expect MLID's numbers to be 100% correct, then that is your problem and your inability to comprehend the concept of an estimate. "MLID said it's supposed to be ~350 USD, but it was 329 USD, so he was wrong" LOL
        Last edited by drakonas777; 28 September 2022, 05:02 AM.

        Comment


        • #34
          Originally posted by Anux View Post
          For example, the Geekbench multi-thread score showed a 7% lead for the Ryzen 9 3950X over the i9-10900K, while in reality it was 20% or more:
          https://chipguider.com/?p=amd-ryzen-...core-i9-10900k
          https://www.phoronix.com/review/3900x-3950x-10900k

          The reason for this difference is that the Geekbench tests are very short, allowing a much higher power consumption than is possible over longer durations.

          The main reason why the latest generations of AMD CPUs have been better than Intel CPUs at multi-threaded tasks is that, once both CPUs hit a power limit and run at constant power consumption, the AMD CPUs reach a higher clock frequency at equal power, due to the superior manufacturing process.

          So for longer multi-threaded benchmarks, the CPUs must fall back to the power limits configured for steady-state conditions, which are closer between Intel and AMD than the power limits for boost conditions.

          Hence the greater advantage of AMD in other multi-threaded benchmarks than Geekbench.
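          A hedged sketch of that mechanism, with invented numbers standing in for PL1/PL2-style limits (not real Intel/AMD specifications):

```python
# Toy model: short benchmarks run at the boost limit (PL2-like), long
# ones at the sustained limit (PL1-like). All numbers are invented.
def throughput(power_w, process_eff):
    return process_eff * power_w ** 0.5   # sublinear perf vs power

cpu_boosty = {"pl1": 125, "pl2": 250, "eff": 1.0}  # big burst budget
cpu_steady = {"pl1": 140, "pl2": 170, "eff": 1.3}  # better process node

for name, c in (("boosty", cpu_boosty), ("steady", cpu_steady)):
    burst = throughput(c["pl2"], c["eff"])       # Geekbench-like run
    sustained = throughput(c["pl1"], c["eff"])   # long render/encode
    print(f"{name}: burst={burst:.1f} sustained={sustained:.1f}")
```

          With these made-up numbers, the better-process CPU leads by only about 7% in the burst case but about 38% in the sustained case, which is exactly the pattern described above: a short benchmark understates the steady-state gap.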


          Comment


          • #35
            Originally posted by scottishduck View Post
            7000 series seems like a poor showing by AMD. This looks like Intel going in for the kill this gen.

            Raptor Lake will very likely provide the best CPUs for gaming, due to having the best single-thread performance.


            However, there are plenty of people who use computers for other things than games.

            For those Zen 4 will be a much better choice, due to better multi-threaded performance, better energy efficiency, and especially due to a much better AVX-512 implementation than in the majority of the Intel CPUs (with the exception of Xeon Platinum and similar overpriced Intel SKUs).


            Rewriting or recompiling programs to use AVX-512 can give a very nice boost to many applications, and it is a much more pleasant instruction set to program in than the previous crippled instruction sets implemented by Intel, i.e. MMX, SSE and AVX.
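            One concrete convenience is AVX-512's per-lane mask registers. Here is a scalar Python sketch of what a masked add does; this is an illustration only, real code would use intrinsics such as `_mm512_mask_add_ps`:

```python
def mask_add(dst, a, b, mask):
    """Per lane: if the mask bit is set, write a+b; else keep dst."""
    return [ai + bi if m else di
            for di, ai, bi, m in zip(dst, a, b, mask)]

dst  = [0.0, 0.0, 0.0, 0.0]
a    = [1.0, 2.0, 3.0, 4.0]
b    = [10.0, 20.0, 30.0, 40.0]
mask = [1, 0, 1, 0]                 # like a __mmask8 value of 0b0101
print(mask_add(dst, a, b, mask))    # lanes 0 and 2 updated, rest kept
```

            With SSE/AVX the same predication takes explicit compare-and-blend gymnastics; with AVX-512 the mask is built into the instruction itself.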

            Last edited by AdrianBc; 28 September 2022, 04:13 AM.

            Comment


            • #36
              Originally posted by piotrj3 View Post

              this is a hilariously stupid comparison. 65W TDP with a package power of 90W during Handbrake, versus a 241W PL2 max power draw with a 210W TDP. And during one single workload, Handbrake. Not to mention x264 doesn't scale well beyond 8 threads (and output beyond 16 threads is garbage)
              Didn't read very far, did you?

              Comment


              • #37
                Originally posted by scottishduck View Post
                7000 series seems like a poor showing by AMD. This looks like Intel going in for the kill this gen.
                This is a bit too brave and too sweeping a generalization. I would add two caveats (but there are more):

                * Most people are not considering the 13900K vs the 7950X. The big battle will be fought between the likes of the 13400 and the 7600 (X or non-X). We do not know yet how these will compare. So far we can say that the 7000 series looks great, although I agree Raptor Lake may turn out to be even greater.
                * Another terribly important factor is price. While the technology is fixed, you can always change the price and adjust the value of the products accordingly. I'd wait till early next year before drawing any firm conclusions; I am sure Intel is also trying to milk early adopters in a similar fashion to AMD.

                Comment


                • #38
                  Originally posted by AdrianBc View Post
                  Raptor Lake will very likely provide the best CPUs for gaming, due to having the best single-thread performance.


                  However, there are plenty of people who use computers for other things than games.

                  For those Zen 4 will be a much better choice, due to better multi-threaded performance, better energy efficiency, and especially due to a much better AVX-512 implementation than in the majority of the Intel CPUs (with the exception of Xeon Platinum and similar overpriced Intel SKUs).


                  Rewriting or recompiling programs to use AVX-512 can give a very nice boost to many applications, and it is a much more pleasant instruction set to program in than the previous crippled instruction sets implemented by Intel, i.e. MMX, SSE and AVX.
                  Raptor Lake will outperform Zen 4 in multicore performance as well. The 13900K will use more power than the 7950X, but the 13700K/13600K should be more efficient than the 7600X/7700X due to their higher core counts. AMD's advantage will be AVX-512.

                  Comment


                  • #39
                    Originally posted by AdrianBc View Post
                    However, there are plenty of people who use computers for other things than games.
                    I wonder if that should even matter much these days. Of the people who play games, most can't afford a GPU that wouldn't bottleneck well before the CPU does. And of the people who could afford it, most don't care to make such an investment anyway. For the large majority of gamers, any somewhat recent CPU does what needs to get done just fine. That said, there will always be those people who believe pairing an i9-12900 with their 750 Ti will help. Business, I guess.

                    Comment


                    • #40
                      Originally posted by ms178 View Post
                      For me the interest in Arc took a serious hit with all the believable rumors of Intel canceling the dGPU lineup as I wouldn't buy into a generation that is likely to see driver support EOL'ed faster than you might hope for. Intel needs to prove to the market that they are in for the long run though. But with all of their short-sighted decisions lately and in the past, that only would make them look even more unreliable. All of these uncertainties come to their disadvantage as they haven't established themselves in the dGPU market yet and their failed execution doesn't inspire confidence either.
                      As I understand it, they did not cancel future dGPUs. With their DG2 lineup they had 4 different GPU dies for this generation.
                      That is what they will change going forward: the DG3 lineup will be only 1 single GPU die.
                      They cancelled the high-end part, meaning they will only produce an 8GB VRAM version they can use for notebooks and low-end/midrange desktop. With this strategy they can spend more time optimizing drivers for this 1 single DG3 chip instead of splitting software development resources across 3-4 chips.
                      Intel has accepted that with the current driver status they cannot compete in the high end, but they want to sell notebooks with a discrete GPU and also want one PCIe card for developers and so on.

                      DG4 will also be a single chip. They will do this until their driver and GPU tech become competitive.



                      Phantom circuit Sequence Reducer Dyslexia

                      Comment
