AMD Ryzen 5 9600X & Ryzen 7 9700X Linux Performance With 105 Watt cTDP

  • drakonas777
    Senior Member
    • Feb 2020
    • 532

    #31
    Technically you are correct. However, I also consider Intel K/KF models and AMD X models (especially ZEN4) to be effectively factory overclocked. Perhaps in purely technical terms these CPUs stay below the official maximum frequency limits, but when those limits require 250+ watts, it's not fucking normal. This used to be the domain of manual OC until vendors started doing it in the form of aggressive boosting algorithms that push CPUs insanely far above their optimal frequency/voltage range.
    Last edited by drakonas777; 14 September 2024, 10:51 AM.


    • coder
      Senior Member
      • Nov 2014
      • 8822

      #32
      Originally posted by drakonas777 View Post
      I also do consider Intel K/KF models and AMD X models (especially ZEN4) to be effectively factory overclocked.
      I suppose it depends on your definition of overclocking. To me, "factory overclocked" is a contradiction in terms. I define overclocking as exceeding the specifications, usually at the expense of invalidating warranty coverage and risking instability. The X- and K-series CPUs ship with warranties and default limits under which they are guaranteed to operate stably.

      Originally posted by drakonas777 View Post
      when these limits require 250+ watts - it's not fucking normal.
      So, do you also consider an RTX 4090 to be factory overclocked? Where's the threshold for overclocking, according to your definition?

      Originally posted by drakonas777 View Post
      This used to be the domain of manual OC until vendors started doing it in the form of aggressive boosting algorithms that push CPUs insanely far above their optimal frequency/voltage range.
      It's not insane frequencies that make these CPUs burn well over 200 W of power. It's having more cores, with more transistors, and the fact that we're well beyond the era of Dennard Scaling.


      • drakonas777
        Senior Member
        • Feb 2020
        • 532

        #33
        Originally posted by coder View Post
        I suppose it depends on your definition of overclocking.
        I agree with your definition, which I believe is the commonly recognized one. That's why I wrote that you are technically correct. I just think there is more to this story than pure "dry" technicalities.

        Originally posted by coder View Post
        To me, "factory overclocked" is a contradiction in terms.
        Fair point. We can call it "highclocking" or "bigboosting" or something like that instead, to be more logically consistent. It doesn't change the main idea I'm trying to express here, which is that CPUs used to have much more OC headroom, mostly because their factory frequency and voltage settings used to be much closer to the optimal range. Modern CPUs use that headroom for aggressive boosts (at least on most K/KF/X SKUs). While you are correct that this is not OC in purely technical terms, it is a similar technique, the main difference being that it's automatic and validated. As Intel 13th/14th gen demonstrates, being validated and under warranty is not a guarantee of stability and longevity. They pushed the ring bus, too, in a very similar way to how a manual OCer would. Basically, that's all I wanted to add to this discussion.

        Originally posted by coder View Post
        I define overclocking as exceeding the specifications, usually at the expense of invalidating warranty coverage and risking instability. The X- and K-series CPUs ship with warranties and default limits under which they are guaranteed to operate stably.
        I agree.

        Originally posted by coder View Post
        So, do you also consider a RTX 4090 to be factory overclocked? Where's the threshold for overclocking, according to your definition?
        I can't answer this right away. I think to some extent I do. I'd have to analyze the 4090's power/frequency scaling in order to be more precise.

        Originally posted by coder View Post
        It's not insane frequencies that make these CPUs burn well over 200 W of power. It's having more cores, with more transistors, and the fact that we're well beyond the era of Dennard Scaling.
        It's both. More transistors add more baseline power draw; however, power usage itself scales roughly quadratically with voltage. I believe you are well aware of ZEN4/ZEN5 power scaling, where power usage can be drastically reduced with minimal impact on performance just by reducing voltage and dropping frequency.
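
        As a rough illustration of that quadratic relation (the standard first-order approximation for dynamic switching power, added here as an editor's sketch rather than something stated in the thread):

        P_dyn ≈ C · V² · f

        so trimming voltage by ~10% and frequency by ~10% leaves roughly 0.9² × 0.9 ≈ 0.73 of the original power for only about a 10% loss in throughput, which is the kind of trade-off ECO mode and lowered power limits rely on.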

        To sum things up, the main problem with my position is that I can't objectively and precisely define a universal, exact threshold where "factory OC" starts. Lithos are different, ICs are different, binnings are different, optimal ranges may be defined differently, and so on. From the discussion's perspective you are "better equipped," so to speak, because you can actually point to factory/vendor limits and specs, so your position is stronger both logically and argumentatively. But I came here to express my personal opinion, not to win debates, and in my personal opinion most K/KF SKUs and some X SKUs are "factory OCed" as fuck within the reasoning and framework I described. That's all.

        • coder
          Senior Member
          • Nov 2014
          • 8822

          #34
          Originally posted by drakonas777 View Post
          As Intel 13th/14th gen demonstrates, being validated and under warranty is not a guarantee of stability and longevity.
          Well, that's a bug. Even if you just look at Alder Lake, they pushed PL2 up to 241 W and it's not known to have any stability or longevity problems. So, we can just exclude Raptor Lake from consideration.

          Originally posted by drakonas777 View Post
          I can't answer this right away. I think to some extent I do. I'd have to analyze the 4090's power/frequency scaling in order to be more precise.
          Here:
          BTW, I noticed that increasing power limits in games with poor GPU utilization provides much smaller performance benefits. If you just look at games with high GPU utilization (like 95% or higher), then perf/W scaling is definitely more robust.

          Originally posted by drakonas777 View Post
          I believe you are well aware of ZEN4/ZEN5 power scaling, where power usage can be drastically reduced with minimal impact on performance just by reducing voltage and dropping frequency.
          If you want to reduce power usage on lightly-threaded tasks, then imposing lower frequency limits is the only solution I'll accept. Undervolting risks instability. The manufacturer picked the V/F curves they did, for good reasons. If they could've reliably used lower voltages, they would've.
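
          For what it's worth, here is a minimal sketch (an editor's example, not something from this thread) of what "imposing lower frequency limits" can look like on Linux, using the standard cpufreq sysfs interface. It needs root, and the 3.8 GHz cap is an arbitrary illustrative value:

# Cap the maximum CPU frequency via the Linux cpufreq sysfs interface
# instead of undervolting. Run as root.
import glob

CAP_KHZ = "3800000"  # arbitrary example: 3.8 GHz, expressed in kHz

for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
    with open(path, "w") as f:
        f.write(CAP_KHZ)

          The same effect can usually be had through a governor setting or a cpupower invocation; the point is just that a frequency cap, unlike an undervolt, stays within the validated V/F curve.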

          Originally posted by drakonas777 View Post
          I can't objectively and precisely define a universal, exact threshold where "factory OC" starts. Lithos are different, ICs are different, binnings are different, optimal ranges may be defined differently, and so on. From the discussion's perspective you are "better equipped," so to speak, because you can actually point to factory/vendor limits and specs, so your position is stronger both logically and argumentatively. But I came here to express my personal opinion, not to win debates, and in my personal opinion most K/KF SKUs and some X SKUs are "factory OCed" as fuck within the reasoning and framework I described.
          Eh, that's kinda weak. You could always define a certain slope in the perf/W curve as the limit of sensible boosting. That would apply to all lithos, microarchitectures, etc.
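
          As a sketch of what such a rule could look like (an editor's illustration; the power/score numbers below are made-up placeholders, not measurements from this thread): take benchmark scores at several power limits, compute the marginal score gained per extra watt between adjacent points, and treat everything past the point where that slope drops below a chosen threshold as the start of "factory OC" territory.

# Hypothetical (power in W, benchmark score) pairs for one CPU at several power limits.
# All numbers are illustrative placeholders, not real measurements.
points = [(65, 100.0), (88, 112.0), (105, 118.0), (142, 123.0), (170, 125.0)]

THRESHOLD = 0.15  # score points gained per extra watt; an arbitrary cut-off for illustration

# Marginal performance per extra watt between consecutive operating points.
for (p1, s1), (p2, s2) in zip(points, points[1:]):
    slope = (s2 - s1) / (p2 - p1)
    verdict = "sensible boosting" if slope >= THRESHOLD else "diminishing returns"
    print(f"{p1} W -> {p2} W: {slope:.3f} score/W: {verdict}")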


          • drakonas777
            Senior Member
            • Feb 2020
            • 532

            #35
            A small clarification. By reducing voltage and lowering frequency I meant not explicit manual undervolting, but rather a) reducing power limits / using ECO mode and letting the boost do the job under the hood, or b) choosing a non-K/X part where this is done by a lowered built-in config. Or any other means by which limits on how much the voltage can be dynamically increased are imposed on the boost algorithms.

            I'm not a fan of manual undervolting.


            • zeb_
              Phoronix Member
              • Jun 2008
              • 56

              #36
              I think AMD has recently clarified that PBO (and some overclocking) no longer invalidates the warranty. Edit: I may be wrong; there is conflicting info out there. Know that if your CPU breaks when you have only used PBO, you should avoid saying you used it when claiming warranty. AMD brags about PBO to its consumers, and I find it unfair that they would not stand behind this feature anyway.

              That said, I question the usefulness of the 105W TDP mode when enabling PBO also unlocks the full potential of the CPU under high-stress loads. I just acquired the 9700X with an MSI B650 board and started to run PTS tests. Using the 65W TDP, but with PBO activated, the temperature shoots up to 95-96°C when building the Linux kernel, whereas without PBO it is in the order of 60-65°C. The time gain with PBO enabled is around 14%, which is significant (the 9700X being the CPU that gains the most headroom) but also dependent on the test. Other tasks that do not fully load the CPU gain only around 0 to 5%. So my first instinct was to leave PBO disabled for peace of mind and reduced noise. At the same time, I could leave it on for a bit more oomph, e.g. boosting multitasking under less-than-full load. Time will tell.

              Now, is switching to the 105W TDP a good strategy compared to 65W with PBO? Since throttling kicks in at 95°C, there is no higher efficiency to expect at full load than with PBO, and at lower load it would just increase consumption? Any users who can share thoughts?
              Last edited by zeb_; 11 January 2025, 09:56 AM.


              • coder
                Senior Member
                • Nov 2014
                • 8822

                #37
                Originally posted by zeb_ View Post
                Now, is switching to the 105W TDP a good strategy compared to 65W with PBO? Since throttling kicks in at 95°C, there is no higher efficiency to expect at full load than with PBO, and at lower load it would just increase consumption? Any users who can share thoughts?
                I'm not very knowledgeable about PBO, specifically. I think it can enable you to reach higher single-thread clocks than you could reach otherwise. So, it's probably not comparable to simply using a higher power limit.

                My experience fiddling with Intel CPUs' power limits has taught me that increasing your power limit won't affect usage or efficiency on low thread-count jobs. The power limit is just that - a limit. You don't start throttling until the entire package hits that limit. In Intel's case, they have 2 power limits (actually 4, if you really want to get into it) to consider, but the point is that a single thread won't even hit 65W, so increasing it beyond that won't help you until you start using enough threads that the power limit becomes relevant.
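
                As a toy illustration of that point (an editor's sketch; the uncore and per-core wattages below are assumed example values, not measurements), the package limit simply never binds until enough cores are loaded:

# Toy model: a package power limit only matters once enough cores are active.
# All wattage figures are assumed example values, not measurements.
PACKAGE_LIMIT_W = 65
UNCORE_W = 15    # assumed baseline for fabric/IO and idle cores
PER_CORE_W = 12  # assumed draw of one fully loaded core at boost clocks

for active_cores in range(1, 9):
    demand = UNCORE_W + active_cores * PER_CORE_W
    status = "limit binds (clocks drop)" if demand > PACKAGE_LIMIT_W else "below the limit"
    print(f"{active_cores} core(s): ~{demand} W demanded: {status}")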

                If you want better lightly-threaded performance, do a bit more reading on PBO, because that seems like your only option. If you just want better build times, but don't like the high power consumption of PBO, or have concerns about instability or premature CPU wear that it might incur, then stick to just monkeying with power limits. Personally, I wouldn't touch PBO, but then I'm also the sort of person who uses ECC memory.


                • zeb_
                  Phoronix Member
                  • Jun 2008
                  • 56

                  #38
                  Originally posted by coder View Post
                  If you want better lightly-threaded performance, do a bit more reading on PBO, because that seems like your only option. If you just want better build times, but don't like the high power consumption of PBO, or have concerns about instability or premature CPU wear that it might incur, then stick to just monkeying with power limits. Personally, I wouldn't touch PBO, but then I'm also the sort of person who uses ECC memory.
                  Many thanks coder for these explanations. Indeed, PBO is AMD's "on the fly" OC algorithm, if I understand correctly, adjusting frequencies and voltage within safe limits depending on load. I understand that, on the other hand, increasing the TDP is not an overclock, but instead gives more thermal headroom before throttling.

                  With the 105W TDP I get around 85°C max at full load (build-linux-kernel), whereas PBO makes the CPU reach 95°C (which is the limit, but still OK according to AMD). The gains with PBO are significant in some scenarios (15% faster compilation, 10% faster x265 and 18% faster c-ray 1080p). The 105W TDP also provides a boost, which may be considered safer.

                  As you say, I might prefer to disable these boosters for several reasons: in addition to the risk of wearing out the CPU and motherboard, there is also the electricity consumption, the fan noise, and the simple fact that I spend little time building and encoding. These CPUs are also very efficient at stock settings, and the trade-off of losing 5-10% is really compensated by quality of life.


                  • coder
                    Senior Member
                    • Nov 2014
                    • 8822

                    #39
                    Originally posted by zeb_ View Post
                    The gains with PBO are significant in some scenarios (15% faster compilation, 10% faster x265 and 18% faster c-ray 1080p). The 105W TDP also provides a boost, which may be considered safer.
                    I'd be curious to know how much the gains are with the 105W limit vs. 65W or PBO.

                    Originally posted by zeb_ View Post
                    As you say, I might prefer to disable these boosters for several reasons:
                    Although PBO is designed to be stable, I'd be worried about the possibility of introducing errors. That's why I don't overclock or undervolt. Ever.

                    Changing power limits is fine, though. On Intel CPUs, at least, it won't void your warranty. As long as the board can supply adequate power and you're not running the CPU at its thermal limit, all the time, using a higher power limit should be safe.


                    • zeb_
                      Phoronix Member
                      • Jun 2008
                      • 56

                      #40
                      Originally posted by coder View Post
                      I'd be curious to know how much the gains are with the 105W limit vs. 65W or PBO.
                      I can help here. Doing some CPU tests now.
                      1. Impact of TDP 65W (stock) vs 105W (without PBO)
                        [attachments: pts_1.png, pts_2.png, pts_3.png]
                      2. Impact of enabling PBO at the stock 65W TDP
                        [attachments: pts_4.png, pts_5.png, pts_6.png]
                      3. Impact of high-performance RAM (MSI's timing schemes) without PBO, at the stock 65W TDP
                        [attachment: pts_7.png]
                      • So enabling PBO is the most impactful. PBO also leads to the highest temperatures (95°C at full load). Note this is the "AMD" PBO; there are also MSI's "Enhanced" PBO configurations, which marginally improve performance, but the "recipes" are not communicated; the most aggressive Enhanced PBO from MSI led to crashes.
                      • Enabling the 105W TDP is also very impactful in some scenarios, albeit slightly less so than PBO. Temperatures are high but lower than with PBO (around 80-85°C).
                      • Tweaking memory timings has a small impact. This is on top of the "EXPO1" timings I used, for which my RAM is certified. MSI offers several schemes, and there is almost no difference between "Balanced" and the more aggressive "Tighter" and "Tightest" ones. This is in line with other tests published around the web.
                      I could try enabling both PBO and the 105W TDP, but since PBO alone already reaches the max temperature, I do not expect a significant improvement due to the lack of thermal headroom. A better cooler could maybe help.
                      Last edited by zeb_; 12 January 2025, 09:44 AM.

