Linux 4.20 Allows Overclockers To Increase The Radeon TDP Power Limit


  • #11
    Originally posted by Zucca View Post
    Do people really benefit from overclocking their GPUs?
    See for yourself: https://openbenchmarking.org/result/...SK-RX580444728
    The graphics card is a Sapphire NITRO+ Radeon RX 580 with 4GB of GDDR5.
    The GDDR5 OC is 28% with stock voltage (950mV).



    • #12
      Missing dot at the end of this paragraph:

      Originally posted by phoronix View Post
      This has allowed manipulating the core and memory clock speeds as well as tweaking the voltage but has not supported increasing the TDP limit of the graphics card: that's in place with Linux 4.20



      • #13
        Originally posted by Mathias View Post

        What happens if one simply removes that check? Can you overclock above the BIOS level? I assume no, but maybe someone knows for sure.
        This is purely speculation, but I think the check is there because setting a power limit above the VBIOS-specified power limit could result in undefined behavior. Throw third-party VBIOS modifications into the mix and you'd probably be better off just modifying your VBIOS.
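
        For context, the cap being checked is the one amdgpu exposes through hwmon. A minimal read-only sketch, assuming the card is card0 and the usual amdgpu hwmon attribute names (values are reported in microwatts):

        Code:
        import glob

        # amdgpu publishes its power cap under hwmon (card0 path assumed here).
        hwmon = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")[0]

        cap = int(open(f"{hwmon}/power1_cap").read())          # currently enforced cap
        cap_max = int(open(f"{hwmon}/power1_cap_max").read())  # highest cap the driver will accept

        # Writes to power1_cap above power1_cap_max are rejected by the driver; per the
        # article, the Linux 4.20 change is what raises that ceiling beyond the stock TDP.
        print(f"current cap: {cap / 1e6:.0f} W, driver ceiling: {cap_max / 1e6:.0f} W")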



        • #14
          Originally posted by Noee View Post

          Same issue here on the RX 560, but if I choose High or Low it seems fine. Auto is where the flickering comes back (with multi-monitor). It happens with 5.0 kernels so far as well.
          This also happens on the R7 260X, and the same high/low-instead-of-auto fix applies.
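
          For anyone wanting to script that workaround: the setting in question is amdgpu's power_dpm_force_performance_level sysfs file. A minimal sketch, assuming the affected card is card0 (writing needs root):

          Code:
          # Force the DPM performance level away from "auto" as a flicker workaround.
          # Path assumes card0; accepted values include "auto", "low" and "high".
          level = "/sys/class/drm/card0/device/power_dpm_force_performance_level"
          with open(level, "w") as f:
              f.write("high")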



          • #15
            Raising the power cap is important for overclocking Polaris, and most AIBs support this. On Windows the power cap is usually raised when you choose their OC modes, and it's normally available as a separate option otherwise; typically a raise of around 20% is possible. My Gigabyte RX 580 Aorus defaults to a 1365MHz core clock, and to keep ROTR at 60fps all the time I need 1470MHz, but on Linux it hits the power cap at that clock and throttles long before it gets toasty.

            Does anyone know if you can probe the VBIOS for what its hard-set limit is?
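
            Short of parsing the VBIOS itself, the driver's view of that limit can be read back from hwmon as power1_cap_max. A hedged sketch, assuming card0 and using the ~20% raise mentioned above purely as an example (writing requires root; values are microwatts):

            Code:
            import glob

            # Read the ceiling the driver enforces and raise the current cap by ~20%,
            # clamped to that ceiling (card0 path assumed).
            hwmon = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")[0]
            cap = int(open(f"{hwmon}/power1_cap").read())
            cap_max = int(open(f"{hwmon}/power1_cap_max").read())

            new_cap = min(int(cap * 1.2), cap_max)
            with open(f"{hwmon}/power1_cap", "w") as f:
                f.write(str(new_cap))
            print(f"cap: {cap / 1e6:.0f} W -> {new_cap / 1e6:.0f} W (ceiling {cap_max / 1e6:.0f} W)")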



            • #16
              Originally posted by Zucca View Post
              Do people really benefit from overclocking their GPUs?
              I could potentially test this... I have a water-cooled R9 Nano. I could maybe get it to Fury X performance levels this way. But is it worth it?
              The power limit in particular seems easy for me to hit on an RX 580 @ 4K. The GPU ends up being limited by the power cap before it's actually limited by how powerful it is. On Windows, I used to immediately increase the power limit by 50% just to keep this from being an issue.



              • #17
                Originally posted by Zucca View Post
                Do people really benefit from overclocking their GPUs?
                I could potentially test this... I have a water-cooled R9 Nano. I could maybe get it to Fury X performance levels this way. But is it worth it?
                The biggest benefit I've seen with my AMD cards is from reducing the voltage. I'd like to write something automated to tune the voltages down, but I'm honestly not sure how to generate something approximating 100% load on these GPUs (except for a little GL program I've written, which seems to at least keep the shader cores busy but doesn't invoke the samplers at all).
                Last edited by microcode; 18 January 2019, 02:57 PM.
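
                For the voltage-tuning half of that, amdgpu's pp_od_clk_voltage overdrive interface is the usual route where it's available. A hedged sketch, assuming a card/kernel combination that exposes the file, the overdrive bit enabled in amdgpu.ppfeaturemask, and placeholder state/clock/voltage numbers:

                Code:
                # Undervolt one sclk state via amdgpu's overdrive table (card0 assumed; needs root,
                # pp_od_clk_voltage support for the ASIC, and overdrive enabled in amdgpu.ppfeaturemask).
                # The state index and the clock/voltage numbers below are placeholders, not recommendations.
                od = "/sys/class/drm/card0/device/pp_od_clk_voltage"

                print(open(od).read())        # dump the current OD_SCLK/OD_MCLK tables first

                with open(od, "w") as f:
                    f.write("s 7 1340 1050")  # state 7: keep the clock (MHz), request a lower voltage (mV)
                with open(od, "w") as f:
                    f.write("c")              # commit the edited table to the hardware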



                • #18
                  Originally posted by microcode View Post
                  but I'm honestly not sure how to generate something approximating 100% load on these GPUs (except for a little GL program I've written, which seems to at least keep the shader cores busy but doesn't invoke the samplers at all).
                  Yeah, it would probably take some work to write a program that pushes every part of the GPU to its max.
                  In radeontop you can see the GPU load broken down a bit more, and I think with umr --top you can really see everything.

                  In my experience, the FurMark test from the proprietary GpuTest is the application that caused the highest power consumption on my RX 480, and it actually downclocked the sclk to 600MHz instead of 1288MHz because of the power cap.
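
                  To watch that kind of power-cap throttling without a dedicated tool, the currently selected sclk state can also be polled from sysfs; the active level is marked with an asterisk. A minimal sketch, assuming card0:

                  Code:
                  import time

                  # Poll amdgpu's sclk state table (card0 path assumed); the active level is flagged
                  # with '*', e.g. a line like "2: 600Mhz *" while the card is power throttling.
                  sclk = "/sys/class/drm/card0/device/pp_dpm_sclk"
                  for _ in range(10):
                      active = [line.strip() for line in open(sclk) if "*" in line]
                      print(active[0] if active else "no active state reported")
                      time.sleep(1)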



                  • #19
                    Originally posted by Xorg View Post

                    See for yourself: https://openbenchmarking.org/result/...SK-RX580444728
                    The graphics card is a Sapphire NITRO+ Radeon RX 580 with 4GB of GDDR5.
                    The GDDR5 OC is 28% with stock voltage (950mV).
                    Interesting that just bumping the memory results in such an improvement.



                    • #20
                      Originally posted by dungeon View Post

                      Of course, when you don't have artifacts you can overclock the GPU to get some.

                      After a recent nouveau user's comment about Google Chrome, I really started to believe that there are people who prefer to have artifacts, even on the desktop.
                      I prefer to leave artifacts in/on the ground for archaeologists to discover hundreds of years from now.

