AMDGPU Linux Driver No Longer Lets You Have Unlimited Control To Lower Your Power Limit


  • #31
    Originally posted by skeevy420 View Post
    It's as close as you can get to shooting your eye out without actually doing it. He literally shoots the bone around the eye socket.
    So basically what you are saying is that he did not shoot his eye out.



    • #32
      Originally posted by stormcrow View Post
      ...some commenters apparently fell asleep in high school science classes...

      Basic high school electricity. Undervoltage beyond tolerance will kill electronics nearly as fast as an overvoltage (where you get arcing). Undervoltage increases your amperage to meet the basic power levels required by the electronics. Electronics are rated to a certain voltage, but more importantly, to a certain amperage. When that amperage is exceeded, Bad Things happen. Ever wondered why high-amperage extension cords are much larger and more expensive than low-amperage ones for the same voltage? (Look it up.) Additionally, left to your education, find out why weak(ening) PSUs often scorch power traces on connected boards. (Hint: Power (Watts) = V (voltage) x I (current or amperage).)

      Don't expect this to ever be reverted. It was an oversight/bug to begin with.
      Shhh, don't bring real information to this party; the Ngreedia white knights disguised as FOSS-loving members need everything they can use to show their loyalty to Dear Leader Jensen's third leg.
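      To put numbers on the P = V x I hint above, here is a minimal sketch (plain Python, with made-up example figures) of how a constant-power load draws more current as its supply voltage drops:

      ```python
      # Illustration of P = V * I with hypothetical numbers: if a load keeps
      # drawing the same power while its supply voltage sags, the current it
      # pulls has to rise to compensate.
      def current_for_power(power_w: float, voltage_v: float) -> float:
          """Current (in amps) a constant-power load draws at a given voltage."""
          return power_w / voltage_v

      POWER_W = 120.0  # hypothetical constant power draw
      for voltage_v in (12.0, 11.0, 10.0, 9.0):
          amps = current_for_power(POWER_W, voltage_v)
          print(f"{POWER_W:.0f} W at {voltage_v:.1f} V -> {amps:.1f} A")
      # 10.0 A at 12 V rises to ~13.3 A at 9 V: same wattage, noticeably more
      # current through the same wires, connectors and traces.
      ```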



      • #33
        One thing to say...
        [attached image: this-is-some--8i0rr2.jpg]
        - Harry (Resident Alien)



        • #34
          Ladies and gentlemen: The comment section.



          • #35
            What happened to Project X? https://www.phoronix.com/news/Projec...D-Zen-Coreboot



            • #36
              How dare AMD use software to make sure that their hardware runs within the engineered power limits?

              I know my rights: if I want to make my computer unstable by lowering the power limits of my video card below what the engineers who designed the card specified, then I should be allowed to!

              Linux is open source, I can do anything I want with it, including ruining my hardware. I'll just hack the open source drivers and remove this bull spit limit myself.



              Are system hardware components (i.e., CPU, GPU, RAM) operating at factory recommended settings?
              I agree with the general sentiment that the AMD engineers who actually designed and built the hardware are less knowledgeable than the average Linux user without an engineering degree, and thus AMD should not be taking any action to prevent Linux users from damaging their hardware or making their systems unstable.

              Today AMD takes steps to bring Linux stability on par with Windows stability and then what, AMD will work on bringing Linux functionality on par with Windows?

              Where does it end?

              Linux stops being a half-assed OS and Linux users in general actually start knowing what they are doing?

              We can't have that and we need to band together to right this slap in the face.



              • #37
                Looks sensible to me, but I wish they would focus on idle efficiency ASAP. My RX6800 jumps from 8W idle for a single 165Hz monitor to 45W for dual, even if I set them both to 50Hz. Same behavior in Windows, btw. It makes no sense to see the VRAM clock stuck at max speed as soon as a second monitor is added; Nvidia doesn't have this issue.
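                For anyone wanting to confirm that the VRAM clock really is pinned at its top state once a second monitor is attached, a rough sketch along these lines can help. It assumes the usual amdgpu sysfs layout (pp_dpm_mclk plus the hwmon power1_average file) and that the GPU is card0; the card and hwmon indices vary per system.

                ```python
                # Rough check of the current VRAM clock level and board power on amdgpu.
                # Assumes the GPU is card0; adjust the path if enumerated differently.
                from pathlib import Path

                dev = Path("/sys/class/drm/card0/device")

                # pp_dpm_mclk lists memory clock levels; the active one ends with '*'.
                for line in (dev / "pp_dpm_mclk").read_text().splitlines():
                    marker = "   <-- current" if line.strip().endswith("*") else ""
                    print(line + marker)

                # amdgpu's hwmon reports average board power in microwatts.
                for hwmon in (dev / "hwmon").glob("hwmon*"):
                    power_file = hwmon / "power1_average"
                    if power_file.exists():
                        print(f"Board power: {int(power_file.read_text()) / 1e6:.1f} W")
                ```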



                • #38
                  That sucks; some older SKUs are reaching 95C at a fixed limit of 35W. With amdgpu I was able to lower their power limit to 20W, and together with an aggressive downclock/downvolt it was possible to keep temperatures below 80C. It seems that will no longer be possible.
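                  For context, the knob being discussed is the power cap that amdgpu exposes through hwmon. A minimal sketch of reading it (values are in microwatts; the hwmon index and card number vary per system), assuming the GPU is card0:

                  ```python
                  # Read the amdgpu power-cap limits from hwmon (values are microwatts).
                  # With the change discussed here, writing a power1_cap value below
                  # power1_cap_min is rejected by the driver instead of being applied.
                  from pathlib import Path

                  def read_watts(path: Path) -> float:
                      return int(path.read_text()) / 1e6

                  for hwmon in Path("/sys/class/drm/card0/device/hwmon").glob("hwmon*"):
                      cap = hwmon / "power1_cap"
                      if not cap.exists():
                          continue
                      print(f"current cap : {read_watts(cap):.0f} W")
                      print(f"minimum cap : {read_watts(hwmon / 'power1_cap_min'):.0f} W")
                      print(f"maximum cap : {read_watts(hwmon / 'power1_cap_max'):.0f} W")
                      # Lowering the cap to e.g. 20 W (as root) would be a write of
                      # "20000000" to power1_cap; values under power1_cap_min no longer stick.
                  ```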



                  • #39
                    Originally posted by citral View Post
                    Looks sensible to me, but I wish they would focus on idle efficiency ASAP. My RX6800 jumps from 8W idle for a single 165Hz monitor to 45W for dual, even if I set them both to 50Hz. Same behavior in Windows, btw. It makes no sense to see the VRAM clock stuck at max speed as soon as a second monitor is added; Nvidia doesn't have this issue.
                    Something is wrong with your setup, or it's a driver bug with high-refresh panels.
                    My RX6800XT (Gigabyte Gaming OC) idles at 8W when using two 2560x1440 60Hz displays, with a reported memory clock of 96MHz. They are connected using the DisplayPort cables bundled with the monitors.
                    This rises to around 15W for the simplest 3D graphics, or around 40W for 4K VP9 hardware-accelerated video.
                    Try setting your card to POWER_SAVING mode (a rough sysfs sketch follows after this post). This lowers memory clocks in particular significantly, though it may cause some stutter in games. And make sure it's not in 3D_FULL_SCREEN mode, as that ramps up clocks very quickly. The default setting (BOOTUP_DEFAULT) works best for me.

                    As for the change, this really depends on whether or not it can damage your hardware.
                    But I don't see how a part designed for, let's say, a 300W TDP can melt while capped to 60W and downclocked.
                    Both voltage and amperage will probably be much lower than what it's designed for.
                    And this is just a power LIMIT. Your card doesn't hit the power limit at idle and somehow doesn't break. At stock settings it can draw anything between idle power and the power limit depending on load, and it doesn't break either.
                    So as long as the new power limit is higher than idle power, it should make no difference. It will just limit performance levels.
                    Last edited by sobrus; 05 March 2024, 04:16 AM.
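                    A minimal sketch of inspecting those profile modes via amdgpu's sysfs interface (assuming the GPU is card0; switching profiles needs root, and on some generations may also require power_dpm_force_performance_level to be set to manual first):

                    ```python
                    # List the amdgpu power profile modes and show which one is active.
                    # Assumes the GPU is card0; adjust the path for multi-GPU systems.
                    from pathlib import Path

                    profile_file = Path("/sys/class/drm/card0/device/pp_power_profile_mode")

                    for line in profile_file.read_text().splitlines():
                        # The active profile's line is marked with an asterisk.
                        flag = "   <-- active" if "*" in line else ""
                        print(line + flag)

                    # Switching (as root) is done by writing the profile's number, e.g. the
                    # POWER_SAVING index shown in the listing above:
                    #   profile_file.write_text("2\n")
                    ```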



                    • #40
                      Originally posted by varikonniemi View Post
                      At least I would expect an explanation of how this would damage the card?
                      You could maybe exploit the low power states. For example, you can bypass security processors by underpowering them, as with the Switch, or the exploit that was presented for the Tesla board computers. A low power state could let the DMA engine do funky stuff like writing to memory regions it should normally never write data to, for example the old issue where the kernel could write to the EFI variable store and brick some laptops.

                      There is a whole lot that could go wrong in that case. But I would also like AMD and the AIBs to check how low you can go without issues and allow that. It annoys me to see the GPU core and memory clocks ramp up just for watching a 720p h264 video clip.

