Over 100 Linux Gaming/Graphics Tests Looking At The Radeon RX 570 vs. GTX 1650


  • #31
    Originally posted by birdie View Post
    But I guess since NVIDIA doesn't release open source drivers for their GPUs then whatever they do sucks by default for most Phoronix readers.
    Pretty much. The price of freedom is infernal impotence.

    It's still a lot better than my server's MGA200. I had to drop the console framebuffer to 8-bit colour because scrolling was using 100% CPU!

    Comment


    • #32
      Originally posted by artivision View Post
      https://wiki.archlinux.org/index.php...U#Overclocking

      https://forums.opensuse.org/showthre...not-persistent

      1. My custom power levels above are probably valid for all Polaris cards, but the "m 2" memory power level is not, since other GPUs use different memory vendors.
      2. When benchmarking, don't stop after the first run; let it go through a second and third run to see whether it hangs at 70°C. If it does, raise the voltage by 10 mV and test again, and so on.
      3. I would like someone to test the 1040-1100 mV range (I suspect [email protected]) and the 6 W idle that you mention.
      Thanks a lot!
      I think I found the culprit:
      "Since Linux 4.17, it is possible to adjust clocks and voltages of the graphics card via /sys/class/drm/card0/device/pp_od_clk_voltage. It is however required to unlock access to it in sysfs by appending the boot parameter amdgpu.ppfeaturemask=0xffffffff"

      I don't have amdgpu.ppfeaturemask=0xffffffff set on boot...
      I will check those configs, then I will also port mine.
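      A quick way to confirm whether the unlock flag actually made it onto the running kernel before poking sysfs. This is a minimal sketch; the helper name is made up for illustration, and card0 is assumed to be the Radeon:

```shell
# has_ppfeaturemask: report whether a kernel command line carries the
# overdrive unlock flag (helper name is hypothetical, for illustration).
has_ppfeaturemask() {
    case " $1 " in
        *" amdgpu.ppfeaturemask="*) echo yes ;;
        *) echo no ;;
    esac
}

# On a live system you would feed it the real command line:
#   has_ppfeaturemask "$(cat /proc/cmdline)"
has_ppfeaturemask "root=/dev/sda1 quiet"                            # -> no
has_ppfeaturemask "root=/dev/sda1 amdgpu.ppfeaturemask=0xffffffff"  # -> yes
```

      Once the flag is set, /sys/class/drm/card0/device/pp_od_clk_voltage accepts lines like "s <level> <MHz> <mV>" for core and "m <level> <MHz> <mV>" for memory, followed by "c" to commit (or "r" to reset), per the kernel's amdgpu overdrive interface.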

      Comment


      • #33
        Originally posted by skeevy420 View Post
        Since we're showing our amdgpu undervolts, here's what I have for my RX 580:
        Thanks a lot for your config!
        I will try to enable OverDrive and then configure mine.

        Comment


        • #34
          The power draw difference is a non-issue to me. It's not like the RX 570 is drawing 400 or 500 watts. An extra 50-80 watts under load, when I game maybe an hour a day on average, is not going to add up to any real money. (Tweaking the thermostat will save way more $$.)
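          The back-of-the-envelope version, assuming an illustrative $0.13/kWh electricity rate (rates obviously vary by region):

```shell
# Yearly cost of 80 extra watts for 1 hour of gaming per day,
# at an assumed 13 cents/kWh (both figures are illustrative).
extra_watts=80
hours_per_day=1
rate_cents=13

kwh_per_year=$(( extra_watts * hours_per_day * 365 / 1000 ))  # 29 kWh/year
cost_cents=$(( kwh_per_year * rate_cents ))                   # 377 cents
echo "about ${kwh_per_year} kWh/year, roughly \$$(( cost_cents / 100 )) extra"
```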

          And if I was trying to upgrade a PC with a crap power supply, I'd just pay $130 for the 570 then spend another $25 on a proper PSU.

          Comment


          • #35
            Originally posted by debianxfce View Post
            Just a reminder that with the GTX 1650 you cannot use the desktop while installing and updating the driver, the driver does not work with the latest mainline kernels without patching, and it takes weeks before NVIDIA makes patches. There is no way to fix closed-source bugs yourself.
            That's kind of one of the benefits of using an Arch-based distro: they include patches to make the NVIDIA drivers work with newer kernels (though the AUR beta drivers can fall behind a bit).

            Comment


            • #36
              I see that many are messing with pp_od_clk_voltage on their Radeons.
              I find this script awesome for managing over-/underclocking and over-/undervolting in a super-easy manner, even on multiple cards: https://github.com/sibradzic/amdgpu-clocks
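              For reference, the script reads per-card custom-state files whose format mirrors the pp_od_clk_voltage output. A minimal sketch, written to a temp file here since the real path (something like /etc/default/amdgpu-custom-state.card0; check the repo README for the exact name) needs root, and the clock/voltage values below are placeholders, not recommendations:

```shell
# Placeholder custom state mirroring pp_od_clk_voltage's OD_SCLK/OD_MCLK
# tables; real values must come from your own card's reported ranges.
state_file=$(mktemp)
cat > "$state_file" <<'EOF'
OD_SCLK:
7:  1244MHz  1040mV
OD_MCLK:
2:  1750MHz   900mV
EOF
grep -c 'MHz' "$state_file"   # -> 2 entries written
```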
              Comment


              • #37
                Power bills are not a concern for gamers, because even for hardcore gamers the amount of time spent under gaming loads is a fraction of the day. The rest of the time the GPU is in low power mode and they are all pretty much the same.

                Power consumption is a legit concern for people who load their GPUs 24x7. E.g. miners.

                Comment


                • #38
                  Originally posted by skeevy420 View Post

                  No I haven't. Right now I'm using what Wattman suggested, 0xfffd7fff, but before using that value I used 0xffffffff, the catch-all value, without issues.

                  For me, artifacts were usually the result of the GPU not clocking up at all, or not fast enough... but that was in the early amdgpu days with my 260X, and I could use the following to "fix" it. Changing "high" back to "auto" restores the default setting. Eventually amdgpu.dpm=1 fixed that for me, and I've been using it ever since...
                  Code:
                  echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
                  If it'll help, my entire kernel command line regarding amdgpu is:
                  Code:
                  amdgpu.dpm=1 amdgpu.dc=1 amdgpu.ppfeaturemask=0xfffd7fff amdgpu.deep_color=1 amdgpu.exp_hw_support=1
                  Hmm, yeah the feature mask flag gets me artifacts no matter what (which I've never had otherwise). Isn't DPM on by default for this card anyway?
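                  Incidentally, the two masks in the post differ in only two feature bits, which a quick XOR shows. What those bits actually enable is defined in the kernel's amdgpu sources and can shift between versions, so treat the mapping as version-specific:

```shell
# XOR the catch-all mask against Wattman's suggested value to see
# which feature bits 0xfffd7fff actually clears.
full=$(( 0xffffffff ))
suggested=$(( 0xfffd7fff ))
diff=$(( full ^ suggested ))
printf 'cleared bits: 0x%08x\n' "$diff"   # -> 0x00028000 (bits 15 and 17)
```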

                  Comment


                  • #39
                    No one should be specifying amdgpu.dpm=1. DPM is enabled by default on all ASICs.

                    Comment


                    • #40
                      Originally posted by humbug View Post
                      Power bills are not a concern for gamers, because even for hardcore gamers the amount of time spent under gaming loads is a fraction of the day. The rest of the time the GPU is in low power mode and they are all pretty much the same.

                      Power consumption is a legit concern for people who load their GPUs 24x7. E.g. miners.
                      Have you ever considered that higher power consumption doesn't just mean more watts being consumed? It also means more heat to dissipate, which means your GPU will likely be a lot noisier, and you'll need a spacious case to accommodate it; otherwise everything overheats, throttling kicks in, and everything slows down.

                      Comment
