Over 100 Linux Gaming/Graphics Tests Looking At The Radeon RX 570 vs. GTX 1650


  • #21
    Originally posted by skeevy420 View Post
    Since we're showing our amdgpu undervolts, here's what I have for my RX 580:
    (/usr/local/bin/Set_WattmanGTK_Settings.sh)

    Code:
    echo "manual" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/power_dpm_force_performance_level
    echo 125000000 > /sys/class/hwmon/hwmon3/power1_cap
    echo "s 0 300 750" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 1 600 769" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 2 918 912" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 3 1167 1075" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 4 1239 1075" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 5 1282 1075" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 6 1326 1075" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "s 7 1366 1075" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "m 0 300 750" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "m 1 1000 800" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "m 2 2000 875" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    echo "c" > /sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/pp_od_clk_voltage
    And my systemd unit:
    (/etc/systemd/system/wattmanGTK.service)
    Code:
    [Unit]
    Description=Apply wattmanGTK settings
    
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/Set_WattmanGTK_Settings.sh
    RemainAfterExit=yes
    
    [Install]
    WantedBy=multi-user.target
    That way, all I have to do is use WattmanGTK, then sudo cp -f the file from $HOME to /usr/local/bin when I find settings that I like and that work well for me and, bam, undervolting on boot.
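A hedged sketch of the install-and-enable workflow described above (the script path and unit name come from the post; `enable --now` both enables the oneshot unit for boot and runs it immediately):

```shell
# Install the settings script (path from the post) and enable the unit.
sudo cp -f "$HOME/Set_WattmanGTK_Settings.sh" /usr/local/bin/Set_WattmanGTK_Settings.sh
sudo chmod +x /usr/local/bin/Set_WattmanGTK_Settings.sh
sudo systemctl daemon-reload          # pick up /etc/systemd/system/wattmanGTK.service
sudo systemctl enable --now wattmanGTK.service
```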
    Whenever I run WattmanGTK and set the kernel parameter it suggests, I get really bad graphical artifacting. Did you run into that at all?

    Comment


    • #22
      Originally posted by tuxd3v View Post
      Cannot say the same about the RX 580, which in headless mode uses around 34 watts.
      Some users posted here saying that it consumes 6 watts if undervolted, but I couldn't find the undervolt controls via sysfs.
      Something's wrong. All modern AMD cards have ZeroCore and should use roughly 3 Watts once the display goes off.
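As a rough way to check the claimed idle draw, amdgpu exposes power readings through hwmon; the numeric index (hwmon3 in the script quoted above) varies per machine, so it is safer to match on the device name. A minimal sketch, assuming a kernel new enough to expose power1_average (reported in microwatts):

```shell
# Print the amdgpu card's reported power draw; power1_average is in microwatts.
for d in /sys/class/hwmon/hwmon*; do
    if [ "$(cat "$d/name" 2>/dev/null)" = "amdgpu" ]; then
        awk '{ printf "%.1f W\n", $1 / 1000000 }' "$d/power1_average"
    fi
done
```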


      More on topic: with AMD you get a driver that is 97% free-as-in-freedom. That is a (big) plus for me. (The remaining 3 percent are the little firmware blobs the kernel loads for the microcontrollers on the card.)
      Stop TCPA, stupid software patents and corrupt politicians!

      Comment


      • #23
        Originally posted by Peter Fodrek View Post

        Excuse me, but I cannot see how 73 divided by more than 100 becomes 0.83, or 83%.
        I would like someone to explain how this is possible.
        Maybe by excluding ties. I'm just guessing though.

        Comment


        • #24
          I am glad Phoronix now has an RX 570 to test. With such a performance-per-price ratio it is more or less a pivotal card.

          Comment


          • #25
            Originally posted by GreenReaper View Post

            You appear to have confused system power with graphics card power. The PCIe slot limit is 75W (or arguably closer to 66W?). The CPU, PSU, drives, RAM take power as well.
            Yes. What if something goes wrong with the power delivery? What happens if this card runs at its maximum power all the time? I think the auxiliary power connector is needed by this card too.

            Comment


            • #26
              Originally posted by fuzz View Post

              Whenever I run WattmanGTK and set the kernel parameter it suggests, I get really bad graphical artifacting. Did you run into that at all?
              No, I haven't. Right now I'm using what Wattman suggested, 0xfffd7fff, but before that I used 0xffffffff, the catch-all value, without issues.

              For me, artifacts were usually the result of the GPU not clocking up at all, or not fast enough... but that was in the early amdgpu days with my 260X, and I could use the following to "fix" it. Changing "high" back to "auto" restores the default behavior. Eventually amdgpu.dpm=1 fixed that for me, and I've been using it ever since...
              Code:
              echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
              If it'll help, my entire kernel command line regarding amdgpu is:
              Code:
              amdgpu.dpm=1 amdgpu.dc=1 amdgpu.ppfeaturemask=0xfffd7fff amdgpu.deep_color=1 amdgpu.exp_hw_support=1
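For what it's worth, the difference between the catch-all mask and Wattman's suggested value is just two bits; a quick shell check shows which ones (what features those bits disable is defined by amdgpu's PP_FEATURE_MASK enum and depends on your kernel version, so I won't guess at names):

```shell
# Bits cleared in 0xfffd7fff relative to the catch-all 0xffffffff.
diff_mask=$(( 0xffffffff ^ 0xfffd7fff ))
printf '0x%x\n' "$diff_mask"    # 0x28000, i.e. bits 15 and 17
```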
              Last edited by skeevy420; 16 May 2019, 01:46 PM.

              Comment


              • #27
                I would like to see benchmarks on Windows as well.

                Comment


                • #28
                  Originally posted by AndyChow View Post
                  Wow, I didn't think the GTX 1650 would perform so badly. Very eye-opening.
                  161W vs 71W. I could never imagine AMD power efficiency would suck so much. But I guess since NVIDIA doesn't release open source drivers for their GPUs then whatever they do sucks by default for most Phoronix readers.

                  Comment


                  • #29
                    Originally posted by skeevy420 View Post

                    Oh, you mean the tedious method I've been meaning to use to find my exact limits... doing that right would take days. An added level of difficulty and time is having to run game benchmarks in addition to GPU benchmarks: I've very often found that what works in benchmarks will crash when playing a game. I consider an undervolt unstable if programs crash, even if benchmarks and the desktop don't.

                    Is the "leave state 0 alone, set all other states to what you're testing" method acceptable? That's the easiest way to test and the way I like finding exact values. When I do get around to doing this myself (within the next two months... I'm in no hurry, since what I have fixed my heating issues) I'll keep you in mind and record my results.

                    I know that ASUS pushes some of their 580s all the way up to 1466 MHz. My MSI card can do that at its stock voltage of, IIRC, 1250 or 1260 mV, but somewhere around 1150 mV is where my card starts to really heat up and the fans spin up to max.
                    There is a reason why this happens to you: the voltage needed at 50 °C is not the same as the voltage needed at 70 °C. My GPU at 1020 mV starts out at 1.35 GHz, but after many continued benchmark cycles (not just one) it hangs. That's why I run it at 1.3 GHz: after a few minutes it needs 1040 mV to stay stable at 1.35 GHz, and my 120 W power-limited GPU doesn't allow that.

                    The best thing you can do is test whether 1040 mV can hold 1.35 GHz and whether 1100 mV can hold 1.45 GHz, and go no higher than 1100 mV. I know for sure that the first three power states are used even during video playback, but I have no idea what voltages I should use to get 15 W at idle instead of the 35 W I see today.
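The "leave state 0 alone, set only the state under test" approach can be scripted. A hypothetical helper, under the same assumptions as the script earlier in the thread (card path and state index vary per system; each change is committed with "c"):

```shell
# Hypothetical undervolt trial: set one sclk state to a test MHz/mV pair,
# then commit. The GPU sysfs path is an assumption; adjust for your system.
GPU=${GPU:-/sys/class/drm/card0/device}

apply_uv() {
    # $1 = state index, $2 = clock in MHz, $3 = voltage in mV
    printf 's %s %s %s\n' "$1" "$2" "$3" > "$GPU/pp_od_clk_voltage"
    printf 'c\n' > "$GPU/pp_od_clk_voltage"   # commit the table
}

# e.g. apply_uv 7 1350 1040, then loop a benchmark and watch for hangs
```

If a trial hangs the card, echoing "r" to pp_od_clk_voltage (or rebooting) restores the default table.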

                    Comment


                    • #30
                      Originally posted by birdie View Post
                      But I guess since NVIDIA doesn't release open source drivers for their GPUs then whatever they do sucks by default for most Phoronix readers.
                      Pretty much. The price of freedom is infernal impotence.

                      It's still a lot better than my server's MGA200. I had to drop the console framebuffer to 8-bit colour because scrolling was using 100% CPU!

                      Comment
