AMD Sends In Their Initial AMDGPU Driver Updates For Linux 5.2

  • AMD Sends In Their Initial AMDGPU Driver Updates For Linux 5.2

    Phoronix: AMD Sends In Their Initial AMDGPU Driver Updates For Linux 5.2

    Joining the DRM-Next party with the Intel driver feature work is now the initial batch of the AMDGPU Radeon driver changes for Linux 5.2...


  • #2
    The "DRM-next party" as of today sees still the shader and memory clocks being set to seemingly arbitrary values depending on the refresh rate (without any GPU load): second-highest sclk but lowest mclk at 4k 60Hz, lowest sclk but highest mclk at 4k 50Hz and so on.
    Unlike a month back, now X doesn't immediately crash when started with amdgpu.vm_update_mode=3, but the instabilities aren't gone, also with vm_update_mode=0. So no light at the end of the instability tunnel. Still hoping that Intel gets those Xe units out, to finally have some alternative.
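    For reference, the clock states described above can be inspected live through amdgpu's sysfs interface. A minimal sketch (the card0 path is an assumption; adjust for your system):

    ```python
    # Minimal sketch: print the sclk/mclk DPM tables exposed by amdgpu.
    # The card0 path is an assumption; adjust for your system.
    from pathlib import Path

    DEVICE = Path("/sys/class/drm/card0/device")

    def print_dpm_levels(name: str) -> None:
        # Each line looks like "1: 608Mhz *", with '*' marking the active level.
        print(f"{name}:")
        for line in (DEVICE / name).read_text().splitlines():
            marker = "->" if line.rstrip().endswith("*") else "  "
            print(f"  {marker} {line}")

    for table in ("pp_dpm_sclk", "pp_dpm_mclk"):
        print_dpm_levels(table)
    ```

    Running it while switching refresh rates makes the arbitrary level selection easy to observe.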

    • #3
      Originally posted by dwagner
      The "DRM-next party" as of today still sees the shader and memory clocks being set to seemingly arbitrary values depending on the refresh rate (without any GPU load): second-highest sclk but lowest mclk at 4K 60 Hz, lowest sclk but highest mclk at 4K 50 Hz, and so on.
      Unlike a month back, X no longer crashes immediately when started with amdgpu.vm_update_mode=3, but the instabilities aren't gone, even with vm_update_mode=0. So no light at the end of the instability tunnel yet. Still hoping that Intel gets those Xe units out, to finally have some alternative.
      I've given AMD a lot of slack, but honestly I could see myself going Intel too. I'd much rather they went after stability than rush to new features.

      • #4
        Originally posted by ihatemichael

        Huh? I'm on Arch Linux with the latest kernel, mesa, and LLVM packages; I even tried mesa-git and the AMD WIP kernel. I still get frequent corruption/quirks with glamor, so I'm not sure how you can claim that.
        I've seen the same on my new Ryzen / Vega 10 laptop. New gtk3 windows briefly render a window full of garbage before behaving properly. I'm also seeing rendering glitches in gtktextview and gtksourceview widgets, which are fixed when I drag-select the content. I'm also on a recent kernel (5.0.1) and mesa (git, various builds, updated weekly). Hopefully it will get fixed soon. Not sure if it's been reported, but it's hard to see how they wouldn't be aware of it...

        • #5
          Is the corruption under X or Wayland?
          Can't see any corruption in X using the modesetting DDX.

          • #6
            Originally posted by atomsymbol

            My card uses the lowest mclk when a single monitor is plugged in, but switches to a higher mclk (for no rational reason) when two monitors are plugged in, which adds about 40 watts to power consumption at the outlet and prevents the GPU fans from stopping. Watching a video can lead to unnecessary increases in power consumption as well. I prefer to set mclk and sclk ranges manually, enabling higher clocks only when they are needed.
            The issue is that switching power modes needs some time and is normally done during vblank, when no data has to be sent to the screen. With a higher refresh rate the vblank time is shorter (my 290 stays at full memory clocks when I set my monitor above 120 Hz). And with multiple monitors I guess the vblanks are not synced?! In that case the monitor could start to flicker if the power switch is still ongoing and no data is sent to the monitor in time. So the drivers stay in a permanently higher clock-speed mode to prevent this flickering.
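            To put numbers on that: the reclock window shrinks directly with the refresh rate. A back-of-the-envelope sketch, using illustrative reduced-blanking line counts (assumptions, not any specific monitor's modeline):

            ```python
            # Back-of-the-envelope vblank budget. The 2160/2222 line counts are
            # illustrative reduced-blanking figures, not a real modeline.
            def vblank_us(v_active: int, v_total: int, refresh_hz: float) -> float:
                frame_us = 1_000_000 / refresh_hz        # duration of one frame
                blank_lines = v_total - v_active         # lines carrying no pixel data
                return frame_us * blank_lines / v_total  # window available for a reclock

            for hz in (50, 60, 120):
                print(f"{hz:3d} Hz: ~{vblank_us(2160, 2222, hz):.0f} us of vblank per frame")
            ```

            At 120 Hz the window is roughly half of what it is at 60 Hz, which is consistent with the 290 refusing to drop its memory clock at high refresh rates.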

            • #7
              Originally posted by pixo
              Is the corruption under X or Wayland?
              Can't see any corruption in X using the modesetting DDX.
              Me neither, with the amdgpu DDX.

              • #8
                Originally posted by pixo
                Is the corruption under X or Wayland?
                Can't see any corruption in X using the modesetting DDX.
                Ah, good question. It's only happening for me under X. The "random garbage rendering" happens with every new gtk3 window. The partially-rendered textview content happens semi-regularly. I've just tested for about 5 minutes under Wayland and can't trigger either of these bugs.

                As for the modesetting DDX - what GPU are you using? If you use modesetting for AMD, I believe you lose features, and I'm not sure what the 3D acceleration situation would be. I used to use the modesetting driver when I had an Intel GPU, and it (and therefore glamor) certainly worked well in that setup. This is definitely an AMD driver issue... but as I've just discovered, only under X.

                I'd switch to Wayland now if I could, but I'm still having 2 major issues:
                • gtk3 menu rendering issues - the 1st menu pop-up is placed partially off-screen, and subsequent pop-ups appear empty
                • compositor crashes kill all X clients
                As I'm developing a gtk3 app that uses menus to navigate between windows, the 1st point is a deal-breaker for me. Oh, and the 2nd point is also a deal-breaker. Other than these 2 issues, I can see already that Wayland (this is under Enlightenment, by the way) is much faster and smoother. Looking forward to it all coming together.

                • #9
                  Originally posted by atomsymbol
                  My card uses the lowest mclk when a single monitor is plugged in, but switches to a higher mclk (for no rational reason) when two monitors are plugged in, which adds about 40 watts to power consumption at the outlet and prevents the GPU fans from stopping. Watching a video can lead to unnecessary increases in power consumption as well. I prefer to set mclk and sclk ranges manually, enabling higher clocks only when they are needed.
                  The reason this normally happens is that if the memory clock (and the actual GDDR voltage, but this isn't exposed) remained at the lowest state when two monitors are plugged in and enabled, intense flickering could occur depending on the monitor frequency and resolution (typically when the displays have different resolutions and/or timings). This is exactly what happens on my Sapphire Nitro+ RX480 when the overclocking flag is enabled with the appropriate amdgpu.ppfeaturemask bits and a 1920x1080 60 Hz and a 1920x1200 60 Hz display are used at the same time: there, for some inexplicable reason (likely a bug), the normal automatic mclk selection does not occur as it does with overclocking disabled, and users are forced to micromanage the clock when connecting/disconnecting a secondary monitor.
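                  For context, the overclocking flag in question is the OverDrive bit of amdgpu.ppfeaturemask. A sketch of deriving a mask with that bit set (the 0x4000 value matches PP_OVERDRIVE_MASK in the kernel's amd_shared.h at the time of writing; verify against your own kernel sources):

                  ```python
                  # Sketch: build an amdgpu.ppfeaturemask value with the OverDrive bit set.
                  # PP_OVERDRIVE_MASK = 0x4000 matches amd_shared.h at the time of writing;
                  # verify against your own kernel sources before using the result.
                  PP_OVERDRIVE_MASK = 0x4000

                  with open("/sys/module/amdgpu/parameters/ppfeaturemask") as f:
                      current = int(f.read().strip())

                  print(f"current mask:   {current:#010x}")
                  print(f"with OverDrive: {current | PP_OVERDRIVE_MASK:#010x}"
                        "  (pass as amdgpu.ppfeaturemask=... on the kernel command line)")
                  ```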

                  Does the fan auto-off feature work in your case, by the way? On my RX480 it does only on Windows; on Linux I have to use a script to manage it depending on temperature (this one, to be specific). However, that sort of simple temperature-dependent fan curve tends to be noisier in general (when the fans are active) than the built-in fuzzy-logic fan control behavior.
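                  Since the script itself isn't reproduced here, a minimal sketch of the kind of temperature-dependent curve being described, via amdgpu's hwmon interface (the hwmon path and the curve points are assumptions; needs root):

                  ```python
                  # Minimal sketch of a temperature-dependent fan curve using amdgpu's
                  # hwmon interface. Path and curve points are assumptions; run as root.
                  import glob
                  import time

                  hwmon = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")[0]

                  def write(name: str, value: int) -> None:
                      with open(f"{hwmon}/{name}", "w") as f:
                          f.write(str(value))

                  def read_temp_c() -> float:
                      with open(f"{hwmon}/temp1_input") as f:
                          return int(f.read()) / 1000.0  # reported in millidegrees C

                  write("pwm1_enable", 1)  # 1 = manual fan control
                  try:
                      while True:
                          t = read_temp_c()
                          if t < 50:
                              pwm = 0  # fans off below 50 C
                          elif t < 75:
                              pwm = int(255 * (t - 50) / 25)  # linear ramp 50..75 C
                          else:
                              pwm = 255  # full speed above 75 C
                          write("pwm1", pwm)
                          time.sleep(2)
                  finally:
                      write("pwm1_enable", 2)  # restore automatic control on exit
                  ```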

                  • #10
                    Originally posted by Solid State Brain

                    The reason this normally happens is that if the memory clock (and the actual GDDR voltage, but this isn't exposed) remained at the lowest state when two monitors are plugged in and enabled, intense flickering could occur depending on the monitor frequency and resolution (typically when the displays have different resolutions and/or timings). This is exactly what happens on my Sapphire Nitro+ RX480 when the overclocking flag is enabled with the appropriate amdgpu.ppfeaturemask bits and a 1920x1080 60 Hz and a 1920x1200 60 Hz display are used at the same time: there, for some inexplicable reason (likely a bug), the normal automatic mclk selection does not occur as it does with overclocking disabled, and users are forced to micromanage the clock when connecting/disconnecting a secondary monitor.
                    I had (not so intense) flickering issues on my ultrawide monitor; the problems went away when I replaced my DisplayPort cable. IIRC there is no ack in the display output protocol, so it's difficult to determine the exact cause. In my case the bandwidth used was way below the DisplayPort spec of my screen/GPU (also a Nitro+ RX480), and since the cables are generic, I'm left to believe it was a faulty/damaged cable.
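                    As a sanity check on "way below spec": the required link rate is easy to estimate. A sketch with illustrative ultrawide timings (the 3440x1440 @ 60 Hz mode, ~4% blanking overhead, and 4-lane HBR2 link are all assumptions):

                    ```python
                    # Rough DisplayPort bandwidth check. The mode, blanking overhead,
                    # and link configuration are illustrative assumptions.
                    def required_gbps(h: int, v: int, hz: float, bpp: int, blanking: float) -> float:
                        return h * v * hz * bpp * (1 + blanking) / 1e9

                    needed = required_gbps(3440, 1440, 60, 24, 0.04)
                    hbr2_payload = 4 * 5.4 * 8 / 10  # 4 lanes x 5.4 Gbps, 8b/10b coding
                    print(f"mode needs ~{needed:.1f} Gbps, HBR2 carries ~{hbr2_payload:.2f} Gbps")
                    ```

                    A mode using less than half the link's payload capacity points away from bandwidth limits and toward signal integrity, i.e. the cable.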

                    Does the fan auto-off feature work in your case, by the way? On my RX480 it does only on Windows; on Linux I have to use a script to manage it depending on temperature (this one, to be specific). However, that sort of simple temperature-dependent fan curve tends to be noisier in general (when the fans are active) than the built-in fuzzy-logic fan control behavior.
                    Exactly the same as you described. The second I pass my GPU through to KVM (Windows), fan auto-off kicks in. It does not work on Linux without manual configuration. One can also configure the profiles in the GPU BIOS if you really want to. That said, I've been holding back kernel updates for a couple of months now.
