AMD Sends In Their Initial AMDGPU Driver Updates For Linux 5.2
-
Originally posted by Solid State Brain View Post
The reason this normally happens: if the memory clock (and the actual GDDR voltage, though that isn't exposed) remained at the lowest state with two monitors plugged in and enabled, intense flickering could occur depending on monitor frequency and resolution (typically when the displays have different resolutions and/or timings). This is exactly what happens on my Sapphire Nitro+ RX480 when overclocking is enabled via the appropriate amdgpu.ppfeaturemask flags and I drive a 1920x1080 60 Hz and a 1920x1200 60 Hz display at the same time: there, for some inexplicable reason (likely a bug), the normal automatic mclk selection does not occur as it does with overclocking disabled, and users are forced to micromanage it when connecting/disconnecting a secondary monitor.
Does the fan auto-off feature work in your case, by the way? On my RX480 it only works on Windows; on Linux I have to use a script to manage it depending on temperature (this one, to be specific). However, that sort of simple temperature-dependent fan curve tends to be noisier (when the fans are active) than the built-in fuzzy-logic fan control.
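A minimal sketch of such a temperature-dependent curve, assuming the usual amdgpu hwmon interface (pwm1_enable set to 1 for manual control, pwm1 taking 0-255, temp1_input in millidegrees Celsius); the hwmon path and the 45/80 degree thresholds here are assumptions, not values from the script linked above:

```python
# Sketch of a simple temperature -> PWM fan curve for amdgpu's hwmon
# sysfs interface. Path and thresholds are assumptions; adjust for
# your card. Requires pwm1_enable = 1 (manual fan control) and root.

HWMON = "/sys/class/hwmon/hwmon0"  # hypothetical path; varies per system


def pwm_for_temp(temp_c: float) -> int:
    """Map GPU temperature (deg C) to a PWM duty value (0-255).

    Below 45 C the fans stay off (emulating zero-RPM mode); between
    45 C and 80 C the duty ramps linearly; at 80 C and above it is
    pinned at full speed.
    """
    if temp_c < 45:
        return 0
    if temp_c >= 80:
        return 255
    return int((temp_c - 45) / (80 - 45) * 255)


def apply_fan_curve() -> None:
    # Read the temperature (millidegrees), write the PWM duty back.
    with open(f"{HWMON}/temp1_input") as f:
        temp_c = int(f.read()) / 1000
    with open(f"{HWMON}/pwm1", "w") as f:
        f.write(str(pwm_for_temp(temp_c)))
```

In practice this would run in a loop (e.g. every few seconds) with some hysteresis around the fan-off threshold, since a plain linear curve toggling at exactly 45 C is part of what makes these scripts noisier than the firmware's fuzzy-logic control.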
-
Originally posted by pixo View Post
Is the corruption under X or Wayland?
Can't see any corruption in X using the modesetting DDX.
As for the modesetting DDX: what GPU are you using? If you use modesetting with AMD, I believe you lose features, and I'm not sure what the 3D acceleration situation would be. I used to use the modesetting driver when I had an Intel GPU, and it (and therefore glamor) certainly worked well in that setup. This is definitely an AMD driver issue... but, as I've just discovered, only under X.
I'd switch to Wayland now if I could, but I'm still having 2 major issues:
- gtk3 menu rendering issues: the 1st menu pop-up is placed partially off-screen, and subsequent pop-ups appear empty
- compositor crashes kill all X clients
-
Originally posted by atomsymbol
My card uses the lowest mclk when a single monitor is plugged in, but switches to a higher mclk (for no rational reason) when two monitors are plugged in, which adds about 40 watts to power consumption at the wall outlet and prevents the GPU fans from stopping. Watching a video can lead to unnecessary increases in power consumption as well. I prefer to set the mclk and sclk ranges manually, enabling higher clocks only when they are needed.
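Manual mclk/sclk management goes through amdgpu's sysfs files: pp_dpm_mclk and pp_dpm_sclk list the DPM levels with the active one marked by an asterisk, and (after setting power_dpm_force_performance_level to "manual") writing level indices back restricts the allowed range. A small sketch that parses that listing, assuming the usual "index: clockMhz" output format (the sample text is illustrative, not from a real card):

```python
# Parse an amdgpu pp_dpm_sclk / pp_dpm_mclk listing, where each line is
# "<index>: <clock>Mhz" and the currently active level ends with " *".

def parse_dpm_levels(text: str):
    """Return ([(index, mhz), ...], active_index) from a pp_dpm_* listing."""
    levels, active = [], None
    for line in text.strip().splitlines():
        idx_str, value = line.split(":")
        idx = int(idx_str)
        if value.rstrip().endswith("*"):
            active = idx
            value = value.rstrip().rstrip("*")
        mhz = int(value.strip().lower().removesuffix("mhz"))
        levels.append((idx, mhz))
    return levels, active


sample = """0: 300Mhz
1: 608Mhz *
2: 1077Mhz
3: 1266Mhz"""
levels, active = parse_dpm_levels(sample)
# levels == [(0, 300), (1, 608), (2, 1077), (3, 1266)], active == 1

# To actually restrict the range one would then, as root (sysfs paths
# vary per system; card0 is an assumption):
#   echo manual > /sys/class/drm/card0/device/power_dpm_force_performance_level
#   echo "0 1" > /sys/class/drm/card0/device/pp_dpm_mclk
```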
-
Is the corruption under X or Wayland?
Can't see any corruption in X using the modesetting DDX.
-
Originally posted by ihatemichael
Huh? I'm on Arch Linux and I have the latest packages of the kernel, Mesa and LLVM; I even tried mesa-git and the AMD WIP kernel. I still get frequent corruption/quirks with glamor, so I'm not sure how you can claim that.
-
The "DRM-next party" as of today still sees the shader and memory clocks being set to seemingly arbitrary values depending on the refresh rate (without any GPU load): second-highest sclk but lowest mclk at 4K 60 Hz, lowest sclk but highest mclk at 4K 50 Hz, and so on.
Unlike a month back, X no longer crashes immediately when started with amdgpu.vm_update_mode=3, but the instabilities aren't gone, even with vm_update_mode=0. So no light at the end of the instability tunnel yet. Still hoping Intel gets those Xe units out, to finally have some alternative.
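For reference, the mode the loaded driver is actually running with can be inspected at runtime; a sketch assuming the standard /sys/module parameter layout:

```shell
# Check the effective amdgpu VM update mode at runtime. Per the amdgpu
# module parameter description, vm_update_mode selects CPU-based
# page-table updates (3 = CPU updates for both graphics and compute).
cat /sys/module/amdgpu/parameters/vm_update_mode
```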