AMD cards have historically not been very overclockable; the 390X, for example, barely tolerates any overclock. Polaris should be a welcome change, though. I wouldn't be surprised if we see overclocks on par with what NVIDIA users enjoy (which are huge).
AMD Delivers OverDrive Overclocking Support For AMDGPU DRM Driver
-
Originally posted by bridgman View Post
The idea is that DEs would implement standard GUIs that work across all HW vendors, using sysfs controls to interact with the drivers. It never seems to quite get to the top of the DE priority list, though. The obvious alternative is vendor-specific GUIs, but that really does seem like throwaway work if we're going to end up with DE-level GUIs.
I'm wondering if it would make sense to implement a simple command-line utility for now, since as soon as we release a GUI for something the next question is usually "how do I script it?". On the other hand, all the CLU would do is remember the details of how the sysfs interface works...
Is the real need for an actual GUI (because sliders are cooler than text), persistence of settings across reboots, or the ever-popular "forcing settings that the OSS driver developers all agree should be handled in the app, not the driver"?
Other than putting a civilized face on dual-GPU setups, my impression is that the two top priorities are (a) persistence of settings and (b) the ability to force settings that the driver devs don't think are a good idea and haven't actually implemented as driver overrides in the first place.
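The command-line utility bridgman describes really would be little more than a sysfs wrapper. A minimal sketch, assuming the percentage-based `pp_sclk_od`/`pp_mclk_od` files and a 0–20 % limit (the file names, path, and range are my assumptions for illustration, not something stated in the thread):

```python
#!/usr/bin/env python3
"""Toy OverDrive CLI: read or set the sclk/mclk overclock percentage.

The sysfs paths and the 0-20 % range are assumptions for illustration.
"""
import os
import sys

SYSFS = "/sys/class/drm/card0/device"
FILES = {"sclk": "pp_sclk_od", "mclk": "pp_mclk_od"}
MAX_OD_PERCENT = 20  # assumed driver-imposed ceiling

def clamp_od(percent):
    """Clamp a requested overclock percentage into the assumed valid range."""
    return max(0, min(MAX_OD_PERCENT, int(percent)))

def od_path(clock):
    return os.path.join(SYSFS, FILES[clock])

def get_od(clock):
    with open(od_path(clock)) as f:
        return int(f.read().strip())

def set_od(clock, percent):
    with open(od_path(clock), "w") as f:  # writing needs root
        f.write(str(clamp_od(percent)))

if __name__ == "__main__":
    # usage: overdrive.py [sclk|mclk] [percent]
    clock = sys.argv[1] if len(sys.argv) > 1 else "sclk"
    if len(sys.argv) > 2:
        set_od(clock, sys.argv[2])
    if os.path.exists(od_path(clock)):
        print(f"{clock} overdrive: {get_od(clock)}%")
```

Persistence across reboots, one of bridgman's two priorities, would then just be a systemd unit or boot script replaying the saved percentage.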
-
Nice, thanks AMD ;-)
I agree that a sysfs file is a good solution.
As I stated before, I think it would be nice to have a vendor/GPU-independent GUI for this. Bridgman seems to agree on that (assuming the DEs would implement it).
A simple, interactive CLI reference tool might be nice, but I wouldn't put too much effort into that. I'd really rather see the above solution...
btw: is it possible to underclock too? For better efficiency...
TBH, I didn't look closely (it's late :P ), but what about the other features Windows users have? Under-/overclocking GDDR/HBM, raising/lowering the power target? And maybe some day tweaking voltages, too? *dream*
Last edited by juno; 13 May 2016, 07:20 PM.
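On the underclocking question: rather than a negative OverDrive percentage, the usual route would be pinning the card to one of its lower DPM states. Assuming a `pp_dpm_sclk`-style state table in the same sysfs directory (an assumption on my part; the article doesn't describe it), a sketch of parsing it looks like:

```python
import re

def parse_dpm_table(text):
    """Parse a pp_dpm_sclk-style table: lines like '0: 300Mhz *'.

    Returns a list of (index, mhz, active) tuples; the '*' marks the
    state the power manager is currently using.
    """
    states = []
    for line in text.splitlines():
        m = re.match(r"\s*(\d+):\s*(\d+)\s*[Mm][Hh]z\s*(\*?)", line)
        if m:
            states.append((int(m.group(1)), int(m.group(2)), m.group(3) == "*"))
    return states

# Example table contents (made-up clocks for an imaginary three-state card)
sample = "0: 300Mhz *\n1: 608Mhz\n2: 910Mhz\n"
print(parse_dpm_table(sample))
```

Writing a low state's index back (with the performance level forced to manual) would then hold the card at that clock for efficiency; whether the driver exposes exactly that knob here is something this thread doesn't confirm.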
-
There's a GNOME Control Center for Wacom tablets specifically. I would not scoff at an AMD graphics panel, not one bit. Especially reasonable if AMD has vendor-specific parameters of some sort. Though if you think there is enough shared functionality, a vendor-neutral one would be even better.
To your point about whether it's really necessary: I imagine you could probably automate the whole process of finding a card-specific clock rate that is the most performant stable configuration. If the GPU crashes while running a stress program, or temperatures approach warranty limits, the clock rate is too high. It may be tough to get a useful real-world balance between memory and core clocks, though. It would be interesting if somebody with extensive knowledge of GPUs could theorize on this.
The most interesting case, for my interests, is finding a combination of clocks and voltages that offers a stable target performance level with the lowest dissipation. Maybe hook in automatically to a benchmark run with libframerate and go from there.
Last edited by microcode; 13 May 2016, 08:48 PM.
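The automated search sketched above is easy to phrase as a binary search over the OverDrive percentage. In this sketch, `is_stable` stands in for a hypothetical callback that would run a stress workload, watch temperatures, and catch GPU resets; the 0–20 range is an assumed driver limit:

```python
def find_max_stable(is_stable, lo=0, hi=20):
    """Binary-search the highest overclock percentage that passes is_stable.

    is_stable(pct) -> bool is assumed monotone: once a percentage fails,
    everything above it fails too. Returns the best passing percentage,
    or lo - 1 if even lo fails.
    """
    best = lo - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_stable(mid):
            best = mid
            lo = mid + 1   # stable: try higher
        else:
            hi = mid - 1   # crashed/overheated: back off
    return best

# Hypothetical stability check: pretend everything above +7 % crashes.
print(find_max_stable(lambda pct: pct <= 7))  # -> 7
```

A real harness needs each probe to survive a driver reset between attempts, which is the genuinely hard part; the search itself is only about five stress runs for a 20-step range.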
-
Isn't this percentage approach a little too limiting? I ask because I overclock my Oland-based laptop on Windows with MSI Afterburner, and I know for a fact that my GPU is most stable with 25 MHz increments, both for core and memory.
Also, I haven't had a desktop in a while, so will this ever be officially supported on laptops?
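On the granularity point above: with an integer percentage knob, the step size depends on the base clock, so fixed 25 MHz increments generally can't be hit exactly. A quick illustration (the 780 MHz base clock is just an example figure, not a measured Oland value, and the 20 % cap is an assumed driver limit):

```python
def od_percent_for_target(base_mhz, target_mhz, max_percent=20):
    """Return the integer OD percentage closest to a target clock, plus the
    clock you'd actually get. Assumes clock = base * (1 + pct/100)."""
    pct = round((target_mhz - base_mhz) / base_mhz * 100)
    pct = max(0, min(max_percent, pct))
    return pct, base_mhz * (100 + pct) // 100

base = 780  # example base sclk in MHz
for target in (805, 830, 855):  # desired +25 MHz steps
    pct, actual = od_percent_for_target(base, target)
    print(f"want {target} MHz -> {pct}% -> {actual} MHz")
```

With these example numbers, the requested +25 MHz steps land on 803, 826, and 858 MHz instead: on a 780 MHz base, one percentage point is a 7.8 MHz step, so exact 25 MHz increments are simply not representable.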