
amdgpu questions


  • juno
    replied
    bridgman, agd5f:
    I saw this video: https://www.youtube.com/watch?v=tKBthlKTtvQ
    It's not very technical; I assume the functionality lives in the Windows driver and VMware, or is this also possible with Linux? If not, are there plans to support this tech on Linux too?
    And is this limited to FirePro hardware forever, or is it also coming to Radeons?

    edit: OK, got it. It's only for VMware ESXi and vSphere. However, the question still stands: is support planned for amdgpu/KVM? This would basically make passthrough obsolete, enabling multiple guests (Windows or Linux) to utilise the same GPU...
    Last edited by juno; 22 May 2016, 08:39 AM.



  • juno
    replied
    In the commit history we can see that progress is being made on DAL, also with the help of Red Hat devs. In the mailing-list discussion there were still concerns about DAL in general, with some even suggesting to drop it completely.

    Can somebody say something about the plans on this topic? It might not be a problem for the hybrid stack, but when customers consider moving over to the amdgpu all-open stack (e.g. for later open-source Vulkan and OpenCL support) they may lose functionality like adaptive sync or DP/HDMI audio.
    Talking about adaptive sync: Intel has said it will support it in the future. Is there any chance code could be shared at a higher level? And are there any small tasks that new, interested devs could jump into to make features like adaptive sync, and in the long term feature parity (or at least catching up) for the open stack, happen (faster)?
    Last edited by juno; 25 April 2016, 06:23 AM.



  • juno
    replied
    Originally posted by Xen0sys View Post
    Typically true though it can make a difference for <30/60 FPS cases when you just need a little more performance: https://www.guru3d.com/articles-page...review,26.html
    Ooh, that's ugly. The MSI 390s already have exorbitant power targets; that beast with a +50% power target and 1200 MHz will swallow a lot.
    Current-gen Radeons aren't really easy to overclock. Something is holding them back and they are sold quite near their limits, as duby pointed out. They have also been given very generous voltages: all the newer chips (Hawaii, Fiji, Tonga) can be undervolted quite well. Nvidia's Maxwell was introduced well within its sweet spot, with plenty of clock headroom, and its boost with adaptive clocks/voltages seems to work better too. If you really fine-tune your Radeon you can get quite close to Maxwell's efficiency, but most of them still don't overclock nicely at all.

    That's also a bit problematic: you can set a delta voltage in Windows with tools from Asus, MSI or Sapphire, but a delta of e.g. -100 mV that runs nicely in the highest dpm state can crash in the lower ones when the voltage drops too much. So for fine tuning you need to edit the vbios and tweak the voltage tables, which is not a nice solution.
    It would be nice, though, to have access to the voltage tables without the risk of flashing the vbios...



  • Xen0sys
    replied
    Originally posted by duby229 View Post
    Not that overclockers will care much, but most GPUs are sold very near their clock-speed limits. It's been a long time since overclocking a GPU made any sense.
    Typically true though it can make a difference for <30/60 FPS cases when you just need a little more performance: https://www.guru3d.com/articles-page...review,26.html



  • duby229
    replied
    Not that overclockers will care much, but most GPUs are sold very near their clock-speed limits. It's been a long time since overclocking a GPU made any sense.



  • agd5f
    replied
    Originally posted by juno View Post
    Is there, or will there be, functionality analogous to Catalyst/Crimson's "Overdrive" on Windows? That lets you alter core and memory clocks as well as power/temperature targets and fan speed.
    It would be nice to just have a userspace tool and e.g. an /etc/amdgpu.conf plain-text file or something for that purpose.
    The driver exposes the temperature and fan control via the standard Linux hwmon interfaces. There are tons of tools and GUIs for that. Adjusting the clocks is somewhat non-standard (varies based on GPU and vendor). We provide a sysfs interface to adjust them. The driver will adjust the clocks dynamically based on load. In most cases the user shouldn't need to mess with them manually.
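    For what it's worth, here is a minimal sketch of what reading that sysfs interface can look like from a small C tool. It assumes the GPU is card0 and that the pp_dpm_* files are present; the card index and the exact set of files vary with GPU and kernel version, so treat the paths as examples only.

        /* Sketch: dump the amdgpu clock-state files exposed via sysfs.
         * Assumes card0 is the amdgpu device; adjust the path if not. */
        #include <stdio.h>

        static void dump_file(const char *label, const char *path)
        {
            char line[256];
            FILE *f = fopen(path, "r");
            if (!f) {
                fprintf(stderr, "cannot open %s\n", path);
                return;
            }
            printf("%s:\n", label);
            while (fgets(line, sizeof(line), f))
                printf("  %s", line);   /* the currently selected state is marked with '*' */
            fclose(f);
        }

        int main(void)
        {
            dump_file("sclk states", "/sys/class/drm/card0/device/pp_dpm_sclk");
            dump_file("mclk states", "/sys/class/drm/card0/device/pp_dpm_mclk");
            /* "auto", "low", "high" or "manual" */
            dump_file("dpm level",
                      "/sys/class/drm/card0/device/power_dpm_force_performance_level");
            return 0;
        }

    If I recall correctly, writing an index back into pp_dpm_sclk (with the performance level set to "manual") is how the available clock states can be restricted, but as said, that should rarely be needed.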



  • bridgman
    replied
    Always a tough question... the right answer is to have those controls in each desktop's settings apps rather than a vendor-specific thing that sits outside them all, but there's a chicken/egg issue with standards for communicating with drivers. AFAIK the radeon and amdgpu drivers already have most of the functionality enabled via sysfs so it's just a matter of wiring up a userspace tool.
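    Just to illustrate the wiring-up part, a rough sketch of the userspace side (assuming the card is card0; the hwmonN index is assigned at boot, and which files exist differs between GPUs and kernel versions):

        /* Sketch: find the hwmon node belonging to card0 and read the
         * standard hwmon temperature and fan files. */
        #include <glob.h>
        #include <stdio.h>

        static long read_long(const char *path)
        {
            long v = -1;
            FILE *f = fopen(path, "r");
            if (f) {
                if (fscanf(f, "%ld", &v) != 1)
                    v = -1;
                fclose(f);
            }
            return v;
        }

        int main(void)
        {
            glob_t g;
            char path[512];

            if (glob("/sys/class/drm/card0/device/hwmon/hwmon*", 0, NULL, &g) != 0 ||
                g.gl_pathc == 0) {
                fprintf(stderr, "no hwmon node found for card0\n");
                return 1;
            }

            /* temp1_input is millidegrees C, pwm1 is 0-255, fan1_input is RPM */
            snprintf(path, sizeof(path), "%s/temp1_input", g.gl_pathv[0]);
            printf("GPU temperature: %ld mC\n", read_long(path));

            snprintf(path, sizeof(path), "%s/pwm1", g.gl_pathv[0]);
            printf("fan PWM: %ld / 255\n", read_long(path));

            globfree(&g);
            return 0;
        }

    Fan control works the same way in reverse: switch pwm1_enable to manual and write a value into pwm1. A desktop settings app would sit on top of exactly these files.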



  • juno
    replied
    Is there, or will there be, functionality analogous to Catalyst/Crimson's "Overdrive" on Windows? That lets you alter core and memory clocks as well as power/temperature targets and fan speed.
    It would be nice to just have a userspace tool and e.g. an /etc/amdgpu.conf plain-text file or something for that purpose.



  • bridgman
    replied
    Yep... I believe the Linux drivers have async compute enabled as well (I think radeon exposed two of the queues by default, amdgpu either 2 or 8).

    Vulkan is the first graphics API to be able to make use of them on Linux AFAIK, but OpenCL can make use of multiple queues as well. We enable the HW scheduler by default when running the HSA stack on Linux, and I believe Windows is starting to use it as well.
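    To illustrate how an application actually sees those queues, here is a small Vulkan sketch (assuming a Vulkan loader and at least one device are present; error handling is trimmed). It enumerates the queue families and flags the compute-only one, which on GCN parts should be the one backed by the MEC compute rings:

        /* Sketch: list Vulkan queue families and mark the dedicated
         * compute family (compute but not graphics). */
        #include <stdio.h>
        #include <vulkan/vulkan.h>

        int main(void)
        {
            VkApplicationInfo app = {
                .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                .apiVersion = VK_API_VERSION_1_0,
            };
            VkInstanceCreateInfo ici = {
                .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                .pApplicationInfo = &app,
            };
            VkInstance instance;
            if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
                return 1;

            uint32_t ndev = 1;
            VkPhysicalDevice dev;
            vkEnumeratePhysicalDevices(instance, &ndev, &dev);
            if (ndev == 0)
                return 1;

            uint32_t nfam = 0;
            vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, NULL);
            VkQueueFamilyProperties fams[16];
            if (nfam > 16)
                nfam = 16;
            vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, fams);

            for (uint32_t i = 0; i < nfam; ++i) {
                int compute  = !!(fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT);
                int graphics = !!(fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT);
                printf("family %u: %u queue(s)%s\n", i, fams[i].queueCount,
                       (compute && !graphics) ? "  <- async compute" : "");
            }

            vkDestroyInstance(instance, NULL);
            return 0;
        }

    The queue count reported for that compute-only family is presumably where the 2-vs-8 difference between radeon and amdgpu would show up.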



  • juno
    replied
    Originally posted by bridgman View Post
    The MEC block has 4 independent threads, referred to as "pipes" in engineering and "ACEs" (Asynchronous Compute Engines) in marketing. One MEC => 4 ACEs, two MECs => 8 ACEs. Each pipe can manage 8 compute queues, or one of the pipes can run HW scheduler microcode which assigns "virtual" queues to queues on the other 3/7 pipes.
    Seems like Windows drivers are catching up to hardware at this point: https://community.amd.com/community/...haders-evolved

    There are statements from game developers about upcoming HLSL extensions, allowing even more console-like low-level access, including scheduling with ACEs. AMD is going to publish tools for that via GPUOpen.
    Do these extensions address the MEC's 'scheduling' mode? Is this functionality also coming to amdgpu, and do you know if equivalent Vulkan extensions are planned?

