AMDKFD Code Updated For Linux 4.14, More Changes Being Upstreamed


  • MrCooper
    replied
    Originally posted by Veerappan View Post

    At this time, does CIK have working UVD/VCE support, and is it just SI that's lacking it (my recollection could be outdated, or that could be part of DC that I haven't tried yet)?
    CIK has UVD/VCE, but no HDMI/DP audio (needs DC).

    SI has HDMI/DP audio (even without DC), but no UVD/VCE.

    Those are the main blockers for making amdgpu the default for them.

    I'm still running a 7850/Pitcairn in my Ryzen system, and I'm jealous of all the new ROCm stuff I don't get to play with until I upgrade to something newer.
    Not sure Pitcairn can usefully support ROCm.



  • Veerappan
    replied
    Originally posted by MrCooper View Post
    We're planning to switch the upstream Linux kernel default for CIK GPUs such as Kaveri from radeon to amdgpu when DC lands upstream, at which point amdgpu will have feature parity with radeon (currently amdgpu has no HDMI/DP audio support for CIK).
    Awesome. At this time, does CIK have working UVD/VCE support, and is it just SI that's lacking it (my recollection could be outdated, or that could be part of DC that I haven't tried yet)?

    I'm still running a 7850/Pitcairn in my Ryzen system, and I'm jealous of all the new ROCm stuff I don't get to play with until I upgrade to something newer.



  • MrCooper
    replied
    Originally posted by phred14 View Post
    As a Kaveri owner, I haven't bothered to spend the time migrating from RADEON to AMDGPU yet, which, as I understand it, is the "stock" path. However, it sounds as if AMDKFD is going to work with AMDGPU, not with RADEON. So if I want shiny, neat stuff like OpenCL 2.x, it sounds like I'll need to be moving from RADEON to AMDGPU.
    Note that the same is true already if you want to use Vulkan.

    Is there a suggested timeframe or milestone for that, or is it just when I get a round tuit?
    We're planning to switch the upstream Linux kernel default for CIK GPUs such as Kaveri from radeon to amdgpu when DC lands upstream, at which point amdgpu will have feature parity with radeon (currently amdgpu has no HDMI/DP audio support for CIK). In the meantime, as of kernel 4.13 you can easily try out the amdgpu driver via

    radeon.cik_support=0 amdgpu.cik_support=1

    on the kernel command line.
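
    If you want to confirm which driver actually claimed the GPU after rebooting with those parameters, a minimal sketch along these lines (the card0 index is an assumption; adjust it for your system) resolves the sysfs driver symlink:

    /* Minimal sketch: print which kernel driver (radeon or amdgpu) is
     * bound to card0. The card index is an assumption; adjust as needed. */
    #include <stdio.h>
    #include <unistd.h>
    #include <libgen.h>

    int main(void)
    {
        char target[256];
        ssize_t len = readlink("/sys/class/drm/card0/device/driver",
                               target, sizeof(target) - 1);
        if (len < 0) {
            perror("readlink");
            return 1;
        }
        target[len] = '\0';
        /* The symlink points into /sys/bus/pci/drivers/<name>. */
        printf("card0 driver: %s\n", basename(target));
        return 0;
    }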



  • agd5f
    replied
    Originally posted by Meteorhead View Post

    Does this have anything to do with the limit on the maximum number of GPUs? As far as I recall, that is a limitation of BIOSes being 32-bit and not being able to reserve 256 MB of RAM per GPU device (hence the usual 8-9 GPU limit). With EPYC having 128 PCIe 3.0 lanes, will amdkfd/ROCm allow ~16-GPU systems (8 dual-GPU cards in a single node) for latency-limited peer-to-peer communication? (As is the case with most lattice calculations, where several KB of data need to be sent, but as fast as possible.)
    No. The limit on most systems is the sbios. In most cases it limits the size of the PCI(e) bridge aperture to support 32-bit OSes. Server systems often have "large BAR" support, which provides a larger aperture to support more and larger PCI BARs. The number of PCIe lanes is a concern for bandwidth, but generally won't prevent use of the devices.
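
    To see what the sbios on a given board actually exposed, a minimal sketch along these lines (the PCI address below is hypothetical; substitute your GPU's address from lspci) reads the device's sysfs resource file and prints the size of each region:

    /* Minimal sketch: print the size of each PCI resource region (the
     * first six lines correspond to BAR0-BAR5). The PCI address is
     * hypothetical; substitute your GPU's. A ~256 MB VRAM BAR suggests
     * the small aperture; a multi-GB one suggests "large BAR" support. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        unsigned long long start, end, flags;
        int region = 0;
        /* Each line is "start end flags" in hex; an all-zero line is unused. */
        while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
            if (end > start)
                printf("region %d: %llu MB\n", region, (end - start + 1) >> 20);
            region++;
        }
        fclose(f);
        return 0;
    }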



  • phred14
    replied
    As a Kaveri owner, I haven't bothered to spend the time migrating from RADEON to AMDGPU yet, which, as I understand it, is the "stock" path. However, it sounds as if AMDKFD is going to work with AMDGPU, not with RADEON. So if I want shiny, neat stuff like OpenCL 2.x, it sounds like I'll need to be moving from RADEON to AMDGPU. Is there a suggested timeframe or milestone for that, or is it just when I get a round tuit?



  • Meteorhead
    replied
    Originally posted by bridgman View Post
    amdkfd provides a single entry point (/dev/kfd) with access to all GPUs in order to efficiently support a unified virtual address space across all GPUs and easy peer-to-peer addressing from one GPU to others.
    Does this have anything to do with the limit on the maximum number of GPUs? As far as I recall, that is a limitation of BIOSes being 32-bit and not being able to reserve 256 MB of RAM per GPU device (hence the usual 8-9 GPU limit). With EPYC having 128 PCIe 3.0 lanes, will amdkfd/ROCm allow ~16-GPU systems (8 dual-GPU cards in a single node) for latency-limited peer-to-peer communication? (As is the case with most lattice calculations, where several KB of data need to be sent, but as fast as possible.)



  • jrch2k8
    replied
    Originally posted by bridgman View Post
    Right.. we started amdkfd as a separate driver to let the compute part move quickly without destabilizing graphics, but probably will integrate them over time.

    The big thing is that the drm subsystem treats each GPU as an independent entity (with a separate driver instance) while amdkfd provides a single entry point (/dev/kfd) with access to all GPUs in order to efficiently support a unified virtual address space across all GPUs and easy peer-to-peer addressing from one GPU to others.
    Nice. Do you know if anyone has looked into using HMM for the AMD stack? Given that AMD's CPU division just gave us Threadripper/EPYC monsters that can allocate 1/4 TB of RAM (in the future, though), HMM could be very interesting when you need a massive amount of data to go through the GPU.

    I'm not sure, though, whether KFD already does that, or whether it can be hooked up with HMM in any case.



  • bridgman
    replied
    Right.. we started amdkfd as a separate driver to let the compute part move quickly without destabilizing graphics, but probably will integrate them over time.

    The big thing is that the drm subsystem treats each GPU as an independent entity (with a separate driver instance) while amdkfd provides a single entry point (/dev/kfd) with access to all GPUs in order to efficiently support a unified virtual address space across all GPUs and easy peer-to-peer addressing from one GPU to others.
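
    To make the single entry point concrete, a minimal sketch along these lines (assuming the kernel's linux/kfd_ioctl.h UAPI header is installed and amdkfd is loaded) opens /dev/kfd once, regardless of how many GPUs are present, and queries the interface version:

    /* Minimal sketch: open the single amdkfd entry point and query its
     * interface version via the GET_VERSION ioctl. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kfd_ioctl.h>

    int main(void)
    {
        int fd = open("/dev/kfd", O_RDWR | O_CLOEXEC);
        if (fd < 0) {
            perror("open /dev/kfd");
            return 1;
        }
        struct kfd_ioctl_get_version_args args = {0};
        if (ioctl(fd, AMDKFD_IOC_GET_VERSION, &args) == 0)
            printf("amdkfd interface version %u.%u\n",
                   args.major_version, args.minor_version);
        else
            perror("AMDKFD_IOC_GET_VERSION");
        close(fd);
        return 0;
    }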



  • jrch2k8
    replied
    Originally posted by GruenSein View Post
    It seems I have a fundamental problem understanding the driver situation. Why is it even necessary to have two kernel drivers (amdgpu and amdkfd) instead of one that allows all APIs like OpenGL, Vulkan, OpenCL, ... to run on top of it?
    AMDGPU is basically for graphics operations like OpenGL and Vulkan, while AMDKFD is more of a complementary driver for certain HSA compute operations used by things like OpenCL 2.0+. They don't substitute for each other but work together, as far as I understand.



  • GruenSein
    replied
    It seems I have a fundamental problem understanding the driver situation. Why is it even necessary to have two kernel drivers (amdgpu and amdkfd) instead of one that allows all APIs like OpenGL, Vulkan, OpenCL, ... to run on top of it?
    Last edited by GruenSein; 08-21-2017, 08:26 AM.

