
Radeon ROCm 5.0 Released With Some RDNA2 GPU Support


  • nuetzel
    replied
    Originally posted by xuhuisheng View Post

    The good news is that the requirement for PCIe 3.0 atomics has been removed from the driver; the 21.40 packaged driver no longer needs it, for some RDNA cards.

    https://github.com/RadeonOpenCompute..._device.c#L161
    Ah, good catch! - Luke, may the source...
    But, as you stated, only for RDNA+.
    Thanks for the pointer.
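    As an aside, whether a device or root port advertises PCIe atomics can be checked with `lspci -vvv`, which prints an `AtomicOpsCap` line under DevCap2. A small illustrative helper for reading those flags (the parsing is a sketch based on typical lspci output, not an official tool):

    ```python
    import re

    def atomic_ops_cap(lspci_vvv_output: str) -> dict:
        """Parse an `lspci -vvv` dump and report the AtomicOpsCap flags.

        Returns e.g. {'32bit': True, '64bit': True, '128bitCAS': False},
        or an empty dict if no AtomicOpsCap line is present.
        """
        m = re.search(r"AtomicOpsCap:([^\n]*)", lspci_vvv_output)
        if not m:
            return {}
        # Each capability is printed as a name followed by '+' or '-'.
        return {f.group(1): f.group(2) == "+"
                for f in re.finditer(r"(\S+?)([+-])", m.group(1))}

    sample = "DevCap2: Completion Timeout: Range AB, AtomicOpsCap: 32bit+ 64bit+ 128bitCAS-"
    print(atomic_ops_cap(sample))  # {'32bit': True, '64bit': True, '128bitCAS': False}
    ```

    Run against the real `lspci -vvv` output for the GPU and its upstream bridge to see which atomic operations the platform claims to support.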



  • xuhuisheng
    replied
    Originally posted by nuetzel View Post
    Now, Polaris without PCIe 3.0 atomics (PCIe 2.0), again. - Please.
    The good news is that the requirement for PCIe 3.0 atomics has been removed from the driver; the 21.40 packaged driver no longer needs it, for some RDNA cards.

    https://github.com/RadeonOpenCompute..._device.c#L161



  • satanas
    replied
    Originally posted by nuetzel View Post
    Now, Polaris without PCIe 3.0 atomics (PCIe 2.0), again. - Please.
    Ah, while it would be nice, even Vega 56/64 is considered a "legacy ASIC" by now (that's how they described it on a GitHub ticket, at least). So yeah, better pray for full RDNA2 support; it would be a miracle if RDNA1 happened at this point.



  • nuetzel
    replied
    Now, Polaris without PCIe 3.0 atomics (PCIe 2.0), again. - Please.



  • davide445
    replied
    https://rocmdocs.amd.com/en/latest/FAQ/FAQ_HIP.html
    Didn't understand whether this is up to date with the 5.0 release, in terms of supported and unsupported features.
    Interested in graphics interop.



  • billyswong
    replied
    Originally posted by boboviz View Post

    Rocm is NOT for smartphone. AMD is very clear about the use of Rocm.
    First: data center
    Second: high-end gpu
    If "General Purpose GPU" is only supposed to be used for data centers and the selected premium high-end workstations (a moving narrow range of not too old and not too new), then it is not "General Purpose".

    CUDA is far more "General Purpose" than ROCm in this respect. It's no wonder many GPU computation frameworks are written for CUDA only; it is about market share and barriers to entry.

    I took smartphones as an example because they show there are applications of GPU-style computation outside data centers and selected high-end workstations. ROCm being restrictive and having a high barrier to entry is an issue to be solved, not an excuse for its shortcomings.



  • boboviz
    replied
    Originally posted by billyswong View Post
    Mobile phone makers advertise their new chips contain neural co-processors. 99.999% of phone users don't write neural software either.
    "Consumers" by definition don't "make" stuff. They consume GPGPU applications if they are widely supported and available.
    Rocm is NOT for smartphone. AMD is very clear about the use of Rocm.
    First: data center
    Second: high-end gpu



  • boboviz
    replied
    Originally posted by Keith Myers View Post
    Oh, I don't know . . . . . how about 4 million users and 205K hosts according to today's BoincStats BOINC combined stats. That is not a small number.
    Wait, wait, I have been a BOINC volunteer since... I don't remember.
    In the BOINC world I don't think ROCm can change the situation:
    - projects with GPU support (Milkyway, Einstein, etc.) already have CUDA and OpenCL.
    - projects without GPU support will continue not to have it.



  • gobenji
    replied
    In my experience on Debian Sid, an easy way to get going with ROCm is to use the rocm/tensorflow-autobuilds Docker image:
    Code:
    docker run --rm -it --name rocm --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined rocm/tensorflow-autobuilds
    With a 6700 XT I've found that OpenCL works but TensorFlow does not.
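    The `--device=/dev/kfd --device=/dev/dri` flags in that command pass through the device nodes ROCm's user space needs. A quick illustrative pre-flight check (my own sketch, not part of any ROCm tooling) that those nodes exist on the host before launching the container:

    ```python
    import os

    def rocm_device_nodes_present() -> dict:
        """Report whether the device nodes ROCm needs exist on the host.

        /dev/kfd is the compute interface exposed by the amdgpu/kfd driver;
        /dev/dri holds the render nodes. Both must be passed into the container.
        """
        return {path: os.path.exists(path) for path in ("/dev/kfd", "/dev/dri")}

    print(rocm_device_nodes_present())
    ```

    If `/dev/kfd` is missing, the amdgpu kernel driver is not loaded (or the kernel lacks KFD support) and no container setup will help.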



  • Spacefish
    replied
    5700XT (RDNA1) with tensorflow-rocm build from source (for rocm 5.0)

    Code:
    Python 3.9.10 (main, Jan 16 2022, 17:12:18)
    [GCC 11.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    >>> tf.add(1, 2).numpy()
    "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
    Aborted (core dumped)
    It could probably work if I built ROCm from source with the gfx1010 backend enabled. But building ROCm from source is a pain in the b**** IMHO (at least it was for the older versions, where I did it once).

    Edit: HIP examples work fine / HIP works out of the box for gfx1010
    Last edited by Spacefish; 11 February 2022, 09:58 PM.
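    A commonly reported community workaround for `hipErrorNoBinaryForGpu` on gfx1010 cards, instead of rebuilding from source, is the `HSA_OVERRIDE_GFX_VERSION` environment variable, which asks the ROCm runtime to treat the card as a different GFX target (here gfx1030, for which binaries ship). This is not officially supported by AMD, and whether it works for this particular TensorFlow build is an assumption:

    ```shell
    # Unsupported community workaround: report the gfx1010 card as gfx1030
    # so the runtime loads the shipped gfx1030 code objects.
    export HSA_OVERRIDE_GFX_VERSION=10.3.0

    # Then rerun the failing script in the same shell, e.g.:
    # python3 -c "import tensorflow as tf; print(tf.add(1, 2).numpy())"
    echo "$HSA_OVERRIDE_GFX_VERSION"
    ```

    Since the hardware then runs code compiled for a different ISA, some kernels may still misbehave; treat it as a stopgap, not a fix.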

