OpenCL 1.2 Support Merged For Mesa's Gallium3D Clover While OpenCL 3.0 Is Being Tackled

  • illwieckz
    replied
    Originally posted by pal666 View Post
    so you are trying to compete against fglrx? recommended amd driver is rocm
    Why are you saying so many harmful things? No one is trying to compete against fglrx; fglrx is mentioned because, unfortunately, it is still the only working stack today for some OpenCL hardware, with no alternative. It's a hard fact, but it's a fact, and we have to deal with it.

    ROCm isn't even ready for all GCN hardware; even some GCN hardware that is claimed to be supported (GFX7/Hawaii) is known not to work with ROCm as soon as the PCIe version isn't the right one. It used to work but has been broken for years now. And we are talking about some of the most powerful compute cards of their generation.

    i also don't use nvidia, but it doesn't preclude me from knowing that nouveau has an opencl driver.
    You seem to be missing one very important thing: nouveau's OpenCL support is disabled by default, not advertised, likely not distributed, not conformant, and not usable. At this point it's only something nouveau developers deal with.

    It's worse than ROCm, which at least is likely to work on a very small selection of GPUs (GCN3+ flagships like Vega or Radeon VII).

    Unlike with ROCm, there is no way for a Linux user who doesn't own OpenCL hardware yet to choose which piece of Nvidia hardware to buy in order to get a fully working free OpenCL stack.

    On the AMD side, ROCm only supports a handful of cards. On the libclc side, hardware support is wider but feature support is incomplete, so various applications will not use OpenCL at all (including image software such as Darktable). For people owning hardware unsupported by ROCm and running software that requires features not yet implemented in libclc, there is no free open source OpenCL stack available, but there is a working closed source OpenCL stack, and it works.

    Originally posted by pal666 View Post
    i'm not sure which closed implementations you are referring to
    So why are you acting like there is nothing outside of ROCm or libclc? This builds denial, and it's harmful to Linux OpenCL users, who can be misled into believing they can buy any AMD hardware, only to discover that ROCm does not work on what they spent their money on.

    And why are you acting like there is no closed implementation? This builds denial too, and it's harmful to Linux OpenCL users, who can be misled into believing their AMD hardware has no OpenCL support at all, or only incomplete, unusable implementations.

    It's really sad that the AMD OpenCL implementation supporting the widest range of hardware is closed, but it exists, it is a solution for many people, and it is the only solution for many AMD hardware owners.

    Why are you building denial around those solutions and around those people's needs, and why do you write misleading statements that are likely to hurt AMD hardware owners and Linux users? What do you gain from the denial and the hurt?
    Last edited by illwieckz; 20 October 2020, 02:56 AM.



  • pal666
    replied
    Originally posted by illwieckz View Post
    As far as I know, that's the status for AMD OpenCL on Linux:
    • libclc r600
      open, incomplete, TeraScale2+
    • libclc amdgcn
      open, incomplete, GCN, RDNA
    • AMDGPU-PRO legacy
      closed, complete, GCN 1+
    • AMDGPU-PRO
      closed, complete, GCN 3+?, RDNA?
    • ROCm
      open, complete?, select of GCN3+, RDNA?
    • fglrx-era AMD APP
      closed and requires very old kernel, complete, probably only option for TeraScale 1
    • pocl with HSA
      open, early state, probably not usable
    so you are trying to compete against fglrx? recommended amd driver is rocm
    Originally posted by illwieckz View Post
    I don't understand
    it's not the first time https://www.phoronix.com/forums/foru...80#post1201480
    Originally posted by illwieckz View Post
    while you clearly state you don't use compute
    i also don't use nvidia, but it doesn't preclude me from knowing that nouveau has an opencl driver.



  • illwieckz
    replied
    Originally posted by pal666 View Post
    i'm not sure which closed implementations you are referring to. afaik intel and amd's opencl is open
    As far as I know, that's the status for AMD OpenCL on Linux:
    • libclc r600
      open, incomplete, TeraScale2+
    • libclc amdgcn
      open, incomplete, GCN, RDNA
    • AMDGPU-PRO legacy
      closed, complete, GCN 1+
    • AMDGPU-PRO
      closed, complete, GCN 3+?, RDNA?
    • ROCm
      open, complete?, select of GCN3+, RDNA?
    • fglrx-era AMD APP
      closed and requires very old kernel, complete, probably only option for TeraScale 1
    • pocl with HSA
      open, early state, probably not usable
    More knowledge here.
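
    For anyone wondering which of these stacks is actually installed and exposed on a given system, a quick sanity check (assuming the usual Khronos ICD loader layout and that the clinfo tool is installed; both are assumptions, not part of the list above) is to look at the registered ICD files and at the platforms the loader reports:
    Code:
    # list the OpenCL ICDs registered on the system (Clover, ROCm, AMDGPU-PRO, ...)
    ls /etc/OpenCL/vendors/
    # list the platforms and devices the ICD loader actually exposes
    clinfo -l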

    Originally posted by pal666 View Post
    i'm not sure […] afaik
    I don't understand why you continuously interfere in OpenCL-related threads while you clearly state you don't use compute. You have been warned multiple times to stop polluting threads about a topic you obviously know so little about that you don't even notice what may be wrong in your own words, and this new sentence of yours clearly shows you did not even read the answer I wrote in a previous topic where I told you you lacked knowledge. I gave you that knowledge, you didn't pick it up, yet you continue to interfere in OpenCL-related threads with wrong assumptions despite people having given you multiple chances to acquire that knowledge.

    It's OK for you to talk in OpenCL threads if you first pick up the knowledge people give you, so you can then say something that helps.



  • s_j_newbury
    replied
    Originally posted by tuxd3v View Post
    I, for instance, have PCIe 3.0 but don't have PCIe atomic operations supported.
    This has nothing to do with _only_ PCIe 3.0.
    Good point. Yes, I meant 3.0 + atomics. I'm still on PCIe 2 because I have no intention (or financial ability) to upgrade my current system just to get PCIe atomic operations support, even though I would like to use, or even contribute to, ROCm. I'm still using a high-end Piledriver system; despite being easily outclassed by modern Ryzen machines, with optimized code it's generally fast enough, if rather power hungry.



  • tuxd3v
    replied
    Originally posted by oleid View Post
    AFAIK no atomics are required from Vega on. Not sure why Polaris would still need them.
    But isn't PCIe 3.0 common nowadays? I mean, the PCIe 3.0 spec was released in 2010.
    I, for instance, have PCIe 3.0 but don't have PCIe atomic operations supported.
    This has nothing to do with _only_ PCIe 3.0.
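
    If you want to check what your own machine reports, one rough way (assuming a reasonably recent lspci and an amdgpu/ROCm-era kernel; exact field names and messages can vary) is to look for the PCIe AtomicOps bits and for kernel messages about atomics:
    Code:
    # PCIe AtomicOps capability/control bits advertised by devices and bridges
    sudo lspci -vvv | grep -i atomicops
    # kernel messages mentioning (missing) PCIe atomics support
    dmesg | grep -i atomic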



  • Veerappan
    replied
    Originally posted by aufkrawall View Post
    It indeed works here on Polaris, which is a great achievement. Though ppd are still way lower than with amdgpu-pro Orca CL driver.
    clinfo also still reports CL 1.1 for whatever reason.
    If the program you want to run doesn't require printf() support, you can force CL 1.2 using the following environment variables:
    Code:
    CLOVER_PLATFORM_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_CLC_VERSION_OVERRIDE=1.2  ./your_program
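
    To confirm the overrides actually take effect, you can run clinfo with the same variables set and check the reported versions (assuming clinfo is installed):
    Code:
    # the platform, device and CL C versions should now show 1.2
    CLOVER_PLATFORM_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_VERSION_OVERRIDE=1.2 CLOVER_DEVICE_CLC_VERSION_OVERRIDE=1.2 clinfo | grep -i version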



  • aufkrawall
    replied
    But does folding@home even work with rocm? If not, why not, and why does clover already work? I'd assume that you can simply forget about rocm for consumer/prosumer purposes. It's such a relief that work is going on for clover...



  • oleid
    replied
    Originally posted by s_j_newbury View Post

    Polaris is probably more common, and needs PCIe 3.0 for ROCm unless that's changed?
    AFAIK no atomics are required from Vega on. Not sure why Polaris would still need them.
    But isn't PCIe 3.0 common nowadays? I mean, the PCIe 3.0 spec was released in 2010.
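
    As a side note, if you are unsure which PCIe generation your GPU link actually negotiated, lspci can show it (assuming a recent lspci; 8 GT/s corresponds to PCIe 3.0):
    Code:
    # LnkCap = what the slot/device can do, LnkSta = what was actually negotiated
    sudo lspci -vv | grep -iE 'lnkcap:|lnksta:'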



  • s_j_newbury
    replied
    Originally posted by oleid View Post

    Some? Vega and newer should work everywhere.
    Polaris is probably more common, and needs PCIe 3.0 for ROCm unless that's changed?



  • oleid
    replied
    Originally posted by LinAGKar View Post
    Only ROCm, which only works on some systems.
    Some? Vega and newer should work everywhere.

