Ethereum & OpenCL: ROCm vs. AMDGPU-PRO 17.40


  • phoronix
    started a topic Ethereum & OpenCL: ROCm vs. AMDGPU-PRO 17.40

    Phoronix: Ethereum & OpenCL: ROCm vs. AMDGPU-PRO 17.40

    Following this week's Ethereum and OpenCL benchmarks with Radeon vs. NVIDIA using the latest Linux drivers, some premium supporters requested a fresh AMDGPU-PRO vs. ROCm comparison. So here are a couple of those OpenCL benchmarks of AMDGPU-PRO vs. ROCm on different Polaris / Fiji and Vega GPUs.

    http://www.phoronix.com/vr.php?view=25425

  • bridgman
    replied
Not sure if we still have the PCIe atomics requirement for the 580 - I believe it was relaxed for Vega, but I'm not sure about earlier parts.

    Ignoring atomics for a minute, you would need something like:

- latest WIP 4.18 kernel (580 support went into 4.17, but fixes kept going in after that),

    - Felix's WIP thunk with upstream-compatible IOCTL calls,
    https://github.com/RadeonOpenCompute...d/drm-next-wip

    - the rest of the ROCm stack above ROCT (so ROCR and up).

I haven't tried that myself, but I think it should work. Without the matching thunk, nothing will work on an upstream kernel because the IOCTLs won't line up.

All of this should also land in the ROCm 1.9 release fairly soon.
    Last edited by bridgman; 07-30-2018, 11:09 AM.
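A minimal preflight sketch of the checklist above. The 4.18 threshold and the thunk/IOCTL dependency are from the post; the version-compare helper and messages are illustrative, and GNU `sort -V` is assumed:

```shell
#!/bin/sh
# Preflight check for running ROCm on an upstream kernel (sketch).

kver_ok() {
  # true if kernel version $1 is at least $2 (version-aware compare)
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=4.18
running=$(uname -r | cut -d- -f1)

if kver_ok "$running" "$required"; then
  echo "kernel $running is new enough for upstream RX 580 KFD support"
else
  echo "kernel $running is older than $required; use the WIP branch"
fi

# ROCR talks to the kernel through /dev/kfd via the thunk (ROCT);
# with a mismatched thunk the IOCTLs won't line up and this is absent.
test -e /dev/kfd && echo "/dev/kfd present" || echo "/dev/kfd missing"
```

This only checks the kernel side; the matching ROCT build and the rest of the stack (ROCR and up) still have to be installed as described above.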

  • lucrus
    replied
So, are amdgpu (not -pro) and a 4.18 kernel on any distro enough to run ethminer with ROCm on my hardware? It happens to be a Sabertooth 990FX R2.0 motherboard (no PCIe atomics AFAIK), an AMD FX-8350 CPU and an RX 580 GPU.
    Last edited by lucrus; 07-30-2018, 09:38 AM.
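One way to check the "no PCIe atomics AFAIK" part: recent `lspci -vvv` prints an `AtomicOpsCap:` entry under DevCap2 for ports that advertise PCIe atomic ops. A hypothetical sketch (the exact field text can vary with the pciutils version, and the sample line is fabricated for illustration):

```shell
#!/bin/sh
# Check lspci -vvv output for advertised PCIe atomic-op support (sketch).

has_atomics() {
  # $1: lspci -vvv text for one device; true if 32- and 64-bit
  # atomic support is advertised in the DevCap2 section
  printf '%s\n' "$1" | grep -q 'AtomicOpsCap: 32bit+ 64bit+'
}

# On real hardware you would feed it the root port's output, e.g.:
#   sudo lspci -vvv -s 00:01.0
sample='DevCap2: Completion Timeout: Not Supported, AtomicOpsCap: 32bit+ 64bit+ 128bitCAS-'

if has_atomics "$sample"; then
  echo "PCIe atomics advertised"
else
  echo "PCIe atomics not advertised"
fi
```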

  • bridgman
    replied
    Originally posted by lucrus View Post
Did that happen? I haven't kept up to date with ROCm news, and I'm curious to know if current kernels have those bits mainlined.
    IIRC support for most GPUs went into 4.17, Vega10 support went into 4.18, and Raven Ridge support is lined up for 4.19.

  • lucrus
    replied
    Originally posted by Qaridarium View Post
But they are working hard and plan this for Linux kernel 4.15...
Did that happen? I haven't kept up to date with ROCm news, and I'm curious to know if current kernels have those bits mainlined.

  • r1348
    replied
    Originally posted by Qaridarium View Post

Why not just dual-use your hardware? Game when you want to, and do some mining when you don't.
This is profitable even with relatively high electricity costs. For example, my 2 systems with 6 Vega 64s cost me 200€ in electricity per month but produce a 310€ output per month.
    Take a look at this very nice list: http://www.worldatlas.com/articles/e...the-world.html
    I'm right there at #1.
Also, I'm not really interested in fairy money that can crash at any given moment, or be easily manipulated by a few "whales". Not to mention its environmental cost.

  • Qaridarium
    replied
    Originally posted by jstefanop View Post

I think most of his frustration is from the wasted OpenGL efforts over the years on Linux, with OpenCL efforts only recently gaining steam. I agree with him: no one seriously games on Linux, and with all the effort spent over the years getting 3D on par with Windows, they could have built an amazing Linux compute software stack that could be rivaled by no one (you could even say ROCm is almost at that point, even though it's only a year or so old).

The lack of any control over our GPUs is the most frustrating part on the compute side. We are still nowhere near fglrx's level of control (full CLI control over clocks/voltages). They have exposed basic clocking over /sys, but this is usually buggy, and if you want real control you have to hack the open-source bits to do what you want, which is super annoying.

I want to be an expert in optimizing the compute applications I run, not an expert in figuring out how AMD's driver works... that's AMD's job, and that's why we pay 1K+ for GPUs. I literally had an AMD dev just point me at the Linux kernel and pretty much say "here, it's open source now, so code what you want it to do". If this is the stance AMD takes with their open-source initiative (i.e. offloading driver work they should do onto end users)... it won't be a good outcome...
I think you have a misconception about driver development. Do you really think OpenGL work slows down OpenCL work? That is completely wrong, because OpenCL is made by a completely different team with a completely different skill set than the OpenGL team... This means you really have no point, because you cannot just move an OpenGL developer to the OpenCL team; it is a completely different task needing a completely different skill set.

    "they could have built an amazing linux compute software stack that cold be rivaled by no one"

But ROCm is exactly this. My six ~600€ Vega 64 GPUs beat an ~800€ Nvidia GTX 1080 Ti at Ethereum mining with the ROCm stack...
ROCm is open source; even in the closed-source AMDGPU-PRO driver, the ROCm part is open source...

The problem is that, for most users, this ROCm code is not in the mainline Linux projects: Clover's OpenCL has no ROCm code, and the mainline Linux kernel right now has none of the ROCm stack's kernel code.

But they are working hard and plan this for Linux kernel 4.15...

This means your dream comes true... it is just not yet done in mainline Linux...

  • jstefanop
    replied
    Originally posted by Qaridarium View Post
Marc Driftmeyer, why do you blame AMD for upstreaming problems THEY CANNOT CONTROL? They just cannot control what Linus Torvalds does... In my view, they are doing the best they can... really.
I think most of his frustration is from the wasted OpenGL efforts over the years on Linux, with OpenCL efforts only recently gaining steam. I agree with him: no one seriously games on Linux, and with all the effort spent over the years getting 3D on par with Windows, they could have built an amazing Linux compute software stack that could be rivaled by no one (you could even say ROCm is almost at that point, even though it's only a year or so old).

The lack of any control over our GPUs is the most frustrating part on the compute side. We are still nowhere near fglrx's level of control (full CLI control over clocks/voltages). They have exposed basic clocking over /sys, but this is usually buggy, and if you want real control you have to hack the open-source bits to do what you want, which is super annoying.

I want to be an expert in optimizing the compute applications I run, not an expert in figuring out how AMD's driver works... that's AMD's job, and that's why we pay 1K+ for GPUs. I literally had an AMD dev just point me at the Linux kernel and pretty much say "here, it's open source now, so code what you want it to do". If this is the stance AMD takes with their open-source initiative (i.e. offloading driver work they should do onto end users)... it won't be a good outcome...
    Last edited by jstefanop; 10-30-2017, 05:28 PM.
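For reference, the "basic clocking over /sys" being discussed is the stock amdgpu interface (`pp_dpm_sclk` and `power_dpm_force_performance_level` under the card's device directory); which files exist varies by kernel version and GPU. A sketch, with the state-parsing helper and sample output invented for illustration:

```shell
#!/bin/sh
# Sketch of the amdgpu sysfs clock interface (assumes card0 is the AMD GPU).

card=/sys/class/drm/card0/device

# pp_dpm_sclk lists one state per line, e.g. "2: 1138Mhz *",
# with the active state marked by an asterisk.
current_state() {
  printf '%s\n' "$1" | awk '/\*/ { sub(":", "", $1); print $1 }'
}

# On a machine with an amdgpu card (and as root) you would pin a
# clock state like this; guarded so it is a no-op elsewhere:
if [ -w "$card/power_dpm_force_performance_level" ]; then
  echo manual > "$card/power_dpm_force_performance_level"
  echo 3 > "$card/pp_dpm_sclk"
fi

sample='0: 300Mhz
1: 608Mhz
2: 1138Mhz *'
current_state "$sample"    # prints the index of the active state: 2
```

Voltage control is a separate story (`pp_od_clk_voltage` on newer kernels), which is part of what the post is complaining about.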

  • Qaridarium
    replied
Marc Driftmeyer, why do you blame AMD for upstreaming problems THEY CANNOT CONTROL? They just cannot control what Linus Torvalds does... In my view, they are doing the best they can... really.

  • Qaridarium
    replied
    Originally posted by Kano View Post
    Marc Driftmeyer

Why don't you use Nvidia cards for OpenCL/CUDA? I do not understand why you seem to need to use AMD hardware just because you invested some money; that is a completely independent issue. If your solution is to buy a Mac to get a better OpenCL stack, that's fine, but somehow it seems that you have got some weird ideas about how to solve issues...
Also, this blame only applies to the upstream, fully open stack... In my view, the closed-source 17.40 AMDGPU-PRO driver also ships the same open-source ROCm OpenCL stack...

So why not just install an LTS distro and use the closed-source driver? Its OpenCL part is no less open source than the fully open stack's.

    just my point of view
