
PoCL 5.0-RC1 Released With Experimental OpenCL For Networked Systems


  • PoCL 5.0-RC1 Released With Experimental OpenCL For Networked Systems

    Phoronix: PoCL 5.0-RC1 Released With Experimental OpenCL For Networked Systems

    PoCL 5.0-RC1 is out today as the newest feature release of this "Portable Computing Language" implementation, which allows OpenCL code to run on CPUs as well as on other back-ends such as NVIDIA CUDA, AMD ROCm, and other LLVM targets...


  • #2
    PoCL has ROCm and CUDA backends? This is getting interesting: PoCL vs RustiCL vs some AMD OpenCL implementation (aren't there more than one?)
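
    Whatever drivers a given PoCL build was compiled with just show up as devices under its platform, so listing them with the plain Khronos host API (or clinfo) answers that. A minimal sketch in C, nothing PoCL-specific assumed:

    Code:
      /* List every OpenCL platform and device the ICD loader can see.
       * With PoCL installed, each driver it was built with (CPU, CUDA, ...)
       * appears as a device under its platform. */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id plats[8];
          cl_uint np = 0;
          clGetPlatformIDs(8, plats, &np);

          for (cl_uint p = 0; p < np; ++p) {
              char pname[256];
              clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
              printf("Platform: %s\n", pname);

              cl_device_id devs[16];
              cl_uint nd = 0;
              if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &nd) != CL_SUCCESS)
                  continue;

              for (cl_uint d = 0; d < nd; ++d) {
                  char dname[256];
                  clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                  printf("  %u: %s\n", d, dname);
              }
          }
          return 0;
      }

    Build with gcc list.c -lOpenCL; it prints one line per platform and device.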

    • #3
      That's very good! I don't get why this hasn't been a thing for a decade already! I tried to revive CLara, but the task of implementing and maintaining such an "OpenCL over the network" feature will be better done by the PoCL guys.

      So, soon we will be able to select, in OpenCL apps, GPUs and CPUs from other computers on the network. It will actually make CPU implementations of OpenCL useful, as an easy way to dispatch compute to other computers. In the meantime, Blender still doesn't allow mixing GPUs from different hardware brands to compute a single render, because you have to choose between CUDA, oneAPI or HIP, and can only use one of them at a time.

      • #4
        Originally posted by illwieckz View Post
        That's very good! I don't get why this hasn't been a thing for a decade already!
        There kind of has been but it keeps getting broken. People's need for it tends to come in waves and it gets neglected again and starts to rot until the next time.

        My thesis was basically to implement a distributed, network-aware OpenGL (kind of the opposite of VirtualGL). I was amazed at how much of this stuff already existed, but in such a derelict form.

        • #5
          Originally posted by kpedersen View Post
          There kind of has been but it keeps getting broken. People's need for it tends to come in waves and it gets neglected again and starts to rot until the next time.
          That's sad, but yeah…

          My thesis was basically to implement a distributed, network-aware OpenGL (kind of the opposite of VirtualGL). I was amazed at how much of this stuff already existed, but in such a derelict form.
          Very interesting, do you have a link to that thesis document?

          • #6
            Yet again I am excited but confused.

            Does this allow the simultaneous use of multiple OpenCL backends for networked parallel processing?

            Utilization of all available resources in a network, including a shared VRAM pool for tasks that exceed a single or dual local GPU setup, is particularly fascinating.

            Very curious how the scheduler works to determine how to best distribute the workload... Especially considering factors like the computational capabilities / CL extensions / FP Performance / etc of each node and network latency.

            • #7
              Originally posted by Eirikr1848 View Post
              Utilization of all available resources in a network, including a shared VRAM pool for tasks that exceed a single or dual local GPU setup, is particularly fascinating.
              Not sure exactly what you're asking, but it's certainly not going to support SVM across the network.
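
              One way to make that gap concrete is to query the SVM capability bits each device reports; a network-remote device would be expected to report little or nothing here, since fine-grain SVM means the kernel dereferences raw pointers in the host's address space. A rough sketch using only the standard OpenCL 2.0+ query, nothing PoCL-specific:

              Code:
                /* Print the shared-virtual-memory capabilities of every visible device.
                 * Devices without SVM (or pre-2.0 drivers that reject the query)
                 * simply show all zeros. */
                #include <stdio.h>
                #define CL_TARGET_OPENCL_VERSION 300
                #include <CL/cl.h>

                int main(void)
                {
                    cl_platform_id plats[8];
                    cl_uint np = 0;
                    clGetPlatformIDs(8, plats, &np);

                    for (cl_uint p = 0; p < np; ++p) {
                        cl_device_id devs[16];
                        cl_uint nd = 0;
                        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &nd) != CL_SUCCESS)
                            continue;

                        for (cl_uint d = 0; d < nd; ++d) {
                            char name[256];
                            cl_device_svm_capabilities caps = 0;
                            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                            clGetDeviceInfo(devs[d], CL_DEVICE_SVM_CAPABILITIES, sizeof caps, &caps, NULL);
                            printf("%-40s coarse:%d fine-buffer:%d fine-system:%d\n", name,
                                   !!(caps & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER),
                                   !!(caps & CL_DEVICE_SVM_FINE_GRAIN_BUFFER),
                                   !!(caps & CL_DEVICE_SVM_FINE_GRAIN_SYSTEM));
                        }
                    }
                    return 0;
                }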

              Originally posted by Eirikr1848 View Post
              ​Very curious how the scheduler works to determine how to best distribute the workload... Especially considering factors like the computational capabilities / CL extensions / FP Performance / etc of each node and network latency.​​
              Last I checked, OpenCL didn't do load-balancing for you. That's still left up to the application.

              • #8
                Originally posted by coder View Post
                Not sure exactly what you're asking, but it's certainly not going to support SVM across the network.


                Last I checked, OpenCL didn't do load-balancing for you. That's still left up to the application.
                That's what I was wondering, thank you. It made no sense to have network SVM and auto load-balancing, which would be cool, albeit impractical.

                Thanks for clarifying

                • #9
                  Originally posted by Eirikr1848 View Post
                  That's what I was wondering, thank you. It made no sense to have network SVM and auto load-balancing, which would be cool, albeit impractical.
                  First, take what I say with a grain of salt. Pekka is one of the authors and would be able to answer your questions with certainty.

                  Second, it's probably not impossible to have an OpenCL stack do some sort of load-balancing. I'm guessing you could make a special kind of device that maps to a collection of devices or network nodes. However, I'd expect most developers would be able to load-balance and pipeline more effectively - especially since data movement could be a bottleneck if the load balancer is too naive.

                  Finally, it's probably most helpful to consider what this stands to replace, which would be low-level APIs like MPI (Message Passing Interface). Compared to that, OpenCL is a big step up!
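
                  To make the do-it-yourself load balancing above concrete, here is roughly what it looks like: one context over every device the platform exposes (which, with a remote driver, could include devices on other machines), one queue per device, and a naive equal split of the index space. A sketch only - error checking dropped, and the static partition is exactly what a smarter scheduler would replace with something throughput- and transfer-aware:

                  Code:
                    /* Split one data-parallel job across every device in the first
                     * platform. OpenCL gives you the queues; the partitioning policy
                     * is entirely up to the host code. */
                    #include <stdio.h>
                    #include <stdlib.h>
                    #define CL_TARGET_OPENCL_VERSION 120
                    #include <CL/cl.h>

                    static const char *src =
                        "__kernel void scale(__global float *buf, float k) {\n"
                        "    buf[get_global_id(0)] *= k;\n"
                        "}\n";

                    int main(void)
                    {
                        enum { N = 1 << 20, MAX_DEV = 8 };
                        float *data = malloc(N * sizeof *data);
                        for (int i = 0; i < N; ++i) data[i] = 1.0f;

                        cl_platform_id plat;
                        clGetPlatformIDs(1, &plat, NULL);

                        cl_device_id dev[MAX_DEV];
                        cl_uint ndev = 0;
                        clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, MAX_DEV, dev, &ndev);
                        if (ndev == 0) return 1;

                        cl_context ctx = clCreateContext(NULL, ndev, dev, NULL, NULL, NULL);
                        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
                        clBuildProgram(prog, ndev, dev, "", NULL, NULL);
                        cl_kernel kern = clCreateKernel(prog, "scale", NULL);

                        /* Naive static partition: an equal slice per device. */
                        size_t chunk = N / ndev;
                        cl_command_queue q[MAX_DEV];
                        cl_mem buf[MAX_DEV];
                        float k = 2.0f;

                        for (cl_uint d = 0; d < ndev; ++d) {
                            size_t count = (d == ndev - 1) ? N - d * chunk : chunk;
                            q[d] = clCreateCommandQueue(ctx, dev[d], 0, NULL);
                            buf[d] = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                                    count * sizeof(float), data + d * chunk, NULL);
                            /* Argument values are captured at enqueue time, so one kernel
                             * object can be reused across queues from the host thread. */
                            clSetKernelArg(kern, 0, sizeof buf[d], &buf[d]);
                            clSetKernelArg(kern, 1, sizeof k, &k);
                            clEnqueueNDRangeKernel(q[d], kern, 1, NULL, &count, NULL, 0, NULL, NULL);
                            clEnqueueReadBuffer(q[d], buf[d], CL_FALSE, 0, count * sizeof(float),
                                                data + d * chunk, 0, NULL, NULL);
                        }
                        for (cl_uint d = 0; d < ndev; ++d) clFinish(q[d]);

                        printf("data[0] = %.1f (expect 2.0)\n", data[0]);
                        free(data);
                        return 0;
                    }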

                  • #10
                    Originally posted by illwieckz View Post
                    Very interesting, do you have a link to that thesis document?
                    Sure, you can grab it here:



                    That is quite long-winded. You might also prefer the more succinct paper here, on just the networking part:



                    Some ideas were good, some were less so. Let me know if you have any questions.
                    Last edited by kpedersen; 09 December 2023, 01:11 PM.
