Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available

  • Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available

    Phoronix: Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available

    Ahead of 4th Gen Intel Xeon Scalable, Xeon CPU Max, and Intel Data Center GPUs shipping, Intel today announced the oneAPI 2023 tools release...

  • #2
    Hope it proves to be competitive with CUDA and makes its way into things like DaVinci Resolve soon

    • #3
      Originally posted by Azrael View Post
      Hope it proves to be competitive with CUDA and makes its way into things like DaVinci Resolve soon
      Unfortunately it doesn't look like it will be competitive with CUDA. oneAPI actually requires CUDA to be installed on the system to talk to NVIDIA; it's more of yet another "layer". To be honest I don't know how far it will go outside of Intel. If you're going to build something GPU-accelerated, there is basically no reason to use oneAPI, as you would most likely go NVIDIA native... or, if you hate your life, ROCm.

      • #4
        Am I correct to assume oneAPI wraps OpenCL and thus requires ROCm and CUDA installations?
        If this is the case, I'm wondering how fast it really is compared to directly using CUDA/HIP.

        Nevertheless, I really like that oneAPI provides support for non-Intel stuff.

        • #5
          Originally posted by zexelon View Post

          Unfortunately it doesn't look like it will be competitive with CUDA. oneAPI actually requires CUDA to be installed on the system to talk to NVIDIA; it's more of yet another "layer". To be honest I don't know how far it will go outside of Intel. If you're going to build something GPU-accelerated, there is basically no reason to use oneAPI, as you would most likely go NVIDIA native... or, if you hate your life, ROCm.
          I presume that the idea is to not be tied to one specific underlying technology...

          • #6
            Would it help if Intel handed oneAPI over to Khronos, so that it would be more widely used and more broadly adopted?
            I think if Microsoft had invented oneAPI, every CPU, GPU, and other hardware manufacturer would support it, and Microsoft would build a lot of programs on top of it.
            But it was created by Intel, so it comes from a competitor to the other CPU and GPU manufacturers. And Intel itself doesn't make many programs that would be based on oneAPI.
            So oneAPI is platform-independent and open source, but Intel is nearly the only supporter of the API.

            • #7
              To be fair, this is SYCL-based, and SYCL is a Khronos standard. Intel didn't create SYCL by any means; the oneAPI open specification simply includes SYCL. There are SYCL compilers from companies like Huawei and Xilinx (now AMD) that also compile SYCL to run on various platforms.

              • #8
                Originally posted by zexelon View Post

                Unfortunately it doesn't look like it will be competitive with CUDA. oneAPI actually requires CUDA to be installed on the system to talk to NVIDIA; it's more of yet another "layer". To be honest I don't know how far it will go outside of Intel. If you're going to build something GPU-accelerated, there is basically no reason to use oneAPI, as you would most likely go NVIDIA native... or, if you hate your life, ROCm.
                So actually it's not really another layer. The NVIDIA architecture will execute a binary sent to it in the format it expects, so what the compilers here do is generate the code that will run on an NVIDIA or AMD GPU. You need the SDK installed to help create that binary, but at runtime it shouldn't be any slower, because there is no runtime "translation" to CUDA; it's just another binary executing on the GPU.

                The real question is whether an expression written in SYCL can be translated into as fast a binary as CUDA expressions of parallelism can. That depends on the expressiveness of the language. There are some things you can express in CUDA that you cannot in SYCL, but the large majority of things map in a straightforward way. Much like C++ code can run on ARM or x86, it's just a matter of how good your compiler is at generating an optimized binary.
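
                To make that concrete, here is a minimal SYCL 2020 vector-add sketch, assuming a recent oneAPI/DPC++ compiler and, for NVIDIA or AMD targets, the corresponding Codeplay plugin (the file name and flags here are illustrative, not from the article). The same source builds for an Intel GPU by default, or for NVIDIA with something like icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda vadd.cpp; the kernel ends up as native device code rather than going through CUDA source.

                // vadd.cpp -- minimal SYCL 2020 vector add (illustrative sketch)
                #include <sycl/sycl.hpp>
                #include <iostream>
                #include <vector>

                int main() {
                    constexpr size_t N = 1 << 20;
                    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

                    sycl::queue q{sycl::default_selector_v};   // whatever device the runtime picks

                    {
                        sycl::buffer<float> A(a.data(), sycl::range<1>{N});
                        sycl::buffer<float> B(b.data(), sycl::range<1>{N});
                        sycl::buffer<float> C(c.data(), sycl::range<1>{N});

                        q.submit([&](sycl::handler &h) {
                            sycl::accessor in1{A, h, sycl::read_only};
                            sycl::accessor in2{B, h, sycl::read_only};
                            sycl::accessor out{C, h, sycl::write_only, sycl::no_init};
                            // This kernel body is what gets compiled to SPIR-V, PTX or GCN
                            // depending on -fsycl-targets; no CUDA source is involved.
                            h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
                                out[i] = in1[i] + in2[i];
                            });
                        });
                    }   // buffer destructors copy the results back to the host vectors

                    std::cout << "c[0] = " << c[0] << "\n";   // expect 3
                }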

                • #9
                  Originally posted by rastersoft View Post

                  I presume that the idea is to not be tied to one specific underlying technology...
                  On further thought, I see your point. For application developers doing things on the workstation (say, in the engineering space) this could make a lot more sense.

                  • #10
                    I was pleasantly surprised by oneAPI. It seems to be largely just a SYCL implementation. I didn't know much about SYCL, even though it is a Khronos standard. It is a bit like CUDA in that you write the code once and at compile time it splits into CPU and GPU code. It came from the OpenCL space and naturally targets that, but it can also handle other backends.

                    It seems like a clean and open approach that has simply been wrapped in Intel marketing. I could see it catching on, since underneath it is just nice, clean standards.
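
                    For what it's worth, the "other backends" bit is easy to see at runtime. Here is a small sketch, assuming the oneAPI runtime plus whichever Codeplay NVIDIA/AMD plugins happen to be installed: it just lists the platforms the installed backends expose and then queues work to whichever device it finds.

                    // devices.cpp -- sketch: list the backends/devices the SYCL runtime can see
                    #include <sycl/sycl.hpp>
                    #include <iostream>

                    int main() {
                        // Each installed backend (Level Zero, OpenCL, CUDA, HIP, ...) shows up as a platform.
                        for (const auto &p : sycl::platform::get_platforms()) {
                            std::cout << p.get_info<sycl::info::platform::name>() << "\n";
                            for (const auto &d : p.get_devices())
                                std::cout << "  " << d.get_info<sycl::info::device::name>() << "\n";
                        }

                        // Prefer a GPU if one is present, otherwise fall back to the CPU;
                        // the same kernels run either way.
                        try {
                            sycl::queue q{sycl::gpu_selector_v};
                            std::cout << "Running on: "
                                      << q.get_device().get_info<sycl::info::device::name>() << "\n";
                        } catch (const sycl::exception &) {
                            sycl::queue q{sycl::cpu_selector_v};
                            std::cout << "Running on: "
                                      << q.get_device().get_info<sycl::info::device::name>() << "\n";
                        }
                    }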
