Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA


  • faph
    replied
    Michael
    The Video is up:

    Leave a comment:


  • duby229
    replied
I don't think being a good alternative to CUDA will be enough. Of course, being a fully open source stack will by itself bring in a lot of people who don't currently use CUDA, but if you want folks to rebase from CUDA onto this new stack, it needs some killer feature that everyone would think is cool as hell and that CUDA can't replicate.

    I think that whole scenario is wishful thinking. But then again, that's how the kernel started out, it's how Mesa started, and Wine, and many others... So it could work out.

    Leave a comment:


  • mdvle
    replied
Any viable competitor to CUDA needs several things:

1) macOS support - a lot of developers use Macs, and given Apple's lack of Nvidia support there is a built-in market / pent-up demand for an alternative to CUDA.

    2) per 1), can't be based on OpenCL as OpenCL is dead on macOS. Vulkan is the future and can be made to work on macOS.

    3) significant investment in highly optimized runtime libraries - this is why CUDA has won so far. This will require someone to supply an ongoing budget for specialized developers to create and continue to maintain such libraries.

    4) significant investment in nouveau - companies aren't going to throw away their investment in Nvidia hardware, and they aren't going to jump ship from one monopoly to a different monopoly.

5) a major rethink on approach by the non-Nvidia hardware vendors, who so far all seem to be attempting to merely replace Nvidia by creating their own non-standard silos - see whatever mess AMD is attempting at the moment, or Intel's Neural Compute Stick.

6) work on cross-platform tool support - this is a current weakness of CUDA, given the problematic Visual Studio support on Windows and the Eclipse requirement on Linux. Build tooling around Visual Studio Code / Atom, which can be made to work on any platform.

    In short, being an alternative to CUDA that is open source won't be enough, you need to provide the performance and other benefits to get people to rewrite that large base of installed code / retrain on a new stack.

    Leave a comment:


  • andrew.corrigan
    replied
    Originally posted by airlied View Post

Clang is just the compiler; you still need the CUDA headers and runtime library installed, and those are copyrighted by NVIDIA for a start. They also control the language direction, so they can add new features that won't work on other GPUs, and you are then left holding the bag.

    Dave.
Thank you for explaining this. Do you think there are limitations in SYCL compared to CUDA? For example, why does https://github.com/KhronosGroup/SyclParallelSTL have so many limitations after so many years? Is that reflective of the language as a whole? I read the paper linked below about HPX, which speaks of further limitations that prevent the use of std::tuple. Are there other limitations lurking?
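    For reference, SyclParallelSTL's stated goal is to mirror the call shape of the standard C++ algorithms while dispatching the work to a SYCL device. A minimal host-only sketch of that interface (the function name `doubled` is illustrative, and the SYCL execution policy and queue that SyclParallelSTL would add as an extra argument are omitted here):

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Doubles each element. SyclParallelSTL mirrors this std::transform call
    // shape, adding a SYCL execution-policy argument that dispatches the
    // lambda to an OpenCL device instead of running it on the host.
    std::vector<int> doubled(const std::vector<int>& in) {
        std::vector<int> out(in.size());
        std::transform(in.begin(), in.end(), out.begin(),
                       [](int x) { return x * 2; });
        return out;
    }

    int main() {
        assert(doubled({1, 2, 3, 4}) == (std::vector<int>{2, 4, 6, 8}));
        return 0;
    }
    ```

    The limitations discussed above (e.g. around std::tuple) tend to show up in the device-side lambda, where SYCL restricts what types and operations the kernel body may use.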

    Leave a comment:


  • shmerl
    replied
    Originally posted by pal666 View Post
and you seem to completely misunderstand my remark regarding binding on gpu. if your "other language" does not run on gpu, you can't bind it to anything running on gpu, so there are no use cases for binding to other languages here.
    You can write in assembly if you want. You still need to know how to bind. Besides, targeting GPUs from other languages is not a bizarre idea. See https://github.com/rust-lang/rust/issues/51575

    Therefore questions of bindings should be considered not just in the scope of "write it all in C++".
    Last edited by shmerl; 19 November 2018, 02:50 AM.

    Leave a comment:


  • pal666
    replied
    Originally posted by shmerl View Post
    https://en.wikipedia.org/wiki/Name_m...angling_in_C++

    C++ doesn't set any standards for it.
    i know, but it is completely irrelevant
    Originally posted by shmerl View Post
Linux is far from the only compiler target, so just because it's easy on Linux doesn't mean it's easy in general.
    we are discussing linux-only proposition, so it is as general as needed.
    on other platforms there are other platform-specific standards for name mangling (like ms abi which is supported by clang already), which you can follow...
    ...in some rare cases when it matters, because it does not matter at all when you compile from source or use same compiler.
btw, "other languages" usually have no issues with abi because they have exactly one implementation. so if you are using an "other language" you can just as well dictate the one c++ implementation of your choice, and it will have the same effect as having a single c++ abi

and you seem to completely misunderstand my remark regarding binding on gpu. if your "other language" does not run on gpu, you can't bind it to anything running on gpu, so there are no use cases for binding to other languages here.
    Last edited by pal666; 19 November 2018, 01:04 AM.
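    The binding question the two posters are circling can be illustrated with a minimal, hypothetical example (not from either poster): a plain C++ function gets a mangled, ABI-dependent symbol, while an extern "C" wrapper exports a stable name that other languages can link against.

    ```cpp
    #include <cassert>

    // A C++ function: its exported symbol is mangled (e.g. _Z3addii under
    // the Itanium ABI used on Linux), so foreign languages can't name it
    // portably across compilers and platforms.
    int add(int a, int b) { return a + b; }

    // An extern "C" wrapper: exported as the plain symbol "add_c", which
    // Rust, Python (ctypes), etc. can bind to regardless of which C++
    // compiler built the library.
    extern "C" int add_c(int a, int b) { return add(a, b); }

    int main() {
        assert(add_c(2, 3) == 5);
        return 0;
    }
    ```

    This is why the mangling debate matters mostly at library boundaries: within a single compiler (or a single platform ABI like Itanium or the MS ABI) the names are consistent, but cross-language bindings usually fall back to the C ABI.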

    Leave a comment:


  • shmerl
    replied
    Originally posted by pal666 View Post
    considering qt is dominating, they surely were much easier to implement than gtk+. maybe even c++ had something to do with it

i'm not sure what you mean by symbol naming, but on linux name mangling is standardized by the itanium abi, so stop making excuses


C++ doesn't set any standards for it. Linux is far from the only compiler target, so just because it's easy on Linux doesn't mean it's easy in general.
    Last edited by shmerl; 18 November 2018, 08:18 PM.

    Leave a comment:


  • Drago
    replied
It is interesting what path Intel is going to take with their GPU coming in 2020. Make no mistake, they don't give a *hit about gaming. It's all about datacenters, AI and ML.

    Leave a comment:


  • airlied
    replied
    Originally posted by andrew.corrigan View Post

    Thank you for the reply, but I don't understand. If NVIDIA proposes another API or loses interest, how does that stop us from using CUDA? CUDA support in Clang (frontend) was implemented without NVIDIA. If the proposed stack provided a SPIR-V backend, then NVIDIA is out of the picture.
Clang is just the compiler; you still need the CUDA headers and runtime library installed, and those are copyrighted by NVIDIA for a start. They also control the language direction, so they can add new features that won't work on other GPUs, and you are then left holding the bag.

    Dave.
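    To make that dependency concrete: even when clang does the compiling, a minimal CUDA program still pulls in NVIDIA's headers and links NVIDIA's runtime. A sketch (it will not build without NVIDIA's toolkit installed, which is exactly the point):

    ```cuda
    #include <cuda_runtime.h>  // NVIDIA's header, shipped with their toolkit

    // The kernel language itself (__global__, <<<...>>> launches) is what
    // clang's frontend implements independently of NVIDIA.
    __global__ void scale(float* v, float s) { v[threadIdx.x] *= s; }

    int main() {
        float* d;
        // cudaMalloc / cudaFree and the launch plumbing all live in
        // libcudart, NVIDIA's closed runtime library - clang only emits
        // calls into it.
        cudaMalloc(&d, 32 * sizeof(float));
        scale<<<1, 32>>>(d, 2.0f);
        cudaFree(d);
        return 0;
    }
    ```

    So a SPIR-V backend alone would not remove NVIDIA from the picture: the headers, the runtime, and control of the language's evolution would remain theirs.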

    Leave a comment:


  • pal666
    replied
    Originally posted by shmerl View Post
    It doesn't matter on what you bind, it's the complexity of it. Sure, Qt has bindings but see how difficult they were to implement.
    considering qt is dominating, they surely were much easier to implement than gtk+. maybe even c++ had something to do with it
    Originally posted by shmerl View Post
    All this could really be simplified, if C++ cared to standardize its symbol naming and name mangling across all compilers.
i'm not sure what you mean by symbol naming, but on linux name mangling is standardized by the itanium abi, so stop making excuses

    Leave a comment:
