Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA


  • boxie
    replied
    Originally posted by pal666
    CUDA is closed source, so it can't have a good dev environment.
    I'm afraid that line of thinking is very closed :P

    They have the $ behind them to supply a good dev environment. Sure, if you come across a bug you have to get them to fix it; on the plus side, you wrap that in an S.E.P. (Somebody Else's Problem) field and wait for the next point release with the fix.


  • pal666
    replied
    Originally posted by shmerl
    Isn't using C++ posing a problem for bindings for other languages
    No, it isn't - Qt has plenty of bindings. BTW, how do you bind to other languages on a GPU?


  • pal666
    replied
    Originally posted by boxie
    To win the hearts and minds of the developers, you need a really good dev environment that makes their life easy. CUDA currently has this.
    CUDA is closed source, so it can't have a good dev environment.


  • shmerl
    replied
    Isn't using C++ posing a problem for bindings for other languages, or do they offer solutions for that?


  • polarathene
    replied
    Originally posted by airlied
    Dave.
    Have you considered something like what ArrayFire does? It's more of an API/SDK than a language, I guess, but it uses a JIT compiler to produce optimized code for CUDA/OpenCL and something else, I think. AFAIK it's open source too (though I think licenses are needed for commercial use).
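
    To give a rough idea of what that looks like in practice: element-wise ArrayFire expressions are recorded lazily and JIT-compiled into a fused kernel for whichever backend (CUDA, OpenCL or CPU) is active. A hedged sketch from memory, so the exact calls may be slightly off:

        // Hedged sketch of the ArrayFire C++ API (calls from memory); sizes and ops are arbitrary.
        #include <arrayfire.h>
        #include <cstdio>

        int main() {
            af::info();                          // print the backend/device in use

            af::array a = af::randu(1000000);    // random data created on the device
            af::array b = af::randu(1000000);

            // Element-wise expressions are recorded lazily and fused by the JIT
            // into a single kernel instead of one kernel launch per operation.
            af::array c = a * b + 0.5f * a;
            c.eval();                            // force compilation and execution

            float total = af::sum<float>(c);     // reduction runs on the device
            std::printf("sum = %f\n", total);
            return 0;
        }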


  • Guest
    Guest replied
    HokTar - I don't agree that OpenCL is any less performant than CUDA - they both get compiled to a set of GPU instructions that run on the same hardware.

    Maybe implementations differ in performance, but AFAIK there's nothing that makes OpenCL as an API inferior to CUDA in terms of performance.


  • ms178
    replied
    I am looking forward to his presentation. From my current observations (and I am not a programmer), the (end) goal of Nvidia and AMD (plus possibly Intel in the future) is to have better support for GPUs inside the C++ standard.

    There is also AMD's HCC2, an experimental prototype intended to support multiple programming models, including OpenMP 4.5+, C++ parallel extensions (the original HCC), HIP, and CUDA Clang. It supports offloading to multiple GPU acceleration targets (multi-target) as well as different host platforms such as AMD64, PPC64LE, and AArch64 (multi-platform).

    So why should developers invest in Khronos' SYCL-based approach if they could get multi-target and multi-platform support with something like HCC2? I'd like to hear the pros and cons of each approach from someone knowledgeable who can explain this to laymen like me.
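
    For reference, the SYCL approach is "single-source" C++: the kernel is an ordinary lambda in the same file as the host code, and the SYCL compiler extracts and compiles the device part. A minimal vector add in the SYCL 1.2.1 style looks roughly like this (an illustrative sketch only, from memory, not code from any of the projects mentioned):

        // Minimal single-source SYCL sketch (SYCL 1.2.1-era API), for illustration only.
        #include <CL/sycl.hpp>
        #include <cstdio>
        #include <vector>

        int main() {
            const size_t n = 1024;
            std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

            cl::sycl::queue q;   // default selector picks a device (a GPU if available)
            {
                cl::sycl::buffer<float, 1> bufA(a.data(), cl::sycl::range<1>(n));
                cl::sycl::buffer<float, 1> bufB(b.data(), cl::sycl::range<1>(n));
                cl::sycl::buffer<float, 1> bufC(c.data(), cl::sycl::range<1>(n));

                q.submit([&](cl::sycl::handler& h) {
                    auto A = bufA.get_access<cl::sycl::access::mode::read>(h);
                    auto B = bufB.get_access<cl::sycl::access::mode::read>(h);
                    auto C = bufC.get_access<cl::sycl::access::mode::write>(h);
                    // The kernel body is plain C++ in the same translation unit;
                    // the SYCL toolchain extracts and compiles it for the chosen device.
                    h.parallel_for<class vec_add>(cl::sycl::range<1>(n),
                        [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
                });
            }   // buffers are destroyed here, so results are copied back into the vectors

            std::printf("c[0] = %f (expect 3.0)\n", c[0]);
            return 0;
        }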


  • HokTar
    replied
    Really good idea; it is very much needed.

    The problem is that OpenCL is way behind CUDA in terms of features and performance. Until this changes, our closed code will also remain in CUDA.


  • andrew.corrigan
    replied
    Originally posted by airlied

    The video goes into more detail on why SYCL, but you can't create a standard around CUDA without NVIDIA giving CUDA to a standards body, which kinda limits your choices.

    If you don't have a standard, NVIDIA can remove the rug at any point.
    I'll keep an eye out for the video. Can you please elaborate on how NVIDIA can remove the rug? Could NVIDIA somehow get the CUDA support that is already present in Clang removed? How about something that is functionally equivalent, just with slightly different syntax, like HIP?
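
    To illustrate the "slightly different syntax" point, here is a hedged sketch (API names from memory, not taken from the article) of a small HIP program, with the CUDA equivalents noted in comments; the device code is identical and the host API is essentially a rename:

        // Illustrative HIP sketch (API names from memory); the CUDA version differs
        // mainly in the runtime-call prefixes and kernel-launch syntax noted below.
        #include <hip/hip_runtime.h>

        __global__ void scale(float* x, float s, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // identical indexing in CUDA
            if (i < n) x[i] *= s;
        }

        int main() {
            const int n = 1 << 20;
            float* d_x = nullptr;
            hipMalloc(&d_x, n * sizeof(float));      // cudaMalloc(...) in CUDA
            hipMemset(d_x, 0, n * sizeof(float));    // cudaMemset(...) in CUDA

            // CUDA spelling: scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
            hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                               d_x, 2.0f, n);
            hipDeviceSynchronize();                  // cudaDeviceSynchronize() in CUDA

            hipFree(d_x);                            // cudaFree(...) in CUDA
            return 0;
        }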


  • airlied
    replied
    Originally posted by msroadkill612
    My newb general impression of the GPU compute world is the pervasive view that CUDA's superiority is set in stone, yet it seems such a young field. Surely it's far too early to call.

    Business history tells us they certainly WILL lose their dominance & protective moat. It's just a question of when.

    CUDA is dominant - fine - I accept that, but their position is nothing like assured after such a short time in such a young and dynamic field.
    People used to think there was only Windows, or only Solaris; those people learn over time.

    Dave.
