Red Hat Developers Working Towards A Vendor-Neutral Compute Stack To Take On NVIDIA's CUDA
-
Originally posted by airlied
The video goes into more detail on why SYCL, but you can't create a standard around CUDA without NVIDIA giving CUDA to a standards body, which kinda limits your choices.
If you don't have a standard, NVIDIA can pull the rug at any point.
-
Originally posted by sandy8925
HokTar - I don't agree that OpenCL is any less performant than CUDA; they both get compiled to a set of GPU instructions that run on the same hardware.
Maybe implementations differ in performance, but AFAIK there's nothing that makes OpenCL as an API inferior to CUDA in terms of performance.
People need to stop caring only about their short-term gain. Heavy vendor lock-in is not great for anyone.
-
Originally posted by TemplarGR
What is so hard for me to understand is why you can find *Linux users* in favor of vendor lock-in... It boggles the mind...
Maybe we can say that there are two types of vendor lock-in, open and closed.
As a result of open lock-in pushovers, we have 300 distros and counting. Push Gnome 3 and we get forks of both the new and the old; push systemd and all the cockroaches start jumping; and so it goes with so many pushovers.
Last edited by dungeon; 18 November 2018, 02:07 AM.
-
Originally posted by airlied
People used to think there was only Windows, or only Solaris; those people learn over time.
Dave.
-
Originally posted by dungeon
They could simply lose interest, or make it obsolete at some random point by inventing something else, or at worst something like the 3dfx Glide API situation might happen.
-
Originally posted by msroadkill612
Interest piqued, I checked Nvidia's share price history, the theory being that lucrative data center GPU compute was the driver of its stellar stock performance in recent years.
It flatlined for most of its life and only took off in mid 2016 - not long ago at all.
It has slumped badly in recent months, back to mid-2017 prices, so it could be argued the market has little faith in Nvidia's moat.
https://www.google.com/search?source...10.7hl4u4wtE_M
-
Originally posted by pal666
no, it isn't - qt has plenty of bindings. btw, how do you bind to other languages on gpu?
-
Originally posted by shmerl
It doesn't matter what you bind to, it's the complexity of it. Sure, Qt has bindings, but see how difficult they were to implement.

Originally posted by shmerl
All this could really be simplified if C++ cared to standardize its symbol naming and name mangling across all compilers.