Mumblings Of A "Big New" Open-Source GPU Driver Coming...
Originally posted by Grim85 View Post
I haven't read the whole thread, so shoot me if someone mentioned this already - but why would a DX12 driver go in the kernel? It would be a Mesa contribution, would it not?
Originally posted by ThoreauHD View Post
Looks like Intel is releasing 5 discrete GPUs.
https://www.bestbuy.com/site/cyberpo...?skuId=6462676
Comment
Originally posted by hotaru View Post
you do realize that AMD also has open-source drivers in the kernel, and doesn't have the huge downside of subpar CPU and GPU performance that you get with Intel, don't you?
The Vega cards took one year to become (mostly) stable, and the Navi cards are still a mess (for some it works; for some it does not)...
- Likes 1
Comment
Originally posted by tildearrow View Post
(AMD) Reduced stability. Had more crashes and hangs with AMD graphics cards than Intel ones.
The Vega cards took one year to become (mostly) stable, and the Navi cards are still a mess (for some it works; for some it does not)...
Comment
Originally posted by ThoreauHD View Post
Looks like Intel is releasing 5 discrete GPUs.
https://www.bestbuy.com/site/cyberpo...?skuId=6462676
I don't want to say it, but if Intel gets performance competitive with NVidia on the Windows platform while just maintaining a working open-source driver on the Linux platform, they could own the GPU market within a relatively short time span.
I recently purchased a Dell Inspiron 15 5000 Series 5502 laptop with only the integrated Xe graphics hardware, and I absolutely love the Intel Xe graphics, as it is much faster than past integrated graphics hardware.
Last edited by rogerx; 23 May 2021, 05:51 PM.
- Likes 1
Comment
Originally posted by StillStuckOnSI View Post
If anything, HPC installations are more sensitive to the impact of proprietary drivers because they diverge from your usual desktop/server OS. There's a reason you'll see highly-optimized custom implementations of MPI, for example. These deployments don't run bespoke stuff on a lark, but because the tradeoff of having better perf/reliability is worth the investment. Having components behind proprietary blobs is anathema to that. Got a problem with results precision or incorrect calculations in some CUDA library? Tough luck, line up on the nvidia help forums. Need newer/older kernels to work with other software? Tough luck, you may be completely SoL. I think it's fair to say that outside of the real big labs (e.g. US department of energy), everyone puts up with nvidia despite the proprietary bits and begrudgingly accepts whatever scraps they're offered in terms of support.
Same deal with Intel Optane: it's also closed-source technology, but you do get support with all of that $$$ you are spending.
I mean, if you are doing HPC on a budget this is a different story, but it's also a very small market. I am not dissing open source here, just stating facts: NVidia would not be in this position if their support was as shit as you are implying (just like no one would buy insanely expensive Intel Optane if they didn't get support for it).
Originally posted by tildearrow View Post
Reduced stability. Had more crashes and hangs with AMD graphics cards than Intel ones.
The Vega cards took one year to become (mostly) stable, and the Navi cards are still a mess (for some it works; for some it does not)...
Last edited by mdedetrich; 23 May 2021, 06:24 PM.
Comment
Originally posted by mdedetrich View Post
Odds are, if you are paying the big bucks to NVidia for such large CUDA installations and you get a problem like this, you get first-line support and such issues typically get hotpatched quite quickly (you get sent a new patched binary blob to install).
I mean, if you are doing HPC on a budget this is a different story, but it's also a very small market. I am not dissing open source here, just stating facts: NVidia would not be in this position if their support was as shit as you are implying (just like no one would buy insanely expensive Intel Optane if they didn't get support for it).
I was in a call a few months back where some folks from a major supercomputing project (don't remember if it was European or American) were loudly complaining about (proprietary) CUDA libraries not giving enough control and the opaqueness of the feedback loop. These are people working on top 100, if not top 50 supercomputers. In other words, they could and do write a big part of the compute stack themselves if given the means to. If you set "on a budget" to anything below that, there goes 90%+ of all research clusters.
Lest you think I'm unfairly picking on nvidia, let me note that they have by far the best developer support for compute software of all the big GPU vendors. Even then, it's a matter of scale. They just don't have enough engineers to resolve every customer issue in a timely fashion. Anyone who's worked with proprietary vendor software in a corporate/enterprise context can attest to how frustrating it is to not have sufficient ability to tweak or introspect things when something goes wrong (which it inevitably will). Open source is not a panacea here, but it gives you some additional leverage to find solutions without being entirely beholden to the vendor. For example, I can and have dug through the source of PyTorch to suss out why some (usually poorly documented) functionality is not working as expected. Hell, even nvidia is aware of this: most of their higher-level compute software is open source and actually accepts contributions.
So if we're talking "facts", nvidia maintains their position by a) quality hardware/software, b) first mover advantage/inertia, and c) not being too greedy with how they play their cards. The moment any of these positions are attacked, it's absolutely in their best interest to compromise and play nicer. How else do you think we got the turnaround from EGLStreams to GBM? Certainly not out of the goodness of their hearts.
- Likes 2
Comment
I hope it's Nvidia. Specifically, I hope it's about Nvidia supporting Nouveau. Or even better, providing their own open driver and merging it into the kernel while promising to be up-to-date with firmware, making Nouveau unneeded.
But that's going to be extremely unlikely.
Last edited by Sonadow; 23 May 2021, 09:18 PM.
- Likes 3
Comment
Originally posted by Grim85 View Post
I haven't read the whole thread, so shoot me if someone mentioned this already - but why would a DX12 driver go in the kernel? It would be a Mesa contribution, would it not?
Longer answer - see https://devblogs.microsoft.com/direc...x-heart-linux/, which I believe is what Michael is referring to. I think the whole DX12 side of things is more of an easy add-on to doing this than the main reason. It still requires the Windows usermode drivers to support this, but apparently newer DX12 drivers will have integration so they work in WSL.
Last edited by smitty3268; 23 May 2021, 09:14 PM.
Comment