Mumblings Of A "Big New" Open-Source GPU Driver Coming...

  • #81
    Originally posted by birdie View Post

    It's sad Microsoft cannot and will not sue you for defamation. As for me personally, I don't care about statements with zero proof. Find someone else to argue with.

    I need actual proof: network dumps showing that Microsoft indeed siphons random files from people's computers.

    So far, no one in the entire world has produced anything, which makes you a liar.
    Uhhhh.... Windows Defender automatic sample submission? (TBF, most other antivirus software does the same.)



    • #82
      Originally posted by scottishduck View Post
      This is the SiS driver update I’ve been waiting for
      Gold.



      • #83
        Looks like Intel is releasing 5 discrete GPUs.

        https://www.bestbuy.com/site/cyberpo...?skuId=6462676



        • #84
          Originally posted by Grim85 View Post

          I haven't read the whole thread, so shoot me if someone mentioned this already - but why would a DX12 driver go in the kernel? It would be a Mesa contribution, would it not?
          I am by no means an expert, and I tend to think you are correct, but I mentioned DX12 because the Phoronix article mentioned it. So I suppose that leaves Nvidia as the only candidate for being truly "big". ANYTHING else would be completely underwhelming.

          Originally posted by ThoreauHD View Post
          Looks like Intel is releasing 5 discrete GPUs.

          https://www.bestbuy.com/site/cyberpo...?skuId=6462676
          But Intel's drivers are already open, right?



          • #85
            Originally posted by hotaru View Post

            you do realize that AMD also has open-source drivers in the kernel, and doesn't have the huge downside of subpar CPU and GPU performance that you get with Intel, don't you?
            Reduced stability. I've had more crashes and hangs with AMD graphics cards than with Intel ones.
            The Vega cards took a year to become (mostly) stable, and the Navi cards are still a mess (for some people they work; for others they don't)...



            • #86
              Originally posted by tildearrow View Post

              (AMD) Reduced stability. I've had more crashes and hangs with AMD graphics cards than with Intel ones.
              The Vega cards took a year to become (mostly) stable, and the Navi cards are still a mess (for some people they work; for others they don't)...
              Ditto. The warning signs I noticed: either a lack of documentation on which makes/models have completely open-source drivers, and/or no disclosure that some graphics cards depend on EFI-partitioned disks and/or particular operating systems for the AMD hardware to operate properly. The AMD hardware may work, but you'll likely need insider information on which make/model to get for your platform or use scenario.



              • #87
                Originally posted by ThoreauHD View Post
                Looks like Intel is releasing 5 discrete GPUs.

                https://www.bestbuy.com/site/cyberpo...?skuId=6462676
                Ditto. Good post. I was presuming all along that this press article was likely referring to Intel's discrete graphics line of cards.

                I don't want to say it, but if Intel gets competitive performance with NVidia on the Windows platform while simply maintaining a working open-source driver on the Linux platform, they could own the GPU market within a relatively short time span.

                I recently purchased a Dell Inspiron 15 5000 Series 5502 laptop with only the integrated Xe graphics hardware, and I absolutely love the Intel Xe graphics, as it is much faster than past integrated graphics hardware.
                Last edited by rogerx; 23 May 2021, 05:51 PM.



                • #88
                  Originally posted by StillStuckOnSI View Post
                  If anything, HPC installations are more sensitive to the impact of proprietary drivers because they diverge from your usual desktop/server OS. There's a reason you'll see highly-optimized custom implementations of MPI, for example. These deployments don't run bespoke stuff on a lark, but because the tradeoff of having better perf/reliability is worth the investment. Having components behind proprietary blobs is anathema to that. Got a problem with results precision or incorrect calculations in some CUDA library? Tough luck, line up on the nvidia help forums. Need newer/older kernels to work with other software? Tough luck, you may be completely SoL. I think it's fair to say that outside of the real big labs (e.g. US department of energy), everyone puts up with nvidia despite the proprietary bits and begrudgingly accepts whatever scraps they're offered in terms of support.
                  Odds are, if you are paying the big bucks to NVidia for such large CUDA installations and you hit a problem like this, you get first-line support, and such issues typically get hot-patched quite quickly (you get sent a new patched binary blob to install).

                  Same deal with Intel Optane: it's also closed-source technology, but you do get support for all of that $$$ you are spending.

                  I mean, if you are doing HPC on a budget this is a different story, but it's also a very small market. I am not dissing open source here, just stating facts: NVidia would not be in this position if their support were as shit as you are implying (just like no one would buy insanely expensive Intel Optane if they didn't get support for it).

                  Originally posted by tildearrow View Post

                  Reduced stability. I've had more crashes and hangs with AMD graphics cards than with Intel ones.
                  The Vega cards took a year to become (mostly) stable, and the Navi cards are still a mess (for some people they work; for others they don't)...
                  Yeah, AMD isn't really the bastion of open source. I mean, they definitely do open-source their code, but historically speaking their driver stack has been a mess, and it's only improved somewhat recently. Intel generally does a fantastic job of open source, and NVidia, while closed source, almost always works without problems (barring the occasional X quirk).
                  Last edited by mdedetrich; 23 May 2021, 06:24 PM.



                  • #89
                    Originally posted by mdedetrich View Post
                    Odds are, if you are paying the big bucks to NVidia for such large CUDA installations and you hit a problem like this, you get first-line support, and such issues typically get hot-patched quite quickly (you get sent a new patched binary blob to install).

                    I mean, if you are doing HPC on a budget this is a different story, but it's also a very small market. I am not dissing open source here, just stating facts: NVidia would not be in this position if their support were as shit as you are implying (just like no one would buy insanely expensive Intel Optane if they didn't get support for it).
                    What are the thresholds for "big" and "on a budget"?

                    I was in a call a few months back where some folks from a major supercomputing project (don't remember if it was European or American) were loudly complaining about (proprietary) CUDA libraries not giving enough control and the opaqueness of the feedback loop. These are people working on top 100, if not top 50 supercomputers. In other words, they could and do write a big part of the compute stack themselves if given the means to. If you set "on a budget" to anything below that, there goes 90%+ of all research clusters.

                    Lest you think I'm unfairly picking on nvidia, let me note that they have by far the best developer support for compute software of all the big GPU vendors. Even then, it's a matter of scale. They just don't have enough engineers to resolve every customer issue in a timely fashion. Anyone who's worked with proprietary vendor software in a corporate/enterprise context can attest to how frustrating it is to not have sufficient ability to tweak or introspect things when something goes wrong (which it inevitably will). Open source is not a panacea here, but it gives you some additional leverage to find solutions without being entirely beholden to the vendor. For example, I can and have dug through the source of PyTorch to suss out why some (usually poorly documented) functionality is not working as expected. Hell, even nvidia is aware of this: most of their higher-level compute software is open source and actually accepts contributions.

                    So if we're talking "facts", nvidia maintains their position by a) quality hardware/software, b) first mover advantage/inertia, and c) not being too greedy with how they play their cards. The moment any of these positions are attacked, it's absolutely in their best interest to compromise and play nicer. How else do you think we got the turnaround from EGLStreams to GBM? Certainly not out of the goodness of their hearts.



                    • #90
                      Originally posted by Grim85 View Post

                      I haven't read the whole thread, so shoot me if someone mentioned this already - but why would a DX12 driver go in the kernel? It would be a Mesa contribution, would it not?
                      Mesa already has a Gallium D3D12 driver, but you still need a virtual device to intercept the calls and transfer them to the host. Likely Microsoft wants Mesa to have the translation libraries and a kernel driver for the interface.

