ZLUDA v2 Released For Drop-In CUDA On Intel Graphics

  • #11
    One of many interesting and original open-source projects to be started in 2020 was ZLUDA, an open-spurce drop-in CUDA implementation for Intel graphics.
    Found a typo
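    For anyone wondering what "drop-in" means in practice: the idea is that a program built against the ordinary CUDA APIs runs unchanged, with ZLUDA supplying the CUDA libraries instead of Nvidia's driver. Purely as an illustration (nothing ZLUDA-specific in it, and no claim about which features v2 actually covers), this is the kind of plain CUDA code such an implementation has to handle without source changes:

    Code:
    // Ordinary CUDA runtime code; the point of a drop-in implementation is
    // that a binary built from something like this runs without source changes.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *hx = new float[n], *hy = new float[n];
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f (expect 4.0)\n", hy[0]);
        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
        return 0;
    }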

    • #12
      Originally posted by lyamc
      Found a typo
      Really? Didn't you ever hear that open-spruce was a good way to clear out the code-webs?

      • #13
        I'm actually quite excited to hear about this. Will have to do a lot more looking into it, and benchmark with some in-house software to see if it might be viable. While it wouldn't completely remove our CUDA dependence, for the programs we have ourselves, or have the source for, not needing $10,000 GPUs for some projects would be amazing.

        • #14
          Originally posted by kpedersen

          They would be shooting themselves in the foot. Companies will not be pleased if NVIDIA keeps making breaking changes to their API and will seek alternatives.

          The only risk is if NVIDIA keep adding "new" little features and then spreading the word that ZLUDA is "out of date". Luckily only students (and a few too many open-source developers) blindly use the latest and greatest gimmicks needlessly.
          If Nvidia did not have a monopoly on compute hardware, then ZLUDA and AMD (HIP) could fork the spec if it changes, couldn't they? Given the current state of the market, my guess is that Nvidia can shoot themselves in the foot over and over before it would have any effect.

          Nvidia likes to strong-arm customers and clients; just a few days ago they started slowing down mining performance on gaming GPUs while releasing dedicated mining hardware. It's a good idea in principle, and gamers would probably love them for it, but the implementation is not very reassuring. Modifying the performance of a sold product through a driver update... I don't care what the application is, this should not be legal IMO. Let's see if anything comes of it.

          • #15
            Originally posted by Paradigm Shifter
            not needing $10,000 GPUs for some projects would be amazing.
            If you need such Nvidia GPUs, then you're not going to find anything comparable from Intel for a while. And when you do, it'll probably cost $9k (if not $12k).

            The other thing that suggests is that whatever you're doing with it is probably using more advanced API functions they likely haven't yet translated (e.g. tensor cores). And who knows what their level of support for precompiled CUDA kernels is, if any.
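            To make the precompiled-kernel point concrete: a CUDA module shipped as PTX is a portable virtual ISA that a translation layer can in principle recompile for other hardware, while a module shipped only as a precompiled cubin is already Nvidia machine code, which is a much harder problem. A rough sketch of the two driver-API loading paths involved (file names are made up, and this says nothing about what ZLUDA actually supports):

            Code:
            // Sketch of the two ways CUDA modules get loaded via the driver API.
            // File names are hypothetical, for illustration only.
            #include <cstdio>
            #include <cuda.h>

            int main() {
                cuInit(0);
                CUdevice dev;
                cuDeviceGet(&dev, 0);
                CUcontext ctx;
                cuCtxCreate(&ctx, 0, dev);

                // Path 1: PTX text -- a virtual ISA that an implementation can JIT
                // for whatever hardware it is actually running on.
                CUmodule modFromPtx;
                CUresult r1 = cuModuleLoad(&modFromPtx, "kernel.ptx");

                // Path 2: precompiled cubin -- already Nvidia machine code, so a
                // non-Nvidia backend has nothing portable left to work with.
                CUmodule modFromCubin;
                CUresult r2 = cuModuleLoad(&modFromCubin, "kernel.cubin");

                printf("ptx load: %d, cubin load: %d\n", (int)r1, (int)r2);
                cuCtxDestroy(ctx);
                return 0;
            }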

            • #16
              Seems like it might be possible to support this on AMD via hipSYCL? The Phoronix article was a bit misleading about the oneAPI-for-AMD support though, so I'm linking to a comment from someone involved in the project with their corrections:

              Phoronix: Intel's oneAPI Is Coming To AMD Radeon GPUs
              "While yesterday brought the release of Intel's oneAPI 1.0 specification, the interesting news today is that oneAPI support is coming to AMD Radeon graphics cards..."
              http://www.phoronix.com/scan.php?page=news_item&px=oneAPI-AMD-Radeon-GPUs

              • #17
                Originally posted by coder
                If you need such Nvidia GPUs, then you're not going to find anything comparable from Intel for a while. And when you do, it'll probably cost $9k (if not $12k).

                The other thing that suggests is that whatever you're doing with it is probably using more advanced API functions they likely haven't yet translated (e.g. tensor cores). And who knows what their level of support for precompiled CUDA kernels is, if any.
                Yes, agreed. It all depends on how it's implemented. We don't use Tensor cores (at least currently), and 99.9% of the time 11GB/12GB cards are more than enough for what we do... it's just that on those few occasions where we do need more, we need a lot more.

                • #18
                  Originally posted by Paradigm Shifter
                  it's just that on those few occasions where we do need more, we need a lot more.
                  I have no first-hand experience with it, but cloud-based GPU services seem like they'd make economic sense for such cases where a very large amount of compute is needed infrequently.

                  At least in the case of AI training, the usual advice is to buy your own only when you'll be keeping it busy most of the time.
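                  As a purely back-of-the-envelope illustration (the rental price is hypothetical): at around $3/hour for a comparable cloud GPU, a $10,000 card breaks even somewhere near 3,300 GPU-hours, i.e. roughly four to five months of round-the-clock use. Below that kind of utilisation, renting tends to win.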

                  • #19
                    Originally posted by coder
                    I have no first-hand experience with it, but cloud-based GPU services seem like they'd make economic sense for such cases where a very large amount of compute is needed infrequently.

                    At least in the case of AI training, the usual advice is to buy your own only when you'll be keeping it busy most of the time.
                    Also true. I have two issues with cloud computing: 1) it's cloud computing and 2) it's cloud computing. More seriously, some of our raw datasets can push into the 40TB range, and I have no interest in faffing around with the logistics of that. I had hopes for AVX-512, and for AMD too, but as yet, CUDA still rules the roost.

                    • #20
                      Originally posted by Paradigm Shifter
                      I had hopes for AVX-512, and for AMD too, but as yet, CUDA still rules the roost.
                      AVX-512 was no match for GPUs even before they started adding things like tensor cores.

                      And AMD's problem (in AI) was always that they were competing against the previous-generation Nvidia GPU. But it looks like their Matrix cores might've finally managed to leap-frog Nvidia, at least for some use cases. Now they just need to get out of their own way and sort out the software situation. To be really successful, though, they're going to have to find a way to build more enthusiasm for their GPU and compute products.
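                      For reference, this is roughly what "using the tensor cores" looks like at the CUDA level: a separate warp-level matrix API (wmma) on top of the normal kernel model, i.e. a whole extra surface that any translation layer would have to map onto whatever matrix hardware (if any) the target GPU has. Illustrative sketch only; it multiplies a single 16x16 tile:

                      Code:
                      // Minimal tensor-core (wmma) kernel: multiply one 16x16 half-precision
                      // tile into a float accumulator. With nvcc this needs a GPU of compute
                      // capability 7.0+; shown only to illustrate the extra API surface.
                      #include <cuda_fp16.h>
                      #include <mma.h>
                      using namespace nvcuda;

                      __global__ void tile_gemm(const half *a, const half *b, float *c) {
                          wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
                          wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
                          wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

                          wmma::fill_fragment(fc, 0.0f);
                          wmma::load_matrix_sync(fa, a, 16);   // leading dimension 16
                          wmma::load_matrix_sync(fb, b, 16);
                          wmma::mma_sync(fc, fa, fb, fc);      // executes on the tensor cores
                          wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
                      }
                      // Launch with one full warp, e.g.: tile_gemm<<<1, 32>>>(d_a, d_b, d_c);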
