AMD's GPUOpen HIP Project Made Progress Over The Summer


  • #11
    Originally posted by schmidtbag View Post
    CUDA, to my knowledge, is now open-sourced
    Your knowledge is incorrect.



    • #12
      Originally posted by pal666 View Post
      Your knowledge is incorrect.
      Well, I'm not totally wrong:
      [link to a Phoronix article]



      • #13
        Originally posted by schmidtbag View Post
        CUDA, to my knowledge, is now open-sourced.
        WAT?

        WAT?!?!?!111!



        CUDA the language and the spec are of course open, because they want people to use it, but the compiler isn't. No way in hell you can currently compile CUDA to work on other GPUs.



        • #14
          Originally posted by starshipeleven View Post
          WAT?

          WAT?!?!?!111!



          CUDA the language and the spec are of course open, because they want people to use it, but the compiler isn't. No way in hell you can currently compile CUDA to work on other GPUs.
          I suggest you calm down and read the link in my last post before you look like more of a moron than you seem to be implying I am. The compiler is in fact open-sourced.

          I'm well aware you cannot currently get existing CUDA applications to work on other GPUs. The exact purpose of my original post was to suggest that people create drivers so other GPUs can be CUDA-compatible. Calm down and read more carefully next time.

          I understand that not all of the CUDA spec is open source, so creating drivers for other GPUs could be tricky. Regardless, I think a CUDA-to-OpenCL compatibility layer would likely be more beneficial than this HIP project (in the same sense that Wine runs Windows programs on Linux).
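
          To make the idea concrete, here is a rough sketch of my own (not anything NVIDIA or AMD actually ships; kernel name and sizes are made up) of the kind of thing such a layer would have to translate: a trivial CUDA kernel, with the OpenCL C it would roughly map to shown in comments.

          // Illustration only: a trivial CUDA kernel a CUDA-to-OpenCL shim
          // would have to handle (kernel name and sizes are made up).
          __global__ void saxpy(int n, float a, const float *x, float *y) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) y[i] = a * x[i] + y[i];
          }

          // Host-side launch the shim would have to intercept:
          //   saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);
          //
          // Rough OpenCL C equivalent it would have to generate and dispatch
          // via clEnqueueNDRangeKernel instead:
          //   __kernel void saxpy(int n, float a,
          //                       __global const float *x, __global float *y) {
          //       int i = get_global_id(0);
          //       if (i < n) y[i] = a * x[i] + y[i];
          //   }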



          • #15
            Originally posted by schmidtbag View Post
            The compiler is in fact open-sourced.
            That's a mere detail. They must have moved all the useful stuff into their blob first. The main thing you are wrong about (and the reason I'm reacting like this) is thinking that NVIDIA has left the door open for anyone to use CUDA on non-NVIDIA GPUs. That's madness.

            I understand that not all of the CUDA spec is open source, so creating drivers for other GPUs could be tricky. Regardless, I think a CUDA-to-OpenCL compatibility layer would likely be more beneficial than this HIP project (in the same sense that Wine runs Windows programs on Linux).
            You mean that people in computing will be interested in a project that lets them run *SOME* select programs passably, quite a few like crap, and most not at all?

            I think they went with this because it's the only realistic way to reliably get the same or near-same performance; adding a layer of indirection on GPU code is bad, real bad.



            • #16
              Originally posted by starshipeleven View Post
              That's a mere detail. They must have moved all the useful stuff into their blob first. The main thing you are wrong about (and the reason I'm reacting like this) is thinking that NVIDIA has left the door open for anyone to use CUDA on non-NVIDIA GPUs. That's madness.
              Not really... Nvidia contributes to open source drivers for Tegra. Every once in a while, Nvidia pitches in a little bit toward nouveau. What use does Nvidia have in open-sourcing the CUDA compiler if that doesn't open doors for other GPUs to utilize it? As you said, they may "have moved all the useful stuff into their blob first", but that useful stuff is probably specific to Nvidia hardware. CUDA as-is probably doesn't really reveal much about Nvidia's architectures - that's what they actually care about. However, knowing the way Nvidia's hardware works isn't really relevant to AMD, Intel, or any other company with GPUs, because they don't have CUDA-based hardware. They have to emulate CUDA cores with OpenCL. Knowing how the compiler works ought to be enough information to develop such drivers. But that takes time, and OpenCL is a greater priority.

              You mean that people in computing will be interested in a project that lets them run *SOME* select programs passably, quite a few like crap, and most not at all?
              You mean that people would prefer to be forced to re-compile something so they can run it on their machine? Most people aren't willing to do that. Sure, it'll probably run better, but has it not occurred to you that not all CUDA applications are open source? Do you really think the devs of the closed-source applications are going to want to support 2 builds? In the likely scenario that a closed-source application doesn't get ported, I would much rather have an AMD GPU emulate CUDA at half its potential performance than use strictly the CPU or be forced to buy a new GPU.

              I think they went with this because it's the only realistic way to reliably get the same or near-same performance; adding a layer of indirection on GPU code is bad, real bad.
              Again - it's better than no GPU support at all. Would you rather have great performance in a limited selection of applications, or "decent" performance in all applications?



              • #17
                Originally posted by schmidtbag View Post
                Not really... Nvidia contributes to open source drivers for Tegra.
                That's because they are desperate to sell it; so far it's still a failure, and the only devices still shipping with a Tegra are ones made by NVIDIA themselves, or some automotive or random embedded hardware.
                Every once in a while, Nvidia pitches in a little bit toward nouveau.
                So little that it's easy to round down to "they don't contribute at all".
                What use does Nvidia have in open-sourcing the CUDA compiler if that doesn't open doors for other GPUs to utilize it?
                Offloading its maintenance onto the main project, and making a PR move.
                but that useful stuff is probably specific to Nvidia hardware.
                Wishful assumption. Their blob contains what in open drivers is Mesa/Gallium (which isn't exactly hardware-specific); it's likely they added stuff that isn't specific to their hardware just because they can.
                CUDA as-is probably doesn't really reveal much about Nvidia's architectures - that's what they actually care about.
                Wrong. CUDA lets them keep a firm grasp on the computing market: if software made for CUDA doesn't run on non-NVIDIA GPUs, that's a big fat vendor lock-in right there.
                It would defeat the whole point of having their own separate compute implementation to just let everyone use it on their cards.

                They are going the same way with PhysX, down to the point of disabling it if the driver detects a non-NVIDIA GPU in the system (I assume they ignore Intel stuff).

                You mean that people would prefer to be forced to re-compile something so they can run it on their machine?
                Yes, because computing customers care mostly about performance. If performance isn't roughly on par, there is no point in this porting at all.

                Most people aren't willing to do that.
                The computing market isn't "people"; it's a company with its own staff, or a university with its own staff.

                has it not occurred to you that not all CUDA applications are open source?
                Sure, you have workstation programs that can use some OpenCL or CUDA, but the target here is mostly stuff for computing in clusters or something like that, and most of the programs there are custom-made affairs doing specific stuff.

                Do you really think the devs of the closed-source applications are going to want to support 2 builds?
                Actually, most closed workstation applications I know already support both CUDA and OpenCL; those that make a choice tend to be the open/Linux ones (usually toward CUDA).

                I would much rather have an AMD GPU emulate CUDA at half its potential performance than use strictly the CPU or be forced to buy a new GPU.
                Do you run a cluster of GPUs for computing? Those people don't usually like the idea of running very expensive AMD hardware at half capacity for lulz; they prefer getting NVIDIA cards at the same price to get full performance.
                Companies don't really care about hardware costs. Of course they don't buy 200 new GPUs every Saturday, but when they change their systems, the cost of the hardware isn't a factor.

                Again - it's better than no GPU support at all. Would you rather have great performance in a limited selection of applications, or "decent" performance in all applications?
                I think we don't have the same target in mind. The target that matters for $$$ is the one I talked about.
                Making a shim so that a bunch of workstation guys can run CUDA applications that, on average, also run on OpenCL does not make economic sense.



                • #18
                  Originally posted by starshipeleven View Post
                  Do you run a cluster of GPUs for computing? Those people don't usually like the idea of running very expensive AMD hardware at half capacity for lulz; they prefer getting NVIDIA cards at the same price to get full performance.
                  Companies don't really care about hardware costs. Of course they don't buy 200 new GPUs every Saturday, but when they change their systems, the cost of the hardware isn't a factor.
                  A clarification: yes, I wasn't making sense there.

                  What I wanted to say here is that the costs of the cards themselves aren't terribly relevant, but the running costs and the performance of the system are.

                  So yeah, they prefer vendor lock-in to slashing performance or introducing instability or (the horror) errors in the API translation leading to calculation errors.



                  • #19

                    Thanks for your help, but I know what C++ is. It seems they used something to make that code work on other GPUs; is that OpenCL or what?



                    • #20
                      Originally posted by timofonic View Post
                      It seems they used some stuff to support that code into other GPUs, is that OpenCL or what?
                      They support only GCN GPUs; no OpenCL is involved.
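
                      For the curious, here is a minimal sketch (my own illustration using the public HIP runtime API, not code from the article) of what HIP looks like: essentially CUDA-style C++ with the cuda* runtime calls renamed to hip*, compiled directly for GCN rather than going through OpenCL.

                      #include <hip/hip_runtime.h>

                      // Illustration only: a trivial HIP kernel, same grid/block model as CUDA.
                      __global__ void scale(int n, float a, float *x) {
                          int i = blockIdx.x * blockDim.x + threadIdx.x;
                          if (i < n) x[i] *= a;
                      }

                      int main() {
                          const int n = 1 << 20;
                          float *x = nullptr;
                          hipMalloc(reinterpret_cast<void **>(&x), n * sizeof(float));  // analogous to cudaMalloc
                          // Launch via HIP's macro instead of CUDA's <<<grid, block>>> syntax
                          // (data initialization and error checking omitted for brevity).
                          hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, x);
                          hipDeviceSynchronize();
                          hipFree(x);
                          return 0;
                      }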

