Mesa 20.3 Released With Big Improvements For Open-Source Graphics Drivers


  • #21
    Originally posted by zxy_thf View Post
    No, they are different kinds of animals.
    You don't need a specialized compiler to process your host code with Vulkan, OpenGL, or even OpenCL.
    But CUDA needs one (nvcc), and that's the reason the CUDA SDK supports only a few versions of GCC.
    I'm not sure things are as bleak as you describe. Both the CUDA and HIP compilers are backed by LLVM. Indeed, you can compile CUDA with Clang: https://www.llvm.org/docs/CompileCudaWithLLVM.html. And this is not limited to the frontend either: LLVM has PTX and AMDGCN backends, both of which are used in projects like POCL.

    So no, "CUDA is more like a programming language than an API" isn't a good moat because a) HIP already copied the "programming language" side of things and b) the language parts are effectively open source anyhow.
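
    To make that concrete, here is a minimal single-source CUDA sketch that builds with upstream Clang alone, no nvcc involved (the sm_70 target and the CUDA install path are placeholders; adjust them for your setup):

    // axpy.cu -- build with upstream Clang instead of nvcc, e.g.:
    //   clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_70 -L/usr/local/cuda/lib64 -lcudart
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void axpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1024;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the demo short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        axpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);  // the much-discussed triple-chevron launch
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }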

    Originally posted by zxy_thf View Post
    If I wanna start a GPU programming project, why would I prefer your new, untested, not-sure-if-working-but-still-need-installing-packages platform, over the mature, battle-tested, requiring-a-package-but-everyone-around-me-knows-how-to-install CUDA?
    From what I've seen, Intel and AMD have been bootstrapping a user base through supercomputer contracts. Whether or not that's the correct strategy (i.e. whether a bunch of consumer users swear the platforms off in the meantime) remains to be seen.


    Originally posted by zxy_thf View Post
    2. Verbose interface (at least for beginners): CUDA requires a specialized compiler (nvcc), but the language itself is very clean (<<<>>> may look weird but not too horrible).
    My understanding is that HIP and SYCL are trying to address this use case. I personally refuse to write anything like C++ unless absolutely necessary.
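
    For what it's worth, HIP copied the model closely enough that a kernel ports almost mechanically. A hedged sketch of the same axpy kernel as above in HIP (hipLaunchKernelGGL is the portable spelling of the launch; recent HIP also accepts the triple-chevron form):

    // axpy_hip.cpp -- the HIP spelling of a CUDA-style kernel launch.
    // CUDA would write:  axpy<<<blocks, threads>>>(2.0f, x, y, n);
    #include <hip/hip_runtime.h>

    __global__ void axpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same builtins as CUDA
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1024;
        float *x, *y;
        hipMalloc(&x, n * sizeof(float));
        hipMalloc(&y, n * sizeof(float));
        // ... initialize x and y with hipMemcpy, omitted for brevity ...
        hipLaunchKernelGGL(axpy, dim3((n + 255) / 256), dim3(256),
                           0 /* dynamic shared mem */, 0 /* stream */,
                           2.0f, x, y, n);
        hipDeviceSynchronize();
        hipFree(x);
        hipFree(y);
        return 0;
    }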


    Originally posted by zxy_thf View Post
    I won't expect the dominance of CUDA to end any time soon.
    Agreed unfortunately, but it's nice to see viable open source alternatives popping up.



    • #22
      Anyone got an idea how I can use/test Lavapipe? I don't see it in Meson's build options.



      • #23
        Originally posted by zxy_thf View Post
        No, they are different kinds of animals.
        You don't need a specialized compiler to process your host code with Vulkan, OpenGL, or even OpenCL.
        But CUDA needs one (nvcc), and that's the reason the CUDA SDK supports only a few versions of GCC.
        The compilers for compute and graphics are pretty much the same in terms of complexity: they all use LLVM as the back end, and most use Clang as the front end. The main distinction between OpenCL/graphics and HIP/CUDA/SYCL is the single-source aspect, where one set of source files generates both CPU and GPU executables, so there is some pre-processing to separate them out. But once the GPU and CPU code have been separated, the remaining GPU code processing is pretty much identical to graphics and OpenCL.
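
        One way to see the single-source model concretely: the same translation unit is compiled twice, once for the host and once for the device, and the __CUDA_ARCH__ macro tells you which pass you are in. A small sketch (CUDA-flavored; HIP behaves the same way with its __HIP_DEVICE_COMPILE__ macro):

        #include <cstdio>
        #include <cuda_runtime.h>

        // One function, two compilations: the device pass defines __CUDA_ARCH__,
        // the host pass does not, so each pass keeps only its own branch.
        __host__ __device__ const char* which_pass() {
        #ifdef __CUDA_ARCH__
            return "device pass";
        #else
            return "host pass";
        #endif
        }

        __global__ void report() { printf("GPU sees: %s\n", which_pass()); }

        int main() {
            printf("CPU sees: %s\n", which_pass());
            report<<<1, 1>>>();
            cudaDeviceSynchronize();
            return 0;
        }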

        Originally posted by zxy_thf View Post
        HIP mostly copied CUDA's API, but it has quite a few weird design decisions.
        For example I don't see the benefits of requiring a specialized kernel driver (amdkfd) for GPU computing as its runtime.
        Sure they may have a few corporate grade features there, but it's not accessible to general developers and thus almost DOA (looking at my 5500XT).
        We needed a separate kernel driver (really a separate kernel API - amdgpu and amdkfd build into a single driver) because the Linux GPU framework (drm) treats each GPU independently, without cross-GPU functions. That is just starting to change now. We were also asked by our HSA partners to have a separate kernel driver for compute, in order to act as a reference design and code base for other vendors to use as well.

        AFAIK NVidia does not use the standard Linux/drm user/kernel interfaces in their driver stack - I imagine they maintained something like the Windows driver interface, which does support cross-GPU awareness and functionality. We use the drm interfaces for graphics functionality, and so had to either significantly extend drm (which we didn't feel we were ready for) or add a separate user/kernel interface for cross-GPU work.

        The main distinction, by the way, is in the user/kernel interface - drm/amdgpu presents each GPU as a separate device, e.g. /dev/dri/card0, while amdkfd presents a single device (/dev/kfd) that can access all GPUs.
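
        A minimal userspace sketch just to make that device-node distinction visible (illustrative only: real clients go through libdrm and the ROCR runtime rather than raw open() calls, and all actual ioctls are omitted):

        #include <cstdio>
        #include <fcntl.h>
        #include <unistd.h>

        int main() {
            // drm/amdgpu: one node per GPU; a second card would be /dev/dri/card1.
            int card0 = open("/dev/dri/card0", O_RDWR);
            printf("/dev/dri/card0 -> fd %d (per-GPU drm interface)\n", card0);

            // amdkfd: a single node through which every GPU is visible, which is
            // what enables the cross-GPU features (shared address space, GPU-to-GPU queues).
            int kfd = open("/dev/kfd", O_RDWR);
            printf("/dev/kfd -> fd %d (single cross-GPU compute interface)\n", kfd);

            if (card0 >= 0) close(card0);
            if (kfd >= 0) close(kfd);
            return 0;
        }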

        I expect that over time we should be able to migrate the cross-GPU functionality of amdkfd into drm, and any remaining single-GPU functionality into amdgpu. Even now most of amdkfd is cross-GPU functionality - things like a unified address space across all the GPUs and the ability for one GPU to queue work to other GPUs - and we call into amdgpu for most of the single-GPU functions.

        Starting with the 20.45 release, the amdkfd/ROCR paths are the default back end for Linux OpenCL on Vega and Navi parts.
        Last edited by bridgman; 04 December 2020, 05:22 PM.



        • #24
          Originally posted by tuxd3v View Post
          AMD lost a big opportunity to have OpenCL via Mesa Clover for machines that don't support PCIe atomic operations,
          which is tons of older hardware, still without a viable open-source stack.
          A big opportunity lost by AMD... sad.
          Vega cards have never required PCIe atomics. Not 100% sure, but I don't think Navi cards need them either.
          Last edited by bridgman; 04 December 2020, 05:19 PM.



          • #25
            Originally posted by jojo7887 View Post
            Anyone got an idea how I can use/test Lavapipe? I don't see it in Meson's build options.
            Add swrast to the list of Vulkan drivers, i.e. like this: "-Dvulkan-drivers=amd,swrast"; this will build both lavapipe and radv. You can then control which one is used via VK_ICD_FILENAMES, or enable both and use the VK_LAYER_MESA_device_select layer to reorder or filter devices. Many programs and apps will try to prioritize a discrete GPU if one is available, so they will usually not select lavapipe (which is marked as a CPU device); some will even refuse to use it (incorrectly), and others have a command line option or similar for selecting which device to use. For example, at the moment I simply use VK_ICD_FILENAMES=/home/user/mesa-git/installdir/build-amd64-opt/install/share/vulkan/icd.d/lvp_icd.x86_64.json (or a colon-separated list if you wish).

            If you install Mesa into the standard locations, then the best option is to use the VK_LAYER_MESA_device_select layer. I noticed that lavapipe is listed as the first Vulkan device on my system, even if I use VK_ICD_FILENAMES and specify lvp at the end - could be a bug in the loader. But even then, most programs I have tested so far will still look for GPU devices first, even if lavapipe is first on the list, so it is reasonable.

            Once lavapipe (or "lvp", as it is abbreviated) is in use, it will show up as a "llvmpipe" Vulkan device.
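
            If you want to verify which ICD actually got picked, here is a small C++ sketch that enumerates the physical devices and prints their names and types; with the lvp ICD loaded you should see a "llvmpipe" entry reported as a CPU device. (Build with something like g++ enum_devices.cpp -lvulkan; the file name is just an example.)

            #include <cstdio>
            #include <vector>
            #include <vulkan/vulkan.h>

            int main() {
                VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
                app.apiVersion = VK_API_VERSION_1_0;
                VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
                ci.pApplicationInfo = &app;

                VkInstance instance;
                if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

                uint32_t count = 0;
                vkEnumeratePhysicalDevices(instance, &count, nullptr);
                std::vector<VkPhysicalDevice> devices(count);
                vkEnumeratePhysicalDevices(instance, &count, devices.data());

                for (VkPhysicalDevice d : devices) {
                    VkPhysicalDeviceProperties props;
                    vkGetPhysicalDeviceProperties(d, &props);
                    // lavapipe reports VK_PHYSICAL_DEVICE_TYPE_CPU here
                    printf("%s (deviceType=%d)\n", props.deviceName, props.deviceType);
                }
                vkDestroyInstance(instance, nullptr);
                return 0;
            }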

            I tested many things, and it works pretty well; I haven't really spotted any issues or incorrect rendering yet.

            It is about 100x slower on average on my 16-core ThreadRipper 2560X, compared to an AMD Radeon R9 Fury X (FIJI, GFX8) with RADV/ACO. In a few benchmarks it was almost 700 times slower; in some others it was reasonable, with a slowdown of only about 40-60x. I.e. where RADV gives me 5000 FPS, lavapipe gives me 50 FPS. This was very consistent, and CPU usage during runs was pretty high (>1250% CPU load). This is with LLVM 11.0; I think it is using AVX and AVX2, but I am not sure. There is probably also some knob to increase the number of threads (I would like to use 30 threads, so it utilizes SMT better). I think the environment variable LP_NUM_THREADS sets it, but from what I can see in the llvmpipe source it is capped at 16; you need to change LP_MAX_THREADS in src/gallium/drivers/llvmpipe/lp_limits.h to use more.
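
            If you want to experiment with that, here is a tiny hedged sketch that sets the knobs from inside a program, before the Vulkan loader initializes anything (assuming lavapipe honors LP_NUM_THREADS the same way llvmpipe does; the ICD path is an example):

            #include <cstdlib>

            int main() {
                // Must happen before the loader pulls in the lavapipe ICD.
                setenv("LP_NUM_THREADS", "16", /*overwrite=*/1);  // capped at LP_MAX_THREADS (16)
                setenv("VK_ICD_FILENAMES",
                       "/usr/share/vulkan/icd.d/lvp_icd.x86_64.json", 1);
                // ... create the VkInstance and render as usual ...
                return 0;
            }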

            In a few benchmarks (ones stressing atomics, e.g. order-independent transparency on simple scenes), lavapipe was actually faster than RADV/ACO on the Fury X.

            Maybe on 64 cores it could be useful for some real titles. But as it is, it is still pretty decent even on an 8- or 16-core CPU, at least for testing, headless rendering without a GPU, troubleshooting, debugging and validation, etc. I like having this option. Another good use case is rendering very large scenes that would not fit in GPU memory (less of an issue now that some GPUs have 16GB or 24GB of memory), but if you have some crazy scene that requires 150GB of data, lavapipe could be an interesting option to try.

            Happy testing.

            PS. Zink on top of lavapipe is coming too! It works, but at the moment it requires minor modifications to the sources; this will be resolved soon in upstream Mesa. It would be awesome to compare zink+lavapipe to llvmpipe or SWR.
            Last edited by baryluk; 05 December 2020, 06:34 AM.



            • #26
              Originally posted by baryluk View Post

              Add swrast to the list of Vulkan drivers, i.e. like this: "-Dvulkan-drivers=amd,swrast"; this will build both lavapipe and radv
              This was the key - I had everything set up correctly except for this option, which was set to amd+intel only. I now have the 'libvulkan_lvp.so' file in my build and the corresponding JSON file in the icd folder.

              Thanks for the detailed explanation; I've also taken note of that LP_MAX_THREADS line.

              Have a nice weekend



              • #27
                Originally posted by jojo7887 View Post

                This was the key - I had everything set up correctly except for this option, which was set to amd+intel only. I now have the 'libvulkan_lvp.so' file in my build and the corresponding JSON file in the icd folder.

                Thanks for the detailed explanation; I've also taken note of that LP_MAX_THREADS line.

                Have a nice weekend

                Good! I expect most games will not work unfortunately, because lavapipe is really quite behind in terms of supported extensions: 10, compared to 110 on my Fury X GPU (https://www.vulkan.gpuinfo.org/compa...d%5B9906%5D=on). But it's worth a try for some stuff.



                • #28
                  Originally posted by baryluk View Post


                  Good! I expect most games will not work unfortunately, because lavapipe is really quite behind in terms of supported extensions: 10, compared to 110 on my Fury X GPU (https://www.vulkan.gpuinfo.org/compa...d%5B9906%5D=on). But it's worth a try for some stuff.
                  I think it's too early for most games to run without issues. I still gave vkQuake 1+2 a shot, and they both work fine; anisotropic filtering in vkQ1 has no effect, but the game is playable.

                  Lavapipe has great potential already; a quick test against SwiftShader's CPU Vulkan shows it performing almost twice as fast: https://i.imgur.com/emddBDK.jpg

                  Test performed on a Ryzen 1700X; game resolution was 3840x1080.



                  • #29
                    Originally posted by jojo7887 View Post

                    I think it's too early for most games to run without issues. I still gave vkQuake 1+2 a shot, and they both work fine; anisotropic filtering in vkQ1 has no effect, but the game is playable.

                    Lavapipe has great potential already; a quick test against SwiftShader's CPU Vulkan shows it performing almost twice as fast: https://i.imgur.com/emddBDK.jpg

                    Test performed on a Ryzen 1700X; game resolution was 3840x1080.
                    Nice!

                    I tried to do some DXVK testing, but unfortunately/fortunately it requires Vulkan 1.2, which lavapipe doesn't support yet, nor does it implement all the needed extensions. It will take some time to get there.



                    • #30
                      Originally posted by xxmitsu View Post

                      Can't wait for usable Mesa Clover on amdgpu
                      CLVK is the best bet there.

