Vulkan 1.0 Released: What You Need To Know About This Cross-Platform, High-Performance Graphics API


  • Originally posted by SystemCrasher View Post
    ... So it seems the "GCN" term mostly refers to the new shader core design ...
    What does the C in "GCN" mean?

    Comment


    • Originally posted by bridgman View Post
      Yeah, but the fact that something is experimental today doesn't mean it will be experimental forever
      "Experimental" could end either by being promoted to mainline/default or by being chopped away. TBH I've got the impression AMD devs are rather leaning toward the latter.

      AFAIK we have been REALLY FREAKIN' CLEAR that the upstream restrictions on breaking userspace do not apply to packaged binary drivers (eg amdgpu hybrid)
      Distros have every option to load modules the way they see fit by fiddling with blacklists/HWDB/aliases/etc. Most are using udev, which allows plenty of strange things. Furthermore, distros are responsible for the consistency of updates, etc. I do not really get why two modules able to support the same HW should be a big deal. There are other things supported by two different modules, e.g. some Realtek Wi-Fi cards (one module just works and another is a "proper" mac80211 rewrite). I do not get why loading a different kernel module counts as "breakage". Most of the time these decisions are made by udev; it is not up to the kernel to decide. If usermode loads a module it can't handle, there is nothing the kernel can do.

      At most I can imagine a strange combo where one builds both modules into the kernel image (uhm, is this supported at all?). But the kernel allows configuring even stranger combos. So if kernel devs are so worried about breakage, building a kernel without /proc BREAKS usermode, and without ELF support getting usermode started could be a bit tricky :P. So I'm not really sure why the kernel needs an option to disable support for "older" parts at all, and especially why it should be disabled by default. Are there some unique failure modes AMD or the kernel devs are afraid of?
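
      To illustrate the distro-side knob I mean: it is literally one line of standard modprobe.d config (a sketch, not taken from any actual distro):
      Code:
      # /etc/modprobe.d/radeon-blacklist.conf -- illustration only, standard
      # modprobe.d syntax: stops udev/kmod from auto-loading radeon so that
      # another driver built for the same HW can claim it instead
      blacklist radeon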

      that amdgpu support for earlier HW had been enabled by default from day one. Alex and I both said multiple times that initial amdgpu development had been done on CI, and that hybrid development was continuing on it.
      But the fate of this code in the long run was not clear. I.e. it sounded like it was happening only because the newer parts were not here yet, and when they arrived the old code could be chopped away, since it would no longer be needed to test the driver. I can't remember anyone saying it was going to stay, and especially that it was going to be the new default at some point. Maybe I've missed something once more, though.

      Yeah, even I forget sometimes that "GCN" means "Graphics Core Next", ie just the shader core. There's no rigid pattern but generally you won't see big core changes happen in the same generation as big uncore changes.
      [...]
      The kernel driver cares about uncore and scheduling/dispatching but not about ISA. Take the above list with a grain of salt, it's a 90-second brain dump.
      Verrrrry good explanation; it explains why things happen this way. I guess it would be good to put it somewhere near http://xorg.freedesktop.org/wiki/RadeonFeature (though this wiki seems to be a "private club", so that is up to the devs).

      Yes, all drivers that support multiple HW generations have to do that. The issue is that by breaking between SI and CI we were avoiding a big chunk of duplication. We can't remove the code from radeon because it's still needed for NI, but adding NI to amdgpu starts to get stupid.
      Hmm, NI in AMDGPU? Sounds funny. I can imagine NI owners would be happy, though it could look a bit unexpected unless one gets this idea about core/uncore/ISA.

      Yeah, IIRC that's still an option, but doing something like that at the same time as all the other changes we are making gets impractical. Agree that if nothing else that could make us feel a bit better about the bloat from adding SI to amdgpu
      Now I can at least understand why SI support in AMDGPU looks not so exciting from a technical point of view.

      Comment


      • Originally posted by bridgman View Post
        Unapproved again. Sigh.
        Damn, now it's my turn to face the random moderation stuff :\.

        drSeehas: yeah, the C stands for Core ("Graphics Core Next"), but as you can see the overall storyline is funnier than that: when it comes to the kernel, the core is not a big deal, it is mostly about the "uncore" parts, and in that regard GCN 1.0 seems more like NI. That is not at all obvious unless you're AMD staff, or a maniac who dug out all the differences between GPUs himself, which does not seem easy to do.

        Comment


        • Originally posted by SystemCrasher View Post
          "Experimental" could end either by being promoted to mainline/default or by being chopped away. TBH I've got the impression AMD devs are rather leaning toward the latter.
          ...
          But the fate of this code in the long run was not clear. I.e. it sounded like it was happening only because the newer parts were not here yet, and when they arrived the old code could be chopped away, since it would no longer be needed to test the driver. I can't remember anyone saying it was going to stay, and especially that it was going to be the new default at some point. Maybe I've missed something once more, though.
          Sorry about that; I thought we had made it sufficiently clear, e.g. via Alex & Jammy's talk at XDC and the associated media coverage. The hybrid driver was always going to include CI code, and there was a strong desire to include SI support so we could retire Linux Catalyst rather than having to maintain it for earlier HW. Note that while changing upstream defaults was highly desirable, it was not and is not something we can take for granted. See below.

          Originally posted by SystemCrasher View Post
          Distros have every option to load modules the way they see fit by fiddling with blacklists/HWDB/aliases/etc. Most are using udev, which allows plenty of strange things. Furthermore, distros are responsible for the consistency of updates, etc. I do not really get why two modules able to support the same HW should be a big deal. There are other things supported by two different modules, e.g. some Realtek Wi-Fi cards (one module just works and another is a "proper" mac80211 rewrite). I do not get why loading a different kernel module counts as "breakage". Most of the time these decisions are made by udev; it is not up to the kernel to decide. If usermode loads a module it can't handle, there is nothing the kernel can do.
          Usermode doesn't load kernel graphics drivers any more, and has not done so since pre-KMS days. These days the kernel graphics driver comes up at boot, sets up the HW and display(s), then waits for calls from userspace. If the userspace driver is written to use radeon IOCTLs but the amdgpu driver is actually responding for that hardware (with different IOCTLs, one of the reasons for starting a new driver) then Bad Things happen.
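
          To make that concrete, here is a minimal sketch (plain libdrm, not Mesa's or AMD's actual code) of how a userspace driver can ask which kernel driver actually bound the device before speaking its IOCTL dialect:
          Code:
          // sketch only: query the bound kernel driver via libdrm before
          // choosing between radeon and amdgpu IOCTLs
          // build: g++ probe.cpp -I/usr/include/libdrm -ldrm
          #include <cstdio>
          #include <fcntl.h>
          #include <unistd.h>
          #include <xf86drm.h>

          int main()
          {
              int fd = open("/dev/dri/card0", O_RDWR);
              if (fd < 0)
                  return 1;
              drmVersionPtr v = drmGetVersion(fd);
              if (v) {
                  // v->name is e.g. "radeon" or "amdgpu"; a userspace driver
                  // that "speaks both languages" would branch on this
                  printf("kernel driver: %.*s\n", v->name_len, v->name);
                  drmFreeVersion(v);
              }
              close(fd);
              return 0;
          }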

          Originally posted by SystemCrasher View Post
          At most I can imagine a strange combo where one builds both modules into the kernel image (uhm, is this supported at all?). But the kernel allows configuring even stranger combos. So if kernel devs are so worried about breakage, building a kernel without /proc BREAKS usermode, and without ELF support getting usermode started could be a bit tricky :P. So I'm not really sure why the kernel needs an option to disable support for "older" parts at all, and especially why it should be disabled by default. Are there some unique failure modes AMD or the kernel devs are afraid of?
          We always have to build both modules, to handle the case where (for example) the system includes both NI and VI GPUs. Because of that, the drivers have to have a coordinated view of which hardware generations they support, and that has to align with the expectations of userspace drivers. The first step is getting userspace drivers written and broadly distributed which can "speak both languages", i.e. work with either amdgpu or radeon IOCTLs for the SI/CI hardware; once that is in common use we can start looking at flipping the switch on defaults.

          EDIT - missed a point in last paragraph - distros always build both drivers because they want a single kernel image that can work with all of the target hardware devices.
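
          To sketch what "flipping the switch" might eventually look like from the packaging side (an assumption on my part: the si_support/cik_support module parameters shown here did not exist at the time of this thread and only appeared in later kernels):
          Code:
          # /etc/modprobe.d/amdgpu-prefer.conf -- hypothetical sketch of
          # coordinated defaults; si_support/cik_support are parameters from
          # later kernels, shown here purely for illustration
          options amdgpu si_support=1 cik_support=1
          options radeon si_support=0 cik_support=0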
          Last edited by bridgman; 27 February 2016, 11:37 AM.

          Comment


          • #133 unapproved, bogus "next page" message. Definite pattern here.

            Comment


            • Posting too much

              Comment


              • Originally posted by haagch View Post
                Thanks for the links to the compiled version. I tried compiling the sources, but it seems cmake tries to link the Windows .dll libraries (???), and I didn't look into it much.

                Still, none of the demos run on my GPU, which is:
                Code:
                GPU0
                VkPhysicalDeviceProperties:
                ===========================
                apiVersion = 4194306
                driverVersion = 1
                vendorID = 0x8086
                deviceID = 0x0166
                deviceType = INTEGRATED_GPU
                deviceName = Intel(R) Ivybridge Mobile
                Some of the failures:

                gears: /home/sascha/dev/vulkan/base/vulkantools.cpp:236: VkShaderModule_T* vkTools::loadShader(const char*, VkDevice, VkShaderStageFlagBits): Assertion `size > 0' failed.

                bloom: /home/sascha/dev/vulkan/base/vulkanTextureLoader.hpp:360: void vkTools::VulkanTextureLoader::loadCubemap(const char*, VkFormat, vkTools::VulkanTexture*): Assertion `!texCube.empty()' failed.

                mesh: /home/sascha/dev/vulkan/base/vulkanTextureLoader.hpp:66: void vkTools::VulkanTextureLoader::loadTexture(const char*, VkFormat, vkTools::VulkanTexture*, bool): Assertion `!tex2D.empty()' failed.

                pushconstants: /home/sascha/dev/vulkan/base/vulkanexamplebase.cpp:287: void VulkanExampleBase::loadMesh(const char*, vkMeshLoader::MeshBuffer*, std::vector<vkMeshLoader::VertexLayout>, float): Assertion `mesh->m_Entries.size() > 0' failed.
                These asserts are not related to your Intel architecture being too old; they indicate that the data files can't be found.
                Just make sure that you execute from the right directory (usually the "bin/" dir) so that "./../data/" is a valid dir.
                After that, you should get at least a bit further.
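
                For what it's worth, a minimal sketch (not the demos' actual loader) of why a missing data file trips exactly these "size > 0" / "!empty()" style asserts:
                Code:
                // sketch only, not the SaschaWillems loader: a relative-path
                // loader asserts on size once the CWD is wrong and the file
                // silently fails to open
                #include <cassert>
                #include <fstream>
                #include <vector>

                std::vector<char> loadFile(const char* path)
                {
                    std::ifstream in(path, std::ios::binary | std::ios::ate);
                    size_t size = in.good() ? static_cast<size_t>(in.tellg()) : 0;
                    // fires just like the demo asserts when "./../data/" doesn't resolve
                    assert(size > 0 && "data file not found - run from the bin/ directory");
                    std::vector<char> buf(size);
                    in.seekg(0);
                    in.read(buf.data(), size);
                    return buf;
                }

                int main()
                {
                    loadFile("./../data/some.file"); // illustrative relative path, as in the demos
                }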

                I tried this with HD Graphics 530 (Skylake GT2), the latest from ppa:canonical-x/vulkan, and I can run all but the tessellation demos.
                Another exception is the "pushconstants" demo, which seems to crash:
                Code:
                ../../../../../src/intel/vulkan/anv_device.c:414: FINISHME: Get correct values for VkPhysicalDeviceLimits
                ../../../../../src/intel/vulkan/anv_device.c:414: FINISHME: Get correct values for VkPhysicalDeviceLimits
                WARNING: Unsupported SPIR-V Capability
                WARNING: Unsupported SPIR-V Capability
                
                Program received signal SIGSEGV, Segmentation fault.
                0x00007ffff5471e6e in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                (gdb) bt
                #0  0x00007ffff5471e6e in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #1  0x00007ffff54737c3 in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #2  0x00007ffff5473c8a in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #3  0x00007ffff54738bf in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #4  0x00007ffff5474357 in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #5  0x00007ffff547465c in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #6  0x00007ffff519e43a in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #7  0x00007ffff5124795 in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #8  0x00007ffff51253da in anv_pipeline_init () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #9  0x00007ffff54ed306 in gen9_graphics_pipeline_create () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #10 0x00007ffff5125b8e in anv_CreateGraphicsPipelines () from /usr/lib/x86_64-linux-gnu/libvulkan_intel.so
                #11 0x000000000042fd0a in ?? ()
                #12 0x0000000000000012 in ?? ()
                #13 0x0000000000000000 in ?? ()
                Performance seems to be good at default resolution (fps ranging from 200-600 for most demos), and all demos seem to use the GPU at 100%.

                Issues encountered so far:
                - window resizing can cause assertion errors: a non-zero retval (always -1000001004, i.e. VK_ERROR_OUT_OF_DATE_KHR) from the call below (see the sketch after this list):
                Code:
                fpAcquireNextImageKHR(device, swapChain, UINT64_MAX, presentCompleteSemaphore, (VkFence)nullptr, currentBuffer);
                - when shrink-resizing the window does succeed, it doesn't shrink the rendering surface at all (growing the window, however, works)
                - there seem to be some input buffer overruns (e.g. when rotating the scene with the mouse for a while), but it eventually catches up
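
                Since -1000001004 is VK_ERROR_OUT_OF_DATE_KHR, the usual reaction is to recreate the swapchain on resize rather than assert; a minimal sketch, where recreateSwapchain() is a hypothetical helper and not from the demos:
                Code:
                // sketch of the standard handling for an out-of-date swapchain
                #include <vulkan/vulkan.h>

                extern VkSwapchainKHR recreateSwapchain(); // hypothetical: rebuilds the swapchain, returns the new handle

                VkResult acquireNext(VkDevice device, VkSwapchainKHR& swapChain,
                                     VkSemaphore presentCompleteSemaphore,
                                     uint32_t* currentBuffer)
                {
                    VkResult res = vkAcquireNextImageKHR(device, swapChain, UINT64_MAX,
                                                         presentCompleteSemaphore,
                                                         VK_NULL_HANDLE, currentBuffer);
                    if (res == VK_ERROR_OUT_OF_DATE_KHR || res == VK_SUBOPTIMAL_KHR) {
                        // the surface changed (e.g. window resize); rebuild the
                        // swapchain and acquire again instead of asserting
                        swapChain = recreateSwapchain();
                        res = vkAcquireNextImageKHR(device, swapChain, UINT64_MAX,
                                                    presentCompleteSemaphore,
                                                    VK_NULL_HANDLE, currentBuffer);
                    }
                    return res;
                }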

                Comment


                • Originally posted by Eliasvan View Post
                  These asserts are not related to your Intel architecture being too old; they indicate that the data files can't be found.
                  Well, with later versions of anvil and later versions of the samples, it worked fine. Perhaps something was fixed in the meantime.

                  Comment


                  • This is an April Fools' joke. I tested Vulkan after the Mesa merge and only a few demos rendered correctly with Haswell. It would be very illogical if Ivy Bridge had it working at this point.

                    Comment


                    • Originally posted by Kano View Post
                      This is an April Fools' joke. I tested Vulkan after the Mesa merge and only a few demos rendered correctly with Haswell. It would be very illogical if Ivy Bridge had it working at this point.
                      Well, some still don't work, and with anvil they are still visibly brighter than with LunarG's experimental driver.
                      E.g. the skeletal animation demo:
                      anvil: https://www.youtube.com/watch?v=R071L6M-m3U
                      LunarG's early driver: https://www.youtube.com/watch?v=dVGgOQsUkEg
                      Not sure which one is correct.

                      Still, most demos now render fine, though at times very slowly. Now that someone from Intel has fixed the Hologram example program from LunarG, even that one works.
                      BUT: on Ivy Bridge it's very slow: ~5 fps. So they still have a lot of work to do on Ivy Bridge (and Haswell).

                      So if it isn't working at all for you, maybe go to #intel-gfx or #dri-devel and ask about it; there's a good chance it only needs a small fix.

                      edit: vkcube worked in the past, but is broken again now: https://bugs.freedesktop.org/show_bug.cgi?id=95139

                      Also make sure to use the latest git from the SaschaWillems/Vulkan repository. There has been a lot of activity and fixes there too.
                      Last edited by haagch; 06 May 2016, 06:50 AM.

                      Comment
