Details On NVIDIA's Vulkan Driver, Sounds Like It Will Be A Same-Day Release


  • #51
    Originally posted by liam View Post


    Maybe. What we know is that NVIDIA doesn't seem to do as well as GCN in these DX12 benchmarks (Ashes hasn't been their only example).

    edit: BTW, I don't own stock, contribute code, or own any nvidia/amd hardware (aside from what's in my consoles)
    http://www.extremetech.com/gaming/21...ronous-compute

    Fury X: 8.6 TFLOPS vs. 980 Ti: 5.6 TFLOPS / 8.4 TOPS (plus or minus).

    Also, I was referring to the lowest execution level: MIMD vs. SIMD.
    Last edited by artivision; 05 September 2015, 09:01 PM.
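
    For reference, those figures match the usual back-of-the-envelope peak-FP32 estimate of shader count × clock × 2 FLOPs per cycle (one FMA). A quick sanity check in C, assuming the cards' reference clocks (1050 MHz for the Fury X, 1000 MHz for the 980 Ti):

    Code:
    /* Rough check of the quoted peak-FP32 numbers, assuming the usual
     * "shaders x clock x 2 FLOPs (one FMA per cycle)" formula. */
    #include <stdio.h>

    static double peak_tflops(int shaders, double clock_ghz)
    {
        return shaders * clock_ghz * 2.0 / 1000.0; /* result in TFLOPS */
    }

    int main(void)
    {
        printf("Fury X : %.1f TFLOPS\n", peak_tflops(4096, 1.050)); /* ~8.6 */
        printf("980 Ti : %.1f TFLOPS\n", peak_tflops(2816, 1.000)); /* ~5.6 */
        return 0;
    }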

    Comment


    • #52
      Just wondering, could Vulkan help 3D acceleration on virtual machines (e.g. virgil3d), or would the more direct hardware access actually be a burden?

      Comment


      • #53
        Originally posted by artivision View Post

        Or simply Nvidia Hardware "driver independent" takes Synchronous graphics and operates on them as Asynchronous tiles. AMD Hardware operates them as Synchronous bulk input and output, so you need to write them Asynchronous. As far as i know only AMD has that problem and not any other vendor from mobile or standard computing. Something is missing from Radeon Hardware.
        Oh hey, here's artivision again with his nonsensical technobabble. What in the fricking world is "synchronous graphics" lol

        Comment


        • #54
          Originally posted by dragorth View Post


          This seems to be the biggest problem with getting into 3D graphics. Not only do you have to wrangle OpenGL/DXxx; mixed in with that, you have to learn about the GPU system, and if your math skills aren't ready, you also need to wrangle somewhat advanced mathematics. (This last point is relative, but it can be a showstopper for the wrong mindset.)
          You don't need to learn complicated math to understand GPUs; what you do need math for is modeling a 3D world onto a 2D display plane. It's the same way you don't need to understand advanced data structures to grasp how a CPU operates (even if you're planning to use those data structures on the CPU).
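
          A minimal sketch of the kind of math meant here: projecting one camera-space point onto the 2D display plane. The field of view, aspect ratio, and sample point are made-up example values.

          Code:
          /* Pinhole perspective projection of one camera-space point to
           * normalized device coordinates in [-1, 1]. */
          #include <math.h>
          #include <stdio.h>

          typedef struct { float x, y, z; } Vec3;

          static void project(Vec3 p, float fov_y_rad, float aspect,
                              float *ndc_x, float *ndc_y)
          {
              float f = 1.0f / tanf(fov_y_rad * 0.5f); /* focal scale from vertical FOV */
              *ndc_x = (f / aspect) * p.x / -p.z;      /* perspective divide by depth */
              *ndc_y = f * p.y / -p.z;
          }

          int main(void)
          {
              Vec3 p = { 1.0f, 0.5f, -2.0f };          /* point 2 units in front of the camera */
              float x, y;
              project(p, 60.0f * 3.14159265f / 180.0f, 16.0f / 9.0f, &x, &y);
              printf("NDC: (%.3f, %.3f)\n", x, y);     /* roughly (0.487, 0.433) */
              return 0;
          }

          A real renderer would express this as a 4x4 projection matrix, but the divide-by-depth step is the essence of mapping 3D onto a 2D plane.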

          Comment


          • #55
            Originally posted by Ancurio View Post

            Oh hey, here's artivision again with his nonsensical technobabble. What in the fricking world is "synchronous graphics" lol
            When you are obligated to use a specific number of objects from the A sum, with a specific number of objects from the B sum, with a specific number of objects from the C sum!

            Comment


            • #56
              Originally posted by kkppcc View Post
              Just wondering, could Vulkan help 3D acceleration on virtual machines (e.g. virgil3d), or would the more direct hardware access actually be a burden?
              A strategy the VM vendors could take is to implement DX11 or OpenGL on top of Vulkan-capable systems. This could improve the performance of these systems, as VMware and Parallels have shown by implementing their DX layers on top of OpenGL (on Mac and Linux, at least). They may be able to create a thinner layer for Vulkan itself. There would still be overhead, of course, and vGPUs may make that a less viable option in the future.
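
              As a rough, hypothetical illustration of what such a thin layer could look like: a glDrawArrays-style entry point that simply records into a Vulkan command buffer. The ShimContext struct and shim_draw_arrays name are invented for illustration, and pipeline/buffer creation, state tracking, and synchronization are omitted entirely.

              Code:
              /* Toy sketch of a GL-style draw call translated into Vulkan commands. */
              #include <vulkan/vulkan.h>

              typedef struct {
                  VkCommandBuffer cmd;      /* command buffer currently being recorded */
                  VkPipeline      pipeline; /* pipeline matching the emulated GL state */
                  VkBuffer        vbo;      /* vertex buffer the guest app has bound */
              } ShimContext;

              /* Rough equivalent of glDrawArrays(GL_TRIANGLES, first, count). */
              void shim_draw_arrays(ShimContext *ctx, uint32_t first, uint32_t count)
              {
                  VkDeviceSize offset = 0;
                  vkCmdBindPipeline(ctx->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, ctx->pipeline);
                  vkCmdBindVertexBuffers(ctx->cmd, 0, 1, &ctx->vbo, &offset);
                  vkCmdDraw(ctx->cmd, count, 1, first, 0);
              }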

              Comment


              • #57
                Originally posted by Ancurio View Post

                You don't need to learn complicated math to understand GPUs; what you do need math for is modeling a 3D world onto a 2D display plane. It's the same way you don't need to understand advanced data structures to grasp how a CPU operates (even if you're planning to use those data structures on the CPU).
                Not to disagree for the sake of argument, but you will do better with some calculus even if all you are doing is 2D, not to mention some basic linear algebra. Think of transformations on images and color spaces just to blit to the screen in sRGB space; none of that requires a 3D game or display, yet it all sits at the starting point for GPU stuff. It would be much better to learn these things separately, say in a software renderer, than to figure them out in addition to the OpenGL API and the performance optimization of the GPU.

                You are right that you don't NEED it; you can do it the hard way, except that when you go to look at other people's code, you won't know what it is talking about.
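
                As a small example of the color-space math mentioned above: the standard linear-to-sRGB encode (the piecewise curve from the sRGB spec) applied to a single channel. The 0.18 input is just an illustrative middle grey.

                Code:
                /* Linear-light value in [0, 1] -> sRGB-encoded value in [0, 1]. */
                #include <math.h>
                #include <stdio.h>

                static float linear_to_srgb(float c)
                {
                    if (c <= 0.0031308f)
                        return 12.92f * c;                         /* linear segment near black */
                    return 1.055f * powf(c, 1.0f / 2.4f) - 0.055f; /* gamma segment */
                }

                int main(void)
                {
                    printf("linear 0.18 -> sRGB %.3f\n", linear_to_srgb(0.18f)); /* ~0.461 */
                    return 0;
                }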

                Comment


                • #58
                  Originally posted by dragorth View Post

                  Not to disagree for the sake of argument, but you will do better with some calculus even if all you are doing is 2D, not to mention some basic linear algebra. Think of transformations on images and color spaces just to blit to the screen in sRGB space; none of that requires a 3D game or display, yet it all sits at the starting point for GPU stuff. It would be much better to learn these things separately, say in a software renderer, than to figure them out in addition to the OpenGL API and the performance optimization of the GPU.

                  You are right that you don't NEED it; you can do it the hard way, except that when you go to look at other people's code, you won't know what it is talking about.
                  Okay, but what I'm saying is: that's the type of knowledge you need if you want to do something with a GPU, not if you want to know how a GPU itself works. Linear algebra and calculus will teach you nothing about when vertex and fragment shaders are run, how vertex data is fetched from memory, how different memory heaps are utilized, how caches work, etc. Do you see where I'm going?
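
                  To make the memory-heaps point concrete, here is a minimal query of what the Vulkan API reports for the first physical device; error handling is kept to a bare minimum and no layers or extensions are enabled.

                  Code:
                  /* List the memory heaps the Vulkan driver exposes for the first GPU. */
                  #include <stdio.h>
                  #include <vulkan/vulkan.h>

                  int main(void)
                  {
                      VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
                      VkInstance instance;
                      if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
                          fprintf(stderr, "no Vulkan instance available\n");
                          return 1;
                      }

                      uint32_t count = 1;
                      VkPhysicalDevice gpu;
                      vkEnumeratePhysicalDevices(instance, &count, &gpu);
                      if (count == 0) {
                          fprintf(stderr, "no Vulkan device found\n");
                          return 1;
                      }

                      VkPhysicalDeviceMemoryProperties mem;
                      vkGetPhysicalDeviceMemoryProperties(gpu, &mem);
                      for (uint32_t i = 0; i < mem.memoryHeapCount; i++) {
                          printf("heap %u: %llu MiB%s\n", i,
                                 (unsigned long long)(mem.memoryHeaps[i].size >> 20),
                                 (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                                     ? " (device local)" : "");
                      }

                      vkDestroyInstance(instance, NULL);
                      return 0;
                  }

                  On a typical discrete card this lists a device-local heap (the VRAM) alongside a host-visible one, which is exactly the kind of detail linear algebra won't teach you.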

                  Comment


                  • #59
                    Originally posted by Ancurio View Post

                    Okay, but what I'm saying is: that's the type of knowledge you need if you want to do something with a GPU, not if you want to know how a GPU itself works. Linear algebra and calculus will teach you nothing about when vertex and fragment shaders are run, how vertex data is fetched from memory, how different memory heaps are utilized, how caches work, etc. Do you see where I'm going?
                    I agree that you don't need to learn that stuff at the same time, and suggest you shouldn't. I am only saying that despite that, a disproportionate number of developers do start learning it all at once, due to wanting to implement some feature, or being required to for their job. I am basically decrying this state of affairs.

                    Comment
