DirectX 10/11 Coming Atop Gallium3D


  • #31
    Originally posted by BlackStar View Post
    Current D3D drivers are easily an order of magnitude more stable than OpenGL drivers, not least because they use a common HLSL compiler instead of 5 different ones.

    In fact, today I was debugging a GLSL shader that's interpreted differently by all three major vendors. With a tiny tweak, I can cause an access violation on ATI, a black screen on Nvidia and multicolored sparkly rendering on Intel (with current drivers). Three mutually incompatible interpretations of the same code - beat that!

    I dread to think what will happen if I add Mesa and OS X drivers into the mix...
    the solution is Mesa ... once it is used on all platforms on Linux and is up to date with the latest OpenGL version, you'll just need one code path :-)

    scary but true ...



    • #32
      Yet Another State Tracker? Please, no!

      I swear "There's going to be a state tracker for that!" is the Gallium3D way of saying "There's an app for that". Notice the shift in tense though. Name one Gallium3D state tracker that's done. Name one distro shipping with Gallium3D in their stack.

      Gallium3D is nearly vaporous and appearing more so every year. It doesn't need another 3D state tracker right now. It needs to finish one 3D state tracker. It needs to ship in a distro's 3D stack. Rotting in some niche git tree only to be entirely rewritten from scratch 14 months later does not count.

      I love that Gallium3D can host a DX10/11/12 state tracker. But please stay focused and finish shipping something first! Else it's time for me (at least) to stop hoping to see OSS 3D drivers in Linux before I die -- you know, sometime in the next 40 years.



      • #33
        Ummmm, nobody is talking about VMware's virtualization products, and I think that is the primary objective of this thing.

        Wine is an amazing project, but it isn't 100% compatible; in fact, it is far from 100% and it never will be. Virtualization is near 100%, but unfortunately the graphics suck.

        VMware Workstation 7 introduced a "wrapper" from DirectX 9.0 to OpenGL 2.1 that uses Gallium to access native OpenGL capabilities on the Linux host. Graphics performance increased A LOT, but this conversion introduces some errors, and it is not yet near the performance of native DirectX.

        But introducing DirectX into Gallium allows the Windows guest to access native DirectX hardware capabilities on the Linux host!!!!! If they achieve that, VMware will be the *perfect* Windows virtualization software!



        • #34
          Originally posted by haplo602 View Post
          the solution is Mesa ... once it is used on all platforms on Linux and is up to date with the latest OpenGL version, you'll just need one code path :-)

          scary but true ...
          Keyword: once.

          Mesa won't help with driver bugs in closed-source drivers, and you cannot really afford to ignore Nvidia and fglrx users (not to mention their Windows/OS X counterparts when going cross-platform).

          One code path, if only... (sighs)

          A question to those in the know: how does VMware use Gallium? My limited understanding is that they use TGSI as a transport layer between the guest and the host, i.e. client D3D turns into TGSI, which passes to the host - and then? I guess if the host uses Gallium, it could execute the code directly, but I can't really imagine VMware Workstation being used on hosts that run Gallium (instead of Nvidia/fglrx).

          Do they translate TGSI back into OpenGL for execution? Something else entirely?



          • #35
            Originally posted by Jimbo View Post
            VMware Workstation 7 introduced a "wrapper" from DirectX 9.0 to OpenGL 2.1 that uses Gallium to access native OpenGL capabilities on the Linux host. Graphics performance increased A LOT, but this conversion introduces some errors, and it is not yet near the performance of native DirectX.
            That's not at all how it works. We use Gallium in the guest to implement OpenGL 2.1 and only that. We use the bog-standard Mesa GL state tracker to translate OpenGL calls into a command protocol that we ship over the virtual PCI bus to our virtual video card. In the host we basically reverse the process, using a command interpreter to generate OpenGL calls that implement each of the commands. At the moment there are no Gallium components at all in the DirectX path, unless, of course, you happen to be running a Gallium-based host driver.
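
            Roughly, the round trip looks like the sketch below. The struct, command IDs and function names here are made up for illustration only - this is not our actual protocol, which is considerably more involved.

                /* Hypothetical sketch of the guest-encode / host-decode round trip.
                 * All names are invented for illustration; error handling, fencing
                 * and the real command set are omitted. */
                #include <stdint.h>
                #include <stdio.h>
                #include <string.h>

                enum cmd_id { CMD_DRAW = 1 };

                struct cmd_draw {            /* one fixed-size command record */
                    uint32_t id;             /* CMD_DRAW                      */
                    uint32_t prim;           /* primitive type                */
                    uint32_t first, count;   /* vertex range                  */
                };

                /* Guest side: the GL state tracker ends up appending a record like
                 * this to a FIFO that lives on the virtual PCI device. */
                static size_t guest_emit_draw(uint8_t *fifo, size_t tail,
                                              uint32_t prim, uint32_t first,
                                              uint32_t count)
                {
                    struct cmd_draw c = { CMD_DRAW, prim, first, count };
                    memcpy(fifo + tail, &c, sizeof c);
                    return tail + sizeof c;
                }

                /* Host side: the command interpreter walks the FIFO and replays each
                 * record as ordinary OpenGL calls against the host's GL driver
                 * (printf stands in for the real GL call here). */
                static void host_execute(const uint8_t *fifo, size_t head, size_t tail)
                {
                    while (head + sizeof(struct cmd_draw) <= tail) {
                        struct cmd_draw c;
                        memcpy(&c, fifo + head, sizeof c);
                        if (c.id == CMD_DRAW)
                            printf("host: glDrawArrays(%u, %u, %u)\n",
                                   (unsigned)c.prim, (unsigned)c.first, (unsigned)c.count);
                        head += sizeof c;
                    }
                }

                int main(void)
                {
                    uint8_t fifo[4096];
                    size_t tail = guest_emit_draw(fifo, 0, 4 /* GL_TRIANGLES */, 0, 3);
                    host_execute(fifo, 0, tail);
                    return 0;
                }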

            The reason we don't use the D3D state tracker in the guest is that it implements the wrong interface. D3D driver writers need to implement the D3DDDI, which is the driver-facing interface that the D3D runtime uses to communicate with the IHV-provided driver. This interface already looks fairly similar to Gallium (shaders are fully parsed, compiled and tokenized, there's no fixed-function cruft to deal with, and in D3D10 everything is simplified into constant state objects and uniform buffers).



            • #36
              So... as far as I understand it:

              Gallium3D is middleware. It provides lots of APIs (OpenGL/D3D/etc.) and translates their calls into generic-ish commands that it passes down to its drivers.
              Its drivers accept this generic stuff and translate it into card-specific stuff.

              Basically, it's OpenGL.
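
              Something like this is how I picture the layering - all the types and names below are made up for illustration, not the real Gallium interfaces:

                  /* Toy sketch of the layering as I understand it.  These are
                   * invented types, not the real Gallium interfaces. */
                  #include <stdio.h>

                  struct generic_cmd { int opcode; int args[4]; };   /* API-neutral command */

                  struct hw_driver {
                      /* every hardware driver translates generic commands to its card */
                      void (*submit)(const struct generic_cmd *cmd);
                  };

                  /* A "state tracker" (OpenGL, D3D, ...) turns API calls into generic
                   * commands and hands them to whichever driver is loaded underneath. */
                  static void state_tracker_draw(struct hw_driver *drv)
                  {
                      struct generic_cmd cmd = { 1 /* "draw" */, { 0, 0, 3, 0 } };
                      drv->submit(&cmd);
                  }

                  /* A toy backend standing in for one card-specific driver. */
                  static void my_card_submit(const struct generic_cmd *cmd)
                  {
                      printf("card-specific handling of opcode %d\n", cmd->opcode);
                  }

                  int main(void)
                  {
                      struct hw_driver drv = { my_card_submit };
                      state_tracker_draw(&drv);   /* same front-end, any driver underneath */
                      return 0;
                  }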

              So this "state tracker" is effectively a DX3D "front-end" API for use by applications?

              ... Then only specific applications would use this stuff, and all I can see is:
              1) Wine
              2) .NET programs

              .NET would probably still require a whole bunch of other useless MS API system calls to do this, so it wouldn't work without some implementation of Wine.
              Wine could conceivably ease up its work of turning each D3D command into OpenGL commands - except that not everyone would have Gallium on their machine, whereas all drivers have OpenGL support.

              Earlier, when it was stated that "D3D is much more stable than OpenGL" and "I can tweak a GLSL shader to have different effects on different drivers"... are you in fact talking about effects that are *NOT* specified in the OpenGL standards?
              You know, the ones where it says "the result of this is unspecified" or some similar wording?
              Similar to C++ code where you try to write past the end of an allocation, read a destroyed object, etc.

              This won't gain much ground unless Gallium is used as the mainstay for everything - and I fail to see the purpose of adding more conversion into the whole process... unless I'm missing something here?



              • #37
                Originally posted by paul_one View Post
                Earlier, when it was stated that "D3D is much more stable than OpenGL" and "I can tweak a GLSL shader to have different effects on different drivers"... are you in fact talking about effects that are *NOT* specified in the OpenGL standards?
                You know, the ones where it says "the result of this is unspecified" or some similar wording?
                Similar to C++ code where you try to write past the end of an allocation, read a destroyed object, etc.
                You can get VERY strange behaviour from the various GLSL compilers in existence. The OS X shader compiler will, pretty much at random, decide that a video card is unworthy of running a vertex shader and send it through its software fallback path. I've also seen the OS X GLSL shader compiler cause system-wide lockups with a very simple shader. I've got one shader where Nvidia G9x hardware will produce a NaN result from an operation that can't produce one, while ATI hardware returns a correct result.

                It also doesn't help that GLSL 1.30+ is such a massive departure from GLSL 1.20 and lower, or that Nvidia's GLSL compiler accepts a lot of leftover baggage from their Cg project. We often encounter shaders that work well on Nvidia/Linux but don't even compile on OS X.
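
                A trivial, hypothetical example of the kind of Cg-ism I mean, with the shader sources written as C string literals: the first version relies on leniencies Nvidia's compiler tends to allow, the second is the strictly conforming GLSL 1.10 form.

                    /* Two versions of the same toy fragment shader, as C string
                     * literals.  The first leans on Cg-style leniency -- an implicit
                     * int->float assignment and an 'f' suffix -- that Nvidia's GLSL
                     * compiler has historically accepted; strict GLSL 1.10 compilers
                     * reject both ('f' suffixes only became legal in 1.30).
                     * Hypothetical shader, for illustration only. */
                    #include <stdio.h>

                    static const char *lenient_fs =
                        "#version 110\n"
                        "void main() {\n"
                        "    float scale = 2;      // int -> float: not legal in 1.10\n"
                        "    float bias  = 0.5f;   // 'f' suffix: not legal before 1.30\n"
                        "    gl_FragColor = vec4(vec3(bias) * scale, 1.0);\n"
                        "}\n";

                    static const char *strict_fs =
                        "#version 110\n"
                        "void main() {\n"
                        "    float scale = 2.0;\n"
                        "    float bias  = 0.5;\n"
                        "    gl_FragColor = vec4(vec3(bias) * scale, 1.0);\n"
                        "}\n";

                    int main(void)
                    {
                        /* Dump both so they can be fed to a standalone GLSL validator. */
                        printf("lenient:\n%s\nstrict:\n%s", lenient_fs, strict_fs);
                        return 0;
                    }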

                D3D wins points in my book for shaders being compiled off-line into a binary format that's easy for the driver to deal with. This is almost certainly faster than running an optimizing compiler on shader text at runtime.



                • #38
                  @alexcorscadden: thanks for the insight, much appreciated!

                  @paul_one: my experience with GLSL is very similar to what alexcorscadden describes. Random instability, crashes and rendering bugs on perfectly fine shaders (with fine referring to strict adherence to standards, not Nvidia's fluid interpretation).

                  Actually the bogus NaN on G9x could explain a lot. I have a lighting shader that, under specific combinations of input, simply stops running. It is as if the card calculates some result that turns all further processing off: say, you have four lights, and use those specific parameters on light #3 - bam, your output is black even if lights 1, 2 and 4 are very much non-black. Fudge the input by 0.0001 and the result looks fine.
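
                  For illustration only - this is a made-up shader, not my actual one - the kind of mechanism that would explain it looks like this, with the GLSL source as a C string:

                      /* Simplified, hypothetical illustration of how a single NaN can
                       * blank the whole result: pow(0.0, 0.0) is undefined in GLSL,
                       * some hardware returns NaN for it, and NaN + x == NaN, so one
                       * bad light wipes out every other light's contribution. */
                      #include <stdio.h>

                      static const char *lighting_fs =
                          "#version 110\n"
                          "uniform vec3  light_color[4];\n"
                          "uniform vec3  half_dir[4];\n"
                          "uniform float shininess[4];   // a 0.0 here can trigger pow(0,0)\n"
                          "varying vec3  normal;\n"
                          "void main() {\n"
                          "    vec3 n = normalize(normal);\n"
                          "    vec3 color = vec3(0.0);\n"
                          "    for (int i = 0; i < 4; i++) {\n"
                          "        float ndoth = max(dot(n, half_dir[i]), 0.0);\n"
                          "        // ndoth == 0.0 and shininess[i] == 0.0 -> pow(0,0):\n"
                          "        // undefined, NaN on some hardware, and it propagates\n"
                          "        color += light_color[i] * pow(ndoth, shininess[i]);\n"
                          "    }\n"
                          "    gl_FragColor = vec4(color, 1.0);\n"
                          "}\n";

                      int main(void)
                      {
                          /* Print the source; in a real app it would go to glShaderSource(). */
                          fputs(lighting_fs, stdout);
                          return 0;
                      }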



                  • #39
                    Originally posted by BlackStar View Post
                    @alexcorscadden: thanks for the insight, much appreciated!

                    @paul_one: my experience with GLSL is very similar to what alexcorscadden describes. Random instability, crashes and rendering bugs on perfectly fine shaders (with fine referring to strict adherence to standards, not Nvidia's fluid interpretation).

                    Actually the bogus NaN on G9x could explain a lot. I have a lighting shader that, under specific combinations of input, simply stops running. It is as if the card calculates some result that turns all further processing off: say, you have four lights, and use those specific parameters on light #3 - bam, your output is black even if lights 1, 2 and 4 are very much non-black. Fudge the input by 0.0001 and the result looks fine.
                    Have you ever watched a video about ATI vs. Nvidia GPUs? Bank violations and such? ATI OpenCL will not work on Nvidia OpenCL and vice versa...

                    OpenCL is just a generic



                    • #40
                      Originally posted by V!NCENT View Post
                      OpenCL is just a generic...
                      God fscking damn 1min edit settings

                      OpenCL is just a generic way of programming each available type of compute device out there
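
                      To illustrate what I mean: a minimal sketch in plain C that just enumerates whatever OpenCL devices the installed platforms expose - the same host code runs whether the device is a GPU, a CPU or something else (error handling omitted):

                          /* Minimal sketch: list every OpenCL device from every installed
                           * platform.  Error handling omitted for brevity. */
                          #include <stdio.h>
                          #include <CL/cl.h>

                          int main(void)
                          {
                              cl_platform_id platforms[8];
                              cl_uint num_platforms = 0;
                              clGetPlatformIDs(8, platforms, &num_platforms);

                              for (cl_uint p = 0; p < num_platforms; p++) {
                                  cl_device_id devices[16];
                                  cl_uint num_devices = 0;
                                  clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                                                 16, devices, &num_devices);

                                  for (cl_uint d = 0; d < num_devices; d++) {
                                      char name[256];
                                      clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                                      sizeof name, name, NULL);
                                      printf("platform %u: %s\n", (unsigned)p, name);
                                  }
                              }
                              return 0;
                          }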

