A New Radeon Shader Compiler For Mesa

  • #11
    Originally posted by BlackStar View Post
    That's not exactly true. Most games ship precompiled shaders, simply because shader compilation takes a *lot* of time.

    On the other hand, OpenGL does not support precompiled shaders, forcing OpenGL programs to ship with shaders in source form. Most OpenGL developers have been asking for precompiled shaders for *years* (think 2003), but it seems that IHVs haven't been able to decide on a common format.
    I know that; I should have been clearer. (I'm not sure about "most games" — I personally checked BioShock, and its shaders ship in HLSL, easily readable.) However, the DX driver must still do the chip-specific optimizations at runtime, because the DX binary shader is an intermediate representation. OpenGL ES, on the other hand, has binary shaders through GL_OES_get_program_binary, and there the binary representation may be chip-specific.

    Comment


    • #12
      I think it depends on the platform. Games (all apps, in fact) shipped for PCs tend to ship shaders in source form, albeit usually "stripped" of comments, since the app needs to run on a wide variety of graphics hardware, each with different shader hardware assembly instructions. Users may also upgrade their graphics hardware after installing the app and expect their programs to take full advantage of the new hardware.
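The "stripping" mentioned above can be sketched with a small helper; `strip_glsl_comments` is a hypothetical illustration, not part of any real shipping toolchain:

```python
import re

def strip_glsl_comments(source: str) -> str:
    """Remove /* ... */ block comments and // line comments from GLSL
    source. GLSL has no string literals, so a plain regex pass is safe."""
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)  # block comments
    source = re.sub(r"//[^\n]*", "", source)                    # line comments
    # Collapse the blank lines left behind by removed comments.
    return re.sub(r"\n\s*\n+", "\n", source).strip()

shader = """
// simple pass-through vertex shader
attribute vec4 position; /* object-space */
void main() {
    gl_Position = position; // no transform
}
"""
print(strip_glsl_comments(shader))
```

The stripped source still compiles identically on any driver; only human-readable commentary is lost.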

      Applications which use source-level shaders usually make the API calls to compile the shaders during application startup, so once the app is running there is no overhead from the shader compilation step.

      Embedded apps, which are expected to run only on a single specific hardware configuration, often use precompiled shaders, which allows a smaller driver stack. There are also API options for OpenGL ES allowing an app to do a "one-time" shader compile at installation or only the first time an application is run, and then save the HW-specific binary for future invocations of the app.
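The "compile once, then reuse the saved binary" flow described above can be sketched roughly as follows; `compile_program` is a stand-in for the real driver compile and binary retrieval (e.g. via GL_OES_get_program_binary), and the cache path is made up for illustration:

```python
import os
import tempfile

def compile_program(source: str) -> bytes:
    # Stand-in for the real driver compile; a driver would return a
    # hardware-specific program binary here.
    return b"BINARY:" + source.encode()

def load_or_compile(source: str, cache_path: str) -> tuple[bytes, bool]:
    """Return (binary, was_cached). Compile and save on the first run,
    load the saved binary on every later run."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return f.read(), True
    binary = compile_program(source)
    with open(cache_path, "wb") as f:
        f.write(binary)
    return binary, False

cache = os.path.join(tempfile.mkdtemp(), "shader.bin")
first = load_or_compile("void main() {}", cache)   # compiles and saves
second = load_or_compile("void main() {}", cache)  # loads the saved binary
print(first[1], second[1])
```

On the second and all later invocations the expensive compile step is skipped entirely, which is the whole point of the install-time/first-run compile option.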
      Last edited by bridgman; 07-25-2009, 02:34 PM.

      Comment


      • #13
        Originally posted by BlackStar View Post
        On the other hand, OpenGL does not support precompiled shaders, forcing OpenGL programs to ship with shaders in source form. Most OpenGL developers have been asking for precompiled shaders for *years* (think 2003), but it seems that IHVs haven't been able to decide on a common format.
        In my opinion, not having a common binary format is an advantage. It means that
        a) IHVs can change the opcodes of their hardware to squeeze the most out of it, and
        b) as driver developers, we can use whatever representation we want internally.

        The only advantage of precompiled shaders is loading time. This can easily be achieved via a caching mechanism, so that's what ISVs should be asking for, if anything.
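Such a caching mechanism essentially keys a saved binary on everything that could invalidate it. A minimal sketch of the key derivation — the specific ingredients (source text, driver version, GPU id) are assumptions; a real driver would pick its own:

```python
import hashlib

def shader_cache_key(source: str, driver_version: str, gpu_id: str) -> str:
    """Derive a cache key that changes whenever the shader source, the
    driver, or the GPU changes, so a stale binary is never reused."""
    h = hashlib.sha1()
    for part in (source, driver_version, gpu_id):
        h.update(part.encode())
        h.update(b"\0")  # separator so field boundaries can't collide
    return h.hexdigest()

key_a = shader_cache_key("void main() {}", "mesa-7.5", "RV530")
key_b = shader_cache_key("void main() {}", "mesa-7.6", "RV530")
print(key_a != key_b)  # a driver upgrade invalidates the cached binary
```

Because the driver version is part of the key, an upgrade simply causes one fresh compile per shader, after which loading is fast again — no common cross-vendor binary format required.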

        Comment


        • #14
          A question that probably has nothing to do with the shading language compiler, but what are the reasons why graphics is slower on R400/R500 cards with the open-source radeon driver than with fglrx? I suppose the proprietary driver still hasn't delivered all of its secrets, but do we have an idea what these are?

          As an example, I got ~1000 FPS with the radeon driver and ~5000 FPS with fglrx on a Dell Inspiron 9400. (Don't get me wrong, 1000 FPS is very good indeed.) Does the shading language support bring a substantial performance boost compared to previous driver implementations?

          Comment


          • #15
            Originally posted by VinzC View Post
            A question that probably has nothing to do with the shading language compiler, but what are the reasons why graphics is slower on R400/R500 cards with the open-source radeon driver than with fglrx? I suppose the proprietary driver still hasn't delivered all of its secrets, but do we have an idea what these are?

            As an example, I got ~1000 FPS with the radeon driver and ~5000 FPS with fglrx on a Dell Inspiron 9400. (Don't get me wrong, 1000 FPS is very good indeed.) Does the shading language support bring a substantial performance boost compared to previous driver implementations?
            There have been a lot of improvements in mesa git master already compared to the last Mesa release. However, there are lots of optimizations that just haven't been implemented yet in the open driver. The information is there, but no one's had the time to do it yet.

            Some examples:
            - OQ support (in progress)
            - VBO support (in progress)
            - shader compiler improvements (in progress)
            - texture tiling
            - hyperz support

            Comment


            • #16
              Originally posted by agd5f View Post
              There have been a lot of improvements in mesa git master already compared to the last Mesa release. However, there are lots of optimizations that just haven't been implemented yet in the open driver. The information is there, but no one's had the time to do it yet.

              Some examples:
              - OQ support (in progress)
              - VBO support (in progress)
              - shader compiler improvements (in progress)
              - texture tiling
              - hyperz support
              Thanks for the hint. I don't know these acronyms at all (OQ and VBO). I just wish I could help, but it's too technical for me. Keep up the good work, guys.

              Note: I wonder why both teams (radeonhd and radeon) don't merge. Rather than having two sets of (human) resources working on separate but equivalent projects, why don't these guys (you?) join efforts and work on one single driver? Since they now tend to converge, the more resources on a project the more efficient it is, isn't it?

              EDIT: VBO=Vertex Buffer Objects? and OQ=Object Queueing?
              Last edited by VinzC; 07-28-2009, 06:33 AM.

              Comment


              • #17
                Originally posted by VinzC View Post
                Note: I wonder why both teams (radeonhd and radeon) don't merge. Rather than having two sets of (human) resources working on separate but equivalent projects, why don't these guys (you?) join efforts and work on one single driver? Since they now tend to converge, the more resources on a project the more efficient it is, isn't it?
                Actually, it does seem that they have *already* merged in a more technical sense, since KMS support is only being written for xf86-video-ati. (Writing it for xf86-video-radeonhd too would be rather silly, because then the two drivers really would be sharing pretty much everything.) Then again, I've also heard ideas that xf86-video-ati might be dropped altogether in favour of xf86-video-modesetting in time, when Gallium matures and is able to handle all 2D accel as well as 3D accel.
                Edit: IIRC OQ == Occlusion Queries; not exactly sure of that, nor of what it exactly means.

                Comment


                • #18
                  Originally posted by VinzC View Post
                  Note: I wonder why both teams (radeonhd and radeon) don't merge.
                  This has been discussed quite often in this forum.

                  In the short term it's more work to merge them than to keep developing both. In the long term both will (probably) be replaced by a generic KMS/Gallium driver anyway.

                  For the long version, try the search function.

                  Comment


                  • #19
                    Isn't that wasteful, having a whole high-level language compiler inside a driver?

                    Isn't it better to have an external executable (say, a binary that comes with the driver) to compile the shaders first, and let the driver only run the compiled shaders?

                    Comment


                    • #20
                      There is only one drm and 3d driver, regardless of which ddx you use (radeon or radeonhd).

                      Comment
