OpenVG State Tracker For Gallium3D Tomorrow


  • #16
    Originally posted by mendieta View Post
    Bridgman: long answer? Bring it on!
    /me is not bridgman

    There are two planned drivers for Radeons in Gallium, r300 and r600, with one winsys, DRM-based radeon. r300 covers r300-r500 hardware, and r600 covers r600-r700+ hardware.

    r300 is started, and kind of works for some very trivial cases. It needs a lot more work, but most of the code is there, just broken.

    r600 hasn't been started. There are all kinds of problems in the winsys and kernel code preventing it from being started.



    • #17
      Originally posted by Pfanne View Post
      well that's nice
      having all that accelerated would be useful ^^
      I was under the impression we already had a certain level of 2D acceleration (XAA, EXA, UXA?)

      Originally posted by mendieta View Post
      Guys, this is all cool and it looks like the next generation of open source Linux graphics will probably beat the crap out of the proprietary world.
      Well... yes and no. (What follows is a summary of what has been said already.)

      From my understanding (I am not a driver developer), Gallium3d will allow us to be more competitive with good proprietary drivers and make development and maintenance more efficient.

      Gallium3d (as already described) is a video card driver modeled on a generic representation of a modern 3d graphics card. The driver is split into distinct layers that separate the hardware, OS, and graphics API from each other. This means a developer primarily has to focus on creating the hardware-specific layer, a relatively small portion of a modern graphics driver.

      This means a new graphics card can get a complete driver with relatively little effort, especially for a vendor who doesn't have an existing Linux driver. It also means that adding a new API (say - OpenGL3) to all existing drivers only has to be done once.

      Making something more generic and possibly adding more layers does slow things down a little and/or removes options for optimization. That being said, the savings from sharing and generalizing all this code should mean stable, feature-complete drivers are created faster for open systems (Linux/BSD/etc.). And using LLVM to optimize the instructions the driver emits should accelerate rendering considerably, more than making up for the extra layers.

      Now does this blow the pants off of proprietary solutions?

      Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the number exactly) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

      I'm not a driver developer, so I can't comment on the limits of the design choices... but it seems relatively obvious that the trade off of that initial 20% is more than worth it for the better Linux/BSD drivers overall, greater development efficiencies, and more developer time to tweak drivers or to improve other areas of the graphic stack.

      ... and all the excitement around GPU processing will mean lots of focus on performance from a broader set of developers so things like LLVM super-scalar optimizations shouldn't be that far behind.

      It remains to be seen whether open source or proprietary will be king of the FPS scores in the end... but it will be exciting to watch.

      Exciting times...



      • #18
        Originally posted by Craig73 View Post
        Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the number exactly) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

        Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.

        That is, people buying video cards are generally looking for good performance in specific areas, like some people wanting very good performance in Maya for 3D editing. Or, for marketing reasons, the ATI or Nvidia folks want to have the best Quake4 performance possible.

        Stuff like that.

        Of course hardware tweaks are very important... But even if OSS drivers reach the same level of hardware sophistication as proprietary drivers (which isn't going to happen very soon), they still won't look good in benchmarks.

        Linux developers tend to shy away from application-specific stuff... It's a layering violation to make low-level behavior specific to certain high-level applications, which means you end up with multiple code paths and thus are much more likely to run into bugs and have big maintenance issues. Plus, OSS folks just don't have the resources to go through applications one by one and hack support for specific apps into the drivers.

        Maybe the OSS folks are hoping that a long-term solution would be to take advantage of JIT engines (like what LLVM can support) to make the drivers self-optimizing. That is, if you're running benchmarks or something like that, the second or third pass will be faster than the first. But that is some serious, serious computer science voodoo, so I wouldn't expect it to be very effective any time in the next few years. If Linux gains acceptance as an OpenCL computing platform, that would probably help out a _lot_, since you'd have lots of commercial interest in GPU optimizations. Of course, stability and bug fixing will come first, and that is going to take a while in itself.

        Of course, if you're using your own 3D stuff or playing more indie or Open Source/Free software games, there isn't much commercial interest from folks like Nvidia or ATI for their proprietary drivers. So for that sort of stuff the OSS drivers may actually end up being competitive.
        Last edited by drag; 05-01-2009, 12:03 PM.



        • #19
          Originally posted by drag View Post
          Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.
          Close. The upper tiers of performance require a lot of expensive generalized optimizations. If an application runs slowly, we profile it, look at what parts are slow, and optimize those parts in the driver. As a bonus, other applications get faster too. Sometimes this is stuff like adding in support for new OGL extensions; sometimes it's things like redoing math routines in assembly. Whatever gets us more speed by eliminating bottlenecks, really.



          • #20
            Originally posted by Craig73 View Post
            I was under the impression we already had a certain level of 2D acceleration (XAA, EXA, UXA?)
            well, having this hardware-independent is pretty useful



            • #21
              Originally posted by Pfanne View Post
              well, having this hardware-independent is pretty useful
              Agreed... or separated from X :-)



              • #22
                Originally posted by MostAwesomeDude View Post
                Close. The upper tiers of performance require a lot of expensive generalized optimizations. If an application runs slowly, we profile it, look at what parts are slow, and optimize those parts in the driver. As a bonus, other applications get faster too. Sometimes this is stuff like adding in support for new OGL extensions; sometimes it's things like redoing math routines in assembly. Whatever gets us more speed by eliminating bottlenecks, really.
                Thanks for clarifying (to you both). What's not clear to me is how much of this optimization is still available under Gallium3d without breaking the generalization too much; although I presume there would be other optimizations available (new state trackers, state tracker tweaking, refactoring/redesign of the Gallium generalizations, LLVM optimization improvements... likely tonnes to do yet).

                ...that being said, that may be too narrow a focus. After the 80% is achieved, I expect developer resources will find more low-hanging fruit in more state trackers or other areas of the graphics stack... perhaps an X.Org release for Michael ;-)



                • #23
                  Does anyone actually know how well all these state trackers work alongside each other?
                  For example, I'm playing a game which uses the Direct3D state tracker (if this is ever going to happen), plus Compiz, which is using the OpenGL state tracker, while the physics effects in this game are accelerated with OpenCL...
                  Will all this work well together, or will you notice a performance hit higher than for each tracker by itself?



                  • #24
                    Originally posted by Craig73 View Post
                    Thanks for clarifying (to you both). What's not clear to me is how much of this optimization is still available under Gallium3d without breaking the generalization too much; although I presume there would be other optimizations available (new state trackers, state tracker tweaking, refactoring/redesign of the Gallium generalizations, LLVM optimization improvements... likely tonnes to do yet).

                    ...that being said, that may be too narrow a focus. After the 80% is achieved, I expect developer resources will find more low-hanging fruit in more state trackers or other areas of the graphics stack... perhaps an X.Org release for Michael ;-)
                    Our guess was 60-70% of theoretical performance assuming something like Gallium3D but without a fancy shader compiler (LLVM or something else) for complex workloads.

                    For simpler workloads (where the GPU is not shader-limited) I think the open source drivers will get a lot closer to 100%. Strictly speaking you probably don't need Gallium for that but I expect it will help.

                    Originally posted by Pfanne View Post
                    Does anyone actually know how well all these state trackers work alongside each other?
                    For example, I'm playing a game which uses the Direct3D state tracker (if this is ever going to happen), plus Compiz, which is using the OpenGL state tracker, while the physics effects in this game are accelerated with OpenCL...
                    Will all this work well together, or will you notice a performance hit higher than for each tracker by itself?
                    I guess it depends mostly on how much video memory you have relative to the sum of all the buffer requirements for the different apps and state trackers. The memory manager can flip things between video and system memory but you take a performance hit if that happens much. An overloaded GPU will slow down predictably (2 tasks each run half as fast or better) but if you start thrashing video memory then you can quickly get a much larger drop in performance.
                    Last edited by bridgman; 05-01-2009, 05:33 PM.



                    • #25
                      Originally posted by bridgman View Post
                      I guess it depends mostly on how much video memory you have relative to the sum of all the buffer requirements for the different apps and state trackers. The memory manager can flip things between video and system memory but you take a performance hit if that happens much. I'm mentioning that more than the obvious GPU load because an overloaded GPU will slow down predictably (2 tasks run half as fast or better) but if you start thrashing video memory then you can quickly get a larger drop in performance.
                      so having a shitload of gpu memory can never be wrong
                      thanks for the answer!



                      • #26
                        Originally posted by Pfanne View Post
                        so having a shitload of gpu memory can never be wrong
                        thanks for the answer!
                        Well, only if it goes unused... which I'm sure most here would do their best to prevent
