OpenVG State Tracker For Gallium3D Tomorrow


  • #11
    Thanks a lot for the answers, Pfanne and Bridgman!

    Bridgman: long answer? Bring it on!

    I just built a Phenom X3 box on a board with a Radeon HD 3200 GPU, and I am hoping I can stay with ATI (voting with my wallet, you know; I do appreciate their support for open drivers). On the other hand, fglrx is core-dumping in Ubuntu 9.04 on my hardware.

    Ah, here is a nice summary of where we are, for anyone interested:
    http://www.x.org/wiki/RadeonFeature



    • #12
      Originally posted by 89c51 View Post
      which applications/libraries have backends for OpenVG and will benefit from the implementation?
      "OpenVG is a royalty-free, cross-platform API that provides a low-level hardware acceleration interface for vector graphics libraries such as Flash and SVG" (from the OpenVG home page).



      • #13
        That seems to demonstrate the value Gallium has for putting certain APIs on top of the GPU. Nice.



        • #14
          Originally posted by mendieta View Post
          Thanks a lot for the answers, Pfanne and Bridgman!

          Bridgman: long answer? Bring it on!

          I just built a Phenom X3 box on a board with a Radeon HD 3200 GPU, and I am hoping I can stay with ATI (voting with my wallet, you know; I do appreciate their support for open drivers). On the other hand, fglrx is core-dumping in Ubuntu 9.04 on my hardware.

          Ah, here is a nice summary of where we are, for anyone interested:
          http://www.x.org/wiki/RadeonFeature


          Yes, this is very nice. I hope it is kept up to date, as it should be a big help for my future hardware purchases. I have a feeling that my next machine is going to be AMD.



          • #15
            Originally posted by remm View Post
            That seems to demonstrate the value Gallium has for putting certain APIs on top of the GPU. Nice.
            Yes it does.

            If Gallium really is able to isolate the hardware acceleration from the API stacks it supports, then this should be a tremendous benefit to the Free software drivers.

            Previously, each video card type essentially ended up with its own specific OpenGL stack. Sure, it was still using Mesa, but the amount of video-card-specific code was quite large.

            So this leads to a lot of spottiness when it comes to API support for applications. The Intel drivers may be buggy with application X, while the radeon drivers may be fast with X but buggy with Y. Hopefully now we can have much more unified and highly optimized API stacks that are much more consistent across different video cards. That sort of thing would go a long, long way toward making application developers' and users' lives easier on Linux.



            • #16
              Originally posted by mendieta View Post
              Bridgman: long answer? Bring it on!
              /me is not bridgman

              There are two planned drivers for Radeons in Gallium, r300 and r600, with one winsys, DRM-based radeon. r300 covers r300-r500 hardware, and r600 covers r600-r700+ hardware.

              r300 is started, and kind of works for some very trivial cases. It needs a lot more work, but most of the code is there, just broken.

              r600 hasn't been started. There are all kinds of problems in the winsys and kernel code that are preventing it from being started.
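
              To make the split concrete, here is a rough sketch of the division of labor. The names below are hypothetical stand-ins, not the real Gallium interfaces (those live under src/gallium/ in Mesa): the winsys owns the OS/DRM-facing work, and each pipe driver only translates state for its hardware family.

              Code:
              /* Hypothetical sketch of Gallium's pipe-driver/winsys split;
               * simplified stand-ins, NOT the actual Mesa headers. */
              #include <stdint.h>

              struct winsys_buffer; /* opaque GPU buffer handle */
              struct pipe_context;  /* generic interface the state trackers talk to */

              /* The winsys: kernel-facing services shared by r300 and r600. */
              struct radeon_winsys {
                  /* Allocate a GPU buffer through the DRM. */
                  struct winsys_buffer *(*buffer_create)(struct radeon_winsys *ws,
                                                         unsigned size,
                                                         unsigned alignment);
                  /* Submit a finished command stream to the kernel. */
                  void (*cs_flush)(struct radeon_winsys *ws,
                                   const uint32_t *cmds, unsigned ndwords);
              };

              /* A pipe driver is created on top of a winsys, so the same
               * DRM code can serve both hardware generations. */
              struct pipe_context *r300_create_context(struct radeon_winsys *ws);
              struct pipe_context *r600_create_context(struct radeon_winsys *ws);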



              • #17
                Originally posted by Pfanne View Post
                well that's nice
                having all that accelerated would be useful ^^
                I was under the impression we already had a certain level of 2D acceleration (XAA, EXA, UXA?)

                Originally posted by mendieta View Post
                Guys, this is all cool and it looks like the next generation of open source Linux graphics will probably beat the crap out of the proprietary world.
                Well... yes and no. (What follows is a summary of what has already been said.)

                From my understanding (I am not a driver developer), Gallium3d will allow us to be more competitive with good proprietary drivers and make development and maintenance more efficient.

                Gallium3D (as already described) is a driver architecture modeled on a generic representation of a modern 3D graphics card. The driver is split into distinct layers that separate the hardware, the OS, and the graphics API from each other. This means a developer primarily has to focus on writing the hardware-specific layer, a relatively small portion of a modern graphics driver.

                This means a new graphics card can get a complete driver with relatively little effort, especially for a vendor who doesn't have an existing Linux driver. It also means that adding a new API (say, OpenGL 3) to all existing drivers only has to be done once.
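
                As a rough illustration of why a new API only has to be written once (hypothetical names again, not Mesa's actual code): a state tracker talks only to the generic pipe interface, so every pipe driver underneath it picks the API up for free.

                Code:
                /* Hypothetical state-tracker sketch: no GPU names anywhere. */
                struct pipe_context; /* implemented by r300, r600, nv50, softpipe, ... */

                void pipe_set_fill_color(struct pipe_context *pipe,
                                         const float rgba[4]);
                void pipe_draw_triangles(struct pipe_context *pipe,
                                         const float *verts, unsigned nverts);

                /* One OpenVG-ish entry point, written once, running on
                 * whichever driver sits below. */
                void vg_fill_rect(struct pipe_context *pipe,
                                  float x, float y, float w, float h,
                                  const float rgba[4])
                {
                    const float verts[] = {
                        x,     y,
                        x + w, y,
                        x + w, y + h,
                        x,     y,
                        x + w, y + h,
                        x,     y + h,
                    };
                    pipe_set_fill_color(pipe, rgba);
                    pipe_draw_triangles(pipe, verts, 6);
                }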

                Making something more generic and possibly adding more layers does slow things down a bit and/or removes options for optimization. That said, the effort saved by sharing and generalizing all this code should mean stable, feature-complete drivers get written faster for open systems (Linux/BSD/etc.). And using LLVM to optimize the instructions the driver emits should help accelerate rendering considerably, more than making up for the extra layers.

                Now does this blow the pants off of proprietary solutions?

                Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the number exactly) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

                I'm not a driver developer, so I can't comment on the limits of the design choices... but it seems relatively obvious that trading away that last 20% is more than worth it for better Linux/BSD drivers overall, greater development efficiency, and more developer time to tweak drivers or improve other areas of the graphics stack.

                ... and all the excitement around GPU processing will mean lots of focus on performance from a broader set of developers, so things like LLVM superscalar optimizations shouldn't be that far behind.

                It remains to be seen whether open source or proprietary will be king of the FPS scores in the end... but it will be exciting to watch.

                Exciting times...



                • #18
                  Originally posted by Craig73 View Post
                  Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the number exactly) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

                  Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.

                  That is, people buying video cards are generally looking for good performance in specific areas, like some people who want very good performance in Maya for doing 3D editing. Or, for marketing reasons, the ATI or Nvidia folks want to have the best Quake4 performance possible.

                  Stuff like that.

                  Of course hardware tweaks are very important... But even if OSS drivers reach the same level of hardware-support sophistication as proprietary drivers (which isn't going to happen very soon), they still won't look good in benchmarks.

                  Linux developers tend to shy away from application-specific stuff. It's a layering violation to make low-level behavior specific to certain high-level applications, which means you end up with multiple code paths and are thus much more likely to run into bugs and have big maintenance issues. Plus, OSS folks just don't have the resources to go through applications one by one and hack support for specific apps into the drivers.

                  Maybe the OSS folks are hoping that a long-term solution would be to take advantage of JIT engines (like what LLVM can support) to make the drivers self-optimizing. That is, if you run benchmarks or something like that, the second or third pass will be faster than the first. But that is some serious, serious computer-science voodoo, so I wouldn't expect it to be very effective any time in the next few years. If Linux gains acceptance as an OpenCL computing platform, that would probably help out a _lot_, since you'd have lots of commercial interest in GPU optimizations. Of course, stability and bug fixing will come first, and that is going to take a while in itself.

                  Of course, if you're using your own 3D stuff or playing more indie or Open Source/Free software games, there isn't much commercial interest from folks like Nvidia or ATI for their proprietary drivers. So for that sort of stuff the OSS drivers may actually end up being competitive.
                  Last edited by drag; 01 May 2009, 12:03 PM.



                  • #19
                    Originally posted by drag View Post
                    Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.
                    Close. The upper tiers of performance require a lot of expensive generalized optimizations. If an application runs slowly, we profile it, look at what parts are slow, and optimize those parts in the driver. As a bonus, other applications get faster too. Sometimes this means adding support for new OpenGL extensions; sometimes it means redoing math routines in assembly. Whatever gets us more speed by eliminating bottlenecks, really.



                    • #20
                      Originally posted by Craig73 View Post
                      I was under the impression we already had a certain level of 2D acceleration (XAA, EXA, UXA?)
                      Well, having it be hardware-independent is pretty useful.

