
Thread: OpenVG State Tracker For Gallium3D Tomorrow

  1. #11
    Join Date
    Apr 2009
    Posts
    519

    Default

    Thanks a lot for the answers, Pfanne and Bridgman

    Bridgman: long answer? Bring it on!

    I just built a Phenom X3 box on a board with a Radeon HD 3200 GPU, and I am hoping I can stay with ATI (voting with my wallet, you know; I do appreciate their support for open drivers). On the other hand, fglrx is core-dumping on my hardware in Ubuntu 9.04.

    Ah, here is a nice summary of where we are, for anyone interested:
    http://www.x.org/wiki/RadeonFeature

  2. #12
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,577

    Default

    Quote Originally Posted by 89c51 View Post
    which applications/libraries have backends for OpenVG and will benefit from the implementation ??
    "OpenVG™ is a royalty-free, cross-platform API that provides a low-level hardware acceleration interface for vector graphics libraries such as Flash and SVG" from the OpenVG home page.

  3. #13
    Join Date
    Sep 2007
    Posts
    158

    Default

    That seems to demonstrate the value Gallium has for putting certain APIs on top of the GPU. Nice.

  4. #14
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by mendieta View Post
    Thanks a lot for the answers, Pfanne and Bridgman

    Bridgman: long answer? Bring it on!

    I just built a Phenom X3 box on a board with a Radeon HD 3200 GPU, and I am hoping I can stay with ATI (voting with my wallet, you know; I do appreciate their support for open drivers). On the other hand, fglrx is core-dumping on my hardware in Ubuntu 9.04.

    Ah, here is a nice summary of where we are, for anyone interested:
    http://www.x.org/wiki/RadeonFeature


    Yes, this is very nice. I hope it is kept up to date, as it should help a lot with my future hardware purchases. I have a feeling my next machine is going to be AMD.

  5. #15
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by remm View Post
    That seems to demonstrate the value Gallium has for putting certain APIs on top of the GPU. Nice.
    Yes it does.

    If Gallium really is able to isolate the hardware acceleration from the API stacks it supports then this should be a tremendous benefit to the Free software drivers.

    Previously, each video card type essentially ended up with its own specific OpenGL stack. Sure, it was still using Mesa, but the amount of video-card-specific code was considerable.

    So this leads to a lot of spottiness in API support for applications. The Intel drivers may be buggy with application X, while the radeon drivers may be fast with X but buggy with Y. Hopefully now we can have unified, highly optimized API stacks that are much more consistent across different video cards. That sort of thing would go a long, long way toward making application developers' and users' lives easier on Linux.

  6. #16
    Join Date
    Aug 2007
    Posts
    153

    Default

    Quote Originally Posted by mendieta View Post
    Bridgman: long answer? Bring it on!
    /me is not bridgman

    There are two planned drivers for Radeons in Gallium, r300 and r600, with one winsys, DRM-based radeon. r300 covers r300-r500 hardware, and r600 covers r600-r700+ hardware.

    r300 is started, and kind of works for some very trivial cases. It needs a lot more work, but most of the code is there, just broken.

    r600 hasn't been started. There's all kinds of problems in the winsys and kernel code that are preventing it from being started.
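    To make the split concrete, here is a rough sketch of the idea (all names here are made up for illustration; the real Gallium winsys and pipe-driver interfaces are different and much larger): one DRM-based winsys handles the OS/kernel side, and each pipe driver only builds its own hardware's command stream on top of it.

    ```c
    #include <stdio.h>

    /* Hypothetical sketch, NOT the real Gallium interfaces: the winsys is
     * the small OS/kernel-facing layer (buffer memory, command submission)
     * that both pipe drivers share. */
    struct winsys {
        void   (*buffer_create)(size_t size);
        size_t (*submit)(const void *cmds, size_t len);
    };

    /* The single DRM-based winsys implementation. */
    static void drm_buffer_create(size_t size) {
        printf("winsys: allocating %zu-byte GPU buffer via DRM\n", size);
    }
    static size_t drm_submit(const void *cmds, size_t len) {
        (void)cmds;
        printf("winsys: submitting %zu bytes of commands to the kernel\n", len);
        return len;
    }

    /* Pipe drivers: only hardware-specific packet building lives here;
     * OS details stay behind the winsys. */
    static size_t r300_draw(const struct winsys *ws) {
        unsigned char packets[16] = {0};   /* pretend r300-r500 packets */
        ws->buffer_create(4096);
        return ws->submit(packets, sizeof packets);
    }
    static size_t r600_draw(const struct winsys *ws) {
        unsigned char packets[32] = {0};   /* pretend r600-r700 packets */
        ws->buffer_create(8192);
        return ws->submit(packets, sizeof packets);
    }

    int main(void) {
        struct winsys drm = { drm_buffer_create, drm_submit };
        r300_draw(&drm);   /* same winsys underneath... */
        r600_draw(&drm);   /* ...two hardware backends on top */
        return 0;
    }
    ```

    The point being: once the DRM winsys works, r300 and r600 only differ in the packet-building code.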

  7. #17
    Join Date
    Dec 2008
    Posts
    160

    Default

    Quote Originally Posted by Pfanne View Post
    well that's nice
    having all that accelerated would be useful ^^
    I was under the impression we already had a certain level of 2D acceleration (XAA, EXA, UXA?)

    Quote Originally Posted by mendieta View Post
    Guys, this is all cool and it looks like the next generation open source linux graphics will probably beat the crap out of the proprietary world.
    Well... yes and no. (What follows is a summary of what has been said already.)

    From my understanding (I am not a driver developer), Gallium3d will allow us to be more competitive with good proprietary drivers and make development and maintenance more efficient.

    Gallium3d (as already described) is a driver architecture modeled on a generic representation of a modern 3D graphics card. The driver is built in distinct layers that separate the hardware, the OS, and the graphics API from each other. This means a developer primarily has to focus on writing the hardware-specific layer, a relatively small portion of a modern graphics driver.

    This means a new graphics card can get a complete driver with relatively little effort, especially for a vendor who doesn't have an existing Linux driver. It also means that adding a new API (say - OpenGL3) to all existing drivers only has to be done once.

    Making something more generic and adding more layers does slow things down a little and/or removes some optimization opportunities. That said, the effort saved by sharing and generalizing all this code should mean stable, feature-complete drivers get created faster for open systems (Linux/BSD/etc.). And using LLVM to optimize the instructions the driver emits should accelerate rendering considerably, more than making up for the extra layers.
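    The layering argument above can be sketched like this (every name here is hypothetical; the real Gallium interface, struct pipe_context, is far larger): each API "state tracker" talks only to a generic device model, so one hardware backend serves every API, and one API implementation serves every backend.

    ```c
    #include <stdio.h>

    /* Hypothetical sketch of the layering, NOT the real Gallium API.
     * The generic device model that state trackers program against: */
    struct pipe_device {
        const char *name;
        void (*set_shader)(const char *generic_ir);  /* generic IR in */
        int  (*draw)(int num_vertices);              /* hw-specific emit */
    };

    /* --- hardware-specific layer: the only per-GPU code --- */
    static void r300_set_shader(const char *ir) {
        printf("r300: compiling generic IR: %s\n", ir);
    }
    static int r300_draw(int n) {
        printf("r300: drawing %d vertices\n", n);
        return n;
    }

    /* --- API layer: written once, shared by every backend --- */
    static int opengl_draw_triangle(const struct pipe_device *dev) {
        dev->set_shader("vertex shader (TGSI-like IR)");
        return dev->draw(3);
    }
    static int openvg_fill_path(const struct pipe_device *dev) {
        dev->set_shader("path-fill shader (TGSI-like IR)");
        return dev->draw(6);  /* two triangles covering the path's bounds */
    }

    int main(void) {
        struct pipe_device r300 = { "r300", r300_set_shader, r300_draw };
        opengl_draw_triangle(&r300);  /* OpenGL state tracker */
        openvg_fill_path(&r300);      /* OpenVG state tracker, same backend */
        return 0;
    }
    ```

    Adding a new API here means writing one more function against struct pipe_device; adding a new GPU means writing one more backend. That is the whole appeal.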

    Now does this blow the pants off of proprietary solutions?

    Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the exact number) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

    I'm not a driver developer, so I can't comment on the limits of the design choices... but it seems relatively obvious that trading away that last 20% is more than worth it for better Linux/BSD drivers overall, greater development efficiency, and more developer time to tweak drivers or improve other areas of the graphics stack.

    ... and all the excitement around GPU processing will mean lots of focus on performance from a broader set of developers, so things like LLVM superscalar optimizations shouldn't be that far behind.

    It remains to be seen whether open source or proprietary will be king of the FPS scores in the end... but it will be exciting to watch.

    Exciting times...

  8. #18
    Join Date
    Sep 2006
    Posts
    714

    Default

    Quote Originally Posted by Craig73 View Post
    Some comments by driver developers suggest that we should be able to create a high-performance driver for Linux/BSD/etc. hitting about 80% (can't remember the exact number) of the performance of proprietary drivers, generally much better than what is available today. Their reasoning was that the last 20% requires a LOT of hardware-specific tweaking and tuning, and that LLVM currently doesn't optimize superscalar architectures well enough.

    Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.

    That is, people buying video cards are generally looking for good performance in specific areas. Some people want very good performance in Maya for 3D editing; or, for marketing reasons, the ATI or Nvidia folks want the best Quake4 performance possible.

    Stuff like that.

    Of course hardware tweaks are very important... but even if OSS drivers reach the same level of hardware-support sophistication as proprietary drivers (which isn't going to happen very soon), they still won't look good in benchmarks.

    Linux developers tend to shy away from application-specific stuff. It's a layering violation to make low-level behavior specific to certain high-level applications, which means you end up with multiple code paths and thus are much more likely to run into bugs and big maintenance issues. Plus, OSS folks just don't have the resources to go through applications one by one and hack support for specific apps into the drivers.

    Maybe the OSS folks are hoping that a long-term solution is to take advantage of JIT engines (like what LLVM can support) to make the drivers self-optimizing. That is, if you run a benchmark or something like that, the second or third pass would be faster than the first. But that is some serious, serious computer science voodoo, so I wouldn't expect it to be very effective any time in the next few years. If Linux gains acceptance as an OpenCL computing platform, that would probably help out a _lot_, since you'd have lots of commercial interest in GPU optimizations. Of course, stability and bug fixing will come first, and that is going to take a while in itself.

    Of course, if you're using your own 3D stuff or playing more indie or Open Source/Free software games, there isn't much commercial interest from folks like Nvidia or ATI for their proprietary drivers. So for that sort of stuff, the OSS drivers may actually end up being competitive.
    Last edited by drag; 05-01-2009 at 12:03 PM.

  9. #19
    Join Date
    Aug 2007
    Posts
    153

    Default

    Quote Originally Posted by drag View Post
    Close. The last 20-30% in performance requires a lot of _application_specific_ tweaks.
    Close. The upper tiers of performance require a lot of expensive generalized optimizations. If an application runs slowly, we profile it, look at what parts are slow, and optimize those parts in the driver. As a bonus, other applications get faster too. Sometimes this is stuff like adding in support for new OGL extensions; sometimes it's things like redoing math routines in assembly. Whatever gets us more speed by eliminating bottlenecks, really.

  10. #20
    Join Date
    Sep 2008
    Posts
    332

    Default

    Quote Originally Posted by Craig73 View Post
    I was under the impression we already had a certain level of 2d acceleration already (XAA, EXA, UXA?)
    well, having this be hardware-independent is pretty useful
