Gallium3D Now In Mainline Mesa Code-Base!

  • RealNC
    replied
    People wanting performance are more interested in an fglrx with fewer bugs. If I say "fglrx doesn't support this and that" they say "use the open source drivers". But then I say "they suck ass; they're slow" and we're back to square one. If all they can do is use 70% of my $300 card, then no thanks.

  • Regenwald
    replied
    Hm, and fglrx's move to Gallium3D? Are there already plans? Do you already have a strategy for how/when to do this? It's going to be difficult to share code, isn't it? Perhaps then the time for a full open source driver will come...

    However, some day we will have to thank Intel that the via/nouveau/ati drivers are that fast...

  • bridgman
    replied
    Yes and no. That would work if we had a large Linux-specific performance team, or if we used Gallium3D and Mesa for our Windows and MacOS drivers, but we don't do either of those today.

    What we do instead is share big chunks of code across multiple OSes, so that tuning work done for one OS benefits users of the other OSes as well. If we had a completely separate code base for Linux drivers then the arguments about "dumping fglrx and concentrating on open source drivers" would make a lot more sense.
    Last edited by bridgman; 11 February 2009, 03:46 PM.

  • Regenwald
    replied
    Originally posted by bridgman View Post
    The "pipe" driver is the primary hardware-dependent code, so tuning a cell or intel pipe driver wouldn't affect radeon. On the other hand, under Gallium3D the amount of hardware-dependent code is smaller, and really is a good match with the stuff that is different from one GPU to the next.

    The "winsys" driver used to be a mix of hardware-dependent (command submission) and hardware independent (window interface) functions, but AFAIK that has now been broken up to isolate the hw-dependent code in a separate module from the rest.
    Well, if someone tunes, let's say, the OpenGL implementation in Mesa or the state trackers (not sure whether I'm right; I mean the OpenCL, OpenVG, and 2D modules in Gallium3D on top of the pipe driver), then you'll gain too. So you could "move" all your tuning effort from the fglrx codebase to Gallium and the state trackers.

  • bridgman
    replied
    The "pipe" driver is the primary hardware-dependent code, so tuning a cell or intel pipe driver wouldn't affect radeon. On the other hand, under Gallium3D the amount of hardware-dependent code is smaller, and really is a good match with the stuff that is different from one GPU to the next.

    The "winsys" driver used to be a mix of hardware-dependent (command submission) and hardware independent (window interface) functions, but AFAIK that has now been broken up to isolate the hw-dependent code in a separate module from the rest.
    Last edited by bridgman; 11 February 2009, 03:32 PM.
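
    As a rough illustration of the split described above, here is a minimal, hypothetical C sketch; the struct and function names are illustrative placeholders, not the actual Gallium3D headers:

        #include <stddef.h>

        /* Hardware-dependent "pipe" driver: one per GPU family (radeon,
         * nouveau, ...).  This is where the per-GPU code and tuning live. */
        struct example_pipe_driver {
            void (*emit_state)(void *hw_ctx);            /* translate state into GPU commands */
            void (*draw)(void *hw_ctx, unsigned prims);  /* build hardware draw packets */
        };

        /* Winsys glue: command submission and window-system interface.
         * Per the post above, the hardware-dependent command-submission
         * part has since been isolated into its own module. */
        struct example_winsys {
            void *(*buffer_create)(size_t size);         /* allocate a command buffer */
            void  (*flush)(void *cmd_buf);               /* hand the buffer to the kernel */
        };

        /* A state tracker (GL, OpenVG, ...) only talks to the pipe interface,
         * so tuning the shared layers benefits every pipe driver, while
         * per-GPU work stays confined to the pipe driver itself. */
        static void state_tracker_draw(struct example_pipe_driver *pipe, void *hw_ctx)
        {
            pipe->emit_state(hw_ctx);
            pipe->draw(hw_ctx, 1 /* prims */);
        }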

  • Regenwald
    replied
    benches!!111!!1!
    However, I wonder why tuning the different pipe drivers and Mesa (Intel is going to do this) shouldn't also help improve Radeon's Gallium3D performance.
    Last edited by Regenwald; 11 February 2009, 02:54 PM.

  • ethana2
    replied
    How mature is the Cell driver? Are we talking Compiz on the PS3 by Klutzy?

  • aaaantoine
    replied
    Originally posted by bridgman View Post
    Yep; without Gallium3D our comment that we thought open source 3D could fairly easily get to 60-70% of fglrx performance would be pretty lame. The last 30% will be hard though and I don't think anyone will bother; about half of the difference comes from a very sophisticated shader compiler, and the other half comes from constant bottom-to-top tuning and optimizing of the stack, from the GL API down to the bottom of the memory manager and command submission code.

    That said, if you can run a modern GPU at 60-70% of fglrx performance you're probably gonna be CPU-limited on anything but a CAD workstation app anyways
    That last 30% makes a difference when you're using an integrated card that barely performs (specifically, my Xpress 1100).

  • [Knuckles]
    replied
    Congrats to all involved!

  • bridgman
    replied
    Originally posted by Svartalf View Post
    Indeed. It translates into something that could get within spitting distance of the peak performance of the proprietary drivers. It makes the story your employer's got more compelling, don't you think?
    Yep; without Gallium3D our comment that we thought open source 3D could fairly easily get to 60-70% of fglrx performance would be pretty lame. The last 30% will be hard though and I don't think anyone will bother; about half of the difference comes from a very sophisticated shader compiler, and the other half comes from constant bottom-to-top tuning and optimizing of the stack, from the GL API down to the bottom of the memory manager and command submission code.

    That said, if you can run a modern GPU at 60-70% of fglrx performance you're probably gonna be CPU-limited on anything but a CAD workstation app anyways
    Last edited by bridgman; 11 February 2009, 12:39 PM.
