Thread: Broadcom VC4 Work Well Underway On DRM, Gallium3D Support Planned

  1. #1
    Join Date
    Jan 2007
    Posts
    15,122

    Default Broadcom VC4 Work Well Underway On DRM, Gallium3D Support Planned

    Phoronix: Broadcom VC4 Work Well Underway On DRM, Gallium3D Support Planned

    Beginning this week, Eric Anholt is now working for Broadcom after working for Intel's Open-Source Technology Center the past several years on the Intel Linux graphics driver stack. While Eric just started there, he's already made some headway on a Broadcom DRM driver and expects to begin developing a Gallium3D driver soon...

    http://www.phoronix.com/vr.php?view=MTcyMzE

  2. #2
    Join Date
    Dec 2012
    Posts
    459

    Default

    Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning of performance?

  3. #3
    Join Date
    Jun 2009
    Posts
    1,184

    Default

    Quote Originally Posted by Rexilion View Post
    Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning of performance?
    Well, in theory Gallium could have a small hit on raw performance, but I think that's more the classic Intel point of view: that Gallium is unusable and LLVM is impossible to use for shaders. Seeing how well radeonsi and nouveau perform these days (clock-for-clock comparisons, of course), I believe the overhead is quite minimal, and in exchange you gain great things like state trackers, which let you specialize acceleration for many kinds of infrastructure (OpenVG, VDPAU, etc.) without reinventing the wheel over and over again.

    Of course LLVM was not designed to work with GPUs, so you get many show-stoppers at first, but I believe Tom Stellard, Jan Vesely, and Matt Arsenault have done the heavy lifting already, and it's actually getting into quite good shape. If we got more people working with them instead of trying to resuscitate the horrible Mesa IR, we could get a great common compiler for all drivers.

    Of course, at some point I believe it's true that a classic DRI driver could allow more hand-tuned micro-optimizations, but I'm not entirely sure they're impossible in Gallium, as Eric makes it sound. radeonsi has to handle hardware orders of magnitude more powerful than anything Intel offers, and in some scenarios it already beats fglrx (sure, some rough edges persist here and there, I'm not saying it's perfect, and of course results vary with your hardware; it's still in heavy development). So Gallium/LLVM has proven that it can handle very powerful hardware quite efficiently, and Rob Clark has done quite a good job with Freedreno already, which uses Gallium too.

  4. #4
    Join Date
    Jun 2010
    Location
    Brno, Czech Republic
    Posts
    25

    Default software fallbacks

    Quote Originally Posted by Rexilion View Post
    Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning of performance?
    Not sure if I recall correctly, but I think with a classic driver you can have a software fallback for fragment shaders.

    BTW, this is one of the reasons why the i915 Gallium driver was never made the default (even though it's faster and has more features). With the classic i915 driver, when you hit a shader hardware limit it falls back to swrast; with Gallium this is not possible (probably by design), so you just get a dummy shader and corrupted rendering. This is not a big deal for new, powerful hardware, where the limits are high enough, but it can be a huge deal for old or mobile stuff.

  5. #5
    Join Date
    May 2012
    Posts
    29

    Default

    Quote Originally Posted by Paulie889 View Post
    Not sure if I recall correctly, but I think with classic driver you can have a software fallback for fragment shaders.
    swrast fallbacks are highly overrated. They are OK for running conformance tests, but not really anything else.
    1) The huge performance hit (easily ~100x) is usually simply unacceptable, making whatever triggered it unusable (which is the same end result as if you had just drawn garbage). You might think you'd get lucky by, say, drawing just one little tri that requires a fallback, but even in that case some drivers are required to transfer the whole framebuffer, guaranteeing that performance tanks completely.
    2) Even in a case where performance wouldn't be that bad (or didn't matter), it usually doesn't actually work in a useful way. The reason is that rasterization precision is going to differ between software and hardware: if you render a tri with the hardware, then render the same tri with a different fragment shader that requires a fallback, it will not produce the same fragments, and the depth values will also differ. This might actually even be OK according to the GL standard (there are sections about invariance requirements that this might violate, though there are exceptions), but the fact is apps can't deal with it, which often results in visual artifacts (z-fighting, for one).

  6. #6
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,601

    Default

    Hmm, so this is only for that specific VideoCore chip in RPi, right?

    Would be nice if Broadcom cared about the other ones, too. Or about the future ones. For instance, the newly-announced Jolla phone (#2) could stand to have more open hardware like that.
