Broadcom VC4 Work Well Underway On DRM, Gallium3D Support Planned


    Phoronix: Broadcom VC4 Work Well Underway On DRM, Gallium3D Support Planned

    Beginning this week, Eric Anholt is now working for Broadcom after working for Intel's Open-Source Technology Center the past several years on the Intel Linux graphics driver stack. While Eric just started there, he's already made some headway on a Broadcom DRM driver and expects to begin developing a Gallium3D driver soon...

    http://www.phoronix.com/vr.php?view=MTcyMzE

  • #2
    Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning performance?



    • #3
      Originally posted by Rexilion View Post
      Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning performance?
      Well, in theory Gallium could have a small hit on raw performance, but I think that's more the classic Intel point of view: that Gallium is unusable and LLVM is impossible to use for shaders. Seeing how well radeonsi/nouveau perform these days (clock-for-clock comparison, of course), I believe the overhead is quite minimal, and in exchange you gain great things like state trackers, which let you add specialized acceleration for many kinds of infrastructure (OpenVG, VDPAU, etc.) without reinventing the wheel over and over again.

      Of course LLVM was not designed to work with GPUs, so you will hit many show-stoppers at first, but I believe Tom Stellard, Jan Vesely and Matt Arsenault have done the heavy lifting already and it's actually getting into quite good shape. If more people worked with them instead of trying to resuscitate the horrible Mesa IR, we could get an excellent common compiler for all drivers.

      Of course, at some point it's probably true that classic DRI drivers allow more hand-tuned micro-optimizations, but I'm not entirely sure they aren't possible in Gallium the way Eric makes it sound. radeonsi has to handle hardware orders of magnitude more powerful than anything Intel offers, and in some scenarios it already beats fglrx (sure, some rough edges persist here and there, I'm not saying it's perfect, and results vary with your hardware since it's under heavy development). So Gallium/LLVM has proven it can drive very powerful hardware quite efficiently, and Rob Clark has done a good job with freedreno, which uses Gallium too.



      • #4
        software fallbacks

        Originally posted by Rexilion View Post
        Aside from the flying start, what's the advantage of using a classic Mesa driver? Fine-tuning performance?
        Not sure if I recall correctly, but I think with a classic driver you can have a software fallback for fragment shaders.

        BTW, this is one of the reasons why the i915 Gallium driver was never made the default (even though it's faster and has more features). With the classic i915 driver, when you hit a shader hardware limit it falls back to swrast; with Gallium this is not possible (probably by design), so you just get a dummy shader and corrupted rendering. That's not a big deal for new, powerful hardware, where the limits are high enough, but it can be a huge deal for old or mobile stuff.



        • #5
          Originally posted by Paulie889 View Post
          Not sure if I recall correctly, but I think with a classic driver you can have a software fallback for fragment shaders.
          swrast fallbacks are highly overrated. They are OK for running conformance tests but not really anything else.
          1) The huge performance hit (easily ~100x) is usually simply unacceptable, making whatever triggered the fallback unusable, which is the same end result as just drawing garbage. Maybe you think you'll get lucky with, say, a single small triangle requiring a fallback, but even in that case some drivers have to transfer the whole framebuffer, guaranteeing that performance tanks completely.
          2) Even in cases where performance wouldn't be that bad (or didn't matter), it usually doesn't actually work in a useful way. The reason is that rasterization precision differs between software and hardware: if you render a triangle with hardware, then render the same triangle with a different fragment shader that requires a fallback, it will not produce the same fragments, and the depth values will also differ. This might even be OK according to the GL standard (there are sections on invariance requirements that this could violate, though with exceptions), but the fact is apps can't deal with it, which often results in visual artifacts (z-fighting, for one).



          • #6
            Hmm, so this is only for that specific VideoCore chip in the RPi, right?

            It would be nice if Broadcom cared about the other ones too, or about the future ones. For instance, the newly announced Jolla phone (#2) could stand to have more open hardware like that.
