Freedreno Graphics Driver Reaches Version 1.0


  • #11
    Originally posted by Awesomeness View Post
    A) Android is Linux and B) Mesa in general works with Android, although the port is AFAIK mainly maintained by Intel for Android on Intel's own hardware.
    No Android modding devs on Phoronix?

    Michael, more Android tests!!!

    Anyway, has anyone announced the 1.0 release on, let's say, the CyanogenMod forums/mailing lists? There may be someone willing enough to make the necessary code changes (especially since it will reuse some of Intel's work!)



    • #12
      Jolla's device should benefit from this (they are already using Wayland). They are supposedly using the Android driver with libhybris now, but the plan was to replace that with native drivers when possible.



      • #13
        Originally posted by robclark View Post
        Small note... the XA support isn't merged yet. Early on I had some slight corruption/mis-rendering issues with it, so it stayed on a branch. But I think I've figured out the gallium issue (it seems a workaround is needed for a hw bug in the a320), so I should revisit XA when I have time.

        As far as performance, for desktop stuff it should be as good as or better than the blob driver. (Well, that is a slightly artificial statement, because the blob driver doesn't support GL and doesn't support Linux, so essentially freedreno is infinitely faster.) I implemented some scissor-related optimizations that the blob driver does not have, which really help for gnome-shell/compiz type workloads. For things like xbmc, which are not vertex heavy and use simple shaders, it should be comparable (although, without non-Android drivers, there is really no way to compare).

        For stuff with high vertex loads and complex shaders, the blob driver is almost certainly better at this stage (if it existed for Linux)... for this I need two things: (a) compiler optimizing pass(es) and (b) hw binning support. I have an experimental branch with the former, which gives a >10% boost in fps in xonotic, and gives me what I need to generate vertex shaders for the hw binning pass (which is the next step). I can't quite say what the fps boost from hw binning support will be, but from some rough experiments I'm guesstimating >20% at 1024x768, and more at higher resolutions.

        That all said, the focus so far has not been performance. It has been getting something that works at all. (See again the comment about there being no Linux blob driver.) Certainly there are known optimizations that will come with time, but you have to walk before you can run. The nice thing is we don't have those pesky reclocking issues, so there shouldn't be >10x differences from the blob driver.
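
        (To illustrate the scissor-related optimization mentioned in the quote above: in a tiled renderer, only the tiles that the scissor rectangle touches need to be restored, drawn, and resolved, which is why damage-limited workloads like compositors benefit. A rough, hypothetical sketch in C; this is not actual freedreno code, and TILE_W, TILE_H and render_frame() are made up for illustration.)

        /* Hypothetical sketch of scissor-based tile skipping; not freedreno code. */
        #include <stdbool.h>

        struct rect { int x0, y0, x1, y1; };   /* half-open: [x0,x1) x [y0,y1) */

        #define TILE_W 32
        #define TILE_H 32

        static bool tile_hits_scissor(int tx, int ty, const struct rect *sc)
        {
            int x0 = tx * TILE_W, y0 = ty * TILE_H;
            return x0 < sc->x1 && x0 + TILE_W > sc->x0 &&
                   y0 < sc->y1 && y0 + TILE_H > sc->y0;
        }

        /* Walk the tile grid, doing per-tile work only where the scissor overlaps.
         * For a compositor redrawing a small damaged region, most tiles are skipped. */
        static void render_frame(int width, int height, const struct rect *scissor)
        {
            for (int ty = 0; ty * TILE_H < height; ty++)
                for (int tx = 0; tx * TILE_W < width; tx++)
                    if (tile_hits_scissor(tx, ty, scissor))
                        ; /* restore + draw + resolve this tile */
        }
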
        Thanks for the detailed explanation. I looked up the definition of "binning" as "reducing the effects of minor observation errors", but I'm not quite sure how this relates to graphics drivers in detail. Can you explain this to me?



        • #14
          Originally posted by shmerl View Post
          Jolla's device should benefit from this (they are already using Wayland). They are supposedly using the Android driver with libhybris now, but the plan was to replace that with native drivers when possible.
          On a mobile device you reaaaaaaaallly need an optimized driver. Battery life is important.



          • #15
            So they'll wait until Freedreno gets enough optimizations. They surely aren't using it right now, in their release.



            • #16
              Originally posted by 89c51 View Post
              On a mobile device you reaaaaaaaallly need an optimized driver. Battery life is important.
              The state of their 3D optimizations (and especially the kernel part) does not bode well for the blob driver.

              So give the FLOSS driver a bit of time for such optimizations to be made.



              • #17
                Originally posted by Ancurio View Post
                Thanks for the detailed explanation. I looked up the definition of "binning" as "reducing the effects of minor observation errors", but I'm not quite sure how this relates to graphics drivers in detail. Can you explain this to me?
                I guess "binning" can mean a few different things in different contexts, such as separating out better batches of chips in the manufacture process and selling them with higher clock speed rating ("speed bin"), etc..

                In the context of adreno, hw binning pass is a pass that separates vertices into "bins" (or tiles) so that you don't have to re-process each vertex for each tile. For example, on a320, at 1024x768, the scene is split up and rendered as (of the top of my head) 12 tiles. Meaning that if you have 32k vertices, you are running the vertex shader and the hw is processing the resulting gl_Position 12 times 32k. At lower resolutions or apps with low vertex count (like say a window manager) this is pretty insignificant. But at higher resolutions, and more complex vertex shaders.. well, let's say that with xonotic you can definitely notice that the frame rate drops as the vertex count goes up.

                With hw binning, the driver generates a simplified vertex shader which only generates gl_Position (which can be done with the expiremental compiler optimizer branch, as it can do dead code elimination). And then the driver does a special binning pass with color pipe disabled to generate the visibility information used in the following rendering pass.

                fyi, https://github.com/freedreno/freedre.../Adreno-tiling
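
                (To make the two-pass idea concrete, here is a very simplified, hypothetical sketch in C. It is not actual freedreno/adreno code; binning_pass(), render_pass() and the bin bookkeeping are made up for illustration. Pass 1 uses only the screen-space positions produced by the stripped-down vertex shader to record which tiles each triangle touches; pass 2 then renders each tile from its own list, instead of re-processing every vertex for every tile.)

                /* Hypothetical illustration of a hw binning (visibility) pass;
                 * not freedreno code. */
                #include <stdlib.h>

                #define TILE_W 256
                #define TILE_H 256

                struct tri { float x[3], y[3]; };   /* screen-space positions from the
                                                       position-only vertex shader */
                struct bin { int *ids; int count, cap; };

                static void bin_add(struct bin *b, int id)
                {
                    if (b->count == b->cap) {
                        b->cap = b->cap ? b->cap * 2 : 64;
                        b->ids = realloc(b->ids, b->cap * sizeof(*b->ids));
                    }
                    b->ids[b->count++] = id;
                }

                /* Pass 1: conservatively bin each triangle by its bounding box. */
                static void binning_pass(const struct tri *tris, int ntris,
                                         struct bin *bins, int tiles_x, int tiles_y)
                {
                    for (int i = 0; i < ntris; i++) {
                        float minx = tris[i].x[0], maxx = minx;
                        float miny = tris[i].y[0], maxy = miny;
                        for (int v = 1; v < 3; v++) {
                            if (tris[i].x[v] < minx) minx = tris[i].x[v];
                            if (tris[i].x[v] > maxx) maxx = tris[i].x[v];
                            if (tris[i].y[v] < miny) miny = tris[i].y[v];
                            if (tris[i].y[v] > maxy) maxy = tris[i].y[v];
                        }
                        for (int ty = (int)miny / TILE_H; ty <= (int)maxy / TILE_H; ty++)
                            for (int tx = (int)minx / TILE_W; tx <= (int)maxx / TILE_W; tx++)
                                if (tx >= 0 && tx < tiles_x && ty >= 0 && ty < tiles_y)
                                    bin_add(&bins[ty * tiles_x + tx], i);
                    }
                }

                /* Pass 2: per-tile rendering only touches the triangles binned to that
                 * tile, so the full vertex shader no longer runs tiles_x*tiles_y times
                 * per vertex. */
                static void render_pass(const struct bin *bins, int tiles_x, int tiles_y)
                {
                    for (int t = 0; t < tiles_x * tiles_y; t++)
                        for (int i = 0; i < bins[t].count; i++)
                            ; /* run full VS+FS for triangle bins[t].ids[i] */
                }
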



                • #18
                  Originally posted by shmerl View Post
                  Jolla's device should benefit from this (they are already using Wayland). They are supposedly using the Android driver with libhybris now, but the plan was to replace that with native drivers when possible.
                  I really rather doubt that the Jolla guys are interested in free drivers at all. I do not think any of the free driver guys have been contacted by anyone from Jolla, but I would really like to be proven wrong. Jolla seem to be very happy using libhybris, and I doubt they have plans beyond keeping their binary driver compatibility mess working, somehow.



                  • #19
                    Originally posted by 89c51 View Post
                    On a mobile device you reaaaaaaaallly need an optimized driver. Battery life is important.
                    For mobile GPUs, the kernel part is really the most important part for power management (i.e. turn off the GPU when it isn't doing anything... and maybe get fancy and do frequency scaling). Currently, for the msm drm/kms driver, there really is no power management: we just turn on the GPU and leave it on. Implementing a "hurry up and wait" (aka "race to idle") power management policy would not be too hard, but it has not been a priority yet, as so far the drm/kms driver is mainly targeting Snapdragon ARM boards (like the ifc6410/bStem/dragonboard) rather than battery-powered devices (because only HDMI is supported at the moment). Once DSI/LCD panel support is working, PM probably gets more interesting; until then, freedreno on a phone/tablet is going to want to use qcom's downstream fbdev/kgsl drivers. (The freedreno userspace parts support either the fbdev/kgsl combo or msm drm/kms.)

                    Doing something more elaborate, such as trying to estimate how fast you need to run the GPU to make the next vsync deadline, is a pretty hard problem and would require some input from userspace. I'm not sure if anyone does this in a production device, although I know people have experimented with the idea. Without some sort of prediction, you are just scaling the GPU frequency reactively, so you probably end up getting a frame or two of stutter from missing the vsync deadline when the scene complexity suddenly shoots up. That's probably why it isn't uncommon to see phones "cheat" when they recognize certain games/benchmarks and just go to max speed immediately. (Anyway, as we know, there are "lies, damn lies, and benchmarks" :-P)
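
                    (A minimal sketch of the "hurry up and wait" idea in C, assuming a hypothetical driver. It is not the actual msm drm/kms or kgsl code; gpu_power_up(), gpu_set_max_freq(), gpu_power_down() and the idle-timer helpers are made-up stand-ins. The policy: clock up as soon as work arrives, then cut power once the GPU has been idle for a short grace period, rather than trying to predict the "right" clock.)

                    /* Hypothetical "race to idle" policy; not actual msm/kgsl code.
                     * The platform hooks below are made-up stand-ins. */
                    #include <stdbool.h>

                    void gpu_power_up(void);
                    void gpu_power_down(void);
                    void gpu_set_max_freq(void);
                    void start_idle_timer(int ms);     /* fires gpu_pm_idle_timeout() */
                    void cancel_idle_timer(void);

                    #define IDLE_TIMEOUT_MS 50

                    struct gpu_pm {
                        bool powered;
                        int  inflight;                 /* submitted, not yet retired jobs */
                    };

                    /* Called when userspace submits a rendering job. */
                    void gpu_pm_job_submitted(struct gpu_pm *pm)
                    {
                        if (!pm->powered) {
                            gpu_power_up();
                            gpu_set_max_freq();        /* "hurry up" */
                            pm->powered = true;
                        }
                        cancel_idle_timer();
                        pm->inflight++;
                    }

                    /* Called from the IRQ handler when a job retires. */
                    void gpu_pm_job_retired(struct gpu_pm *pm)
                    {
                        if (--pm->inflight == 0)
                            start_idle_timer(IDLE_TIMEOUT_MS);   /* "... and wait" */
                    }

                    /* Idle timer callback: nothing else came in, so power off. */
                    void gpu_pm_idle_timeout(struct gpu_pm *pm)
                    {
                        if (pm->inflight == 0) {
                            gpu_power_down();
                            pm->powered = false;
                        }
                    }
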



                    • #20
                      robclark: Question, if you have time.

                      I see that you've been working with the Adreno 320 quite a bit. Have you looked into the changes/differences for the 330? I'm hoping they're minimal, but knowing how some mobile stuff works, it could be an entirely different chip to program for.

