
How to tell if a driver is gallium or just mesa? (Slow rendering with radeon)


  • #1

    Dear Phoronix members!

    I have changed my Linux distro from Ubuntu 16.04 to the latest Arch32 and installed the xf86-video-ati and mesa packages. Direct rendering is enabled, but 3D performance is slower than before. The window manager is the same as before, but I have no display manager now and just do a startx to start dwm, so in general the system is sometimes even faster - except for 3D.

    glxgears reports 50-60 FPS. Extreme Tux Racer lags a bit (especially in the menu), and there is an interesting thing: glxinfo no longer reports "Gallium 0.4 on ....". Earlier it said "Gallium 0.4 on llvmpipe" or "Gallium 0.4 on ATI ..." whatever, for both software and hardware rendering.

    Am I missing something on my system, or have the strings just changed to not say this? Could someone who knows please tell me whether this is only a change in the strings with everything else the same, or whether it really indicates there is no Gallium here for some reason...

    See these outputs:

    $ glxinfo | grep Open
    OpenGL vendor string: X.Org R300 Project
    OpenGL renderer string: ATI RC410
    OpenGL version string: 2.1 Mesa 19.0.3
    OpenGL shading language version string: 1.20
    OpenGL extensions:
    OpenGL ES profile version string: OpenGL ES 2.0 Mesa 19.0.3
    OpenGL ES profile shading language version string: OpenGL ES GLSL ES 1.0.16
    OpenGL ES profile extensions:

    $ export LIBGL_ALWAYS_SOFTWARE=1; glxinfo | grep Open
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: llvmpipe (LLVM 8.0, 128 bits)
    OpenGL core profile version string: 3.3 (Core Profile) Mesa 19.0.3
    OpenGL core profile shading language version string: 3.30
    OpenGL core profile context flags: (none)
    OpenGL core profile profile mask: core profile
    OpenGL core profile extensions:
    OpenGL version string: 3.1 Mesa 19.0.3
    OpenGL shading language version string: 1.40
    OpenGL context flags: (none)
    OpenGL extensions:
    OpenGL ES profile version string: OpenGL ES 3.0 Mesa 19.0.3
    OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
    OpenGL ES profile extensions:
    My GPU is this one (lshw indicates an r300 kind of card):

                    description: VGA compatible controller
                    product: RC410M [Mobility Radeon Xpress 200M]
                    vendor: Advanced Micro Devices, Inc. [AMD/ATI]
                    physical id: 5
                    bus info: pci@0000:01:05.0
                    version: 00
                    width: 32 bits
                    clock: 66MHz
                    capabilities: vga_controller bus_master cap_list rom
                    configuration: driver=radeon latency=64 mingnt=8
                    resources: irq:17 memory:c0000000-cfffffff ioport:9800(size=256) memory:fe1f0000-fe1fffff memory:c0000-dffff
    I am aware that there used to be both an r300 classic and an r300 Gallium driver, but that was long ago. Is it possible that this is the classic one? How can I know which it is?
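
    One more check that might settle it (a sketch, not authoritative): the Xorg log names the DRI driver it loaded via AIGLX, so grepping for that line shows which driver actually initialized. The sample line below is only illustrative; the real command is the commented one.

```shell
# On a live system:  grep 'AIGLX: Loaded' /var/log/Xorg.0.log
# Illustrative sample of the kind of line that log contains:
sample='(II) AIGLX: Loaded and initialized r300'
printf '%s\n' "$sample" | grep -o 'r300'   # prints r300
```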

    Maybe I am completely missing something instead, but I can see that performance was better earlier...

  • #2
    But I would prefer to know what the issue is and why there is a visible slowdown while 3D acceleration still works. I don't think it is a distro issue - or can it be?

    After all my xorg.conf hacks (same as on the earlier machine), a "high" power profile, and removing the Spectre and Meltdown mitigations via kernel parameters, I get 300-350 FPS in glxgears, but there is still a very visible performance drop. UrT is unplayable now while it ran without any issue before. Nexus: The Jupiter Incident was also playable before. I also see some minor degradation with 2D games using OpenGL, but other than that the system is actually faster in general, with less memory and CPU usage...

    When I changed the distro I also changed to newer kernel and Xorg versions. Can it be that they just grew 50% to 100% slower on an r300 GPU, single-core machine?

    Also I think the driver probably is Gallium, as I can see
    export GALLIUM_HUD="cpu,fps;primitives-generated"
    working properly.
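
    That is a reasonable check as far as I know: classic (non-Gallium) Mesa drivers simply ignore GALLIUM_HUD, so an overlay appearing is a strong hint the driver is Gallium-based. A minimal sketch:

```shell
# Classic (pre-Gallium) Mesa drivers ignore this variable entirely,
# so seeing the overlay implies a Gallium driver is in use.
export GALLIUM_HUD="cpu,fps;primitives-generated"
echo "$GALLIUM_HUD"   # prints cpu,fps;primitives-generated
# then start glxgears / the game as usual
```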

    Btw I think I have gotten used to the new distro and things are up and running. Everything is nice, except I cannot tell why 3D performance degraded so much on this card after moving to a newer driver, Xorg and kernel... I thought these parts were mostly unchanged.


    • #3
      PS: Except the "primitives-generated" part is not working.
      PS2: I mentioned UrT because it is not a Wine game but runs natively, so it is a better measure. Among Wine games, Mount & Blade: Warband ran without any stuttering on this very machine on the earlier setup, albeit on low graphics of course - and it is a fairly recent game, released after this machine. Now it is also unplayable. Generally everything 3D is performing badly, but the driver seems to be up and running... I am just interested in the cause.


      • #4
        Originally posted by debianxfce View Post
        Disable dri 3, removed unused systemd services, use a non debug 1000 Hz timer kernel.
        DRI3 was already disabled on this card, as it runs DRI2 by default, but I actually added the line to xorg.conf just to be sure. There are barely any unused systemd services: 50-70 MB memory usage and practically zero CPU when the system is idle running X + dwm. I checked the current kernel timer in:

        less /proc/config.gz
        ^^ and as far as I can tell I am running a 300 Hz kernel, see:

        # CONFIG_HZ_100 is not set
        # CONFIG_HZ_250 is not set
        # CONFIG_HZ_1000 is not set
        I see why it can be a good idea to change to a 1000 Hz one, but I have no idea how much it might help. Thanks for the suggestion though! This is something I was not aware of at least ;-)
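
        For reference, the active frequency can be filtered out in one go; a sketch using a sample that mirrors the options quoted above (on a live system the input would be /proc/config.gz via zgrep):

```shell
# Live system equivalent:  zgrep '^CONFIG_HZ' /proc/config.gz
config='# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
CONFIG_HZ_300=y
# CONFIG_HZ_1000 is not set
CONFIG_HZ=300'
printf '%s\n' "$config" | grep '^CONFIG_HZ'
# prints:
#   CONFIG_HZ_300=y
#   CONFIG_HZ=300
```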

        In the meantime I am trying to put together a secondary system with a very old 2.6 kernel, because then I might even have the fglrx driver too. That will not be a main system; it will just sit on my laptop's internal SD card reader... If this succeeds I might be able to kexec into it - but maybe I cannot even make it work with that old stuff, and I would prefer to have the open source driver's speed sorted out, as it served so well for years. ;-)


        • #5
          I have tried the zen-kernel because I could get it via pacman easily and it is supposed to be a desktop kernel, but looking at /proc/config.gz it still has 300 HZ. No noticeable difference so far with zen-kernel - maybe around 0.5 to 1 FPS more in Extreme Tux Racer, so it runs at 11-17 FPS instead of 10-15, haha. Both much worse than before, when I still had my Ubuntu 16.04...

          Now I will try to compile my own zen-kernel and somehow set the value to 1000 HZ. I have no idea though what you mean by "debug kernel" and what I need to turn off to not have those debug things.

          Btw I am actually thinking that maybe it is not even the timer HZ, but another parameter that counts:


          If I understand these correctly, the latter enables the scheduler to preempt tasks that are right in the middle of a syscall, while the former only enables it "voluntarily" at predefined points. I cannot check it myself now, as I no longer have my earlier system, but according to internet forums the former is the default in Ubuntu generic kernels, and I see the latter in my Arch kernel.
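
          If I read the mainline Kconfig right, the two models would look like this in .config (option names assumed from mainline Kconfig, since my screenshots are gone):

```
# Ubuntu generic kernels reportedly default to voluntary preemption:
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set

# Arch's kernel uses full preemption instead:
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
```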

          I will try the 1000 HZ setting alongside setting the voluntary flag if I can compile my kernel... but I am not really sure this is a kernel problem. Couldn't it be that this driver is just not the same as the older pre-Gallium radeon driver, or is that impossible? I still have not given up on the idea that there is some bigger issue here...


          • #6
            This is the last point I can track down on the internet that still names two r300 drivers:


            I really have no idea though how I could end up with r300 classic and not r300g, or if that is possible at all. Maybe in the end I will even need to compile Mesa myself just to understand it better, or whatever :-).

            Am I missing something or only these count?
            - The kernel (version) and its config
            - The radeon driver (version) and if it is r300 or r300g and if it has changed lately or not
            - The Xorg (version)
            - Things that run on my system alongside me trying to run a 3D app

            I have no display manager or any further layers; I just issue startx and enter dwm from there, so there shouldn't be much overhead. I guess display managers and desktop environments only take resources and don't optimize things...


            • #7
              oh and of course config files count too, like xorg.conf...


              • #8
                if you look at your glxinfo output you can see it's using llvmpipe for 3D. It's not using radeon.

                You probably need to add this file.... /etc/X11/xorg.conf.d/20-radeon.conf
                Section "Device"
                Identifier "Radeon"
                Driver "radeon"
                Option "AccelMethod" "glamor"
                Option "DRI" "3"
                Option "TearFree" "on"
                Option "ColorTiling" "on"
                Option "ColorTiling2D" "on"
                EndSection


                • #9
                  if you look at your glxinfo output you can see it's using llvmpipe for 3D. It's not using radeon.
                  No, it is using radeon except if I start it like this:

                  LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep Open
                  That was the output you saw. All I wanted to show there is that neither llvmpipe nor the radeon driver says "gallium" in the "OpenGL renderer string". See the original post for details...

                  Reading the current settings, even the zen-kernel I've installed is still in debug mode. I will turn this off and set the timer (HZ) configs as debianxfce suggested above.

                  Compiling mesa is difficult and takes a long time. But you have a 32-bit system.
                  It went quite smoothly, and now I am running Mesa from the latest git :-)
                  The renderer string is the same as before, but according to the build output I highly suspect there is only a Gallium driver for r300 now:

                  Message: Configuration summary:
                          prefix:          /usr/local
                          libdir:          lib
                          includedir:      include
                          OpenGL:          yes (ES1: yes ES2: yes)
                          OSMesa:          no
                          DRI platform:    drm
                          DRI drivers:     i915 i965 r100 r200 nouveau
                          DRI driver dir:  /usr/local/lib/dri
                          GLX:             DRI-based
                          EGL:             yes
                          EGL drivers:     builtin:egl_dri2 builtin:egl_dri3
                          GBM:             yes
                          EGL/Vulkan/VL platforms:   x11 wayland drm surfaceless
                          Vulkan drivers:  amd intel
                          Vulkan ICD dir:  share/vulkan/icd.d
                          llvm:            yes
                          llvm-version:    8.0.0
                          Gallium drivers: r300 r600 radeonsi nouveau virgl svga swrast
                          Gallium st:      mesa xa xvmc vdpau va
                          HUD lmsensors:   yes
                          Shared-glapi:    yes
                  Build targets in project: 205
                  WARNING: Deprecated features used:
                   * 0.48.0: {'python3 module'}
                  Found ninja-1.9.0 at /usr/bin/ninja
                  You can clearly see that r300 is listed only in the Gallium drivers list. I guess the renderer string was changed to no longer tell whether a driver is Gallium or classic Mesa, because the old drivers have practically died out so it made no sense anymore - but it is still confusing to me.

                  After the build and installation I checked three times that I am using the new shared objects from the build, because I could not remove mesa via the package manager, so I installed to /usr/local/lib as seen here.

                  Also I backed up the original shared objects under /usr/lib/dri into /usr/lib/dri_old, and after the build completed (it was a good time for lunch) I moved the new /usr/local/lib/dri ones into the place of the ones originally coming from the package manager. I also tried moving /usr/lib/dri away as /usr/lib/dri_, in which case glxgears does not start anymore but shows errors - so it is certain I am using the files from here.

                  Still, the Mesa "ninja install" presumably made pkg-config use the new location with the new *.so files, so my system is a bit messy now, and I need to keep in mind that they "might" be used later when I build things from source, even if I do not need them or remove them (or do not update that location etc.).
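
                  As an aside, moving /usr/lib/dri around should not be strictly necessary for testing: as far as I know Mesa's loader honours the LIBGL_DRIVERS_PATH environment variable, so the packaged drivers can stay in place. A sketch (the path assumes the /usr/local install above):

```shell
# Point Mesa's loader at the freshly built DRI drivers for this session only.
export LIBGL_DRIVERS_PATH=/usr/local/lib/dri
echo "$LIBGL_DRIVERS_PATH"   # prints /usr/local/lib/dri
# then e.g.:  glxinfo | grep "OpenGL renderer"
```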

                  I have also enabled Gallium "nine" just out of curiosity - why not, if I am building Mesa by hand already - and tried it with a patched Wine. It seems, however, that for some reason it needs at least r500-level hardware. Sad for me :-(. I will play around with configs later, as this is a low-priority extra for Wine gaming only; the main GPU driver problem remains the focus.

                  Maybe I should try a much older mesa build? Can I run a 2-year old mesa with a recent kernel or is that heresy?

                  Btw this is my /etc/xorg.conf:

                  Section "Device"
                          Identifier      "Configured Video Device"
                          Driver          "radeon"
                          Option          "ForceGallium" "True"
                          Option          "AGPMode" "8"
                          Option          "AGPFastWrite" "True"
                          Option          "EnablePageFlip" "True"
                          Option          "SwapbuffersWait" "False"
                          Option          "DRI" "2"
                          Option          "ColorTiling" "on"
                  EndSection
                  Section "Monitor"
                          Identifier      "Configured Monitor"
                  EndSection
                  Section "Screen"
                          Identifier      "Default Screen"
                          Monitor         "Configured Monitor"
                          Device          "Configured Video Device"
                          DefaultDepth    24
                          SubSection "Display"
                                  Depth    24
                                  Modes    "1024x768"
                          EndSubSection
                  EndSection
                  Section "Extensions"
                          Option "Composite" "Disable"
                  EndSection
                  My /etc/X11/xorg.conf.d/ directory is completely empty. I see that maybe I should put these there, but I have copied the scripts from my old system that change xorg.conf as I need (I mostly used them to change Depth to 16 bit for some games that are slow without this change), so I reused the script to install the configuration. I can see it is in effect, however. ForceGallium "True" was only added lately, while trying to figure out how to make the driver say it is Gallium.

                  As far as I have read, the r300 driver has some experimental HyperZ support, but I have never had that enabled. Maybe now that I have built Mesa anyway I should get to that too, but I doubt that is where my performance went...

                  Next try is to build my kernel and run with the settings we talked about....


                  • #10
                    Kernel is compiling right now...

                    Relevant configuration file parts:


                    Even more:

                    Not only did I choose 1000 HZ and the other settings we talked about, but I also went through make menuconfig completely and, for example, disabled kernel support for multiple CPUs, as it says deselecting it on a uniprocessor system helps performance too. Debugging is disabled, and RETPOLINE is disabled as well, because I measured earlier that the attacks did not work on my machine even before the updates with retpoline were installed. I have also added CONFIG_SCHED_MUQSS=y and chose CONFIG_MPENTIUMM=y, because this is a Pentium M processor and I saw there is a possibility to select it directly. Maybe I should have also changed the gcc command line to optimize directly for my CPU, but this is already quite a lot I think.
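
                    Put together, the relevant .config lines from the changes above should look roughly like this (CONFIG_SCHED_MUQSS comes from the zen/MuQSS patches, not mainline; names as I remember them from menuconfig):

```
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# CONFIG_SMP is not set
# CONFIG_RETPOLINE is not set
CONFIG_SCHED_MUQSS=y
CONFIG_MPENTIUMM=y
```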

                    Results will follow later. I have made many changes, but most of them seem to lead towards better performance, so I am wondering what the results will be...

                    Aw... looking at the generated file, I missed the voluntary preempt setting for some reason... Never mind, I will clean and rebuild once again just to be sure it is included! Or maybe I will test this build first and see how much that single change counts.