Likely Radeon Gallium3D Regression On Linux 3.14 + Mesa 10.2


  • #31
    Yes, the kernel is different. I'm more reserved in this regard. I usually track the drm-fixes (~airlied) branch, which most of the time means that it at least somehow works. Still, I make sure to have at least one additional kernel installed that I can boot into if the drm-fixes branch breaks for me.

    • #32
      Originally posted by genstorm View Post
      When they matter, e.g. in final versions, yes. Have YOU ever used software based on live sources? Regressions may happen on any commit, might be fixed in one of the following commits. In that case, where HyperZ has been disabled, you will get a fine Changelog entry with the final announcement, and all of those costly benchmarking and wondering 'what the heck degraded here' moments have been a waste of time. Time that is so badly missing to improve the poor average quality of other articles.

      You can keep your insults to yourself.
      "Live sources", you mean git releases? Basically every day.

      Many people who read Phoronix probably do the same w.r.t. Mesa.

      • #33
        Originally posted by curaga View Post
        Michael, this is not completely from HyperZ and so still worth investigating. We have reports from Luke and dungeon on this forum that confirm there's another regression besides the intended hyperz change.

        Luke specifically tested it, dungeon's case is media apps that do not use the Z buffer.
        Yep, and I don't even use Gallium drivers nor any HyperZ.

        http://www.phoronix.com/forums/showt...336#post401336

        The Mesa version does not matter for me; the bug is somewhere in the kernel. All I can say is that 3.13 is OK, and this regression is present even in the first 3.14-rc1.

        Nope, this halved performance can't be just from disabled HyperZ.

        http://openbenchmarking.org/prospect...e95ef0c8af3a48

        I thought it was just very old hardware, but I'm glad to see all are affected, so it seems like this will be fixed.

        • #34
          Originally posted by _SXX_ View Post
          Looks like you're talking about AMD Cataclysm.

          Unfortunately the AMD open source drivers aren't mature yet, so the old "stable" drivers are usually more bugged than newer versions from git. If you want to run actual games released on Steam at a playable framerate, you have to use a recent driver stack.

          The stable version included in popular distributions is usually missing important functionality. As far as I remember, geometry shaders for R600 won't be merged in time for 14.04, so what should a regular user do, wait for 14.10? Or install Catalyst?

          Actually, until very recently the open source drivers would lock up my system using the radeonsi driver. Performance is getting better on radeonsi, just not quite there yet, and power management is spotty. Catalyst has its issues, but I have been lucky. Seriously, I can't wait for the open source driver to replace the blob, but I wish you people wouldn't just dump on the blob. All I see are parrots, all I hear is birds.

          • #35
            Mesa bug 75112

            AMD devs made a meta bug for HyperZ:
            bug no. 75112

            and I don't think they'll fix it fast.

            _SXX_
            Looks like you're talking about AMD Cataclysm.
            you are my hero

            • #36
              Originally posted by frosth View Post
              AMD devs made a meta bug for HyperZ:
              bug no. 75112

              and I don't think they'll fix it fast.

              _SXX_
              you are my hero
              As Alex says, someone needs to find a specific case:

              "We need to figure out what combination(s) of GL state cause a problem with hyperZ, then either disable hyperZ in those cases, or adjust the hyperZ-specific state to avoid the hang in those specific cases. Ideally we'd be able to find a small test case where we can reproduce the issue(s)."

              From my angle, since I know it was buggy on r200 too and also disabled by default in Mesa back in the UMS days, and from what people are saying now, it is clearly the lighting state.

              So maybe developers could enable it by just disabling it for lighting, I think.

              • #37
                Originally posted by chrisr View Post
                Personally speaking, I'm not arrogant enough to presume to try and tell someone what they can and can't write articles about on their own website. But the bottom line is that Michael chooses to assist with the development of the Open Source drivers, unlike many others who just sit back and complain.
                For someone who frequently points out the amount of work that goes into producing content for Phoronix, while at the same time producing a lot of hot air, I was merely giving advice on how to improve the overall situation.

                • #38
                  What is the problem?

                  kernel or mesa?

                  • #39
                    Originally posted by Andrecorreia View Post
                    kernel or mesa?
                    The kernel. With 3.14-rc1, performance then drops down.

                    • #40
                      Originally posted by dungeon View Post
                      Yep, and I don't even use Gallium drivers nor any HyperZ.

                      http://www.phoronix.com/forums/showt...336#post401336

                      The Mesa version does not matter for me; the bug is somewhere in the kernel. All I can say is that 3.13 is OK, and this regression is present even in the first 3.14-rc1.

                      Nope, this halved performance can't be just from disabled HyperZ.

                      http://openbenchmarking.org/prospect...e95ef0c8af3a48

                      I thought it was just very old hardware, but I'm glad to see all are affected, so it seems like this will be fixed.
                      Hmm, are you sure this is actually the same problem? I mean, it could easily be that there is a real regression in the r200 driver, and this test is just showing the hyperz change for r600g/radeonsi, right?

                      Well, I hope it is the same, because then we might see it get fixed and have everyone's performance go back up.

                      • #41
                        Michael and anyone interested: it seems that going back to the Arch 3.13 default kernel and force-enabling HyperZ gives me back full FPS as before, using Mesa git of course.

                        Radeon 7770 2GB
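For anyone trying to reproduce this, here is a minimal sketch of force-enabling HyperZ through the driver's environment variable, as used elsewhere in this thread; the game binary name is a placeholder and this is illustrative, not an official recipe:

```shell
# Force-enable HyperZ for a single OpenGL process on r600g by setting
# the driver environment variable only for that run. R600_HYPERZ is the
# switch used elsewhere in this thread; "your_game" is a placeholder:
#
#   R600_HYPERZ=1 your_game
#
# Verify that the variable actually reaches the child process:
R600_HYPERZ=1 sh -c 'echo "R600_HYPERZ=$R600_HYPERZ"'
# prints: R600_HYPERZ=1
```

Setting it per-process like this keeps the rest of the session on the default (HyperZ-disabled) path, which makes before/after FPS comparisons easy.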

                        • #42
                          Originally posted by smitty3268 View Post
                          Hmm, are you sure this is actually the same problem? I mean, it could easily be that there is a real regression in the r200 driver, and this test is just showing the hyperz change for r600g/radeonsi, right?

                          Well, I hope it is the same, because then we might see it get fixed and have everyone's performance go back up.
                          For the majority of tests in this article, yes, manually enabling HyperZ will just bring back the performance of enabled HyperZ, but not in all cases ;D... I assume that will not happen with the triangle test and maybe Prey.

                          • #43
                            The worst of all the regressions was with the new X server; Mesa was about 20%.

                            Originally posted by curaga View Post
                            Michael, this is not completely from HyperZ and so still worth investigating. We have reports from Luke and dungeon on this forum that confirm there's another regression besides the intended hyperz change.

                            Luke specifically tested it, dungeon's case is media apps that do not use the Z buffer.
                            In my tests, I found that with Mesa 10.2/Kernel 3.13/X server 1.15, framerates were cut in half. Plymouth crashes with the early Linux 3.14-rc1 that I tried, blocking my disk decryption system, so I have yet to test the new kernel. With Mesa 10.2 but reverting to X server 1.14, I got back most of the regression but was still down 10-20%. That was the part that apparently turned out to be from Hyper-Z in my tests.

                            I got essentially identical performance using Mesa 10.1 or Mesa 10.2 with Hyper-Z force-enabled in Critter (only one Z value, it's a 2D game), but in Scorched3D,

                            R600_HYPERZ=1 scorched3d

                            locked up the whole X server with Hyper-Z and Mesa 10.2, with either version of X. I checked the exact same code with Mesa 10.1, no problems at all in Scorched3D.

                            I am guessing the issue with the new X server relates either to the DRI3 changeover or to the rewrite of the GLX system referred to in the changelogs.

                            • #44
                              Originally posted by Luke View Post
                              In my tests, I found that with Mesa 10.2/Kernel 3.13/X server 1.15, framerates were cut in half. Plymouth crashes with the early Linux 3.14-rc1 that I tried, blocking my disk decryption system, so I have yet to test the new kernel. With Mesa 10.2 but reverting to X server 1.14, I got back most of the regression but was still down 10-20%. That was the part that apparently turned out to be from Hyper-Z in my tests.

                              I got essentially identical performance using Mesa 10.1 or Mesa 10.2 with Hyper-Z force-enabled in Critter (only one Z value, it's a 2D game), but in Scorched3D,

                              R600_HYPERZ=1 scorched3d

                              locked up the whole X server with Hyper-Z and Mesa 10.2, with either version of X. I checked the exact same code with Mesa 10.1, no problems at all in Scorched3D.

                              I am guessing the issue with the new X server relates either to the DRI3 changeover or to the rewrite of the GLX system referred to in the changelogs.
                              You can do a git bisect to find out which commit introduced the regression.

                              • #45
                                A git bisect is beyond my skills

                                Originally posted by AnAkIn View Post
                                You can do a git bisect to find out which commit introduced the regression.
                                Sorry about that, but I would not know the slightest thing about doing that. Surely someone out there does, given how many people use Mesa and X.
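Since several posts above suggest it, here is a hedged sketch of the git-bisect workflow AnAkIn mentions, demonstrated on a throwaway repository so it is self-contained. For the real case you would run the same commands in a kernel clone, marking `v3.13` good and `v3.14-rc1` bad per dungeon's report, and building/booting/benchmarking each commit git checks out:

```shell
# Sketch of the git-bisect workflow suggested above, shown on a scratch
# repo where commit 3 plays the role of the regressing commit. In a
# real kernel clone you would instead run:
#   git bisect start && git bisect bad v3.14-rc1 && git bisect good v3.13
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
  echo "$i" > state.txt
  git add state.txt
  git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4        # bad = newest commit, good = oldest
# At each step, "test" the checked-out commit and report the result.
# Here the regression is simulated as "state.txt >= 3".
while :; do
  if [ "$(cat state.txt)" -ge 3 ]; then out=$(git bisect bad); else out=$(git bisect good); fi
  case "$out" in *"is the first bad commit"*) break ;; esac
done
printf '%s\n' "$out" | head -n 1    # names the first bad commit
```

With roughly 12,000 commits between two kernel releases, a bisect converges in about 14 build/boot/test cycles, which is why developers keep asking affected users to run it.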
