Mesa OpenGL Threading Now Ready For Community Testing, Can Bring Big Wins


  • #81
    Originally posted by kparal View Post

    I created the wiki here:


    Hopefully people will contribute. Too bad this thread has been spammed with little-to-no-value comments.
    Justice bump. We should try to keep this on as many pages of the thread as possible.

    Comment


    • #82
      Originally posted by theriddick View Post
      I tried to get glthreading working with War Thunder and my Fury X, but it resulted in crashing on startup. Other people with different hardware seem to have gotten it working, so it could just have been an Oibaf driver issue at the time.
      theriddick For the past few weeks War Thunder has been crashing with glthread enabled... With Mesa 17.1, War Thunder got a performance increase of about 15% on my A10-7850K and RX480.

      I have opened this bug: https://bugs.freedesktop.org/show_bug.cgi?id=101748

      If you can test older kernels, I would be grateful; it would help narrow down the bisect.

      By the way, also check this bug: https://bugs.freedesktop.org/show_bug.cgi?id=101749 and please see whether it happens for you too.
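      In case it helps with testing, this is the kind of quick check I mean (the launch command below is only a placeholder, use whatever actually starts the game on your system):

        # run the game once with and once without glthread to confirm the correlation
        mesa_glthread=false ./launcher    # placeholder for your War Thunder start command
        mesa_glthread=true ./launcher
        # note your kernel version for the bug report, it helps narrow the bisect
        uname -r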
      Last edited by higuita; 10 July 2017, 09:33 PM.

      Comment


      • #83

        Here's my drirc with some games I quickly tested. All of them work so far; some show a noticeable difference (for me it was Serious Sam Fusion), some little or none. At least all the games in the list run without problems from what I can tell.


        BTW, what about games using "generic" executable names? XCOM: Enemy Unknown, for example, uses "game.x86_64"; a profile keyed on that name could affect other games, although I haven't run into any conflicts or issues yet. Valve likewise uses hl2_linux for all the Orange Box games, but those work fine so far.
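        In case it helps anyone replicate this, a minimal per-user ~/.drirc along these lines should do the trick. The application entry below is only an example ("eurotrucks2" is the ETS2 process name mentioned earlier in this thread; check htop/ps for the exact names of your games):

          <driconf>
              <device screen="0" driver="dri2">
                  <!-- "executable" must match the process name, not the window title;
                       the same option can also be toggled per-launch via the
                       mesa_glthread=true environment variable -->
                  <application name="Euro Truck Simulator 2" executable="eurotrucks2">
                      <option name="mesa_glthread" value="true"/>
                  </application>
              </device>
          </driconf>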

        Comment


        • #84
          Originally posted by M@GOid View Post

          At least in my setup (i7 3770K/RX470) it shows a very good gain. For the sake of simplicity and reproducibility, I did an easy test: go to the Arena match, and after you talk with the armless guy, do a slow 360 with a joypad. After a few of those, I observed ~51-97 FPS without gl_thread and ~73-108 with it. In other parts of the game, where before I observed well south of 60 FPS, it now stays close to it.

          So as long as it doesn't hurt performance for anybody, I strongly recommend putting The Witcher 2 on that list.
          I figured it was capped because of vsync. But even with vsync it stays more stably at 60 FPS for me. Without vsync the framerate is generally higher, but tearing is quite annoying. So overall, it's worth enabling GL threading for TW2.

          Comment


          • #85
            Originally posted by kparal View Post

            I tried the demo and I also see "eurotrucks2" as the process name in htop and ps aux.

            However, I have issues running Mesa git master on Fedora 26 (using che/mesa copr). Xorg is crashing:





            Has anyone seen the same issue and resolved it?
            Thanks for the info!

            A few more questions:

            1. Which kernel do you use? (uname -a)
            2. Is your system fully updated?
            3. Do you use any other relevant 3rd-party repos?
            4. I guess you did enable both the 32-bit and 64-bit llvm and mesa repos? (to ensure everything updates correctly if you have compat libs installed)
            5. I have personally been running yesterday's build just fine on an Intel-based MacBook Pro and on an R9 290X card in a workstation. However, I have seen X startup crashes when using amdgpu (with SI support turned on) and the 4.11 AMD staging kernel.
            6. That said... I have done a completely new set of llvm and mesa builds; the most recent mesa is building right now. To ensure you end up with the latest components, please run "dnf clean all" first to empty the cache, then "dnf update" (see the commands sketched below).
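            For reference, the refresh steps boil down to something like the following (the copr repo name matches the che/mesa copr mentioned above; adjust it if your setup differs):

              # make sure the copr repo providing these builds is enabled
              sudo dnf copr enable che/mesa
              # drop the cached metadata/packages so the brand new builds are seen
              sudo dnf clean all
              # pull in the latest llvm/mesa packages
              sudo dnf update
              # and report your running kernel with the results
              uname -a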


            Comment


            • #86
              Originally posted by schmidtbag View Post
              And you know this because...?
              Ok, first of all let me apologize. I was getting frustrated last night and shouldn't have taken it out in these forums.

              Now, let me assure you I absolutely do understand what you meant, and I'm simply saying it doesn't apply in this situation. It's true I don't have any solid scientific proof, but I have seen a number of results so far, and everything tends to indicate that performance is more heavily impacted by app behavior than simply by the number of cores. Additionally, I've never heard any indication that the proprietary drivers combine these kinds of application-specific profiles with any tests of CPU cores; the profiles are either on or off based on the app name. So I think in general there's not very good evidence that such a feature would be useful.

              Also, once you start down this road, where do you stop? Should you test how much RAM is available? How fast the CPU is? There are a million different hardware characteristics that could impact things, and hardcoding certain ones seems like a mistake. If you're going to do that, you should build in general heuristics and performance monitoring that automatically cuts out the threading when it slows things down, like the NVIDIA driver does, and that was purposely not done here yet because it's complicated and a lot of work to get right. One thing at a time: first get the threading code working across a widespread set of games where it's most useful, and the more complicated features can be added over time.

              Furthermore, I am defending the status quo and the idea that the developers behind this feature know how it works and aren't blind to the (IMHO rather obvious) arguments you put forward, so I feel the burden of proof is on your side to show that something needs to change, rather than on mine.

              Once again, I apologize for being so short the other day. I'm going to leave the topic here, as I feel like I've said all I have to say on it. If you do have any proof that switching the threading on or off based on the number of cores matters to a large degree, I would genuinely be interested in hearing it, though I don't expect that to be the case.
              Last edited by smitty3268; 10 July 2017, 11:30 PM.

              Comment


              • #87
                Originally posted by kparal View Post
                However, I have issues running Mesa git master on Fedora 26 (using che/mesa copr). Xorg is crashing:





                Has anyone seen the same issue and resolved it?
                I resolved this by rebuilding mesa against llvm 4.0 instead of 5.0devel. I'll try to make sure that copr repo is fixed as well.
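                In case anyone wants to double-check which LLVM their Mesa build ended up linked against, a quick look along these lines should work (the dri library path is distro-dependent; /usr/lib64/dri is the usual Fedora location):

                  # the radeonsi renderer string reports the LLVM version in use
                  glxinfo | grep -i llvm
                  # or inspect the driver's linkage directly
                  ldd /usr/lib64/dri/radeonsi_dri.so | grep -i llvm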

                Comment


                • #88
                  Originally posted by spstarr View Post


                  It shoves the hacks out of the kernel/driver, though; if developers want to do whatever they want, there's no silly .drirc conf madness needed for Vulkan...
                  At the same time, it puts said hacks right in the hands of those who cause the hacks to be needed in the first place.
                  So now, instead of bugging NVIDIA, AMD and Intel when there's a problem, we get to chase down hundreds of developers instead.

                  Comment


                  • #89
                    Tried mesa_glthread=true on Total War: Warhammer and the FPS didn't increase, still around 30-35 FPS at 4K with a Fury. But my tearing is gone. Very nice.
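                    For anyone who wants to repeat the test, the variable only needs to be set for the game's process. For a Steam title that means something like this in the game's launch options (for a non-Steam launch, prefix whatever command starts the game; the script name below is just an illustration):

                      # Steam > Properties > Set Launch Options
                      mesa_glthread=true %command%

                      # or from a terminal
                      mesa_glthread=true ./start_game.sh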

                    Comment


                    • #90
                      Originally posted by smitty3268 View Post
                      Now, let me assure you I absolutely do understand what you meant, and I'm simply saying it doesn't apply in this situation. It's true I don't have any solid scientific proof, but I have seen a number of results so far, and everything tends to indicate that performance is more heavily impacted by app behavior than simply by the number of cores.
                      You might've been better off saying that from the beginning... If you had really understood what I meant all along, you'd understand I'm here for a specific set of facts and data, for a specific reason; I'm not here to rant or to defend application developers. I might still have asked for sources, but I'd have been less persistent.

                      Additionally, I've never heard any indication that the proprietary drivers combine these kinds of application-specific profiles with any tests of CPU cores; the profiles are either on or off based on the app name. So I think in general there's not very good evidence that such a feature would be useful.
                      Proprietary drivers do application-specific tweaks for a lot more things than just multi-threaded rendering, and Mesa is known for not doing that. I actually kind of like the fact that it tries to be as application-agnostic as possible. That being said, why is this where we draw the line? Why haven't we done something like this for any other feature?

                      Also, once you start down this road, where do you stop? Should you test how much RAM is available? How fast the CPU is? There are a million different hardware characteristics that could impact things, and hardcoding certain ones seems like a mistake. If you're going to do that, you should build in general heuristics and performance monitoring that automatically cuts out the threading when it slows things down
                      I don't find any of those other tests relevant enough to make such a black-and-white difference between regressing and improving. The only difference between enabling GL threading and not is how the work is divided across CPU threads, so that just leaves the CPU, the GPU, and the application to be analyzed when something goes sour. The sole reason for my suggestion was that (considering the information I personally was provided) it seemed a little presumptuous to point fingers based on just one tested platform. But again, if I take your word for it that there are other tests showing these regressions are consistent across other hardware, then Marek should go forward with his original plans.

                      Furthermore, I am defending the status quo and the idea that the developers behind this feature know how it works and aren't blind to the (IMHO rather obvious) arguments you put forward, so I feel the burden of proof is on your side to show that something needs to change, rather than on mine.
                      Yes, my arguments were obvious. But despite this, not even Marek's first response to me provided information suggesting other hardware was tested. All he would've had to say was "we also tried this on X CPU [or] Y GPU and the regression persisted" and that would've shut me up. I trust his word as a valuable source, so I'd take it seriously. But he didn't say that, which to me implies his tests were on just one platform. That isn't very thorough or conclusive. The sole purpose of this feature is to enhance performance, but if we don't even know why some apps regress, what's preventing a game labeled as "improved when GL threading is enabled" from being a regression on another CPU? All it would take is maybe five one-minute tests just to prove the point.

                      Again, I'd do it myself, but I don't have the hardware or software to do so.

                      Comment
