Mesa OpenGL Threading Now Ready For Community Testing, Can Bring Big Wins


  • kparal
    replied
    Originally posted by marek View Post
    - You can observe how glthread is performing by adding 3 more HUD charts: GALLIUM_HUD=fps,API-thread-offloaded-slots+API-thread-direct-slots+API-thread-num-syncs
    - The API charts on the HUD show the glthread counters. If they are 0, glthread is disabled. Even if you enable glthread, Mesa can still decide to disable it for compatibility. For a good chance of higher performance, API-thread-offloaded-slots must be at least 2x API-thread-direct-slots.
    marek, I'm testing with your suggested GALLIUM_HUD to quickly check whether mesa_glthread=true is respected at all (non-zero counters, full launch line below), and so far it seems that glthread is internally disabled in the large majority of games I tested. I did not expect it to be disabled for compatibility reasons this often. Why is that happening? Is that a bug?

    Games in which I see non-zero counters:
    Bioshock Infinite, Natural Selection 2, Metro: Last Light Redux, Screencheat, Mountain, Valhalla Hills

    Games in which I see just zero counters (technically, I sometimes see API-thread-direct-slots starting non-zero but immediately falling to zero, showing a short red line):
    This War of Mine, FORCED, Hand of Fate, Little Inferno, Magicka 2, Mount & Blade: Warband, Serious Sam 3: BFE, Stacking, The Stanley Parable, The Swapper, Trine, XCOM: Enemy Unknown and Enemy Within, Worms Clan Wars, Octodad: Dadliest Catch, Super Splatters
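
    For reference, the full launch line I'm using for this check (the game binary is just a placeholder):

    $ mesa_glthread=true GALLIUM_HUD=fps,API-thread-offloaded-slots+API-thread-direct-slots+API-thread-num-syncs ./your_game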



  • M@yeulC
    replied
    Originally posted by dungeon View Post

    Why not? The PRO driver ships a threading profile targeting exactly 'game.x86_64' for XCOM. It sounds like a potential collision, I know; hopefully no other game will turn up with the same name.

    The PRO driver uses custom threading profiles instead of anything smart, so there would potentially be even more problems if some game appeared with the same name.
    I would just submit a bug report to Feral asking them to change the executable name. I expect them to be pretty open about this kind of problem. edddeduck_feral, any word on this?



  • dungeon
    replied
    Originally posted by marek View Post
    We can't include names such as game.x86_64 in drirc.
    Why not? The PRO driver ships a threading profile targeting exactly 'game.x86_64' for XCOM. It sounds like a potential collision, I know; hopefully no other game will turn up with the same name.

    The PRO driver uses custom threading profiles instead of anything smart, so there would potentially be even more problems if some game appeared with the same name.
    Last edited by dungeon; 11 July 2017, 05:26 PM.



  • M@yeulC
    replied
    Originally posted by whitecat View Post

    On the command line:
    $ LD_LIBRARY_PATH="/usr/local/lib" LIBGL_DRIVERS_PATH="/usr/local/lib/dri" <your_game>
    It must match what you've set up in your build; in this case I used "--libdir=/usr/local/lib --prefix=/usr/local" to build Mesa.
    Thank you, it worked. I compiled it with --libdir and --prefix pointing to a directory under my home folder (the build directory), and adjusted LD_LIBRARY_PATH accordingly.
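
    Roughly, the build and launch looked like this (paths from memory; adjust them to your own setup):

    $ ./autogen.sh --prefix=$HOME/mesa-build --libdir=$HOME/mesa-build/lib
    $ make -j$(nproc) && make install
    $ LD_LIBRARY_PATH="$HOME/mesa-build/lib" LIBGL_DRIVERS_PATH="$HOME/mesa-build/lib/dri" mesa_glthread=true <your_game>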



  • duby229
    replied
    Originally posted by marek View Post

    We can't include names such as game.x86_64 in drirc.
    So then I'm just curious: is there any way to identify a game binary other than by its name? Is there some kind of metadata inside the binary that could serve as an identity?



  • marek
    replied
    Originally posted by ptr1 View Post
    https://pastebin.com/VKsSSz5i

    Here's my drirc with some games I quickly tested. All of them work so far, and some show a noticeable difference (for me it was Serious Sam Fusion), some none or little. At least all the games in the list work without problems, from what I can tell.

    BTW, what about games using "generic" executable names? XCOM: Enemy Unknown uses "game.x86_64", for example. This could possibly affect other games, but I've not run into any issues yet. Valve uses hl2_linux for all the Orange Box games, and those work fine so far.
    We can't include names such as game.x86_64 in drirc.
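
    For context, a drirc entry keys on the executable name, so it would have to look something like this (illustration only - this is exactly the kind of entry we can't ship):

    <driconf>
        <device>
            <application name="XCOM: Enemy Unknown" executable="game.x86_64">
                <option name="mesa_glthread" value="true"/>
            </application>
        </device>
    </driconf>

    Any other game shipping a binary named game.x86_64 would silently pick up the same option.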



  • leipero
    replied
    Did some testing today. Generally, with r600 (and an FX-4100 @ 3.6 GHz), Wine games (regardless of whether it's CSMT or plain Wine D3D) lose about 1-2% performance with "mesa_glthread=true". Of the native games I only tested the Portal titles (1 and 2); I couldn't see much difference in FPS, but with glthread it seems smoother, with fewer (or no) "pauses" when the FPS drops from 300 to 80 at some points. Source gained ~2% performance. I don't have many games to test, though, but it seems there's no conflict with wine-CSMT, since the performance loss is consistent across all the Wine titles I tested.

    EDIT: Gallium Nine actually gains 2% performance in one game I tested.
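
    To repeat the Wine comparison, the env vars go in front of the launch command the same way as for native games (the game path is a placeholder):

    $ mesa_glthread=true GALLIUM_HUD=fps wine ./game.exe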
    Last edited by leipero; 12 July 2017, 12:11 PM.



  • kparal
    replied
    Originally posted by ptr1 View Post
    https://pastebin.com/VKsSSz5i
    BTW, what about games using "generic" executable names? XCOM: Enemy Unknown uses "game.x86_64", for example. This could possibly affect other games, but I've not run into any issues yet. Valve uses hl2_linux for all the Orange Box games, and those work fine so far.
    I second this question. marek?



  • schmidtbag
    replied
    Originally posted by smitty3268 View Post
    Now, let me assure you I absolutely do understand what you meant, and I'm simply saying it doesn't apply in this situation. It's true I don't have any good scientific proof or anything, but I have seen a number of results so far, and everything tends to indicate that performance is more heavily impacted by app behavior than simply the number of cores.
    You might've been better off saying that from the beginning... If you had really understood what I meant, you'd know I'm here for a specific set of facts and data for a specific reason; I'm not here to rant or defend application developers. I might still have asked for sources, but I'd have been less persistent.

    Additionally, I've never heard any indication that the proprietary drivers pair these kinds of application-specific profiles with any tests of CPU cores - they're either on or off based on the app name - so I think in general there's not very good evidence that such a feature would be useful.
    Proprietary drivers do application-specific tweaks for a lot more things than just multi-threaded rendering. Mesa is known for not doing that, and I actually kind of like that it tries to be as application-agnostic as possible. That being said, why is this where we draw the line? Why haven't we done something like this for any other feature?

    Also, once you start down this road, where do you stop? Should you test how much RAM is available? How fast the CPU is? There are a million different hardware features that could impact things, and going down the road of hardcoding certain ones seems like a mistake. If you're going to do that, you should build in general heuristics and performance testing that automatically cuts out the threading when it slows things down.
    I don't find any other tests relevant enough to make such a black-and-white difference between regressing and improving. The only difference between enabling GL threading and not enabling it is how work is divided across CPU threads, so that just leaves the CPU, the GPU, and the application to analyze when something goes sour. The sole reason for my suggestion was that (considering the information I personally was given) it seemed a little presumptuous to point fingers based on just one tested platform. But again, if I take your word for it that there are other tests showing these regressions are consistent across other hardware, then Marek should go forward with his original plans.

    Furthermore, I am defending the status quo, and the idea that the developers behind this feature know how it works and aren't blind to the, IMHO, rather obvious arguments you put forward, so I feel the burden of proof is on your side to show that something needs to change, rather than on mine.
    Yes, my arguments were obvious. But despite this, not even Marek's first response to me provided information suggesting other hardware was tested. All he would've had to say was "we also tried this on X CPU [or] Y GPU and the regression persisted" and that would've shut me up; I trust his word as a valuable source, so I'd have taken it seriously. But he didn't say that, which to me implies his tests ran on just one platform. That isn't very thorough or conclusive. The sole purpose of this feature is to enhance performance, but if we don't even know why some apps regress, what's preventing a game labeled "improved when GL threading is enabled" from being a regression on another CPU? All it would take is maybe five one-minute tests to prove the point.

    Again, I'd do it myself, but I don't have the hardware or software to do so.



  • vein
    replied
    Tried mesa_glthread=true on Total War: Warhammer and the FPS didn't increase, still around 30-35 FPS at 4K on a Fury. But my tearing is gone. Very nice.

