*Border Lands 2 should be Borderlands 2.
Mesa OpenGL Threading Now Ready For Community Testing, Can Bring Big Wins
-
Originally posted by marek View Post
That's unlikely to happen without strong corporate backing. Currently, we don't have time to bisect the performance regressions we already know of. What do you think would happen with glthread?
Comment
-
Well, these profiles are indeed a sort of AI... hardcoded AI per app name.
You can't really test everything, since apps allow various settings: at one setting the performance could look one way, and at another setting the percentage could be something else entirely. So you get various degrees of non-scalable performance, in various environments, with unpredictable bottlenecks.
It is enough to have, for example, just one bug, or say a broken compositor on Linux, for something else to become the CPU bottleneck.
This goes back to 2004 and Catalyst AI. That is still used, and it is really just an app profile (first it was hardcoded, later it was separated out). Funny that it mentioned Doom 3, which I am sure is still slow with Mesa drivers in some scenes... 13 years later.
Last edited by dungeon; 10 July 2017, 05:28 AM.
Comment
-
Originally posted by kenjitamura View Post
I'm not sure I'm understanding your interpretation of Amdahl's law. I thought it was just saying the maximum performance gain possible by adding cores/threads is limited by the sequential operations within a computational workload. So if a game is made up of 50% sequential calls and 50% parallel calls no matter how many cores you throw at it the result will always infinitely approach halving the time of the operations without ever being faster than that.
I don't think adding threads lowers performance, unless you mean that maximum single-threaded performance is reduced by adding more cores, because the overall frequency is lowered slightly to fit more cores without burning out the CPU. But because there are such massive diminishing returns in the relationship between frequency and power/temperature above 4 GHz, all CPU makers have universally decided it's smarter to lower the overall frequency by a few hundred MHz and add several more cores at that frequency than it is to have fewer cores and keep pushing the frequency up by tens of MHz with each hardware generation.
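The quoted description matches the usual statement of Amdahl's law directly. As a quick sketch (the function name and parameters are illustrative, with `p` the parallel fraction and `n` the core count):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup of a workload whose parallel
    fraction is p when it is spread over n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A workload that is 50% sequential / 50% parallel approaches,
# but never exceeds, a 2x speedup no matter how many cores:
print(amdahl_speedup(0.5, 2))     # 1.333...
print(amdahl_speedup(0.5, 1000))  # ~1.998, still below 2.0
```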
For the sake of argument, let's say we have 4 tasks: A takes 4 time units, B takes 2, C takes 2, and D takes 3.
In a single-thread scenario we have a latency of A+B+C+D = 11 time units.
In a dual-thread scenario we have a latency of (A+B), (C+D) = 6 time units.
In a tri-thread scenario we have a latency of A, (B+C), D = 4 time units.
And in a quad-thread scenario we have a latency of A, B, C, D = 4 time units.
Now let's say, for the sake of argument, that A is the main thread, we've threaded out to 4 threads, and there's a 1-time-unit overhead per extra thread:
A(4), B(2+1=3), C(2+1=3), D(3+1=4): still 4 time units.
Whereas if we have 3 threads:
A(4), B+C(2+2+1=5), D(3+1=4) = 5 time units.
In theory 4 threads is doing the most "work" (4+3+3+4 = 14 time units vs. 4+2+2+3 = 11 time units for the single-threaded approach), but it actually has the best latency.
Last edited by Luke_Wolf; 10 July 2017, 06:02 AM.
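The arithmetic above can be checked mechanically. A small sketch (the `latency` helper and the overhead model of one extra time unit per non-main thread are taken straight from the example, not from any real scheduler):

```python
def latency(threads, overhead=0):
    """Each element of `threads` is a list of task durations run serially
    on one thread. Total latency is the slowest thread; every thread
    except the first (the main thread) pays a fixed spawn overhead."""
    return max(sum(tasks) + overhead * (i > 0)
               for i, tasks in enumerate(threads))

# Task durations from the example: A=4, B=2, C=2, D=3.
print(latency([[4, 2, 2, 3]]))        # single thread -> 11
print(latency([[4, 2], [2, 3]]))      # two threads   -> 6
print(latency([[4], [2, 2], [3]]))    # three threads -> 4
print(latency([[4], [2], [2], [3]]))  # four threads  -> 4

# With a 1-unit overhead on every thread except the main one,
# four threads still win on latency despite doing the most "work":
print(latency([[4], [2], [2], [3]], overhead=1))  # -> 4
print(latency([[4], [2, 2], [3]], overhead=1))    # -> 5
```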
Comment
-
Originally posted by artivision View Post
It should be used like some known game post-processing effects: on when it's expensive and off when it's not, for your current graphics settings.
So it is easy to write an AI if you have an algorithm to guess that, but guessing performance without running an app is sort of impossible.
This threading could be enabled by default, but again, it is disabled because it is tested and known to degrade something else... it is the same on any GL driver, including NVIDIA... so say it could be enabled by default once nobody cares anymore what it degrades.
Last edited by dungeon; 10 July 2017, 06:25 AM.
Comment
-
Originally posted by marek View Post
You need Mesa master. Older glthread performs badly. I'm all for a wiki if somebody maintains it.
Hopefully people will contribute. Too bad this thread has been spammed with little-to-no-value comments.
Comment
-
Error in the article: "mid-range GPU" should be "mid-range CPU". Although I would argue that an i5 is more high-end than mid-range.
I’m wondering if a white/blacklist makes sense, since results are likely to be different depending on the hardware and game settings used.
The reported improvements are great though!
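If a whitelist/blacklist were implemented, it would presumably live in Mesa's existing drirc application-profile mechanism. A hypothetical entry (the game name and executable here are made up for illustration) might look like:

```xml
<driconf>
  <device>
    <!-- Hypothetical whitelist entry: enable glthread only for this game -->
    <application name="Borderlands 2" executable="Borderlands2">
      <option name="mesa_glthread" value="true"/>
    </application>
  </device>
</driconf>
```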
Comment