LTO'ing Mesa Is Getting Discussed For Performance & Binary Size Reasons
On the other hand, a JIT compiler is usually expected to do its work really fast. For example, my Mesa compile with LTO now takes 15 minutes at an average of 534% CPU usage. You probably don't want to run your program and have the JIT occupy several cores for several minutes before it switches to the optimized code, so JIT optimizations will likely stay small and local.
I wouldn't say it's obvious; it depends on how much impact LTO really has.
-
Originally posted by atomsymbol
In my opinion, given a particular programming language, JIT is by definition faster than LTO because it has more bits of information at its disposal. If it isn't faster, then there's something wrong with the JIT compiler.
If you are talking about run-time information, yes, that's the supposed holy grail, but nobody seems able to reach it. To make good decisions you need a lot of information, and to gather information you need time - run time, which already offsets the hypothetical benefits. Then there is cache interference and the inability to easily share the same physical RAM for the code. Just look at the 70MB vs 13MB figure and consider that your JIT will have at least a 70MB footprint, likely a lot more (it needs the binary for the final code, the source for future compilation, plus its own code and working RAM).
Then there is the problem that the workload can change: if you optimize aggressively for an "idle period", your code might be really horrible when a "heavy period" comes around. If you know your workload, you can optimize statically - easily as well as or better than a JIT; if you don't, then the JIT can only predict when and how code should be optimized, and it wastes a lot of time and memory (mis-)predicting this.
There's a lot left to prove before JIT comes close to statically compiled code, let alone beats it.
Last edited by discordian; 31 May 2016, 04:31 PM.
-
There's this about AutoFDO from Clear Linux:
I never tried it myself, though; maybe the CL devs will comment on its use for Mesa.
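For reference, the usual AutoFDO flow with GCC looks roughly like this (a sketch only; the workload binary, the perf event name, and the profile file name are placeholders, and it assumes the autofdo tools and perf are installed):

```shell
# 1. Profile a normal, unmodified -O2 build using the CPU's
#    last-branch-record facility (Intel-specific event shown).
perf record -b -e br_inst_retired.near_taken:pp -- ./your_workload

# 2. Convert the perf samples into GCC's gcov format.
create_gcov --binary=./your_workload --profile=perf.data \
            --gcov=workload.gcov --gcov_version=1

# 3. Rebuild, feeding the sampled profile back into the optimizer.
gcc -O2 -fauto-profile=workload.gcov ...
```

The appeal over classic -fprofile-generate/-fprofile-use is that step 1 runs against an uninstrumented production binary.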
-
Also interesting in this context, and mentioned in the given dev correspondence: https://download.clearlinux.org/rele.../source/SRPMS/ <- extracting the flags Clear Linux actually uses from its source packages.
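A quick way to do that extraction (a sketch; the SRPM file name is a placeholder - substitute the actual mesa source package from that directory):

```shell
# Unpack the source RPM into the current directory...
rpm2cpio mesa-*.src.rpm | cpio -idmv

# ...and pull the optimization-related build flags out of the spec file.
grep -E -- '-(flto|fprofile|fauto-profile|O[0-9s])' mesa.spec
```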
-
Originally posted by haagch
I have tried compiling Mesa with LTO recently and didn't have any problems - except that compilation takes a lot longer. And the worst of it is that for incremental builds, it takes that long every time it links Mesa. Is there something GCC can do to cache LTO optimizations? Or does it already do that, and there's just a high chance that some of the linked inputs changed, so it needs to relink everything?
Keeping track of functions which have not changed since the last optimization is technically possible with GCC's WHOPR mode - IPA optimizations are performed without modifying the GIMPLE bodies, so in theory all one needs is to check that the bodies are the same and that the IPA optimization decisions match. It still needs to be implemented, though.
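Until such caching exists, the main practical mitigation is to parallelize the WHOPR link step itself, which GCC already supports (a sketch; the job count and output name are illustrative):

```shell
# WHOPR splits the program into partitions whose final optimization
# runs in parallel; -flto=N sets the number of those LTRANS jobs.
gcc -O2 -flto=8 -o libdemo.so *.o

# Under a make-driven build, GNU make's jobserver can be reused instead:
#   -flto=jobserver        (shares slots with "make -jN")
# and the partitioning strategy can be tuned:
#   -flto-partition=balanced   (the default)
#   -flto-partition=1to1       (one partition per object file)
```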