Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential
-
Originally posted by pal666 View Post
speedup achieved was 1.09 and it still doesn't pass the testsuite

That speedup comes at the cost of other improvements they could be developing instead, such as simply making single-threaded compilation faster. And the benefits are not enough to replace parallel invocation; even with future work they can only augment it.
-
Originally posted by ms178 View Post
And the example of GCC shows that a GSoC student can achieve these speed-ups

Originally posted by ms178 View Post
Hence if a concentrated effort had been made earlier, they could have unlocked these benefits way sooner.
-
Originally posted by atomsymbol
Except that some C++ header files tend to be quite large because the bodies of templates have to be in .h files and cannot be in .cc files. In such cases the .cc file corresponding to the header file is basically empty. Quite a lot of that template optimization work is redundant and can be avoided.

In the .h file, declare the templates but don't give any implementation details. Then, for the types you know you will be using, declare extern template specializations.
Then, in a .cpp file in your project, include the actual template implementation from the .tcc file and define the template instantiations.
That builds one single copy of the templates and links against it.
And if you do need the templates with unknown types, you can include the .tcc file and live with the extra compile time.
-
Originally posted by pal666 View Post
gcc was already in the multi-core world via parallel invocations. a multithreaded compiler isn't easy and isn't as efficient (you can see they never got a 4x speedup on 4 threads). it's nice that it is being done, but you are acting like it's the last compiler to do it
-
This is off-topic; however, it strikes me that Summer of Code projects might be better directed toward cleaning up poorly written code than toward speeding things up. Sure, a project to speed things up has a measurable impact, but cleaning up poorly written code would teach someone much more. Inculcating respect for Niklaus Wirth, Donald Knuth and Edsger Dijkstra might lead to a much higher level of competence.
-
I very much like the idea of a faster linker. Building with GTK is fine, but linking takes ages (I joke; I was once familiar with clean builds that took all night with the crew I managed).
-
I agree. I should have excluded auto-generated source code from my comment.
-
Originally posted by Clive McCarthy View Post
Michael, you write:
"One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."
The use of very large source files (anything above 64 kB) is a clear indication of poor software design and a lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast, but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.
Clive.
-
Originally posted by atomsymbol
ld.gold is several times faster than ld.bfd, but unfortunately ld.gold makes some packages fail to build.