Parallelizing GCC's Internals Continues To Be Worked On & Showing Promising Potential
How does this affect make's -j argument? If every GCC process can now use 4 threads, does that mean that it's better to use fewer jobs with make now?
I was wondering the same thing with "Would that make -l better than -j?"
Lately I've managed to get some projects to compile a hair faster on my 16-thread system by changing "make -j16" to "make -j32 -l16" -- doubled the job count and set the load limit to max out 16 threads, so each of my threads should always have a job queued up and ready to go. For example, a 22-minute kernel compile finishes in 20 or 21 minutes.
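A sketch of that trick, generalized to any machine: oversubscribe jobs at twice the hardware thread count while `-l` caps make's load average near the thread count, so the scheduler always has work queued. The `THREADS`/`JOBS` variable names are just illustrative, not anything make itself defines.

```shell
# Oversubscription sketch: -j allows 2x threads worth of jobs,
# -l stops make from launching more once load reaches the thread count.
THREADS=$(nproc)
JOBS=$((THREADS * 2))
echo "would run: make -j${JOBS} -l${THREADS}"
```

Note that `-l` compares against the 1-minute load average, which lags behind actual CPU use, so the effect is a soft cap rather than a hard one.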
I wonder if a kernel compile on a RAM disk using that trick might get even faster on Phoronix's new EPYC machine.
Already is, but mine's an 8-year-old Westmere box with ECC DDR3-1333, so, yeah. I could probably raise the jobs number because I have the RAM to spare (more jobs queued = more RAM needed), but I doubt I'd see more than minimal gains on this older system.
Unrelated to this, but the RAM disk idea got me thinking: what I'd like to do is put a 12GB lz4-compressed RAM disk on each of my 8GB sticks of RAM to see if it's possible to get 50% more RAM for free.
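One way to sketch this on Linux is the kernel's zram module, which creates a compressed RAM-backed block device (lz4 is one of its supported compressors) rather than a plain tmpfs ramdisk. This is a hedged config sketch, not a tested recipe: it assumes root, an unused `/dev/zram0`, and that your data compresses around 1.5:1 so a 12G device fits in roughly 8G of physical RAM.

```shell
# Assumption: zram module available, /dev/zram0 free, run as root.
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 12G > /sys/block/zram0/disksize         # uncompressed capacity
mkswap /dev/zram0
swapon -p 100 /dev/zram0                     # prefer this swap over disk
```

Using the device as high-priority swap is the usual way to turn compression into "extra RAM"; you could also mkfs and mount it as a scratch filesystem for build output instead.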
On my 32GB RAM Ryzen 7 (yes, just 16 threads), building the Linux kernel with make deb-pkg and -j pushes my poor system into consuming 2-4GB of swap in the middle of the process.
More threads does mean more RAM usage. I do question whether this parallelizing will help.
Michael, you write:
"One of the most interesting Google Summer of Code projects this year was the student effort to work on better parallelizing GCC's internals to deal with better performance particularly when dealing with very large source files. Fortunately -- given today's desktop CPUs even ramping up their core counts -- this parallel GCC effort is being continued."
The use of very large source files (anything above 64kB) is a clear indication of poor software design and lack of modularity. It should not be encouraged by speeding up the compiler. The linker needs to be fast but the front end is already capable of using all the cores one has available. Parallelizing something that can already be run in parallel is a waste of effort.