Apple, Google Agree On More SLP Vectorization
After making more widespread use of the Loop Vectorizer, developers at Apple and Google now agree that LLVM's SLP Vectorizer should be more widely used as well.
The LLVM SLP Vectorizer was covered earlier this year on Phoronix (and benchmarked) with its premiere in LLVM 3.3. The SLP Vectorizer implements "Superword-Level Parallelism": it vectorizes straight-line code, complementing LLVM's already present and proven Loop Vectorizer. The SLP Vectorizer can vectorize memory accesses, arithmetic operations, comparison operations, and some select operations.
For now in LLVM (3.3 and SVN), the SLP Vectorizer isn't enabled by default but must be turned on via the -fslp-vectorize and -fslp-vectorize-aggressive compiler switches for LLVM/Clang. However, LLVM/Clang developers have been discussing enabling this option for at least the -O3 optimization level.
Earlier this month I first wrote about the discussion of enabling the LLVM SLP Vectorizer. Apple's Nadav Rotem on Sunday morning reignited the discussion with a new mailing list post highlighting their latest test data.
Nadav Rotem wrote, "As you can see [from the new compiler benchmark results], there is a small number of compile time regressions, a single major runtime regression, and many performance gains. There is a tiny increase in code size: 30k for the whole test-suite. Based on the numbers below I would like to enable the SLP-vectorizer by default for -O3."
Google's Chandler Carruth followed up with, "I also have some benchmark data. It confirms much of what you posted -- binary size increase is essentially 0, performance increases across the board. It looks really good to me. However, there was one crash that I'd like to check if it still fires. Will update later today (feel free to ping me if you don't hear anything.). That said, why -O3? I think we should just enable this across the board, as it doesn't seem to cause any size regression under any mode, and the compile time hit is really low."
So it seems many are in agreement with enabling the SLP Vectorizer by default for the -O3 optimization level, and the straight-line code vectorizer could be enabled for other optimization levels as well if this Google compiler engineer gets his way. This change is likely for LLVM/Clang 3.4, which should be released around the end of the calendar year.
New LLVM 3.4 SVN benchmarks are coming soon on Phoronix; for now you can see our early benchmark results, which are quite positive about improved performance in LLVM/Clang 3.4. LLVM 3.4 is also really important for AMD R600 GPU users.