Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements
-
Originally posted by linuxgeex:
archlinux32.org is still a pretty active community.
Even if it is 3 more arch package sets, that doesn't mean anywhere near 3x the disk space as much of the larger packages are arch-independent. Bandwidth should be about the same either way.
The real difficulty is more likely package build times. Moving to LTO will already double the build time and the build memory of many packages (4x the residency - cores * time * memory), so they will already be facing a crunch if they don't have the budget (or the donated resources) for their build farm.
That is a good point. Either way, the performance benefits are largely theoretical and in the up-to-10% range; IMO that isn't worth losing support for still very capable CPUs.
-
Originally posted by torsionbar28: You raise a very important point that has not yet been mentioned. Folks who are new to Linux, are curious about it, and want to give it a try are not willing to re-format their primary PC. Instead, they pull the old PC out of the closet and use that to experiment with. Sure, there is desktop virtualization, bootable thumb drives, etc., but these are not things a newbie will be familiar with. As a community, we ought to be making it easier for newbies to install and run Linux, not more difficult.
Also, older hardware might have unfixed bugs that could affect performance or stability, or the very age of the hardware itself might cause issues. Again, I don't think a layperson would bother testing the same distro with different hardware to check for hardware issues, they'd just blather in a forum about how "Linux is buggy" and "is not ready for the desktop."
-
A while back I was curious about Clear Linux's performance advantage, so I recompiled glibc, nasm, ffmpeg and x265 with -march=native and other supposedly performance-enhancing flags, then ran a few encoding tests and saw no performance increase. It is quite likely that most software that would benefit from newer instructions already uses them by itself. Alan's mail about using less power is quite interesting and not talked about enough, I think.
-
Many applications have runtime CPU detection for performance-critical code paths, but there are still some which do not and can benefit greatly from compiler flags (like G'MIC).
Clear Linux goes a bit beyond that and also patches glibc to supplement math functions with optimized ones (although I understand the optimized functions will eventually find their way upstream and into distributions).
-
Originally posted by Mat2: I'll try to run some benchmarks with Phoronix Test Suite later.
Hello, I have benchmarked the performance impact of compiling code for various x86_64 microarchitecture levels: https://openbenchmarking.org/result/2103142-HA-UARCHLEVE55 TL;DR: there is no or negligible performance benefit from -march=nehalem, which corresponds to x86-64-v2; there is a moderate benefit from -march=haswell (which corresponds to x86-64-v3).