Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements

  • Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements

    Phoronix: Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements

    Arch Linux developers are considering some default enhancements to their Linux distribution that would increase the out-of-the-box performance...


  • #2
    [..] at the cost of slower compilation times and increased memory usage.
    What requires more RAM?
    In the discussion here: https://gitlab.archlinux.org/archlin...rge_requests/4
    they talk about the higher RAM consumption of GDB:
    LTO increases GDB's memory use: https://sourceware.org/bugzilla/show_bug.cgi?id=23710

    By something like an order of magnitude? I've OOM'd recently (with 32G of RAM) trying to get a backtrace of nm-connection-editor with the app, GLib and GTK3 built with LTO and -g3.
    Is that all?



    • #3
      Good to see another Linux distro following OpenMandriva. They were the first Linux distro to enable LTO by default.



      • #4
        Michael Ubuntu also seems to be experimenting with x86-64-v2: https://launchpad.net/~ubuntu-toolch...untu/x86-64-v2

        While I personally would be fine with raising the requirements, what stops them from offering all flavors (or at least v1 to v3)? Sure, it would mean more resources for the package infrastructure, but as long as they get a build server up and running, it could upload each x86-64-v*-named package to its respective repository. On the Arch mailing list they thought about interchangeable packages, but that would add unnecessary complexity in my eyes, as most users would simply choose the most advanced flavor their CPU supports.



        • #5
          Originally posted by ms178
          Michael Ubuntu also seems to be experimenting with x86-64-v2: https://launchpad.net/~ubuntu-toolch...untu/x86-64-v2

          While I personally would be fine with raising the requirements, what stops them from offering all flavors (or at least v1 to v3)? Sure, it would mean more resources for the package infrastructure, but as long as they get a build server up and running, it could upload each x86-64-v*-named package to its respective repository. On the Arch mailing list they thought about interchangeable packages, but that would add unnecessary complexity in my eyes, as most users would simply choose the most advanced flavor their CPU supports.



          • #6
            This change though is receiving some criticism that x86-64-v2 requiring SSE4.1/SSE4.2 rules out some processors still in use by Arch users.
            As someone who just replaced their v1 system after it died of old age: get a new or used system. $100-200 on eBay can net a person a nice machine. I just bought my Mom a Skylake i5-6500 system with 8GB of RAM and a 128GB SSD for $180 after tax and shipping. Damn thing runs great and does everything she needs.

            If you're still on a v1 system, wait for Arch Legacy (made-up name) or upgrade, since your shit is probably 10+ years old and about to break. There are really only a few reasons to be on a v1 system these days -- you're broke (I get it, that's why I was on v1 for so long), you're an "if it ain't broke, don't fix it" kind of person, or you're uninformed and bought a crappy-ass Pentium G or Celeron. If that's the case, STFU and get out of the way of progress.

            I'm running SUSE Tumbleweed today. Who knows what I'll be running tomorrow because, believe it or not, the lack of an official zfs-dkms package irks me. I'm currently using their ZFS package from the filesystems repo; if I install an alternate kernel... welp, no more ZFS disks.
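            Purely for reference, here is a minimal C sketch of what the quoted requirement means in practice: it queries a subset of the features x86-64-v2 mandates through the GCC/Clang __builtin_cpu_supports() builtin. The file name is made up, this is not how Arch itself would gate packages, and it is only a partial check -- the level also requires CMPXCHG16B and LAHF/SAHF, which this builtin cannot query.

                /* cpu-level-check.c (hypothetical name): report a subset of the
                 * CPU features required by the x86-64-v2 microarchitecture level. */
                #include <stdio.h>

                static void check(const char *name, int supported, int *ok)
                {
                    printf("%-8s %s\n", name, supported ? "yes" : "no");
                    if (!supported)
                        *ok = 0;
                }

                int main(void)
                {
                    int ok = 1;

                    __builtin_cpu_init();  /* initialise the feature cache before querying */

                    /* __builtin_cpu_supports() needs a literal string argument. */
                    check("sse3",   __builtin_cpu_supports("sse3"),   &ok);
                    check("ssse3",  __builtin_cpu_supports("ssse3"),  &ok);
                    check("sse4.1", __builtin_cpu_supports("sse4.1"), &ok);
                    check("sse4.2", __builtin_cpu_supports("sse4.2"), &ok);
                    check("popcnt", __builtin_cpu_supports("popcnt"), &ok);

                    puts(ok ? "Meets the checked subset of x86-64-v2"
                            : "Misses at least one checked x86-64-v2 feature");
                    return 0;
                }

            Build it with plain gcc (no special flags needed) and run it; any "no" line means the CPU would fall below the proposed x86-64-v2 baseline for at least that feature.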



            • #7
              Turning on link-time optimizations (LTO) often enhances the performance of the resulting binary thanks to the added optimizations that can be done at link-time on the entire binary.
              Yeah, but in practice that is rarely the case. If, however, you combine LTO with PGO, the improvements are more noticeable.
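              For illustration, a minimal sketch of what combining the two looks like with GCC. The file name and the toy workload are made up; the flags shown (-O2, -flto, -fprofile-generate, -fprofile-use) are the standard GCC ones, and the same idea applies per package in a distro build.

                  /* hot.c (hypothetical): toy workload to illustrate an LTO + PGO build.
                   *
                   * Step 1: instrumented build    gcc -O2 -flto -fprofile-generate hot.c -o hot
                   * Step 2: representative run    ./hot
                   * Step 3: optimised rebuild     gcc -O2 -flto -fprofile-use hot.c -o hot
                   */
                  #include <stdio.h>

                  /* A branchy loop gives the profile data something useful to record. */
                  static long collatz_steps(long n)
                  {
                      long steps = 0;
                      while (n != 1) {
                          n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                          steps++;
                      }
                      return steps;
                  }

                  int main(void)
                  {
                      long total = 0;
                      for (long i = 1; i < 200000; i++)
                          total += collatz_steps(i);
                      printf("total steps: %ld\n", total);
                      return 0;
                  }

              With the profile in hand, the final link sees both the whole program (LTO) and the hot/cold information (PGO) at the same time, which is where the more noticeable wins tend to come from.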



              • #8
                I have been using Archlinux for more than 13 years, and one of my main complaints was that, for such a rolling-release/bleeding-edge distro, they were always very conservative with their compilation options. I don't understand why anyone would require running the latest and greatest on such obsolete hardware. Those who for whatever reason are still using CPUs older than Intel's Nehalem (which is a dinosaur by modern standards) can pick another, more stable distribution for their old hardware. Even for Archlinux, nothing stops those requiring older architecture support from taking upstream Arch and creating their own repositories with legacy support. I don't see why the vast majority of Archlinux users should suffer reduced performance just so people with hardware more than 12 years old aren't left behind. This is absurd. This is not Debian we are talking about, this is Archlinux.



                • #9
                  The unfortunate thing about this retrospective introduction of microarchitectural feature versions is that people made their decisions with the information available to them at the time of purchase, and are now left with "obsolete" hardware through no fault of their own. I have no devices higher than x86-64-v2, and most of them are at that level. Although for me it doesn't matter too much, since I use Gentoo; that's at least until Steam raises its requirements...

                  The argument that there have always been cut-offs with microarchitecture advancement throughout PC history is valid, although times are now different from the period from the 70s through the 90s, when the entire software ecosystem was periodically replaced by various competing operating systems, even while Microsoft dominated. In the time I've been using Linux, since the mid-90s, there have only been two major breaks that come to mind: the transition from a.out to ELF and from libc4/5 to glibc, and the introduction of x86-64. In both cases it was possible to maintain compatibility both forwards and backwards with appropriate userspace.

                  Perhaps the more appropriate comparison, and why I started using Gentoo in the first place, was the dropping of 486(/586) CPU support when distributions switched to i686 as the baseline. I used a 150 MHz 486 for a long time! i[2-6]86 and x86-64 were hard ABI breaks, which were required to properly use the new hardware, new CPU modes, etc., even though backward compatibility was maintained. Since i686 (for 32-bit) and x86-64 (for 64-bit) became official ABIs, all new extensions have been treated as just that: extensions. Software has generally provided alternative code paths when they weren't available in order to maintain forward and backward compatibility; hard breaks haven't been needed.

                  It would have made sense to introduce new versions mandating particular extensions, so that new CPUs from Intel and AMD would give forward compatibility with new software and we wouldn't be in the situation where Intel today sells CPUs which do not meet the "current" feature version. Of course, that didn't happen. So now we have a situation where the software landscape is in danger of dividing the community between those able to acquire new hardware to replace their "obsolete" systems and those who are unable or unwilling to do so.
                  Last edited by s_j_newbury; 09 March 2021, 09:12 AM.



                  • #10
                    Originally posted by TemplarGR
                    I have been using Archlinux for more than 13 years, and one of my main complaints was that, for such a rolling-release/bleeding-edge distro, they were always very conservative with their compilation options. I don't understand why anyone would require running the latest and greatest on such obsolete hardware. Those who for whatever reason are still using CPUs older than Intel's Nehalem (which is a dinosaur by modern standards) can pick another, more stable distribution for their old hardware. Even for Archlinux, nothing stops those requiring older architecture support from taking upstream Arch and creating their own repositories with legacy support. I don't see why the vast majority of Archlinux users should suffer reduced performance just so people with hardware more than 12 years old aren't left behind. This is absurd. This is not Debian we are talking about, this is Archlinux.
                    Funny enough, I fully agree with you. Back in the day, when I was a mod on the Archlinux IRC channel, I used to help people srcpac the distro post-install to "omg optimize" all the packages easily, LOL.

