Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements


  • #61
    Well, I doubt that there are big performance gains from recompiling all packages with the x86-64-v2 baseline. Please see the thread on the Arch Linux GitLab, most importantly Filipe Laíns' post there. SSE3/4 instructions are quite specialized and only some programs see big speedups from using them. Those programs mostly take advantage of them already, either with hand-coded assembly or with dedicated code branches using C intrinsics (see the sketch at the end of this post). There is, or soon will be, support for loading libraries optimized for the microarchitecture levels, which would suffice for many use cases.

    I did not find reliable benchmarks on this topic. There is this Phoronix article: https://www.phoronix.com/scan.php?pa...0900k-compiler , but in it -march=native is tested together with -O3. Even so, these parameters usually bring only a modest performance gain, on the order of <10%.

    On the other hand, GNU/Linux has traditionally had good support for older hardware. Many computers were converted from outdated Windows versions to GNU/Linux. It is relatively easy for us because almost all drivers are open source and adapting them to newer kernel versions does not require vendor support. It would be a pity to throw all of this out of the window(s).

    In my household there are two computers without support for x86-64-v2. They are working really well for the tasks they are used for (web browsing, remote education, word processing, etc.) and it seems that there is currently no need to replace them. One of them is an AMD K10 desktop with 8GB RAM, the other is a Dell laptop with a Core 2 Duo (IIRC) and 4GB RAM. Both have SSDs.

    According to Wikipedia, the last processors without support for x86-64-v2 are from the AMD Bobcat family, which was replaced by AMD Jaguar in 2013-2014. So it is not as if only 12-year-old hardware were affected here.
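
    As a rough illustration of those "dedicated code branches" (my own sketch, not from the GitLab thread): with GCC/Clang function multi-versioning a single binary keeps the plain x86-64 baseline and additionally ships SSE4.2/AVX2 clones of the hot function, chosen once at load time. The glibc-hwcaps library loading mentioned above does the same thing at whole-library granularity.

    Code:
    // A sketch of function multi-versioning, assuming GCC or Clang on x86-64
    // (the function name and sizes here are made up for illustration).
    // target_clones emits a baseline x86-64 build of sum() plus SSE4.2 and AVX2
    // clones, and a resolver picks one at program load based on the running CPU,
    // so the same binary still works on pre-v2 hardware. Build with -O3 so the
    // loop in each clone actually gets vectorized.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    __attribute__((target_clones("default", "sse4.2", "avx2")))
    long long sum(const int* v, std::size_t n) {
        long long acc = 0;
        for (std::size_t i = 0; i < n; ++i)   // simple reduction, auto-vectorizable
            acc += v[i];
        return acc;
    }

    int main() {
        std::vector<int> v(1 << 20, 3);
        std::printf("%lld\n", sum(v.data(), v.size()));
    }

    This is the kind of per-function dispatch that performance-sensitive projects already ship, which is why a blanket rebuild of every package may not buy much.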

    Comment


    • #62
      To be honest, the boxes I have that are old enough for me not to care about losing support are usually "idle" enough that compiling from source would probably be a viable option. Just how realistic is taking Arch to a build-from-source setup, rather than blindly installing binary packages? (I saw a couple of comments in the thread regarding this...)

      Comment


      • #63
        Originally posted by Mat2 View Post
        On the other hand, GNU/Linux has traditionally had good support for older hardware. Many computers were converted from outdated Windows versions to GNU/Linux. It is relatively easy for us because almost all drivers are open source and adapting them to newer kernel versions does not require vendor support. It would be a pity to throw all of this out of the window(s).
        You raise a very important point that has not yet been mentioned. Folks who are new to Linux, are curious about it, want to give it a try, etc. are not willing to re-format their primary PC. Instead, they pull the old PC out of the closet, and use that to experiment with. Sure there is desktop virtualization, bootable thumb drives, etc. but these are not things a newbie will be familiar with using. As a community, we ought to be making it easier for newbies to install and run Linux, not more difficult.

        Comment


        • #64
          Originally posted by Mat2 View Post
          According to Wikipedia, the last processors without support for x86-64-v2 are from the AMD Bobcat family, which was replaced by AMD Jaguar in 2013-2014. So it is not as if only 12-year-old hardware were affected here.
          This headline had me wondering about my Athlon 5350 "Kabini" NAS, but I can confirm the old AM1 boards are indeed x86-64-v2 capable. It does pain me to think of converting my old Phenom II X6 1090T build to something like Debian instead of Arch. The venerable K10 only supports up to the SSE4a instruction set. That box has compiled a lot of code since 2010.

          Comment


          • #65
            Originally posted by Mat2 View Post
            On the other hand, GNU/Linux has traditionally had good support for older hardware. Many computers were converted from outdated Windows versions to GNU/Linux. It is relatively easy for us because almost all drivers are open source and adapting them to newer kernel versions does not require vendor support. It would be a pity to throw all of this out of the window(s).
            Originally posted by torsionbar28 View Post
            You raise a very important point that has not yet been mentioned. Folks who are new to Linux, are curious about it, want to give it a try, etc. are not willing to re-format their primary PC. Instead, they pull the old PC out of the closet, and use that to experiment with. Sure there is desktop virtualization, bootable thumb drives, etc. but these are not things a newbie will be familiar with using. As a community, we ought to be making it easier for newbies to install and run Linux, not more difficult.
            There are distros for that. However, per Arch Linux's wiki:

            "It is targeted at the proficient GNU/Linux user, or anyone with a do-it-yourself attitude who is willing to read the documentation, and solve their own problems."

            Comment


            • #66
              Originally posted by Mat2 View Post
              Well, I doubt that there are big performance gains from recompiling all packages with the x86-64-v2 baseline. Please see the thread on the Arch Linux GitLab, most importantly Filipe Laíns' post there. SSE3/4 instructions are quite specialized and only some programs see big speedups from using them. Those programs mostly take advantage of them already, either with hand-coded assembly or with dedicated code branches using C intrinsics. There is, or soon will be, support for loading libraries optimized for the microarchitecture levels, which would suffice for many use cases.
              The biggest gain/improvement would be the 128-bit atomics: with those you can implement several lock-free algorithms without needing multiple code paths for fallbacks (see the sketch below). Yeah, the SSE stuff is quite useless outside of specialised vector maths.
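
              As a rough sketch of that point (my own example, not from the post): a 16-byte version-tagged pointer updated with a single double-width compare-and-swap. x86-64-v2 implies CMPXCHG16B, so built with -mcx16 (or -march=x86-64-v2) the 16-byte atomic can be lock-free; on CPUs without that instruction the runtime has to fall back to locking.

              Code:
              // Sketch only: a version-tagged pointer swapped atomically in one 16-byte CAS.
              // Build: g++ -O2 -mcx16 example.cpp -latomic   (-latomic is needed with GCC;
              // whether the 16-byte CAS is inlined or routed through libatomic varies by
              // compiler and version).
              #include <atomic>
              #include <cstdint>
              #include <cstdio>

              struct TaggedPtr {
                  void*    ptr;   // payload pointer
                  uint64_t tag;   // counter bumped on every update, defeats ABA
              };

              static std::atomic<TaggedPtr> g_head{TaggedPtr{nullptr, 0}};

              // Publish a new pointer only if nobody raced us since we read the head.
              bool publish(void* p) {
                  TaggedPtr expected = g_head.load(std::memory_order_acquire);
                  TaggedPtr desired{p, expected.tag + 1};
                  return g_head.compare_exchange_strong(expected, desired,
                                                        std::memory_order_acq_rel,
                                                        std::memory_order_acquire);
              }

              int main() {
                  int x = 42;
                  std::printf("lock-free: %d\n", (int)g_head.is_lock_free());
                  std::printf("published: %d\n", (int)publish(&x));
              }

              With a v2 baseline, packages could assume that lock-free path unconditionally instead of carrying a locking fallback.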

              Comment


              • #67
                Originally posted by Mat2 View Post
                On the other hand, GNU/Linux has traditionally had good support for older hardware. Many computers were converted from outdated Windows versions to GNU/Linux. It is relatively easy for us because almost all drivers are open source and adapting them to newer kernel versions does not require vendor support. It would be a pity to throw all of this out of the window(s).
                Not Arch Linux though. Arch was always built for modern CPUs: first i586 and later i686.

                Comment


                • #68
                  Originally posted by horstderheld View Post
                  Not Arch Linux though. Arch was always built for modern CPUs: first i586 and later i686.
                  The Arch Linux project was started in 2002, while i586 was released in 1993 [1]. I did not find information on when Arch started to require i586/i686. I think the useful lifetime of computers was much shorter then (and running a supported OS with security updates was not as important). Even so, most other distributions at the time still worked on older processors; Arch was an exception.

                  The benefits of requiring i686 appear to have been clearer back then than the vague (and unproven) claims of better performance discussed in this thread.

                  [1] The Pentium MMX, which is also i586, was released in 1997, so in 2002 these processors were apparently still quite viable.

                  Originally posted by torsionbar28 View Post
                  You raise a very important point that has not yet been mentioned. Folks who are new to Linux, are curious about it, want to give it a try, etc. are not willing to re-format their primary PC. Instead, they pull the old PC out of the closet, and use that to experiment with. Sure there is desktop virtualization, bootable thumb drives, etc. but these are not things a newbie will be familiar with using. As a community, we ought to be making it easier for newbies to install and run Linux, not more difficult.
                  I was thinking more of installing Linux on existing computers running obsolete Windows versions. Back in 2014, when Windows XP was EOLed, there were articles like this one: https://www.zdnet.com/article/how-to...on-your-xp-pc/ , with the tagline "Installing Linux Mint on an XP PC is something any Windows power user can do." I myself converted one computer from Windows XP to Linux after Firefox stopped providing updates for it.

                  Comment


                  • #69
                    Originally posted by torsionbar28 View Post
                    You raise a very important point that has not yet been mentioned. Folks who are new to Linux, are curious about it, want to give it a try, etc. are not willing to re-format their primary PC. Instead, they pull the old PC out of the closet, and use that to experiment with. Sure there is desktop virtualization, bootable thumb drives, etc. but these are not things a newbie will be familiar with using. As a community, we ought to be making it easier for newbies to install and run Linux, not more difficult.
                    This is a non-issue. There can be distros for older computers. As a community, we REGULAR Linux users do not owe it to anyone to run unoptimized software just in case someone wants to try Linux on a museum piece with the latest rolling-release distro...

                    Comment


                    • #70
                      I'll try to run some benchmarks with Phoronix Test Suite later. What would be the best benchmarks to run?

                      Comment
