Arch Linux Looking To Employ LTO By Default, Possibly Raise x86-64 Requirements

  • #31
    Originally posted by carewolf
    Are you sure that is right?
    Code:
    --help: error while loading shared libraries: --help: cannot open shared object file
    Works fine here.



    • #32
      Originally posted by s_j_newbury
      It could have made sense to introduce new versions that mandated particular extensions, so that new CPUs from Intel and AMD would give forward compatibility with new software, and we wouldn't be in the situation where Intel today sells CPUs which do not meet the "current" feature version. Of course, that didn't happen. So now we have a situation where the software landscape is in danger of dividing the community between those able to acquire new hardware to replace their "obsolete" systems and those who are unable or unwilling to do so.
      Adding to this, the past decade has not seen the same performance uplift we had in decades past. For example, we started the 1990s with a 486 @ 25 MHz and 4 MB of RAM. We ended the 1990s with 800+ MHz P3 and Athlon, with 128 MB of RAM. The performance delta was orders of magnitude; a 1990 PC was totally obsolete and unusable in Y2K. It's a similar story over the next decade, from 2000 to 2010.

      We then started 2010 with quad-core x86-64 @ 3 GHz and 4 or 8 GB of RAM. A decade later, in 2020, most mainstream users are on a quad-core x86-64 @ 3 GHz with 8 GB of RAM. No change! Obviously there were many improvements in that time: IPC uplift, process node shrinks, DDR3->DDR4, SSDs, etc. But the fact remains that a mid-to-high-end desktop from 2010 is still a perfectly usable machine today for mainstream productivity tasks.

      While I'm not an Arch user, I'm simply making the case that here in 2021 a decade-old PC can absolutely be a perfectly usable machine, and this is a fairly unique phenomenon in the history of desktop computing that the distros should keep in mind. Perhaps I'm biased, but I personally have two machines with an Opteron 1389 (quad core, 2.9 GHz), 8 GB of ECC memory, and Intel X25-E SSDs. They can run Win10 or the latest Linux distros and are snappy and responsive in daily use. They're circa-2009 builds that are perfectly usable machines today; however, they have SSE4a but not SSE4.1 or 4.2. Let's apply a little intelligence when deciding what to axe, that's all I'm saying.
      Last edited by torsionbar28; 09 March 2021, 12:59 PM.



      • #33
        Originally posted by kokoko3k
        What requires more RAM?
        There are cases where inlining of functions can increase the resulting code's memory requirements. Those increases are typically modest (at worst), although pathological examples likely exist. In general, the other optimizations that LTO enables more than make up for it (although, again, there are almost certainly counterexamples).
        Last edited by CommunityMember; 09 March 2021, 12:34 PM.
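        To make that inlining trade-off concrete, here is a minimal, hypothetical two-file sketch (the file names, values, and build commands are illustrative, not taken from this thread): with -flto the compiler can see lookup()'s body from the other translation unit, inline it at the call site, and fold the call away entirely, trading a little potential code duplication for the removed call overhead and follow-on optimizations.
        Code:
        /* util.c -- hypothetical helper living in its own translation unit */
        int lookup(int key)
        {
            static const int table[4] = { 7, 11, 13, 17 };   /* illustrative values */
            return table[key & 3];
        }

        /* main.c -- with plain -O2 this emits a real call to lookup(); with
         * -O2 -flto the link-time compilation can inline the body across the
         * file boundary and fold lookup(2) down to the constant 13. */
        extern int lookup(int key);

        int main(void)
        {
            return lookup(2);
        }

        /* Illustrative build commands:
         *   gcc -O2       -c util.c main.c && gcc -O2       util.o main.o -o nolto
         *   gcc -O2 -flto -c util.c main.c && gcc -O2 -flto util.o main.o -o lto
         * Compare the results with `size nolto lto`. */
        In a toy case like this, LTO tends to shrink the binary rather than grow it; the memory concern raised in the quoted question is mostly about larger programs, where cross-unit inlining duplicates bigger function bodies and where the LTO link step itself needs noticeably more RAM.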



        • #34
          Originally posted by schmidtbag
          What is the percent increase in performance with LTO...
          If only there were a trusted site which generated benchmarks... https://www.phoronix.com/scan.php?pa...0-lto-tr&num=3

          Those that forget history should go read the archives.



          • #35
            Originally posted by kpedersen
            Perhaps the Arch community isn't quite large enough to maintain 4 flavours (modern-only 64-bit Intel, older 64-bit Intel, 32-bit Intel, ARM)
            Never mind the community, it's a large extra burden on the developers.

            A lot of people don't realise that Arch has only 26 devs, all of whom work on it as a hobby project in their free time. Compare that to the other major distros it regularly competes with, which sometimes have hundreds of developers, often full-time paid staff.



            • #36
              Originally posted by ms178

              With v2 as a baseline there is no problem even with these Pentiums and Celerons; off the top of my head, only older Intel and AMD CPUs would be left behind by that baseline (Core 2, Phenom, and Phenom II don't support SSE 4.2)
              There are a number of processors released under the Atom umbrella, still being used in new systems not all that many years ago, that will not make the cut.

              What is more useful for measuring impact is not so much when capable processors started being sold, but when processors that are not capable *stopped* being sold. By that measure, some people with systems less than five years old could be affected. Those numbers may be small enough to be insignificant overall, or large enough to really hurt the community, but since no one collects hardware data we will not know until people start to report failures.
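              For anyone wondering whether a given machine clears that bar, here is a small sketch using GCC/Clang's __builtin_cpu_supports(); it only probes the SSE-related portion of the x86-64-v2 feature set (CMPXCHG16B and LAHF/SAHF are not checked), and the output wording is made up for illustration.
              Code:
              #include <stdio.h>

              /* Print one line per feature and pass the result through. */
              static int report(const char *name, int have)
              {
                  have = (have != 0);   /* normalise: the builtin returns a nonzero value, not necessarily 1 */
                  printf("%-8s %s\n", name, have ? "yes" : "NO");
                  return have;
              }

              int main(void)
              {
                  __builtin_cpu_init();   /* initialise CPU feature data for the builtins */

                  int ok = 1;
                  /* __builtin_cpu_supports() takes literal feature-name strings. */
                  ok &= report("popcnt", __builtin_cpu_supports("popcnt"));
                  ok &= report("sse3",   __builtin_cpu_supports("sse3"));
                  ok &= report("ssse3",  __builtin_cpu_supports("ssse3"));
                  ok &= report("sse4.1", __builtin_cpu_supports("sse4.1"));
                  ok &= report("sse4.2", __builtin_cpu_supports("sse4.2"));

                  printf("CPU %s the SSE portion of x86-64-v2\n",
                         ok ? "meets" : "does NOT meet");
                  return 0;
              }
              On glibc 2.33 or newer, running the dynamic loader with --help also lists which x86-64-v2/v3/v4 glibc-hwcaps levels the running CPU supports, which is the quicker check when it is available.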



              • #37
                Originally posted by torsionbar28

                Adding to this, the past decade has not seen the same performance uplift we had in decades past. For example, we started the 1990s with a 486 @ 25 MHz and 4 MB of RAM. We ended the 1990s with 800+ MHz P3 and Athlon, with 128 MB of RAM. The performance delta was orders of magnitude; a 1990 PC was totally obsolete and unusable in Y2K.

                For comparison, we started 2010 with quad-core x86-64 @ 3 GHz and 8 GB of RAM. A decade later, in 2020, most mainstream users are on a quad-core x86-64 @ 3 GHz with 8 GB of RAM. No change! Obviously there were many improvements in that time: IPC uplift, process node shrinks, DDR3->DDR4, SSDs, etc. But the fact remains that a mid-to-high-end desktop from 2010 is still a perfectly usable machine today for mainstream productivity tasks.

                While I'm not an Arch user, I'm simply making the case that here in 2021 a decade-old PC can absolutely be a perfectly usable machine, and this is a fairly unique phenomenon in the history of desktop computing that the distros should keep in mind. Perhaps I'm biased, but I personally have two machines with an Opteron 1389 (quad core, 2.9 GHz), 8 GB of ECC memory, and Intel X25-E SSDs. They can run Win10 or the latest Linux distros and are snappy and responsive in daily use. They're circa-2009 builds that are perfectly usable machines today; however, they have SSE4a but not SSE4.1 or 4.2. Let's apply a little intelligence when deciding what to axe, that's all I'm saying.
                Thank you for adding this; it was on my mind when I wrote the above but I didn't make it explicit. As it happens, one of my "better" machines is a Phenom II 965 from the same era. I happen to use it as a high-performance router, so it's not directly relevant to the discussion, but it isn't a *bad* desktop machine, and undervolted it runs very efficiently and reliably.

                If you move your baseline even earlier, the order-of-magnitude difference is even more stark: in 1984 my computer had 32 KB of RAM, a 2 MHz CPU, and cassette storage, and that's only six years before your earliest example! It really is a completely different era now. Microelectronics is a mature technology and industry, and progress has slowed for the mainstream even if the highest extremes continue to advance at a financial premium. The cost-is-no-limit level of technology has never really been directly relevant to what people buy, outside of the biggest mega-corporations, government, and higher education.



                • #38
                  I guess this means I have to start looking for a new distro, as I have a 2007 Xeon. If this passes, I imagine a few other distros will do the same; I'm just hoping they announce it soon after. I wouldn't want to move to another distro only to find I have to move again in the short term, after all.



                  • #39
                    I find it quite funny how keen people are to have general-purpose distributions chasing the ISA tail. I've always considered one of the main selling points of a general-purpose distribution to be that it works for just about everybody, on any (sensible) machine you could throw it at. You could always opt in to the latest ISA extensions and microarchitectural optimizations by using a specialist or source-based distro. Why has the attitude on this suddenly switched? Why is it now niche not to be on the bleeding edge?



                    • #40
                      Originally posted by szymon_g

                      I agree with the other part of your post, but why would you use ZFS? Btrfs is perfectly usable (well, maybe not in the weird RAID configurations)
                      I've used ZFS for my data disks for around seven years now and I prefer it over other filesystems. Being able to access those volumes easily when running non-default kernels is something I've needed from time to time over the years.

