RHEL9 Likely To Drop Older x86_64 CPUs, Fedora Can Better Prepare With "Enterprise Linux Next"


  • sfx2000
    replied
    Originally posted by pal666 View Post
    this circus of chasing slightly less ancient hardware is such a waste of resources. one would think redhat has engineers who are able to defer binding of codegen options to host cpu to install time or run time.
    Not ancient HW - actually there are a fair number of current Intel CPUs that do not support AVX/AVX2.

    This is fairly current - the Intel Pentium Gold G5420 is "Coffee Lake", which is a Skylake derivative...

    https://ark.intel.com/content/www/us...-3-80-ghz.html

    And as I mentioned earlier - there are a lot of edge appliances that are on current Silvermont/Airmont boxes, mostly focused on SDN deployments.

    So for a feature set - maybe Westmere... even though many actual legacy Westmeres have been pulled out of service.



  • skeevy420
    replied
    Originally posted by jelabarre59 View Post
    If they do that, I'd have to throw out 80% of my hardware. And I'm not in the market to buy any more.

    I guess I made the mistake all those years ago of not becoming Amish...
    Me too. I intentionally buy old hardware, like 5+ years old. It's cheaper; it's tried and true, and it has had enough time to get most of the bugs worked out. And in the case of x86_64, as long as it runs above 3.3 GHz, is 10 years old or younger, and has at least 8 threads available, it's good enough to play modern 1080p games if the GPU is new enough.

    My exception is that I'll wait a year on a GPU purchase... but, IMHO, a one-year-old GPU is damn near legacy on GPU time frames. That's mainly due to a combination of being a Linux user and knowing about AMD's Fine Wine strategy.



  • jelabarre59
    replied
    If they do that, I'd have to throw out 80% of my hardware. And I'm not in the market to buy any more.

    I guess I made the mistake all those years ago of not becoming Amish...



  • pegasus
    replied
    Originally posted by Space Heater View Post
    Where can we follow the progress of this feature in gcc? Have patches been submitted for review?
    I remember reading about it either here or on some mailing list, but I'm unable to find it now. It's extremely un-googleable and I've forgotten what the project was called...



  • Jabberwocky
    replied
    Originally posted by pegasus View Post
    Guys,
    By the time RHEL9 becomes a thing, gcc will fully support "fat" binaries with multiple optimized versions of the same function and switching between them at runtime.
    So I see no issue whatsoever in building whole distros with binaries that include all possible optimizations, from generic to avx512 and everything in between.
    I also would like to know where development is taking place.

    Last time I checked the idea was dropped:

    (Linked: a Stack Overflow question asking whether it is possible, preferably using gcc, to compile a 'fat' binary for Linux covering multiple architectures; the answer notes that, as far as the author knows, so-called "fat binaries" - executable files containing machine code for multiple systems - are only really used on Apple platforms.)



  • cybertraveler
    replied
    I can't remember the name of it right now, but I know there's a compiler tech which compiles multiple versions of functions. Each version uses different CPU instructions, so you get both the performance benefit and the compatibility. The downside is that the libs/executables will be larger. Also, I guess this would make inlining the affected functions difficult or impossible.
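
    That sounds like GCC's function multi-versioning, exposed through the target_clones attribute: the compiler emits one clone of the function per listed target plus a generic "default" clone, and a glibc ifunc resolver picks the best one at load time for the running CPU. A minimal sketch, assuming GCC on x86_64 Linux (the function dot() is just an illustrative example):

        /* fmv.c - function multi-versioning sketch.
         * Each listed target gets its own clone of dot(); "default" is the
         * portable fallback. An ifunc resolver chooses a clone once, at
         * load time, based on the host CPU.
         * Build: gcc -O2 fmv.c -o fmv
         */
        #include <stdio.h>

        __attribute__((target_clones("avx2", "sse4.2", "default")))
        double dot(const double *a, const double *b, int n)
        {
            double sum = 0.0;
            for (int i = 0; i < n; i++)
                sum += a[i] * b[i];   /* auto-vectorized differently per clone */
            return sum;
        }

        int main(void)
        {
            double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
            double b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
            printf("dot = %f\n", dot(a, b, 8));
            return 0;
        }

    Only the cloned functions grow in size, which matches the downside mentioned above, and because calls go through the ifunc pointer, inlining those functions across translation units is indeed off the table.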



  • skeevy420
    replied
    Originally posted by Sonadow View Post
    That kind of stinks.

    I have a dual Xeon workstation that belongs to the Ivy Bridge era. It still runs rings around today's consumer hardware by virtue of its 192GB memory and dual processor setup, and is only beaten by the Core X and Threadripper HEDT families.

    In addition, I have three low-cost Apollo Lake and Gemini Lake laptops that I bought from China at extremely low prices, and I fully intend to get a few more as spares due to their prices. They are excellent for daily computing such as email, writing, light Wiresharking / tcpdump, and Chrome Remote Desktop (only for controlling another person's computer, not the other way round, since it does not work on Wayland). Neither Lake has AVX. Those laptops are currently running Debian 10 on Wayland.

    If Fedora is going to mandate AVX or AVX2, it's going to cut out a whole bunch of hardware that is fully capable of running a modern Linux distribution without performance issues - especially my dual Xeon.
    Yeah, I have dual Xeon Westmeres myself.

    There's a reason I picked AES as the cutoff line: it was the feature introduced with my processors.
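
    For a cutoff like that, the host's feature set can be probed at runtime with GCC's __builtin_cpu_supports() builtin (the same cpuid-backed machinery the multi-versioning dispatch uses). A small sketch, assuming GCC or clang on x86_64; the file name cpucheck.c is just for illustration:

        /* cpucheck.c - probe the ISA baselines being discussed.
         * __builtin_cpu_supports() consults the compiler runtime's
         * cpuid-based feature cache; __builtin_cpu_init() makes sure
         * that cache has been initialized.
         * Build: gcc -O2 cpucheck.c -o cpucheck
         */
        #include <stdio.h>

        int main(void)
        {
            __builtin_cpu_init();
            printf("aes-ni: %s\n", __builtin_cpu_supports("aes")  ? "yes" : "no");
            printf("avx:    %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
            printf("avx2:   %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
            return 0;
        }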



  • ssokolow
    replied
    Originally posted by pal666 View Post
    i never said "compiled at install time". i said "bound" (options to arch), leaving mechanism unspecified. one way is to download relevant build.
    My mistake. Still, that multiplies the compilation time and storage space on Red Hat's end every time they update a package, rather than just discussing it once for all packages periodically.

    Originally posted by pal666 View Post
    first, with a working compiler the result should be identical.
    Theory and practice are two different things. For them to be the same requires a level of discipline and quality control from the upstream developers which you just can't rely on in the real world.

    Originally posted by pal666 View Post
    second, they can easily designate some variant as primary and others as "use at your own risk (or pay for it)".
    In which case, the question reverts to "Why doesn't Red Hat think this is a profitable way to burn their CPU cycles and storage budget?"

    Originally posted by pal666 View Post
    compiling is essentially free (it's done once per millions of downloads). storage is cheap and sending doesn't depend on the number of variants. and nothing demands "countless" variants; any number greater than 1 is better than 1.
    OK, it should be easy for you to convince Red Hat of that, then.

    Originally posted by pal666 View Post
    no, i didn't argue for that. it can be used as an implementation, but i'm afraid quality of implementation would suffer. same reason why jit sucks - (optimizing) compilation takes time.
    Again, my mistake... but WebAssembly isn't JIT. JIT has to optimize quickly enough to keep the program from juddering. WebAssembly compilation is meant to sit behind a potential progress bar at install or first-run time.



  • pal666
    replied
    Originally posted by ssokolow View Post
    Let's assume that companies were willing to run "Gentoo, but with a Red Hat support contract behind it", including all the "wait for it to compile at install time"
    i never said "compiled at install time". i said "bound" (options to arch), leaving mechanism unspecified. one way is to download relevant build.
    Originally posted by ssokolow View Post
    and "QA combinatorial explosion" implications of that.
    first, with a working compiler the result should be identical. second, they can easily designate some variant as primary and others as "use at your own risk (or pay for it)".
    Originally posted by ssokolow View Post
    Supporting a forest of them could be an even bigger waste of resources because compiling, storing, and sending countless variations on the same binary isn't free.
    compiling is essentially free (it's done once per millions of downloads). storage is cheap and sending doesn't depend on the number of variants. and nothing demands "countless" variants; any number greater than 1 is better than 1.
    Originally posted by ssokolow View Post
    Though, to be fair to you, you did just basically argue for compiling the entire OS to WebAssembly, so it's not 100% out of the question.
    no, i didn't argue for that. it can be used as an implementation, but i'm afraid quality of implementation would suffer. same reason why jit sucks - (optimizing) compilation takes time.



  • ssokolow
    replied
    Originally posted by pal666 View Post
    this circus of chasing slightly less ancient hardware is such a waste of resources. one would think redhat has engineers who are able to defer binding of codegen options to host cpu to install time or run time.
    Let's assume that companies were willing to run "Gentoo, but with a Red Hat support contract behind it", including all the "wait for it to compile at install time" and "QA combinatorial explosion" implications of that.

    Supporting a forest of them could be an even bigger waste of resources because compiling, storing, and sending countless variations on the same binary isn't free.

    Though, to be fair to you, you did just basically argue for compiling the entire OS to WebAssembly, so it's not 100% out of the question.

