Fedora Linux Grappling With New vs. Old Intel Hardware Support For Compute Stack

  • Cryio
    Junior Member
    • Dec 2021
    • 29

    #11
    I am not using the Ice Lake chip in my Surface Pro 7 for anything compute related, but I can't help being amused that Intel dropped support so soon for a product released in 2019 (and still being sold through 2020), while Broadwell from 2015 was supported for so long.

    At least Mesa continues to provide GPU driver support, and the media driver is also still alive and well.

    Comment

    • skeevy420
      Senior Member
      • May 2017
      • 8633

      #12
      Originally posted by SofS View Post
      Why not use meta packages for each case? It should be as simple as installing the proper group of packages, unless there are some complex dependencies.
      IMHO, this, AMDVLK, that Spotify thingy, Android phones, and a lot more show that there needs to be some sort of law that forces larger companies to guarantee software support for a decade and that they need to consider some form of stable API from the beginning. Just because something is more than 4 months old doesn't mean it's legacy and defunct.

      What's kind of funny is that AMD could have led the way here when deciding how to handle legacy AMDVLK. Instead of a good solution, AMD went with "just use the older version of AMDVLK if your hardware isn't supported by the newer AMDVLK." No encouraging distributions to create an "amdvlk-legacy" package to make that easier to do. No refactoring the code to stabilize the API so this doesn't happen again with some future GPU architecture. Not even the lazy method of shipping all the AMDVLK versions at once and using a small AMD library to pick the appropriate one based on the detected hardware (a rough sketch of that idea is below).

      Nope, just use old software and hope we wrote it correctly to begin with. If you have an issue then you should buy a better supported GPU and pray we don't pull support for that GPU, too. Y'all shouldn't look too much into NVIDIA supporting CUDA with GPUs from before your RX 580 existed. Don't look too much into NVIDIA having regular and legacy drivers. There's nothing to see there. We have FSR3 and Frame Generation. It's Shiny.
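
      Just to make that last "lazy method" concrete, here's a rough Python sketch of the idea: read the GPU's PCI vendor/device IDs from sysfs and point the Vulkan loader at the matching ICD manifest through its VK_ICD_FILENAMES variable. The manifest paths and the "legacy" device list are made up for the example; this is a sketch of the concept, not anything AMD actually ships.

      import os
      import glob

      # Hypothetical manifest locations for a side-by-side install.
      CURRENT_ICD = "/usr/share/vulkan/icd.d/amd_icd64.json"         # current AMDVLK
      LEGACY_ICD  = "/usr/share/vulkan/icd.d/amd_icd64_legacy.json"  # imaginary "amdvlk-legacy"

      # PCI device IDs the legacy driver would still cover (example values only).
      LEGACY_DEVICE_IDS = {0x67df, 0x67ff}

      def detect_amd_device_ids():
          """Collect PCI device IDs of AMD (vendor 0x1002) GPUs from sysfs."""
          ids = set()
          for dev in glob.glob("/sys/class/drm/card?/device"):
              try:
                  with open(os.path.join(dev, "vendor")) as f:
                      vendor = int(f.read().strip(), 16)
                  with open(os.path.join(dev, "device")) as f:
                      device = int(f.read().strip(), 16)
              except OSError:
                  continue
              if vendor == 0x1002:
                  ids.add(device)
          return ids

      def pick_icd():
          """Return the ICD manifest the Vulkan loader should use."""
          if detect_amd_device_ids() & LEGACY_DEVICE_IDS:
              return LEGACY_ICD
          return CURRENT_ICD

      if __name__ == "__main__":
          # A wrapper script could simply export this before launching the app.
          print(f"VK_ICD_FILENAMES={pick_icd()}")

      Nothing fancy, just "look at the hardware, pick the driver", which is exactly the kind of glue AMD never bothered to ship.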

      Comment

      • skeevy420
        Senior Member
        • May 2017
        • 8633

        #13
        Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

        Especially for the latter case, is Fedora a realistic target? I assume those are predominantly RHEL / SUSE / Canonical / Debian.
        You're, unintentionally or not, highlighting the greater problem with Linux and operating systems in general: At what point is something a realistic target or not?

        Yesterday we all had the Annual Phoronix CentOS Freeloader Circlejerk, and today this has me wondering whether all those "freeloaders" should start "leeching" off the Civil Infrastructure Platform to wrangle it into SLTS distributions.

        Comment

        • Schalefer
          Junior Member
          • Feb 2023
          • 18

          #14
          Originally posted by Teggs View Post
          This is listing a ton of equipment that isn't past its usable lifecycle. I get Intel's financial issues, but it's a raw deal for a customer to be left with e-waste a few years before they should have been.
          For compute tasks on the iGPU, they are past their usable lifecycle. Who actually runs, e.g., LLMs on their 9th gen iGPU?

          Comment

          • pong
            Senior Member
            • Oct 2022
            • 316

            #15
            I agree, it seems like just having some meta package with options for old/new hardware would work.

            But beyond that, Intel has official container images with some of their runtime and devel stuff in them (OpenVINO / oneDNN and whatever they sit on top of), IIRC.

            So it should be "easy" for a very container-aware Fedora to facilitate / suggest an option where the right container is just pulled and "just works" for hosting compute workloads on top of the host's kernel.

            The main limitation would be controlling what to share from the host with the container (network, file system). And if the container's GPU code actually does anything involving graphics (which it usually would not for a compute-oriented runtime), sharing the graphical context with the host (OpenGL or whatever...) could be more complicated.

            But just for general compute there is a lot of convenience / isolation benefit from running the right container, which can install / use whatever packages it wants largely independently of the host OS.
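
            As a rough sketch of what that could look like, assuming one of Intel's published oneAPI images (the image name/tag and paths here are just examples, not something Fedora ships): hand the host's render nodes to a compute container via podman and let the runtimes inside it do the rest.

            import os
            import subprocess

            # Example image: an Intel oneAPI base image from a public registry (name/tag may differ).
            IMAGE = "docker.io/intel/oneapi-basekit:latest"

            cmd = [
                "podman", "run", "--rm",
                "--device", "/dev/dri",                    # expose the host GPU render/card nodes
                "-v", f"{os.getcwd()}/work:/work:Z",       # share a work directory (SELinux relabeled)
                IMAGE,
                "sycl-ls",                                 # list the compute devices the runtime can see
            ]

            subprocess.run(cmd, check=True)

            That covers the "general compute" case; the moment the container needs to present graphics back to the host, the sharing story gets messier, as noted above.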

            Comment

            • frantisekz
              Junior Member
              • Jul 2013
              • 6

              #16
              Originally posted by geerge View Post
              So, what does it take to get Battlemage running on Fedora 41? Is there a distro that has Battlemage running well with compute OOTB, with no pissing about?

              edit: Seems like the answer is Ubuntu 24.10. They release binaries as debs, validated on 24.04, but surely a newer kernel is required for Battlemage? Just when I thought I could yeet Ubuntu into the sun.
              For compute workloads, Fedora 42 would work if the change is accepted. For video/3D workloads, I'd expect that an updated Fedora 41 would work just fine.

              Comment

              • mrg666
                Senior Member
                • Mar 2023
                • 1065

                #17
                I would build a single distribution for the wide compatibility baseline, as is currently done, and make upgrade packages available for the kernel, libraries, and select applications at specific hardware levels (x86-64-v3, v2, etc.). If users want, they can do the upgrade.

                Comment

                • FeRD_NYC
                  Junior Member
                  • Aug 2010
                  • 21

                  #18
                  Originally posted by mrg666 View Post
                  I would build a single distribution for the wide compatibility baseline, as is currently done, and make upgrade packages available for the kernel, libraries, and select applications at specific hardware levels (x86-64-v3, v2, etc.). If users want, they can do the upgrade.
                  Problem is, to realize the maximum bang for your buck you want to compile everything to target the same hardware level as well, so that all of the binaries are optimized to take full advantage of all the available hardware capabilities. Installing JUST a kernel that's tuned for the latest, greatest CPUs doesn't do a whole lot of good, if everything else is still unoptimized and targeting a much broader set of capabilities. But any binaries that are optimized for the newer architecture will no longer be compatible with the more baseline, lowest-common-denominator platform (not to mention any users' machines that lack support for that optimized architecture).

                  It's somewhat akin to the 32-bit / 64-bit split. At some point you have to decide whether you're going to keep building everything 32-bit by default, so it can be run on any processor (but fails to make full use of a 64-bit CPU's available resources), or build it all targeting 64-bit so it has access to larger memory allocations and other more advanced hardware resources. If you build everything 64-bit, users with actual 32-bit processors are out of luck.

                  Linux went all-in on 64-bit builds early on, but maintained parallel distributions for 32-bit CPUs until the demand for them dwindled to practically none. Windows stayed 32-bit-by-default for an _incredibly_ long time, long past the point where anyone was really expecting to run any of that software on an actual 32-bit CPU anymore. (Heck, there's still plenty of 32-bit software out there on Windows.)

                  In that scenario, going all-in on 64-bit today is a no-brainer, because 32-bit processors have become functionally extinct. But finding the "sweet spot" hardware level to optimize your builds for, among all the different generations of 64-bit CPUs with their twisty matrices of capabilities, can be a tricky balancing act. Every bump up to a higher target architecture lets you build better-optimized, more performant software... but adds capability requirements that will leave some CPUs no longer able to run your binaries.
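
                  If you're curious which of those levels a given machine actually clears, the feature sets per level are published, so a quick check of /proc/cpuinfo is enough. A rough sketch (flag names as the kernel reports them; LZCNT shows up as "abm", and the exact lists here are from memory, so treat them as illustrative):

                  # Report the highest x86-64 microarchitecture level this CPU satisfies,
                  # based on the flags the kernel exposes in /proc/cpuinfo.
                  LEVELS = {
                      "x86-64-v2": {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"},
                      "x86-64-v3": {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma",
                                    "abm", "movbe", "xsave"},
                      "x86-64-v4": {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"},
                  }

                  def cpu_flags():
                      with open("/proc/cpuinfo") as f:
                          for line in f:
                              if line.startswith("flags"):
                                  return set(line.split(":", 1)[1].split())
                      return set()

                  def best_level():
                      flags = cpu_flags()
                      best = "x86-64-v1 (baseline)"
                      for level in ("x86-64-v2", "x86-64-v3", "x86-64-v4"):
                          if LEVELS[level] <= flags:   # all required flags present
                              best = level
                          else:
                              break                    # the levels are cumulative
                      return best

                  print(best_level())

                  (On a new enough glibc, running /lib64/ld-linux-x86-64.so.2 --help will also print which x86-64-v levels the dynamic loader considers supported on the current machine.)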

                  Comment

                  • mrg666
                    Senior Member
                    • Mar 2023
                    • 1065

                    #19
                    Originally posted by FeRD_NYC View Post

                    Problem is, to realize the maximum bang for your buck you want to compile everything to target the same hardware level as well, so that all of the binaries are optimized to take full advantage of all the available hardware capabilities. Installing JUST a kernel that's tuned for the latest, greatest CPUs doesn't do a whole lot of good, if everything else is still unoptimized and targeting a much broader set of capabilities. But any binaries that are optimized for the newer architecture will no longer be compatible with the more baseline, lowest-common-denominator platform (not to mention any users' machines that lack support for that optimized architecture).

                    It's somewhat akin to the 32-bit / 64-bit split. At some point you have to decide whether you're going to keep building everything 32-bit by default, so it can be run on any processor (but fails to make full use of a 64-bit CPU's available resources), or build it all targeting 64-bit so it has access to larger memory allocations and other more advanced hardware resources. If you build everything 64-bit, users with actual 32-bit processors are out of luck.

                    Linux went all-in on 64-bit builds early on, but maintained parallel distributions for 32-bit CPUs until the demand for them dwindled to practically none. Windows stayed 32-bit-by-default for an _incredibly_ long time, long past the point where anyone was really expecting to run any of that software on an actual 32-bit CPU anymore. (Heck, there's still plenty of 32-bit software out there on Windows.)

                    In that scenario, going all-in on 64-bit today is a no-brainer, because 32-bit processors have become functionally extinct. But finding the "sweet spot" hardware level to optimize your builds for, among all the different generations of 64-bit CPUs with their twisty matrices of capabilities, can be a tricky balancing act. Every bump up to a higher target architecture lets you build better-optimized, more performant software... but adds capability requirements that will leave some CPUs no longer able to run your binaries.
                    It seems to me that x86-64-v3 is the widest compatible hardware level that still makes sense: it misses only AVX-512, but it provides compatibility back to the Haswell architecture (i.e. about 11 years back). I target my kernel builds at that level, knowing that I don't even have AVX-512 consistently across all the CPUs I have.

                    32-bit is still important, I mean the compatibility layer, mainly for games running in Steam and Wine. I am looking forward to the day when Steam drops its 32-bit dependency so that I can build my kernel pure 64-bit and remove all 32-bit libraries. But that is just my OCD side, I guess. Otherwise, it does not hurt much to keep 32-bit emulation capability in the kernel.
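
                    If anyone wants to check whether their current kernel even carries that compat layer, the relevant option is CONFIG_IA32_EMULATION. A tiny sketch, assuming the config is exposed via /proc/config.gz (needs CONFIG_IKCONFIG_PROC) or the usual /boot/config-<release> file:

                    import gzip
                    import os
                    import platform

                    def kernel_config_lines():
                        """Yield kernel config lines from /proc/config.gz or /boot/config-<release>."""
                        if os.path.exists("/proc/config.gz"):
                            with gzip.open("/proc/config.gz", "rt") as f:
                                yield from f
                        else:
                            path = f"/boot/config-{platform.release()}"
                            if os.path.exists(path):
                                with open(path) as f:
                                    yield from f

                    has_ia32 = any(line.startswith("CONFIG_IA32_EMULATION=y")
                                   for line in kernel_config_lines())
                    print("32-bit compat (CONFIG_IA32_EMULATION):",
                          "enabled" if has_ia32 else "not found")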

                    Comment
