Linux Foundation Launches The High Performance Software Foundation

  • Linux Foundation Launches The High Performance Software Foundation

    Phoronix: Linux Foundation Launches The High Performance Software Foundation

    Back at Supercomputing 23, the Linux Foundation announced its intent to form the High Performance Software Foundation to help advance open-source software for high performance computing (HPC). The Linux Foundation is now using ISC 24 this week in Hamburg, Germany to announce that the High Performance Software Foundation has launched...

  • #2
    Finally doing something with all the money it gets from its members, instead of just wasting it on the CEO and the rest of the leadership like Mozilla does!
    Since it doesn't do anything for desktop Linux anyway.

    • #3
      Never heard of the Spack package manager before, but after going to their website it sounds like Spack is the Flatpak or Snap of supercomputers. It is interesting that Spack labels itself as a package manager for supercomputers, Linux, and macOS. That’s curious. They seem to be saying that Spack is not solely relegated to supercomputers. Yes, obviously the vast majority of supers run on Linux, but I don’t know of a single one that runs on macOS. I’m curious to hear from anyone who has used Spack and could compare and contrast it with Flatpak and Snap.

      • #4
        Lawrence Livermore National Laboratory (LLNL) -- OpenZFS. They do more than that, but that's what I know them for.

        • #5
          Originally posted by Danny3 View Post
          Since it doesn't do anything for desktop Linux anyway.
          While much of what the HPC/hyperscaler customers require does not have an immediate use case on the desktop, sometimes there are lessons learned (the good, the bad, and the ugly) that trickle down to desktop users once the desktop use cases catch up to the HPC ones (who would have thought twenty-five years ago that a desktop/workstation might have 96 CPU cores across various complexes?).

          • #6
            Guess this project will achieve as much as all the hundred other vaporware money-pit graves of the Linux Foundation...

            • #7
              Considering this is "The Linux Foundation" we are talking about, they will end up promoting Windows somehow.

              • #8
                Originally posted by Jumbotron View Post
                Never heard of the Spack package manager before, but after going to their website it sounds like Spack is the Flatpak or Snap of supercomputers. It is interesting that Spack labels itself as a package manager for supercomputers, Linux, and macOS. That’s curious. They seem to be saying that Spack is not solely relegated to supercomputers. Yes, obviously the vast majority of supers run on Linux, but I don’t know of a single one that runs on macOS. I’m curious to hear from anyone who has used Spack and could compare and contrast it with Flatpak and Snap.
                Spack is more like a toolbox for compiling software used for HPC. It is not a distribution mechanism for binaries like Flatpak or Snap.

                The HPC software problem basically boils down to having a uniform software platform that is stable, and building the software solution to fit your problem on top of that. The cluster might provide a base compiler and libraries that are tailored for the technologies used (like MPI, UCX, GPU drivers, and so on). This proves problematic because their versions are usually locked for stability.
                Most of the time software is provided as environment modules, which act like containers for software. You could compare them to Python or Conda virtual environments, but more granular. A typical workflow is to load a compiler module (for which in HPC there's usually more choice than just GCC vs. LLVM), at which point the available module list changes to software only compatible with it (if using Lmod), and then to load whatever software you want to use.
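
                Roughly, a session might look like this (the module names and versions below are made up for illustration; every cluster publishes its own list):

                  $ module avail                         # see what is exposed by default
                  $ module load gcc/12.2.0               # pick a compiler toolchain
                  $ module avail                         # with Lmod, only software built against gcc/12.2.0 is now listed
                  $ module load openmpi/4.1.5 hdf5/1.14  # load an MPI and a library built for that compiler
                  $ module list                          # confirm what is currently loaded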

                What Spack helps with is creating the software stack for the end users of HPC. Most software will be packaged with it by the cluster's administrators and published as env modules, but there's nothing stopping users from using it to create their own software installations. In fact, that's one of its more popular use cases.

                Spack takes care of compiling compilers, dependencies such as libraries, and finally the requested software, all in isolation from each other. The project curates a list of recipes for getting software running, including specific versions and variants, which are a way of configuring a package's build; for example, mpich can be configured to build against a specific CUDA version. This is not something that classical binary package managers do, but source-based ones can.
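
                To give a feel for it, a minimal sketch of the spec syntax (the package, versions, and variants here are only an example; spack info <package> shows what a given recipe actually supports):

                  $ spack info hdf5                                   # list known versions and variants for the hdf5 recipe
                  $ spack install hdf5@1.14 +mpi %gcc@12 ^mpich@4.1   # version, variant, compiler, and MPI provider in one spec
                  $ spack load hdf5                                   # bring the freshly built package into the current shell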

                HPC software distribution is a fascinating problem with some ingenious solutions like EESSI, which uses the globally distributed CernVM-FS network filesystem to share common HPC software without the hassle of having to build everything yourself. They use EasyBuild instead of Spack, and a very interesting compatibility layer based on Gentoo.
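
                As a rough idea of what that looks like for a user (the repository path here is an assumption on my part; check the EESSI documentation for the current one):

                  $ source /cvmfs/software.eessi.io/versions/2023.06/init/bash   # assumed init script; the tree is served over CernVM-FS
                  $ module avail                                                  # browse the shared, pre-built software stack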

                • #9
                  Originally posted by Jumbotron View Post
                  They seem to be saying that Spack is not solely relegated to supercomputers. Yes, obviously the vast majority of supers run on Linux, but I don’t know of a single one that runs on macOS.
                  Windows support is also under development. Keep in mind that package development is typically done on desktops.

                  • #10
                    Originally posted by numacross View Post

                    Spack is more like a toolbox for compiling software used for HPC. It is not a distribution mechanism for binaries like Flatpak or Snap.

                    The HPC software problem basically boils down to having a uniform software platform that is stable, and building the software solution to fit your problem on top of that. The cluster might provide a base compiler and libraries that are tailored for the technologies used (like MPI, UCX, GPU drivers, and so on). This proves problematic because their versions are usually locked for stability.
                    Most of the time software is provided as environment modules, which act like containers for software. You could compare them to Python or Conda virtual environments, but more granular. A typical workflow is to load a compiler module (for which in HPC there's usually more choice than just GCC vs. LLVM), at which point the available module list changes to software only compatible with it (if using Lmod), and then to load whatever software you want to use.

                    What Spack helps with is creating the software stack for the end users of HPC. Most software will be packaged with it by the cluster's administrators and published as env modules, but there's nothing stopping users from using it to create their own software installations. In fact, that's one of its more popular use cases.

                    Spack takes care of compiling compilers, dependencies such as libraries, and finally the requested software, all in isolation from each other. The project curates a list of recipes for getting software running, including specific versions and variants, which are a way of configuring a package's build; for example, mpich can be configured to build against a specific CUDA version. This is not something that classical binary package managers do, but source-based ones can.

                    HPC software distribution is a fascinating problem with some ingenious solutions like EESSI, which uses the globally distributed CernVM-FS network filesystem to share common HPC software without the hassle of having to build everything yourself. They use EasyBuild instead of Spack, and a very interesting compatibility layer based on Gentoo.
                    Very interesting! Thank you for that explanation!
