OpenBLAS 0.3.16 Brings Various CPU Fixes, More Optimizations

  • OpenBLAS 0.3.16 Brings Various CPU Fixes, More Optimizations

    Phoronix: OpenBLAS 0.3.16 Brings Various CPU Fixes, More Optimizations

    OpenBLAS, the popular open-source high-performance BLAS/LAPACK implementation, has seen a new release with more CPU/architecture-specific work as well as some new common optimizations...

    https://www.phoronix.com/scan.php?pa....3.16-Released

  • #2
    Does Intel or AMD contribute to OpenBLAS?
    Does IBM or ARM contribute to OpenBLAS?

    Which projects use OpenBLAS?



    • #3
      Originally posted by uid313 View Post
      Does Intel or AMD contribute to OpenBLAS?
      Does IBM or ARM contribute to OpenBLAS?

      Which projects use OpenBLAS?
      OpenBLAS is used by R, Fortran programs, and Matlab.

      It provides many useful mathematical functions that are used in statistics.

      IDK whether these big companies contribute to it.

      Intel, however, provides its own implementation called MKL that can be used with the aforementioned software. It ships closed-source binaries optimized only for Intel CPUs (though they sometimes also run well on AMD CPUs) and is huge.

      I installed it on my computer before, and its libraries can be as large as 1 GB each.

      Thus it only benefits workflows that have enough RAM to fit the data set plus these huge libraries, and that actually make a lot of function calls into it.

      OpenBLAS itself is optimized enough that it won't be a bottleneck for many projects (put another way, people doing data science are usually on powerful enough hardware).



      • #4
        There are also some AMD math libraries: https://developer.amd.com/amd-aocl/
        OpenBLAS is also used by Octave ... not sure, but numpy might also use it under the hood.



        • #5
          Required By: Arch Linux - openblas 0.3.15-1 (x86_64)



          • #6
            Originally posted by CochainComplex View Post
            There are also some AMD math libraries: https://developer.amd.com/amd-aocl/
            OpenBLAS is also used by Octave ... not sure, but numpy might also use it under the hood.
            It seems that numpy has optional OpenBLAS support: https://stackoverflow.com/questions/...as-integration



            • #7
              Originally posted by NobodyXu View Post

              It seems that numpy has optional OpenBLAS support: https://stackoverflow.com/questions/...as-integration
              Oh, you are right. What is the standard lib then?



              • #8
                OpenBLAS is an optimised BLAS/LAPACK implementation to do linear algebra (e.g. working with matrices and vectors). If you do anything "numpy-like" in your code, there is probably a more efficient way of doing it with this. So it's mainly used in scientific code and, more recently, machine-learning.

                The standard lib is a reference implementation to compare against. It is correct and "naive" in the sense that it implements the algorithms strictly as defined (though compilers are allowed to optimise as they see fit). Even so, those naive algorithms are slower than the optimised ones (sometimes by orders of magnitude, depending on the problem and architecture).
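                A tiny illustration of that gap (just a sketch; absolute timings depend on your machine and on which BLAS your numpy is linked against): a textbook triple-loop matrix multiply in pure Python versus numpy's `@` operator, which dispatches to the linked BLAS.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply, exactly as the algorithm is defined."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            c[i, j] = s
    return c

rng = np.random.default_rng(0)
a = rng.random((200, 200))
b = rng.random((200, 200))

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
c_blas = a @ b  # dispatched to whatever BLAS numpy was built against
t_blas = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.5f}s")
```

                Both results agree to floating-point tolerance; only the speed differs.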

                It does not seem more widely used/known because your everyday programmer is rubbish at maths and does not see mathematical patterns in what they do: they implement things "naively" in a sense...

                OpenBLAS is an open source implementation that is performant, compliant, easy to ship with a program and that does not cripple itself when used on a CPU provided by a competitor.



                • #9
                  ...thx for your post. But I know what OpenBLAS, MKL, LAPACK, BLAS, NVBLAS etc. are (I'm a physicist using HPC). My question was: what is numpy using as its native BLAS implementation? That's what I meant by "what is its standard lib".
                  numpy is supposed to be a rather fast library despite its interpreted Python nature, so it has to have some linear algebra acceleration based on a BLAS implementation on the backend.
                  Talking about interpreted languages...
                  Afaik Matlab, e.g., is built against MKL (one reason some AMD users didn't get the full performance of their Ryzens at the beginning of Zen's launch).
                  Octave can be built with a variety of BLAS implementations; the default is OpenBLAS. But independently of how it was built, you can drop in the NVIDIA GPU-accelerated BLAS lib via environment variables.

                  Last edited by CochainComplex; 13 July 2021, 07:05 AM.



                  • #10
                    Ah I see! I thought you were talking about the Netlib (standard) implementation...

                    Numpy has support for anything that implements the BLAS/LAPACK API. It used to be that, by default, it was using Netlib's (correct but slow) implementation.

                    Nowadays, it tries to find a better implementation at build time.

                    So what you get completely depends on what your distribution chose to do. You can still determine what your version has been linked to.
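                    One quick way to check (the exact output format differs between numpy versions) is `numpy.show_config()`, which prints the BLAS/LAPACK libraries numpy was built against:

```python
import numpy as np

# Prints the build configuration, including which BLAS/LAPACK
# implementation this numpy was linked against (OpenBLAS, MKL, ...).
np.show_config()
```

                    Look for sections like "blas" or "openblas" in the output to see what your distribution linked in.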

                    Numpy in Anaconda comes with MKL by default. If you choose to add the `nomkl` virtual package to prevent conda from using MKL (say, if you are on an AMD system, or you figured MKL is not the fastest, too big, or not compliant enough for another lib, etc.), then OpenBLAS is used and replaces MKL transparently.

                    You should be able to use NVIDIA's cuBLAS as well if you want to. If its ABI is compliant, setting a few environment variables should be enough to switch. If not, you can try to build numpy yourself; as long as the API is compliant, all should go well.

