Is ATLAS still a thing ?
OpenBLAS 0.3.16 Brings Various CPU Fixes, More Optimizations
-
Originally posted by CochainComplex View Post...thx for your post. But I know what OpenBLAS, MKL, LAPACK, BLAS, NVBLAS etc. are (I'm a physicist using HPC). My question was: what is NumPy using as its native BLAS implementation? That's what I meant by "what is its standard lib?"
NumPy is supposed to be a rather fast Python library despite its interpreted-language nature. So it has to have some linear algebra acceleration based on a BLAS implementation on the backend side.
Talking about interpreted languages...
Afaik Matlab, e.g., is built against MKL (a reason why some AMD users didn't get the full performance of their Ryzens at the beginning of Zen's launch).
Octave can be built with a variety of BLAS implementations; the default is OpenBLAS. But independently of how it was built, you can drop in the Nvidia GPU-accelerated BLAS lib with environment variables.
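For the NVBLAS drop-in mentioned above, the mechanism is roughly this. The `NVBLAS_CPU_BLAS_LIB` / `NVBLAS_GPU_LIST` config keywords and the `NVBLAS_CONFIG_FILE` / `LD_PRELOAD` variables are the documented NVBLAS interface, but the library paths below are assumptions for a typical Linux install — adjust for your system:

```shell
# nvblas.conf — NVBLAS delegates non-GEMM and small calls to a CPU BLAS,
# so it must be pointed at one (path is an example):
NVBLAS_CPU_BLAS_LIB /usr/lib/x86_64-linux-gnu/libopenblas.so
NVBLAS_GPU_LIST ALL

# Preload NVBLAS so its BLAS3 symbols shadow the CPU library at run time:
LD_PRELOAD=/usr/local/cuda/lib64/libnvblas.so \
NVBLAS_CONFIG_FILE=./nvblas.conf \
octave script.m
```

Only the level-3 BLAS routines are GPU-accelerated; everything else falls through to the CPU library named in the config file.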
That’s the whole point, since Python itself is too slow and does not support any real multithreading for computation.
numpy provides a homogeneous array implementation that is much more space-efficient and has faster operations compared to Python's heterogeneous built-in list.
To my knowledge, all other numpy operations are built around that.
Comment
-
Originally posted by User42 View PostAh I see! I thought you were talking about the Netlib (standard) implementation...
Numpy has support for anything that implements the BLAS/LAPACK API. It used to be that, by default, it was using Netlib's (correct but slow) implementation.
Nowadays, it tries to find a better implementation at build time.
So what you get completely depends on what your distribution chose to do. You can still determine what your version has been linked to.
Numpy in anaconda comes with MKL by default. If you choose to add the `nomkl` virtual package to prevent conda from using MKL (say if you are on an AMD system, or if you figured MKL is not the fastest, or is too big, or is not compliant enough for another lib, etc.), then OpenBLAS is used and replaces MKL transparently.
You should be able to use nvidia's cuBLAS as well if you want to. If its ABI is compliant, setting a few environment variables should be enough to switch. If not, you can try to build numpy yourself; this time, if the API is compliant, all should go well.
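As the comment above notes, you can check which implementation your NumPy build was linked against from Python itself, using the public `numpy.show_config` API:

```python
import numpy as np

# Prints the BLAS/LAPACK libraries this NumPy build was linked against
# (MKL, OpenBLAS, Netlib, ...), as detected at build time.
np.show_config()

# Matrix products dispatch to the linked BLAS (dgemm), so this doubles
# as a quick sanity check that the backend works:
a = np.arange(4.0).reshape(2, 2)
print(a @ a)  # [[ 2.  3.]
              #  [ 6. 11.]]
```

The exact output format of `show_config()` varies between NumPy versions, but it always names the libraries found at build time.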
Comment
-
Originally posted by NobodyXu View Post
Just in case you don’t know, numpy is written in C.
That’s the whole point, since Python itself is too slow and does not support any real multithreading for computation.
numpy provides a homogeneous array implementation that is much more space-efficient and has faster operations compared to Python's heterogeneous built-in list.
To my knowledge, all other numpy operations are built around that.
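A minimal sketch of that point. The byte counts are typical for a 64-bit CPython/NumPy build, not guaranteed:

```python
import sys
import numpy as np

lst = list(range(1000))   # list of boxed Python int objects
arr = np.arange(1000)     # homogeneous C array of machine integers

# The ndarray stores raw machine values contiguously, so its payload is
# much smaller than a list header plus 1000 boxed ints.
print(arr.nbytes)  # 8000 on a typical 64-bit build (1000 x int64)
print(sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst))

# Vectorized operations run as a single C loop over the buffer,
# with no Python-level iteration:
print((arr * 2).sum())  # 999000
```

The same homogeneity is what lets NumPy hand contiguous buffers straight to a BLAS routine without any per-element conversion.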
Comment
-
Originally posted by tchiwam View PostIs ATLAS still a thing ?
Note that both BLAS and CBLAS interfaces are needed for a properly optimized build of NumPy.
The default order for the libraries is:
- MKL
- BLIS
- OpenBLAS
- ATLAS
- BLAS (Netlib)
The detection of BLAS libraries may be bypassed by defining the environment variable NPY_BLAS_LIBS, which should contain the exact linker flags you want to use (the interface is assumed to be Fortran 77). Also define NPY_CBLAS_LIBS (even empty, if CBLAS is contained in your BLAS library) to trigger use of CBLAS and avoid slow fallback code for matrix calculations.
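A sketch of how those two variables are used when building NumPy from source — the OpenBLAS install path below is an assumption, not a canonical location:

```shell
# Exact linker flags for the BLAS to link against
# (Fortran 77 interface assumed by the build system):
export NPY_BLAS_LIBS="-L/opt/OpenBLAS/lib -lopenblas"
# Define NPY_CBLAS_LIBS too — empty here because OpenBLAS already
# contains CBLAS — so the fast CBLAS code paths are used instead of
# the slow fallback:
export NPY_CBLAS_LIBS=""
# Force a source build so the variables take effect:
pip install --no-binary :all: numpy
```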
But if I have found the correct repo on SourceForge, ATLAS development is rather slow compared to, e.g., OpenBLAS.
Comment