Several implementations of the Basic Linear Algebra Subprograms (BLAS) are available. They provide highly optimized routines for matrix and vector operations and are key to high-performance applications.

We recommend using one of the fastest available libraries (a complete compile line is sketched below this list):

  • GOTO BLAS by Kazushige Goto (http://www.tacc.utexas.edu/tacc-projects/gotoblas2), available on the servers and compute nodes; link with
    -L/opt/goto/ifort -lgoto2_barcelonap-r1.13

    You may want to set THREADS=1.

  • Intel Math Kernel Library (MKL): link e.g. with
    -L/opt/intel/composerxe/mkl/lib/intel64/ -lmkl_intel_lp64 -lmkl_sequential -lmkl_core

    or – with the Intel compiler suite – by simply using

    -mkl
  • AMD Core Math Library (ACML): link e.g. with
    -L/opt/acml5.1.0/ifort64/lib/ -lacml
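
For illustration, a complete compile-and-link command could look as follows. This is only a sketch: the source file name myprog.f90 and the -O2 option are placeholders, while the library path and -l flags are taken verbatim from the MKL entry above. The same pattern applies to GOTO BLAS and ACML with their respective flags.

# compile myprog.f90 with ifort and link against the sequential MKL
ifort -O2 -o myprog myprog.f90 \
    -L/opt/intel/composerxe/mkl/lib/intel64/ -lmkl_intel_lp64 -lmkl_sequential -lmkl_core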

MKL and GOTO BLAS exist in single-threaded and multi-threaded versions; a sketch for controlling the number of threads follows the list below.

  • To use the multi-threaded MKL version, replace -lmkl_sequential with -lmkl_intel_thread (and additionally link -liomp5 -lpthread)

  • To use the multi-threaded GOTO version, link with -L/opt/goto/ifort -lgoto2_barcelonap-r1.13 -lpthread
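
If a multi-threaded variant is used, the number of threads can be limited through environment variables. The following is a minimal sketch; it assumes that the installed versions honour the usual variables OMP_NUM_THREADS (OpenMP), MKL_NUM_THREADS (MKL) and GOTO_NUM_THREADS (GotoBLAS2).

# run the threaded MKL with at most 4 threads
# (MKL_NUM_THREADS takes precedence over OMP_NUM_THREADS for MKL routines)
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4

# run the threaded GOTO BLAS with at most 4 threads
export GOTO_NUM_THREADS=4

Setting these variables to 1 restricts the threaded libraries to a single thread.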

The reference BLAS is installed on some nodes (-lblas) but is significantly slower. We recommend not using it, even where it is available.

If shared libraries are used, the variable LD_LIBRARY_PATH has to be extended accordingly, e.g.

export LD_LIBRARY_PATH=/opt/somewhere/somelib/:$LD_LIBRARY_PATH
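
For instance, assuming the shared MKL libraries reside in the directory used above:

export LD_LIBRARY_PATH=/opt/intel/composerxe/mkl/lib/intel64/:$LD_LIBRARY_PATH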