SPACK - migration to a setup without environments
Problems of SPACK environments
Having worked with spack environments for some time, we have encountered several severe issues which have convinced us that we need a more practical way of maintaining software packages on VSC. In particular, we came across the following drawbacks:
- hard to read and lengthy output of spack find commands
- long response times of all spack commands when the number of installed packages grows
- maintenance of a consistent database of software packages (spack.yaml, spack.lock files) is difficult and, for practical purposes, impossible
- unresolvable discrepancies between the list of software packages found in spack and those found in modules
- unreliable concretisation procedure (discrepancies between what is shown by spack spec -I ... and what is actually installed with spack install ...)
New approach without environments
There are now three separate spack installation trees corresponding to the CPU/GPU architectures on VSC:
- skylake - Intel CPUs; works on Intel Skylake and Cascadelake CPUs
- zen - AMD CPUs; works on Zen 2 and 3 CPUs
- cuda-zen - AMD CPUs + NVIDIA GPUs; works on all nodes equipped with graphics cards
By default the spack installation tree suitable for the current compute/login node is activated and will be indicated by a prefix on the command line, e.g.:
zen [user@l51 ~]$
Info
The installation trees can be found in:
/opt/sw/skylake/spack-0.19.0
/opt/sw/zen/spack-0.19.0
/opt/sw/cuda-zen/spack-0.19.0
Only the software packages and modules of the currently active tree (denoted by the prefix) will be searched by spack find/load and module avail/load commands.
You can easily switch between installation trees using these short commands (aliases):
alias cuz='spackup cuda-zen'
alias sky='spackup skylake'
alias zen='spackup zen'
This will be required, for example, when software intended to run on GPU nodes needs to be compiled on a login node of VSC-5:
zen [user@l51 ~]$ cuz
cuda-zen [user@l51 ~]$ ... build your software ...
The command spackup will:
- set PATH
- set MODULEPATH
- set the PS1 prompt
- source /path/to/spack/instance/share/spack/setup-env.sh
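The steps above can be sketched as a shell function. This is a minimal illustration assuming the installation-tree paths from the Info box above; the actual spackup script on VSC may differ:

```shell
# Hypothetical sketch of spackup -- illustrative only, not the real VSC script.
spackup () {
    local tree="$1"                                  # skylake, zen, or cuda-zen
    local root="/opt/sw/${tree}/spack-0.19.0"        # installation tree (see Info box above)
    export PATH="${root}/bin:${PATH}"                # make this tree's spack binary available
    export MODULEPATH="${root}/share/spack/modules"  # restrict module search to this tree
    export PS1="${tree} [\u@\h \W]\$ "               # prefix the prompt with the tree name
    # Source spack's shell integration if it exists on this node
    if [ -f "${root}/share/spack/setup-env.sh" ]; then
        . "${root}/share/spack/setup-env.sh"
    fi
}
```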
Migration script - "spacksearch"
Method 1
If you need to find a list of packages or modules which correspond to the software you have been using until now, you can use the alias spacksearch:
alias spacksearch='/usr/bin/python3.9 /opt/sw/spack-scripts/search/search.py'
The script can be used as in the following example:
zen [user@l51 ~]$ spacksearch <hash>
where <hash> is the 7-character hash of the package, which you see at the end of the module name or in the output of spack find -l <package>.
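Since the hash is simply the last dash-separated field of the module name, it can also be extracted with plain shell parameter expansion; no spack command is needed (shown here with the example module name used below):

```shell
# Take the 7-character hash from the end of a module name
modname="netcdf-c/4.8.1-gcc-11.2.0-jsfjwaz"  # example module name from this page
pkghash="${modname##*-}"                     # strip everything up to the last '-'
echo "$pkghash"                              # prints: jsfjwaz
```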
Usage Example
Let's assume you have used this netcdf-c module:
netcdf-c/4.8.1-gcc-11.2.0-jsfjwaz
which can be found with spack find -l netcdf-c %gcc as:
jsfjwaz netcdf-c@4.8.1
You can use the migration script to find netcdf-c installations with identical build options:
zen [user@l51 ~]$ spacksearch jsfjwaz
==> The package hash jsfjwaz refers to netcdf-c and belongs to vsc5 as:
netcdf-c ~dap~fsync~hdf4~jna+mpi~parallel-netcdf+pic+shared
==> Searching similar netcdf-c modules at zen...
---/gpfs/opt/sw/zen/spack-0.19.0/share/spack/modules/linux-almalinux8-zen3---
netcdf-c/4.9.0-aocc-4.0.0-6fetjps
netcdf-c/4.9.0-aocc-4.0.0-vyf5okn
netcdf-c/4.9.0-gcc-12.2.0-usqc3ay
netcdf-c/4.9.0-gcc-12.2.0-oljzgqt
netcdf-c/4.9.0-gcc-12.2.0-hbo7ve6
netcdf-c/4.9.0-gcc-12.2.0-62buxj4
netcdf-c/4.9.0-gcc-12.2.0-awvdfta
netcdf-c/4.9.0-gcc-12.2.0-oppx7bm
---/gpfs/opt/sw/zen/spack-0.19.0/share/spack/modules/linux-almalinux8-zen---
netcdf-c/4.9.0-gcc-8.5.0-7qwm3uz
==> Load any one of these packages with 'module load mypackage', e.g.:
module load netcdf-c/4.9.0-gcc-8.5.0-7qwm3uz
==> Get some additional info with 'spack find -lvd mypackage', e.g.:
spack find -lvd /7qwm3u
To search for packages in a different spack tree, simply switch to that tree and run spacksearch there. For example, to search in the 'cuda-zen' spack tree:
zen [user@l51 search]$ cuz
cuda-zen [user@l51 search]$ spacksearch jsfjwaz
==> The package hash jsfjwaz refers to netcdf-c and belongs to vsc5 as:
netcdf-c ~dap~fsync~hdf4~jna+mpi~parallel-netcdf+pic+shared
==> Searching similar netcdf-c modules at cuda-zen...
---/gpfs/opt/sw/cuda-zen/spack-0.19.0/share/spack/modules/linux-almalinux8-zen---
netcdf-c/4.9.0-gcc-9.5.0-fx6pjb6
netcdf-c/4.9.0-gcc-9.5.0-4gdf6vm
netcdf-c/4.9.0-gcc-9.5.0-o5eb5rf
netcdf-c/4.9.0-gcc-9.5.0-upkxxip
==> Load any one of these packages with 'module load mypackage', e.g.:
module load netcdf-c/4.9.0-gcc-9.5.0-upkxxip
==> Get some additional info with 'spack find -lvd mypackage', e.g.:
spack find -lvd /upkxxi
Method 2
Add this function to your .bashrc file:
## spack command wrapper, so we can include our own `spack something`
## commands, like `spack search`:
spack () {
    case "$1" in
        "search")
            python3 /opt/sw/spack-scripts/search/search.py "$2"
            ;;
        *)
            command spack "$@"
            ;;
    esac
}
Usage example:
skylake [user@l41 ~]$ spack search asd
==> The package hash asd refers to python and belongs to vsc4 as:
python +bz2+ctypes+dbm~debug+libxml2+lzma~nis~optimizations+pic+pyexpat+pythoncmd+readline+shared+sqlite3+ssl~tix~tkinter~ucs4+uuid+zlib
==> Searching similar python modules at skylake...
---/gpfs/opt/sw/skylake/spack-0.19.0/share/spack/modules/linux-almalinux8-skylake_avx512---
python/3.10.8-gcc-9.5.0-qh22vnd
python/3.10.8-gcc-12.2.0-apbi5uz
python/3.10.8-intel-2021.7.1-p4x6jid
==> Load any one of these packages with 'module load mypackage', e.g.:
module load python/3.10.8-intel-2021.7.1-p4x6jid
==> Get some additional info with 'spack find -lvd mypackage', e.g.:
spack find -lvd /p4x6ji
Continue working with old modules (from environments)
You may continue to use modules from the spack environments skylake and zen3 by adjusting the MODULEPATH variable:
For zen3:
export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545::/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen2:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen3
For skylake:
export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545:/opt/sw/vsc4/VSC/Modules/p71782::/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-x86_64:/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-skylake
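Note that the exports above overwrite MODULEPATH completely. If you instead want to keep the modules of the currently active tree and merely add an old environment's modules, append to the existing variable. A sketch, shown with one of the zen3 module directories listed above:

```shell
# Append an old-environment module directory to the current MODULEPATH
# instead of replacing it (path taken from the zen3 export above):
export MODULEPATH="${MODULEPATH}:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen3"
```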
Warning
Loading modules with prerequisites
Some modules have prerequisites which need to be loaded in addition to the module itself. You can check whether a module has prerequisites with module show <module>, e.g.:
zen [user@l51 ~]$ module show py-numpy/1.23.4-gcc-12.2.0-xbac5zw | grep prereq
prereq openblas/0.3.21-gcc-12.2.0-gcn6jxp
prereq py-setuptools/59.4.0-gcc-12.2.0-qphisr6
prereq python/3.9.15-gcc-12.2.0-my6jxu2
These prerequisites can be loaded automatically together with the module using module load --auto <module>, e.g.:
zen [user@l51 ~]$ module load --auto py-numpy/1.23.4-gcc-12.2.0-xbac5zw
Loading py-numpy/1.23.4-gcc-12.2.0-xbac5zw
  Loading requirement: openblas/0.3.21-gcc-12.2.0-gcn6jxp python/3.9.15-gcc-12.2.0-my6jxu2 py-setuptools/59.4.0-gcc-12.2.0-qphisr6
Setting LD_LIBRARY_PATH
Loading a module no longer automatically sets the LD_LIBRARY_PATH environment variable, since with some software packages this has led to conflicts with system libraries. If you have to set LD_LIBRARY_PATH, you may use:
export LD_LIBRARY_PATH=$LIBRARY_PATH
Alternatively, you may set it to the specific paths where the needed libraries can be found.
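For example, to add only one library directory (the path below is a hypothetical placeholder; substitute the installation prefix reported by spack find -p <package>):

```shell
# Prepend one specific library directory; the ${VAR:+...} expansion avoids
# a dangling ':' when LD_LIBRARY_PATH was previously unset.
# /path/to/package/lib is a placeholder, not a real VSC path.
export LD_LIBRARY_PATH="/path/to/package/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```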