SPACK - migration to a setup without environments
Motivation: problems of SPACK environments
Having worked with spack environments for some time, we encountered several severe issues that convinced us to find a more practical way of maintaining software packages on VSC. In particular, we came across the following drawbacks:
- hard-to-read and lengthy output of spack find commands.
- long response times of all spack commands when the number of installed packages grows.
- difficult, and for practical purposes impossible, maintenance of a consistent database of software packages (spack.yaml, spack.lock files).
- unresolvable discrepancies between the list of software packages found in spack and the list found in modules.
- unreliable concretisation procedure (discrepancies between what is shown by spack spec -I .. and what is actually installed with spack install …).
New approach without environments
There are now three separate spack installation trees corresponding to the CPU/GPU architectures on VSC:
- skylake - Intel CPUs; works on Intel Skylake and Cascadelake CPUs
- zen - AMD CPUs; works on Zen 2 and 3 CPUs
- cuda-zen - AMD CPUs + NVIDIA GPUs; works on all nodes equipped with graphics cards
By default the spack installation tree suitable for the current compute/login node is activated and will be indicated by a prefix on the command line, e.g.:
zen [user@l51 ~]$
Info
The prefix (e.g. zen) does not mean that a Python virtual environment is loaded.
The installation trees can be found in:
/opt/sw/skylake/spack-0.19.0
/opt/sw/zen/spack-0.19.0
/opt/sw/cuda-zen/spack-0.19.0
Only the software packages and modules of the currently active tree (denoted by the prefix) will be searched by the spack find/load and module avail/load commands.
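For example, with the zen tree active, searches only cover that tree (the package name here is purely illustrative):

zen [user@l51 ~]$ module avail netcdf-c   # lists only modules from the zen tree
zen [user@l51 ~]$ spack find netcdf-c     # likewise, only packages from the zen tree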
It is easily possible to switch between installation trees with the short commands cuz, zen, and sky, defined as aliases:
alias cuz='spackup cuda-zen'
alias sky='spackup skylake'
alias zen='spackup zen'
Packages depending on GPU, like cuda, are only installed in the cuda-zen tree. If you want to compile software intended to run on GPU nodes, you need to:
- login on a VSC-5 login node (or a VSC-5 compute node).
- type cuz to switch to cuda-zen.
- compile your code.
zen [user@l51 ~]$ cuz
cuda-zen [user@l51 ~]$ ... build your software ...
The commands cuz/zen/sky ultimately call the shell function spackup <arch>, which will:
- set PATH
- set MODULEPATH
- set the PS1 prompt
- source /path/to/spack/instance/share/spack/setup-env.sh
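For illustration only, a minimal sketch of what such a function could look like; this is a simplified assumption, not the actual implementation (which you can inspect as described below):

# sketch of a spackup-like function - the real one is defined in /etc/profile.d/spack.sh
spackup () {
    local arch="$1"                                  # skylake, zen, or cuda-zen
    local root="/opt/sw/${arch}/spack-0.19.0"        # installation tree for this architecture
    export PATH="${root}/bin:${PATH}"                # use the spack binary of this tree
    export MODULEPATH="${root}/share/spack/modules"  # simplified: search modules of this tree only
    PS1="${arch} ${PS1}"                             # prefix the prompt with the active tree
    source "${root}/share/spack/setup-env.sh"        # spack shell integration
}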
You can view the shell function spackup with type spackup, or take a look at the whole script at /etc/profile.d/spack.sh.
Migration script - "spack search"
If you need to find a list of packages or modules which correspond to the software you have been using until now, you can use the shell function spack search to do that:
zen [user@l51 ~]$ spack search <hash>
where <hash> is the 7-character hash of the package that you see at the end of the module name or in the output of spack find -l <package>.
Usage Example
Let's assume you have used this netcdf-c module:
netcdf-c/4.8.1-gcc-11.2.0-jsfjwaz
which can be found with spack find -l netcdf-c %gcc as:
jsfjwaz netcdf-c@4.8.1
You can use the migration script to find netcdf-c installations with identical build options:
zen [user@l51 ~]$ spack search jsfjwaz
==> The hash 'jsfjwaz7qp52fjxfeg6mbhtt2lj3l573' refers to 'netcdf-c' from 'vsc5' with parameters:
    netcdf-c ~dap~fsync~hdf4~jna+mpi~parallel-netcdf+pic+shared
==> Searching similar 'netcdf-c' modules in installation 'skylake' ...
-- /opt/sw/skylake/spack-0.19.0/share/spack/modules/linux-almalinux8-skylake_avx512 --
netcdf-c/4.9.0-gcc-12.2.0-xck6m4e
netcdf-c/4.9.0-gcc-12.2.0-vcjclck
netcdf-c/4.9.0-intel-2021.7.1-u6wt7yr
netcdf-c/4.9.0-intel-2021.7.1-k2p5vx2
==> Load any one of these packages with 'module load mypackage' e.g.:
    module load netcdf-c/4.9.0-intel-2021.7.1-k2p5vx2
==> Get detailed package info with 'spack find -lvd mypackage' e.g.:
    spack find -lvd /k2p5vx
To search for packages in a different spack tree, you can just change to that tree with cuz/sky/zen, and then run spack search there.
For example, to search in the cuda-zen spack tree:
zen [user@l51 search]$ cuz
cuda-zen [user@l51 search]$ spack search jsfjwaz
==> The package hash jsfjwaz refers to netcdf-c and belongs to vsc5 as:
    netcdf-c ~dap~fsync~hdf4~jna+mpi~parallel-netcdf+pic+shared
==> Searching similar netcdf-c modules at cuda-zen...
---/gpfs/opt/sw/cuda-zen/spack-0.19.0/share/spack/modules/linux-almalinux8-zen---
netcdf-c/4.9.0-gcc-9.5.0-fx6pjb6
netcdf-c/4.9.0-gcc-9.5.0-4gdf6vm
netcdf-c/4.9.0-gcc-9.5.0-o5eb5rf
netcdf-c/4.9.0-gcc-9.5.0-upkxxip
Deprecated: Continue working with old modules (from environments)
Warning
You may continue to use modules from the spack environments skylake and zen3 by adjusting the MODULEPATH variable:
On zen3
export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545::/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen2:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen3
On skylake
export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545:/opt/sw/vsc4/VSC/Modules/p71782::/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-x86_64:/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-skylake
Deprecated: Continue working with old spack environments
Warning
You may also continue to work with the spack environments. If you wish to do so, you need the following commands:
On zen3
export SPACK_ROOT=/opt/sw/spack-0.17.1
source /opt/sw/spack-0.17.1/share/spack/setup-env.sh
spacktivate zen3
On skylake
export SPACK_ROOT=/opt/sw/spack-0.19.0
source /opt/sw/spack-0.19.0/share/spack/setup-env.sh
spacktivate skylake
Loading modules with prerequisites
Some modules have prerequisites which need to be loaded in addition to the module itself. You can check if a module has prerequisites with module show <module>, e.g.:
zen [user@l51 ~]$ module show py-numpy/1.23.4-gcc-12.2.0-xbac5zw | grep prereq
prereq   openblas/0.3.21-gcc-12.2.0-gcn6jxp
prereq   py-setuptools/59.4.0-gcc-12.2.0-qphisr6
prereq   python/3.9.15-gcc-12.2.0-my6jxu2
These prerequisites can be loaded together with the module automatically with module load --auto <module>, e.g.:
zen [user@l51 ~]$ module load --auto py-numpy/1.23.4-gcc-12.2.0-xbac5zw
Loading py-numpy/1.23.4-gcc-12.2.0-xbac5zw
  Loading requirement: openblas/0.3.21-gcc-12.2.0-gcn6jxp python/3.9.15-gcc-12.2.0-my6jxu2 py-setuptools/59.4.0-gcc-12.2.0-qphisr6
Setting LD_LIBRARY_PATH
Loading a module no longer automatically sets the LD_LIBRARY_PATH environment variable, as with some software packages this has led to conflicts with system libraries. If you have to set LD_LIBRARY_PATH, you may use:
export LD_LIBRARY_PATH=$LIBRARY_PATH
Alternatively, you may have to set it to specific paths where the needed libraries can be found.
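For example (module name taken from the previous section; the specific library directory is only a placeholder):

zen [user@l51 ~]$ module load --auto py-numpy/1.23.4-gcc-12.2.0-xbac5zw
zen [user@l51 ~]$ export LD_LIBRARY_PATH=$LIBRARY_PATH                           # reuse the link-time search path
zen [user@l51 ~]$ export LD_LIBRARY_PATH=/path/to/package/lib:$LD_LIBRARY_PATH   # or prepend a specific directory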