doku:mathematica: revised 2015/05/13 12:04 by ir; current revision 2022/09/20 13:41 by groda

======= Mathematica Batch Jobs =======

=== SLURM ===

In order to be able to use Mathematica, you need to load the program with the ''module load'' command
<code>
[username@l32]$ module load Mathematica    # load the default Mathematica module
[username@l32]$ module list                # check loaded modules
Currently Loaded Modulefiles:
  [...]
      18) Mathematica/12.3.1
</code>
  
(See also the introduction to the [[https://wiki.vsc.ac.at/doku.php?id=doku:slurm|module command]].)

Now, Mathematica can be called by
<code>
[username@l32]$ math
</code>

==== Example: Parallel Mathematica Task ====

Mathematica script example ''math-vsc3.m'':
<code>
GetEnvironment["MATH_BIN"]
math = Environment["MATH_BIN"]
kernelsperhost = 16
hosts = Import["nodelist", "List"];

Needs["SubKernels`RemoteKernels`"]
(* template slots: `1` host, `2` linkname, `3` user *)
$RemoteCommand = "ssh -x -f -l `3` `1` " <> math <> " -mathlink -linkmode Connect `4` -linkname '`2`' -subkernel -noinit"

(* first host with one kernel less, since the master kernel is already running there *)
LaunchKernels[RemoteMachine[hosts[[1]], kernelsperhost - 1]];

imin = 2;
imax = Length[hosts];
idelta = 1;

Do[
        LaunchKernels[RemoteMachine[hosts[[i]], kernelsperhost]];
        , {i, imin, imax, idelta}]

primelist = ParallelTable[Prime[k], {k, 1, 20000000}];
Print[primelist]
</code>
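With ''kernelsperhost = 16'' and the two nodes requested in the job script (''-N 2''), the script launches 15 subkernels on the first host (one slot is taken by the controlling kernel) and 16 on the second. A quick shell sketch of that count:

```shell
# Subkernel count: N nodes with kernelsperhost kernels each,
# minus one on the first host where the master kernel already runs.
nodes=2
kernelsperhost=16
total=$(( kernelsperhost - 1 + (nodes - 1) * kernelsperhost ))
echo "$total subkernels"   # 31 subkernels for the 2-node job
```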
  

sbatch job script ''jobPar.sh'':
<code>
#!/bin/bash
#
#SBATCH -J par
#SBATCH -N 2
#SBATCH -L mathematica@vsc

module purge
module load Mathematica/10.0.2 # load desired version

export MATH_BIN=`which math`
#export MATH_PROC=16
scontrol show hostnames $SLURM_NODELIST > nodelist
# execute prolog for getting access to license everywhere
srun hostname

math -run < math-vsc3.m
</code>

Submit the job and check its state with
<code>
[username@l32]$ sbatch jobPar.sh
[username@l32]$ squeue -u username    # check state of your job
</code>
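The ''nodelist'' file that ''scontrol show hostnames $SLURM_NODELIST'' writes, and that the Mathematica script reads with ''Import["nodelist","List"]'', is plain text with one hostname per line. A minimal sketch outside SLURM, with made-up node names:

```shell
# Mimic the nodelist file produced inside the job by
#   scontrol show hostnames $SLURM_NODELIST > nodelist
# The hostnames n0101/n0102 are hypothetical placeholders.
printf 'n0101\nn0102\n' > nodelist
cat nodelist
```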

==== remote kernel connection ====

The setup follows
[[https://cc-mathematik.univie.ac.at/services/vsc3-cluster/remote-mathematica-kernel/|Local mathematica with remote kernel]]
with the following

**Shell command to launch kernel:**
for VSC3+
<code>
/Users/<LOCAL-USERNAME>/Library/tunnel.sh <VSC-USERNAME>@localhost:9998 /opt/sw/x86_64/generic/Mathematica/11.3/bin/math `linkname`
</code>

for VSC4
<code>
/Users/<LOCAL-USERNAME>/Library/tunnel.sh <VSC-USERNAME>@localhost:9998 /opt/sw/vsc4/VSC/x86_64/generic/Mathematica/12.3.1/bin/math `linkname`
</code>
 +
Note that on VSC4, the salloc command (as described on the cc-mathematik.univie.ac.at link above) does not need the Mathematica license; simply leave out the ''-L'' parameter.