===== forge = map + ddt =====
==== Synopsis: ====
<html> <font color=#cc3300><b>map</b></font> </html> and <html> <font color=#cc3300><b>ddt</b></font> </html> are ARM's (formerly Allinea's) advanced tools for performance analysis and debugging, see [[https://developer.arm.com/tools-and-software/server-and-hpc/debug-and-profile/arm-forge]].
Licenses for up to 512 parallel tasks are available. Of additional note, [[doku:perf-report|perf-report]], a related lightweight profiling tool, has now been integrated into forge in more recent releases.
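
For a quick first overview, ''perf-report'' can be wrapped around an MPI launch in much the same way as ''map'' below; a minimal sketch, assuming the module and launch line from the job script in the next section (exact output file names may differ, see [[doku:perf-report|perf-report]] for details):

   module load arm/20.1_FORGE
   perf-report srun --jobid $SLURM_JOB_ID --mpi=pmi2 -n 64 ./a.out    ( writes a plain-text and an HTML summary next to the executable )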
  
  
==== Usage of map: ====

   #!/bin/bash
   #SBATCH -J map
   #SBATCH -N 4
   #SBATCH --ntasks-per-node 16
   #SBATCH --ntasks-per-core  1
   
   module purge
   module load  intel/18  intel-mpi/2018  arm/20.1_FORGE
   
   map --profile srun --jobid $SLURM_JOB_ID --mpi=pmi2 -n 64 ./a.out
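
Assuming the script above is saved as ''map_job.sh'' (an illustrative name), it is submitted in the usual way:

   sbatch ./map_job.sh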
which generates a *.map file (note the mention of #tasks and #nodes together with the date/time stamp in the filename) that may then be analyzed via the GUI, i.e.
  
   ssh vsc4.vsc.ac.at -l my_uid -X
   cd wherever/the/map/file/may/be
        
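For example (a minimal sketch; the module name is taken from the job script above and the ''.map'' file name is only illustrative):

   module load arm/20.1_FORGE
   map ./a.out_64p_4n_2022-11-04_10-14.map &     ( opens the profile in the map GUI )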
  
==== Usage of ddt: ====
Debugging with ''ddt'' is currently limited to the __Remote Launch__ option.
<html><!-- There are two ways to launch this type of ''ddt''-sessions: --></html>
It is best to launch ''ddt'' sessions on separate compute nodes, as described in the walkthrough below.
  
<html><!-- === 1.) ddt (semi-interactive via sbatch): === --></html>
<html><!-- The following steps need to be carried out: --></html>
<html><!-- --></html>
<html><!--    ssh vsc3.vsc.ac.at -l my_uid -X --></html>
<html><!--    my_uid@l33$  module load allinea/18.2_FORGE --></html>
<html><!--    my_uid@l33$  rm -rf ~/.allinea/  ( to get rid of obsolete configurations from previous sessions ) --></html>
<html><!--    my_uid@l33$  ddt & --></html>
<html><!--                 ... select 'Remote Launch - Configure' --></html>
<html><!--                 ... click  'Add' --></html>
<html><!--                 ... set my_uid@l33.vsc.ac.at as 'Host Name' depending on which l3[1-5] node we currently are --></html>
<html><!--                 ... set 'Remote Installation Directory' to /opt/sw/x86_64/glibc-2.17/ivybridge-ep/allinea/18.2_FORGE --></html>
<html><!--                 ... check it with 'Test Remote Launch'     ( will ask for a Password/OTP then monitor successful testing ) --></html>
<html><!--                 ... click OK twice to close the dialogues --></html>
<html><!--                 ... click Close to exit from the Configure menu --></html>
<html><!--                 ... next really select 'Remote Launch' by clicking the name tag that was auto-assigned   ( will again ask for a Password/OTP, then the licence label should come up ok in the lower left corner and the connecting client should appear in the lower right corner ) --></html>
<html><!-- --></html>
<html><!--    ssh vsc3.vsc.ac.at -l my_uid   ( a second terminal will be needed to actually start the debug session ) --></html>
<html><!--    my_uid@l35$  cd wherever/my/app/may/be --></html>
<html><!--    my_uid@l35$  module load intel/16  intel-mpi/ ( or whatever else suite of MPI ) --></html>
<html><!--    my_uid@l35$  mpiicc -g -O0 my_app.c --></html>
<html><!--    my_uid@l35$  vi  ./run.ddt.slrm.scrpt --></html>
<html><!--                 ... insert the usual '#SBATCH ...' commands --></html>
<html><!--                 ... don't forget to include 'module load allinea/18.2_FORGE' and '#SBATCH -L allinea@vsc' --></html>
<html><!--                 ... the actual program execution should be prefixed with 'ddt --connect   --np=32 ...' ie better avoid express launch style --></html>
<html><!--    my_uid@l35$  sbatch ./run.ddt.slrm.scrpt  ( in the other ddt-window sent into the background initially, a separate window pops up saying 'Reverse Connect Request' which needs to be accepted, then the usual ddt options will become available and the actual session may be launched by clicking 'Run' ) --></html>
  
  
=== ddt (fully interactive via salloc): ===
The following steps need to be carried out:
  
   ssh vsc3.vsc.ac.at -l my_uid -X
   my_uid@l33$  cd wherever/my/app/may/be
   my_uid@l33$  salloc -N 4 -L allinea@vsc
   my_uid@l33$  echo $SLURM_JOB_ID    ( just to figure out the current job ID, say it's 8909346 )
   my_uid@l33$  srun --jobid 8909346 -n 4 hostname | tee ./machines.txt    ( this is important! It looks like a redundant command, but it actually takes care of several prerequisites that are normally handled in the SLURM prologue of regular submit scripts, one of them being the provisioning of required licenses )
                ... let's assume we got n305-[044,057,073,074] which should now be listed inside file 'machines.txt'
   my_uid@l33$  rm -rf ~/.allinea/   ( to get rid of obsolete configurations from previous sessions )
   my_uid@l33$  module purge
   my_uid@l33$  module load  intel/18  intel-mpi/2018  allinea/20.1_FORGE   ( or whichever other MPI suite )
   my_uid@l33$  mpiicc -g -O0 my_app.c
   my_uid@l33$  ddt &     ( the GUI should open )
                ... select 'Remote Launch - Configure'
                ... click  'Add'
                ... set my_uid@n305-044 as 'Host Name' or any other node from the above list
                ... set 'Remote Installation Directory' to /opt/sw/x86_64/glibc-2.17/ivybridge-ep/allinea/20.1_FORGE
                ... keep auto-selected defaults for the rest, then check it with 'Test Remote Launch'     ( should be ok )
                ... click OK twice to close the dialogues
                ... click Close to exit from the Configure menu
                ... next really select 'Remote Launch' by clicking the name tag that was auto-assigned above   ( the licence label should be ok in the lower left corner and the hostname of the connecting client should appear in the lower right corner )
                                  
   ssh vsc3.vsc.ac.at -l my_uid   ( a second terminal will be needed to actually start the debug session )
   my_uid@l34$  ssh n305-044       ( log into the compute node that was selected/prepared above for remote launch )
   my_uid@n305-044$  module purge
   my_uid@n305-044$  module load  intel/18  intel-mpi/2018  allinea/20.1_FORGE
   my_uid@n305-044$  cd wherever/my/app/may/be
   my_uid@n305-044$  srun --jobid 8909346 -n 16 hostname    ( just a dummy check to see whether all is set up and working correctly )
   my_uid@n305-044$  ddt --connect srun --jobid 8909346 --mpi=pmi2 -n 64 ./a.out -arg1 -arg2   ( in the initial ddt window a dialogue will pop up prompting for a 'Reverse Connect Request'; accept it, click 'Run', and the usual debug session will start )
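
Once the debugging is done, the resources should be released again; a brief sketch (8909346 is the example job ID from above):

   my_uid@n305-044$  exit       ( leave the compute node )
   my_uid@l33$  exit            ( leave the salloc shell, which releases the allocation; alternatively  scancel 8909346 )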
  
        
==== Further Reading: ====
''/opt/sw/x86_64/glibc-2.17/ivybridge-ep/allinea/20.1_FORGE/doc/userguide-forge.pdf''
        
        