===== forge = map + ddt =====
==== Synopsis: ====
**<color #ed1c24>map</color>** and **<color #ed1c24>ddt</color>** are ARM's (formerly Allinea's) advanced tools for performance analysis and debugging, see [[https://developer.arm.com/tools-and-software/server-and-hpc/debug-and-profile/arm-forge]].
Licenses for up to 512 parallel tasks are available. Of additional note, [[doku:perf-report|perf-report]] --- a related lightweight profiling tool --- has now been integrated into forge in more recent releases.
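Since ''perf-report'' ships with forge, a quick text/HTML summary of a run can be produced without opening the full gui. The following is only a sketch, assuming the ''arm/20.1_FORGE'' module and the MPI launch line used further below:
<code>
module load arm/20.1_FORGE
perf-report srun --mpi=pmi2 -n 64 ./a.out     # writes a .txt and a .html summary next to the binary
</code>
The batch-script fragment below shows the corresponding ''map --profile'' invocation for a full profile.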
  
   #SBATCH -J map
   #SBATCH -N 4
   #SBATCH --ntasks-per-node 16
   #SBATCH --ntasks-per-core  1
   
   module purge
   module load  intel/18  intel-mpi/2018  arm/20.1_FORGE
   
   map --profile srun --jobid $SLURM_JOB_ID --mpi=pmi2 -n 64 ./a.out
which generates a *.map file (note the mention of #tasks and #nodes together with the date/time stamp in the filename) that may then be analyzed via the gui, i.e.
  
   ssh vsc4.vsc.ac.at -l my_uid -X
   cd wherever/the/map/file/may/be
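As a minimal sketch of the final step, load forge on the login node and open the profile in the gui (the file name below is only a placeholder; the real one encodes #tasks, #nodes and the date/time stamp as mentioned above):
<code>
module load arm/20.1_FORGE
map ./my_profile.map     # opens the map gui, requires X forwarding (-X above)
</code>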
        
==== Usage of ddt: ====
Debugging with ''ddt'' is currently limited to the __Remote Launch__ option.
There are two ways to launch this type of ''ddt'' session (see below); in either case it is best to run ''ddt'' sessions on separate compute nodes.
  
=== ddt (semi-interactive via sbatch): ===
The following steps need to be carried out:
<code>
ssh vsc3.vsc.ac.at -l my_uid -X
my_uid@l33$  module load allinea/18.2_FORGE
my_uid@l33$  rm -rf ~/.allinea/     # to get rid of obsolete configurations from previous sessions
my_uid@l33$  ddt &                  # gui should open
                ... select 'Remote Launch - Configure'
                ... click  'Add'
                ... set my_uid@l33.vsc.ac.at as 'Host Name', depending on which l3[1-5] node we currently are
                ... set 'Remote Installation Directory' to /opt/sw/x86_64/glibc-2.17/ivybridge-ep/allinea/18.2_FORGE
                ... check it with 'Test Remote Launch'   ( will ask for a Password/OTP, then monitor successful testing )
                ... click OK twice to close the dialogues
                ... click Close to exit from the Configure menu
                ... next really select 'Remote Launch' by clicking the name tag that was auto-assigned
                    ( will again ask for a Password/OTP, then the licence label should come up ok in the
                    lower left corner and the connecting client should appear in the lower right corner )
</code>
<code>
ssh vsc3.vsc.ac.at -l my_uid        # a second terminal will be needed to actually start the debug session
my_uid@l35$  cd wherever/my/app/may/be
my_uid@l35$  module load intel/16  intel-mpi/5       # or whatever other suite of MPI
my_uid@l35$  mpiicc -g -O0 my_app.c
my_uid@l35$  vi  ./run.ddt.slrm.scrpt
                ... insert the usual '#SBATCH ...' commands
                ... don't forget to include 'module load allinea/18.2_FORGE' and '#SBATCH -L allinea@vsc'
                ... the actual program execution should be prefixed with 'ddt --connect --np=32 ...', i.e. better avoid express launch style
my_uid@l35$  sbatch ./run.ddt.slrm.scrpt
                ( in the other ddt-window sent into the background initially, a separate window pops up saying
                'Reverse Connect Request', which needs to be accepted; then the usual ddt options will become
                available and the actual session may be launched by clicking 'Run' )
</code>
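As a rough sketch, such a ''run.ddt.slrm.scrpt'' might look like the following (node/task counts, module versions and the binary name are assumptions taken from the lines above, not prescriptions):
<code>
#!/bin/bash
#SBATCH -J ddt
#SBATCH -N 2
#SBATCH --ntasks-per-node 16
#SBATCH -L allinea@vsc

module purge
module load intel/16  intel-mpi/5  allinea/18.2_FORGE

# reverse-connect to the already running ddt gui instead of express launch
ddt --connect --np=32 ./a.out
</code>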
  
=== ddt (fully interactive via salloc): ===
The following steps need to be carried out:
<code>
ssh vsc3.vsc.ac.at -l my_uid -X
my_uid@l33$  cd wherever/my/app/may/be
my_uid@l33$  salloc -N 4 -L allinea@vsc
my_uid@l33$  echo $SLURM_JOB_ID      # just to figure out the current job ID, say it's 8909346
my_uid@l33$  srun --jobid 8909346 -n 4 hostname | tee ./machines.txt
                ( this is important ! it looks like a redundant command but will actually fix a lot of the
                prerequisites usually taken care of in the SLURM prologue of regular submit scripts,
                one of them being provisioning of required licenses )
                ... let's assume we got n305-[044,057,073,074] which should now be listed inside file 'machines.txt'
my_uid@l33$  rm -rf ~/.allinea/      # to get rid of obsolete configurations from previous sessions
my_uid@l33$  module purge
my_uid@l33$  module load  intel/18  intel-mpi/2018  allinea/20.1_FORGE     # or whatever other suite of MPI
my_uid@l33$  mpiicc -g -O0 my_app.c
my_uid@l33$  ddt &                   # gui should open
                ... select 'Remote Launch - Configure'
                ... click  'Add'
                ... set my_uid@n305-044 as 'Host Name' or any other node from the above list
                ... set 'Remote Installation Directory' to /opt/sw/x86_64/glibc-2.17/ivybridge-ep/allinea/20.1_FORGE
                ... keep auto-selected defaults for the rest, then check it with 'Test Remote Launch'   ( should be ok )
                ... click OK twice to close the dialogues
                ... click Close to exit from the Configure menu
                ... next really select 'Remote Launch' by clicking the name tag that was auto-assigned above
                    ( licence label should be ok in the lower left corner and the hostname of the connecting
                    client should appear in the lower right corner )

ssh vsc3.vsc.ac.at -l my_uid         # a second terminal will be needed to actually start the debug session
my_uid@l34$  ssh n305-044            # log into the compute node that was selected/prepared above for remote launch
my_uid@n305-044$  module purge
my_uid@n305-044$  module load  intel/18  intel-mpi/2018  allinea/20.1_FORGE
my_uid@n305-044$  cd wherever/my/app/may/be
my_uid@n305-044$  srun --jobid 8909346 -n 16 hostname     # just a dummy check to see whether all is set up and working correctly
my_uid@n305-044$  ddt --connect srun --jobid 8909346 --mpi=pmi2 -n 64 ./a.out -arg1 -arg2
                ( in the initial ddt-window a dialogue will pop up prompting for a Reverse Connection request;
                accept it and click Run and the usual debug session will start )
</code>
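When the debugging is done, one common way to wrap up is to release the interactive allocation obtained via ''salloc'' above (job ID as in the example):
<code>
my_uid@n305-044$  exit               # leave the compute node
my_uid@l33$       exit               # leave the salloc shell, which releases the allocation
my_uid@l33$       scancel 8909346    # alternatively, cancel the job explicitly
</code>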
        
==== Further Reading: ====