1. Log in to your university's designated login server via SSH:
    # Uni Wien
    ssh <username>@vsc.univie.ac.at
    
    # TU Wien
    ssh <username>@vsc.tuwien.ac.at
    
    # BOKU Wien
    ssh <username>@vsc.boku.ac.at
  2. Transfer your programs and data/input files to your home directory.
        scp <filename> <username>@vsc.univie.ac.at:~/
        
  3. (Re-)Compile your application. Please use the latest Intel MPI environment as described in the section MPI Environment.
  4. Create a job script. It starts with directives for the Sun Grid Engine (SGE) batch system:
#$ -N <job_name>
#$ -pe mpich <slots>
#$ -V
#$ -l h_rt=hh:mm:ss
#$ -M <email address to notify of job events>
#$ -m beas  # all job events sent via email 
  1. “<job_name>” is a freely chosen descriptive name.
  2. “<slots>” is the number of processor cores that you want to use for the calculation. To ensure exclusive reservation of the compute nodes for your job, the value of “<slots>” has to be a multiple of 8.
  3. “-V” declares that all environment variables in the qsub command's environment are to be exported to the batch job.
  4. “-l” specifies the job's maximum runtime. This explicit specification is particularly advisable for jobs with short run times, i.e., several hours or even minutes, as it can reduce the time spent in the queue; see also the section on maximum runtime specification.
  5. “-M <email address>; -m beas” requests e-mail notifications concerning job events (b .. beginning, e .. end, a .. abort or reschedule, s .. suspend).
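The multiple-of-8 rule for “<slots>” can be checked before submitting. The following sketch illustrates the check; the function name slots_ok is hypothetical, not part of SGE:

```shell
# Hypothetical helper: verify that a requested slot count is a
# positive multiple of 8, as required for exclusive node reservation.
slots_ok() {
    [ "$1" -gt 0 ] && [ $(( $1 % 8 )) -eq 0 ]
}

slots_ok 32 && echo "32 slots: ok"
slots_ok 12 || echo "12 slots: not a multiple of 8"
```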

Run the executable

The job can be started in several ways:

  1. as a single-core job (no MPI) on one core:
    ./<executable>
  2. as a set of parallel single-core (no MPI) tasks on several cores (see also Sequential code),
  3. as an MPI-enabled application:
    mpirun -m $TMPDIR/machines -np $NSLOTS <executable>

    Here “<executable>” is to be replaced by the path of the MPI-enabled application.

Please note that the particular options to mpirun depend on the MPI version that you use. Current Intel MPI versions, for example, require the option -machinefile instead of -m:

mpirun  -machinefile $TMPDIR/machines -np $NSLOTS <executable>

Please always check for the correct options with

mpirun -help
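Such a check can also be scripted. The sketch below defaults to the Intel MPI spelling and falls back to -m only if the local mpirun's help text does not mention -machinefile; the variable name MF_OPT is an assumption for illustration:

```shell
# Sketch: choose the machinefile option understood by the local mpirun.
# Defaults to -machinefile (Intel MPI); falls back to -m for MPI
# versions whose help text does not list -machinefile.
MF_OPT=-machinefile
if command -v mpirun >/dev/null 2>&1; then
    mpirun -help 2>&1 | grep -q -- '-machinefile' || MF_OPT=-m
fi
echo "using option: $MF_OPT"
# mpirun "$MF_OPT" $TMPDIR/machines -np $NSLOTS <executable>
```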

Here is an example job script requesting 32 processor cores, which will run for a maximum of 3 hours and send e-mails at the beginning and at the end of the job:

#$ -N hitchhiker
#$ -pe mpich 32
#$ -V
#$ -M my.name@example.at
#$ -m be
#$ -l h_rt=03:00:00

mpirun -machinefile $TMPDIR/machines -np $NSLOTS ./myjob
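To try this out, the example above can be written to a job file directly from the shell; the file name hitchhiker.job is an arbitrary choice:

```shell
# Write the example job script to a file (the name is arbitrary).
# The quoted 'EOF' keeps $TMPDIR and $NSLOTS unexpanded, as required.
cat > hitchhiker.job <<'EOF'
#$ -N hitchhiker
#$ -pe mpich 32
#$ -V
#$ -M my.name@example.at
#$ -m be
#$ -l h_rt=03:00:00

mpirun -machinefile $TMPDIR/machines -np $NSLOTS ./myjob
EOF
```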
  1. Submit the job:
    qsub <job_file>

    where “<job_file>” is the name of the file you just created.

  2. Check if and where your job has been scheduled:
    qstat
  3. Inspect the job output. Assuming your job was assigned the id “42” and your job's name was “hitchhiker”, you should be able to find the following files in the directory you started it from:
    $ ls -l
    hitchhiker.o42
    hitchhiker.e42
    hitchhiker.po42
    hitchhiker.pe42

    In this example hitchhiker.o42 contains the output of your job. hitchhiker.e42 contains possible error messages. In hitchhiker.po42 and hitchhiker.pe42 you might find additional information related to the parallel computing environment.

  4. Delete a job:
    $ qdel <job_id>
  5. View all jobs in the queue:
    $ qstat -u \*
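The output-file naming convention from step 3 (<job_name>.<stream><job_id>, with stream o, e, po, or pe) can be summarized as a small helper; the function name sge_outfile is hypothetical:

```shell
# Hypothetical helper: construct SGE output file names of the form
# <job_name>.<stream><job_id>, where stream is o, e, po, or pe.
sge_outfile() {
    printf '%s.%s%s\n' "$1" "$3" "$2"
}

sge_outfile hitchhiker 42 o    # hitchhiker.o42
sge_outfile hitchhiker 42 pe   # hitchhiker.pe42
```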

For advanced topics see also Sun Grid Engine (SGE).

  • doku/vsc1quickstart.txt
  • Last modified: 2014/11/04 13:11
  • by ir