
HSUper resources are managed using Slurm Workload Manager. To submit jobs, you need to write and schedule job scripts that describe how to execute a specific compute task. Examples of submitting jobs to HSUper can be found on this page.

How to Submit Jobs

Exemplary Slurm job scripts are provided further below. They can be scheduled using sbatch, for example, with sbatch helloworld-omp.slurm. For details on Slurm, refer to its documentation.
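
In practice, submitting and monitoring a job works as follows (sbatch, squeue, and scancel are standard Slurm commands; the job ID is printed by sbatch on submission):

sbatch helloworld-omp.slurm   # submit the job script; Slurm prints the assigned job ID
squeue -u $USER               # list your own pending and running jobs
scancel JOBID                 # cancel a job if necessary (replace JOBID with the ID printed by sbatch)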

HSUper provides different types of nodes (see Technical Specifications) and different partitions in which compute jobs can be run. Make sure you submit your job to a fitting available partition, considering its limitations. A detailed partition list can be found here. Also be aware of partition-independent constraints.

Before executing your software on HSUper, double-check that a time limit is set (it should not differ drastically from the actual run time) and that the job is capable of exploiting the requested resources efficiently:

  • Is your software parallelized? If so, is it shared-memory parallelized (e.g. with OpenMP, Intel TBB, or Cilk)? Then you can use a single compute node, but typically not several of them. Or is it distributed-memory parallelized (e.g. with MPI or a PGAS approach such as Fortran 2008 coarrays, UPC++, or Chapel)? Then your program can potentially also run on multiple compute nodes at a time.

  • Does your software support execution on graphics cards (GPUs)? If yes, you might want to consider using the GPU nodes of HSUper.

  • Is your application extremely memory-intensive? If it requires even more than 256 GB of memory and is not parallelized for distributed-memory systems, you might still be able to execute your software on the fat memory nodes of HSUper. Note: this should be the exception; distributed-memory parallelism is highly recommended for most applications!

  • Does your software only require a subset of the resources of a single node? Consider using the small_shared partition (see the sketch after this list).
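
For orientation, a sub-node request on small_shared could look like the following minimal sketch. The partition name is taken from the partition list linked above; the core count and time limit are purely illustrative, so check the actual partition limits before using them. Complete, downloadable job script examples follow in the next section.

#!/bin/bash
#SBATCH --job-name=small-example      # user-defined job name
#SBATCH --partition=small_shared      # shared partition for jobs that need only part of a node
#SBATCH --nodes=1                     # a single compute node, shared with other jobs
#SBATCH --ntasks=1                    # one process
#SBATCH --cpus-per-task=4             # only 4 of the 72 cores are requested (illustrative)
#SBATCH --time=00:10:00               # wall clock limit (illustrative)
#SBATCH --output=small-example_%j.log # log file which will contain all output

cd $HOME/helloworld
export OMP_NUM_THREADS=4
./helloworld_gnu_omp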

Job Script Examples

In the following, you find job script examples that execute a simple C++ test program on HSUper. The test program is shown below; you may simply copy-paste it into a file helloworld.cpp:

Test Program

#include <iostream>
#ifdef MYMPI
#include <mpi.h>
#endif
#ifdef MYOMP
#include <omp.h>
#endif

int main(int argc, char *argv[])
{
    int rank = 0;
    int size = 1;
    int threads = 1;
#ifdef MYMPI
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
#endif
#ifdef MYOMP
    threads = omp_get_max_threads();
#endif

    if (rank == 0)
    {
        std::cout << "Hello World on root rank " << rank << std::endl;
        std::cout << "Total number of ranks: " << size << std::endl;
        std::cout << "Number of threads/rank: " << threads << std::endl;
    }
#ifdef MYMPI
    MPI_Finalize();
#endif
    return 0;
}

To run the examples, you may compile the program in different variants:

  • Sequential (not parallel):

    g++ helloworld.cpp -o helloworld_gnu

    remark: sequential programs are not well-suited for execution on HSUper! Please make sure to run parallelized programs to efficiently exploit the given HPC hardware. For a corresponding job script, you can simply adapt the shared-memory example below, omitting the thread and OpenMP settings.

  • Shared-memory parallel using OpenMP:

    g++ -DMYOMP -fopenmp helloworld.cpp -lgomp -o helloworld_gnu_omp

    remark: this is sufficient to use up to all cores (and hardware threads) of a single compute node! Remember that HSUper is particularly well-suited for massively parallel compute jobs leveraging many compute nodes at a time (see the next bullet).

  • Distributed-memory parallel using MPI:

    mpicxx -DMYMPI helloworld.cpp -lmpi -o helloworld_gnu_mpi

    remark: this is sufficient to use all cores and, potentially, several nodes of HSUper. Make sure to load a suitable MPI module beforehand (see the section on software: module load … with … representing an existing MPI module; check available modules via module avail). A short example follows after this list.

  • Distributed- and shared-memory parallel using both MPI and OpenMP:

    mpicxx -DMYMPI -DMYOMP -fopenmp helloworld.cpp -lmpi -lgomp -o helloworld_gnu_mpi_omp
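
Before compiling (and later running) the MPI variants, an MPI module has to be available in your environment. The following commands are a minimal sketch; the exact module names depend on what is installed on HSUper, so check with module avail or ml spider first:

module avail                  # list all available modules
ml spider openmpi             # search for OpenMPI installations (Lmod)
module load mpi               # load an MPI module (replace with a specific module name if needed)
mpicxx -DMYMPI helloworld.cpp -lmpi -o helloworld_gnu_mpi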

Single-node shared-memory parallel job using OpenMP

The following batch script describes a job using 1 compute node (equipped with 72 cores), on which the OpenMP-parallel program is executed with 13 threads. A 2-minute time limit is set (the program should actually finish within a few seconds). Output is written to the file helloworld-omp_%j.log, where %j corresponds to the job ID.

Please make sure to adapt the path in the script before you submit the job. The example assumes that all files are located in the folder helloworld in your home directory. You can download the file here: helloworld-omp.slurm.

#!/bin/bash
#SBATCH --job-name=helloworld-omp      # specifies a user-defined job name
#SBATCH --nodes=1                      # number of compute nodes to be used
#SBATCH --ntasks=1                     # number of MPI processes
#SBATCH --partition=small              # partition (small_shared, small, medium, small_fat, small_gpu)
                                       # special partitions: large (for selected users only!)
                                       # job configuration testing partition: dev
#SBATCH --cpus-per-task=72             # number of cores per process
#SBATCH --time=00:02:00                # maximum wall clock limit for job execution
#SBATCH --output=helloworld-omp_%j.log # log file which will contain all output

#< commands to be executed >
cd $HOME/helloworld               #cd /beegfs/home/YOUR_PATH_TO_COMPILED_PROGRAM/helloworld
export OMP_NUM_THREADS=13
./helloworld_gnu_omp
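
After the job has completed, the log file helloworld-omp_JOBID.log should contain output roughly along these lines (derived from the test program above; without MPI, the rank count defaults to 1):

Hello World on root rank 0
Total number of ranks: 1
Number of threads/rank: 13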

Multi-node distributed-memory parallel job using MPI

The following batch script describes a job using 3 compute nodes. On every compute node, 36 MPI processes are launched, and one core is reserved per MPI process. A 2-minute time limit is set (the program should actually finish within a few seconds). Output is written to the file helloworld-mpi_%j.log, where %j corresponds to the job ID.

Please make sure to adapt the path in the script before you submit the job. The example assumes that all files are located in the folder helloworld in your home directory. You may search for MPI implementations using ml spider mpi or ml spider openmpi. You can also download the file here: helloworld-mpi.slurm.

#!/bin/bash
#SBATCH --job-name=helloworld-mpi      # specifies a user-defined job name
#SBATCH --nodes=3                      # number of compute nodes to be used
#SBATCH --ntasks-per-node=36           # number of MPI processes per node
#SBATCH --partition=small              # partition (small_shared, small, medium, small_fat, small_gpu)
                                       # special partitions: large (for selected users only!)
                                       # job configuration testing partition: dev
#SBATCH --cpus-per-task=1              # number of cores per process
#SBATCH --time=00:02:00                # maximum wall clock limit for job execution
#SBATCH --output=helloworld-mpi_%j.log # log file which will contain all output
### some additional information (you can delete those lines)
echo "#==================================================#"
echo " num nodes: " $SLURM_JOB_NUM_NODES
echo " num tasks: " $SLURM_NTASKS
echo " cpus per task: " $SLURM_CPUS_PER_TASK
echo " nodes used: " $SLURM_JOB_NODELIST
echo " job cpus used: " $SLURM_JOB_CPUS_PER_NODE
echo "#==================================================#"

# commands to be executed
# modify the following line to load a specific MPI implementation
module load mpi
cd $HOME/helloworld               #cd /beegfs/home/YOUR_PATH_TO_COMPILED_PROGRAM/helloworld
# use the SLURM variable "ntasks" to set the number of MPI processes;
# here, ntasks is computed from "nodes" and "ntasks-per-node"; alternatively
# specify, e.g., ntasks directly (instead of ntasks-per-node)
mpirun -np $SLURM_NTASKS ./helloworld_gnu_mpi
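
With 3 nodes and 36 tasks per node, $SLURM_NTASKS evaluates to 108, so the log file should contain output roughly along these lines (after the informational echo block; values derived from the test program above):

Hello World on root rank 0
Total number of ranks: 108
Number of threads/rank: 1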

Multi-node distributed-memory/shared-memory parallel job using MPI/OpenMP

The following batch script describes a job using 3 compute nodes. On every compute node, 2 MPI processes are launched, and each process is assigned 36 cores. A 2-minute time limit is set (the program should actually finish within a few seconds). Output is written to the file helloworld-mpi-omp_%j.log, where %j corresponds to the job ID.

Please make sure to adapt the path in the script before you submit the job. The example assumes that all files are located in the folder helloworld in your home directory. You may search for MPI implementations using ml spider mpi or ml spider openmpi. You can also download the file here: helloworld-mpi-omp.slurm.

#!/bin/bash
#SBATCH --job-name=helloworld-mpi-omp      # specifies a user-defined job name
#SBATCH --nodes=3                          # number of compute nodes to be used
#SBATCH --ntasks-per-node=2                # number of MPI processes per node
#SBATCH --partition=small                  # partition (small_shared, small, medium, small_fat, small_gpu)
                                           # special partitions: large (for selected users only!)
                                           # job configuration testing partition: dev
#SBATCH --cpus-per-task=36                 # number of cores per process
#SBATCH --time=00:02:00                    # maximum wall clock limit for job execution
#SBATCH --output=helloworld-mpi-omp_%j.log # log file which will contain all output
### some additional information (you can delete those lines)
echo "#==================================================#"
echo " num nodes: " $SLURM_JOB_NUM_NODES
echo " num tasks: " $SLURM_NTASKS
echo " cpus per task: " $SLURM_CPUS_PER_TASK
echo " nodes used: " $SLURM_JOB_NODELIST
echo " job cpus used: " $SLURM_JOB_CPUS_PER_NODE
echo "#==================================================#"
# commands to be executed
# modify the following line to load a specific MPI implementation
module load mpi
cd $HOME/helloworld                   #cd /beegfs/home/YOUR_PATH_TO_COMPILED_PROGRAM/helloworld
# use the SLURM variable "ntasks" to set the number of MPI processes;
# here, ntasks is computed from "nodes" and "ntasks-per-node"
export OMP_NUM_THREADS=36
mpirun -np $SLURM_NTASKS ./helloworld_gnu_mpi_omp
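
Here, $SLURM_NTASKS evaluates to 6 (3 nodes times 2 tasks per node), and each rank runs 36 OpenMP threads, so the log file should contain output roughly along these lines (after the informational echo block):

Hello World on root rank 0
Total number of ranks: 6
Number of threads/rank: 36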

All example files in one zip