bwUniCluster Support

About

An introductory wiki page on High Performance Computing clusters for users from the University of Applied Sciences Esslingen, also known as Hochschule Esslingen (HE).

HE academic researchers get direct, free-of-charge access to the bwGRiD and bwUniCluster platforms for non-commercial calculations and simulations.

Each cluster has its own infrastructure and uses cluster-specific resource management tools, software packages, libraries, development tools, etc. A user may have to adjust their work procedures for each cluster accordingly.

Note: after you choose a cluster, a cluster-specific navigation menu with the corresponding wiki sections appears in the upper right corner.


bwUniCluster 2.0

As part of the bwHPC project, bwUniCluster is a modern system consisting of more than 840 SMP nodes with 64-bit Intel Xeon processors, serving users from multiple universities in Baden-Württemberg.

Each node has at least two Intel Xeon processors, 96 GB to 3 TB of local memory, local SSDs, network adapters and, optionally, accelerators (NVIDIA Tesla V100). All nodes are connected by a fast InfiniBand interconnect and to an external Lustre-based file system.

More information about the hardware and architecture can be found here.

Workload manager: SLURM.
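
For orientation, here is a minimal sketch of the most common SLURM commands on the cluster (the script name and job ID are placeholders; see the bwHPC wiki for the full list of queues):

# list the available partitions (queues) and their state
sinfo

# submit a batch script (see the ANSYS and STAR-CCM+ examples below)
sbatch my-job.sh

# show your own queued and running jobs
squeue -u $USER

# cancel a job by its job ID
scancel <JOB_ID>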


bwGRiD Esslingen (no longer active)

The Baden-Württemberg Grid (bwGRiD) was part of the D-Grid initiative and provided more than 12,000 cores for research and science at 8 locations in Baden-Württemberg. Participating partners were the Universities of Freiburg, Heidelberg, Tübingen, Mannheim and the Ulm/Konstanz network, the Esslingen University of Applied Sciences, the Karlsruhe Institute of Technology and the High Performance Computing Centre (HLRS) in Stuttgart.

An NEC LX-2400 cluster, it was an older project with outdated hardware to which HE users were granted access. It had about 75 nodes with 8-core Intel Nehalem processors at 2.27 GHz and 2.8 GHz, plus 22 nodes with 32 virtual cores, totaling about 1264 active cores. The nodes had 24 GB and 64 GB of memory, respectively. All systems were without local hard disks.

The file system used was NEC LXFS with 36 TB, a high-performance parallel file system based on Lustre.

Each blade was connected to the local network with Gigabit Ethernet; this network was used for administration and for logging in to the systems. Each blade also had a QDR InfiniBand interface (transfer rate: 40 Gbit/s) for transferring data and calculation results. The InfiniBand network was designed as a hypercube with a total of 192 edge ports. Both the data exchange of parallel programs and the connection to NEC LXFS ran over InfiniBand.

The workload manager was a combination of MOAB/TORQUE and PBS.


Disclaimer

These systems are not designed to collect or back up any data. Therefore, it is highly recommended not to store any data on the clusters that you cannot afford to lose.

For further questions you may contact M. Vögtle at: michael.voegtle[at]hs-esslingen.de

ANSYS on bwUniCluster 2.0

Reaching the HE ANSYS license servers from bwUniCluster over an SSH tunnel.

ANSYS packages connect to the university's FLEXlm servers through two ports: 1055 and 2325.

Besides these two ports, an extra port required by the FLEXlm vendor daemon has to be tunneled as well. In the example below it is stored in the variable DAEMON_PORT and set to an arbitrary value, 49100.

This port is either random or, more commonly, fixed by the license server administrator (for FLEXlm typically via a PORT= entry on the VENDOR line of the license file). In the scripts below it is referred to with the placeholder <DAEMON_PORT>.

The following command creates an SSH master control socket with the respective ports tunneled through comserver.hs-esslingen.de:

Note: the SSH tunnel needs a passwordless connection from the cluster to comserver.hs-esslingen.de via an SSH key (see the sketch after the example below).

DAEMON_PORT=49100
ssh -MS ansys-socket -fnNT -L 2325:lizenz-ansys.hs-esslingen.de:2325 \
-L 1055:lizenz-ansys.hs-esslingen.de:1055 -L \
${DAEMON_PORT}:lizenz-ansys.hs-esslingen.de:${DAEMON_PORT} \
userID@comserver.hs-esslingen.de


where userID is your user ID on comserver.hs-esslingen.de.
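
The passwordless connection mentioned above can be prepared once from the cluster. A minimal sketch, assuming the default key location and that password login to the comserver still works for the initial copy:

# generate a key pair on the cluster (accept the default path)
ssh-keygen -t ed25519

# copy the public key to the comserver (asks for your HE password once)
ssh-copy-id userID@comserver.hs-esslingen.de

# test: this should log in without a password prompt
ssh userID@comserver.hs-esslingen.de exit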

After you finish your simulations, don't forget to close the socket with:

ssh -S ansys-socket -O exit userID@comserver.hs-esslingen.de

ANSYS Fluent sbatch job using HE-managed licenses.

As you may know from bwUniCluster 2.0 Slurm common features, in order to start a job on the cluster you need to pass a batch script to the sbatch command.
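
For example, assuming the script below is saved as fluent-job.sh (the file name is arbitrary), it is submitted and its output followed like this:

# submit the batch script to the scheduler
sbatch fluent-job.sh

# follow the job output once it starts (job ID as reported by sbatch)
tail -f fluent-test-<JOB_ID>.out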

Below is a sample script that starts a FLUENT job on bwUniCluster using ANSYS 2022 R2 (the version loaded by the script) on a single node with 28 processors.

The job uses Esslingen University's ANSYS license manager with the host name lizenz-ansys.hs-esslingen.de.

Note: Value placeholders in the script will be:

  • HE user ID - <USER_ID>
  • FLEXlm Daemon Port - <D_PORT>

The FLEXlm DAEMON_PORT variable will be set to <D_PORT>.

#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --ntasks-per-node=28
# Queue class wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=single
# Maximum run time of job
#SBATCH --time=2:00:00
# Give job a reasonable name
#SBATCH --job-name=fluent-test
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=fluent-test-%j.out
# File name for error output
#SBATCH --error=fluent-test-%j.out
# send an e-mail when a job begins, aborts or ends
#SBATCH --mail-type=ALL
# e-mail address specification
#SBATCH --mail-user=<USER_ID>@hs-esslingen.de

echo "Starting at "
date

# load the software package
module load cae/ansys/2022R2_no_license

HE_USER_ID=<USER_ID>
HE_LIZENZ_SERVER='lizenz-ansys.hs-esslingen.de'
HE_COM_SERVER='comserver.hs-esslingen.de'

# start a SSH tunnel, creating a control socket.
DAEMON_PORT=<D_PORT>
ssh -M -S ansys-socket -fnNT -L 2325:${HE_LIZENZ_SERVER}:2325 \
-L 1055:${HE_LIZENZ_SERVER}:1055 \
-L ${DAEMON_PORT}:${HE_LIZENZ_SERVER}:${DAEMON_PORT} \
${HE_USER_ID}@${HE_COM_SERVER}

# export license environment variables
export ANSYSLMD_LICENSE_FILE=1055@localhost
export ANSYSLI_SERVERS=2325@localhost

# Create the hosts file 'fluent.hosts'
HOSTS="fluent.hosts"
scontrol show hostname ${SLURM_JOB_NODELIST} > ${HOSTS}

# set the number of parallel processes (Slurm tasks)
nrProcs=${SLURM_NTASKS}

echo "number of processes: $nrProcs"

# run fluent in parallel, where fluentJournal.jou is a fluent Journal File

echo "Starting fluent..."
fluent 3d -t$nrProcs -g -env -pib -mpi=openmpi -cnf=${HOSTS} -i fluentJournal.jou &&

# close the SSH control socket
ssh -S ansys-socket -O exit ${HE_USER_ID}@${HE_COM_SERVER}

[[ -f ${HOSTS} ]] && rm -rf ${HOSTS}

echo "Run completed at "
date

Download FLUENT Script


CFX sbatch job on a single node using HE-managed licenses.

The script below starts a CFX5 job on bwUniCluster using ANSYS 2022 R2 (the version loaded by the script) on 40 processors of a single node.

This script can also be downloaded by clicking the links in the page footer.

The job, defined in a CFX5 .def file named cfx_example.def, uses Esslingen University's ANSYS license manager with the host name lizenz-ansys.hs-esslingen.de.

Note: Value placeholders in the script will be:

  • HE user ID - <USER_ID>
  • FLEXlm Daemon Port - <D_PORT>

The FLEXlm DAEMON_PORT variable will be set to <D_PORT>.

#!/bin/bash

# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --ntasks-per-node=40
# Queue class wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=single
# Maximum run time of job
#SBATCH --time=8:00:00
# Give job a reasonable name
#SBATCH --job-name=cfx5-job
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=cfx-test-%j.out
# File name for error output
#SBATCH --error=cfx-test-%j.out
# send an e-mail when a job begins, aborts or ends
#SBATCH --mail-type=ALL
# e-mail address specification
#SBATCH --mail-user=<USER_ID>@hs-esslingen.de

echo "Starting at "
date

# load the software package
module load cae/ansys/2022R2_no_license

HE_USER_ID=<USER_ID>
HE_COM_SERVER='comserver.hs-esslingen.de'
HE_LIZENZ_SERVER='lizenz-ansys.hs-esslingen.de'
INPUT='cfx_example.def'

# start a SSH tunnel, creating a control socket.
DAEMON_PORT=<D_PORT>
SOCKET_NAME="cfx-socket"
[[ -S ${SOCKET_NAME} ]] && rm -f ${SOCKET_NAME}
ssh -M -S ${SOCKET_NAME} -fnNT -L 2325:${HE_LIZENZ_SERVER}:2325 \
-L 1055:${HE_LIZENZ_SERVER}:1055 \
-L ${DAEMON_PORT}:${HE_LIZENZ_SERVER}:${DAEMON_PORT} \
${HE_USER_ID}@${HE_COM_SERVER}

# export license environment variables
export ANSYSLMD_LICENSE_FILE=1055@localhost
export ANSYSLI_SERVERS=2325@localhost

cfx5solve -batch -def $INPUT -part ${SLURM_NTASKS}

# close the SSH control socket
ssh -S ${SOCKET_NAME} -O exit ${HE_USER_ID}@${HE_COM_SERVER}

echo "Run completed at "
date

Download CFX Script


CFX sbatch job on multiple nodes using HE-managed licenses.

The script below starts a CFX5 job on bwUniCluster using ANSYS 2022 R2 (the version loaded by the script) on 4 nodes with 40 tasks each, totaling 160 processors.

This script can also be downloaded by clicking the links in the page footer.

The job, defined in a CFX5 .def file named cfx_example.def, uses Esslingen University's ANSYS license manager with the host name lizenz-ansys.hs-esslingen.de.

Note: Value placeholders in the script will be:

  • HE user ID - <USER_ID>
  • FLEXlm Daemon Port - <D_PORT>

Note: you will also have to load system/ssh_wrapper/0.1 to allow cfx5solve node-to-node communication over ssh.

#!/bin/bash
# Allocate four nodes
#SBATCH --nodes=4
# Number of program instances to be executed
#SBATCH --ntasks-per-node=40
# Queue class wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=multiple
# Maximum run time of job
#SBATCH --time=8:00:00
# Give job a reasonable name
#SBATCH --job-name=cfx5-job
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=cfx-test-%j.out
# File name for error output
#SBATCH --error=cfx-test-%j.out
# send an e-mail when a job begins, aborts or ends
#SBATCH --mail-type=ALL
# e-mail address specification
#SBATCH --mail-user=<USER_ID>@hs-esslingen.de

echo "Starting at "
date

# load the software package
module load system/ssh_wrapper/0.1
module load cae/ansys/2022R2_no_license

HE_USER_ID=<USER_ID>
HE_COM_SERVER='comserver.hs-esslingen.de'
HE_LIZENZ_SERVER='lizenz-ansys.hs-esslingen.de'
INPUT="cfx_example.def"

# start a SSH tunnel, creating a control socket.
DAEMON_PORT=<D_PORT>
SOCKET_NAME="cfx-socket"
[[ -S ${SOCKET_NAME} ]] && rm -f ${SOCKET_NAME}
ssh -M -S ${SOCKET_NAME} -fnNT -L 2325:${HE_LIZENZ_SERVER}:2325 \
-L 1055:${HE_LIZENZ_SERVER}:1055 \
-L ${DAEMON_PORT}:${HE_LIZENZ_SERVER}:${DAEMON_PORT} \
${HE_USER_ID}@${HE_COM_SERVER}

# export license environment variables
export ANSYSLMD_LICENSE_FILE=1055@localhost
export ANSYSLI_SERVERS=2325@localhost

# create hostslist
export jms_nodes=`srun hostname -s`
export hostslist=`echo $jms_nodes | sed "s/ /,/g"`

cfx5solve -batch -def $INPUT -par-dist ${hostslist}

# close the SSH control socket
ssh -S ${SOCKET_NAME} -O exit ${HE_USER_ID}@${HE_COM_SERVER}

echo "Run completed at "
date

Download CFX Script

STAR-CCM+ on bwUniCluster 2.0

Licenses

CD-adapco, the company behind STAR-CCM+, offers cheap or free-of-charge licenses for academic projects.

These licenses are managed by CD-adapco's own license manager and can be used on demand by any user owning a STAR-CCM+ POD-key.

The license server (flex.cd-adapco.com) is already reachable from bwUniCluster, and any user can run jobs by pointing the software to it and supplying their POD-key.
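
In a batch script this amounts to two environment variables, exactly as used in the examples below (replace <POD_KEY> with your own key string):

# CD-adapco license server (port 1999) and your personal POD-key
export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export LM_PROJECT=<POD_KEY>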

To start a STAR-CCM+ job on bwUniCluster, you have to submit a batch script via the sbatch command.

Note that the STAR-CCM+ license server at HE manages all licenses in the university, including those in the PC pools. Because the total number of licenses is limited, using some of them for HPC jobs can block STAR-CCM+ calculations made in-house: the other machines at the university end up in a license queue, waiting for the HPC job to finish.


A video tutorial on how to submit a STAR-CCM+ job on bwUniCluster with licenses from the university server over an SSH tunnel


It is a basic example with voice explanations of how to submit a STAR-CCM+ job and what the lines in a submit script are used for. More details and examples can be found in text form on this page.


Running a STAR-CCM+ Job on a Single Node with a POD-Key License

Description: Learn how to efficiently run a STAR-CCM+ job on a single node with a POD-key license. This guide provides a batch script and step-by-step instructions to execute the job, utilizing 40 processors and STAR-CCM+ v.2021.3.

Optimizing the usage of computing resources is crucial for executing STAR-CCM+ simulations effectively. Running a job on a single node with a POD-key license offers a streamlined approach for specific tasks.

Batch script. The script below starts the STAR-CCM+ job on a single node with 40 processors, using the POD-key license.

Note: assuming the job is started by an Esslingen user with the following placeholders:

  • HE User ID - <HE_USER_ID>
  • POD key - the actual key string in place of <POD_KEY>
#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --ntasks=40
# Queue class wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=single
# Maximum run time of job
#SBATCH --time=4:00:00
# Give job a reasonable name
#SBATCH --job-name=starccm-single
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=logs-%j.out
# File name for error output
#SBATCH --error=logs-%j.out
# send an e-mail when a job begins, aborts or ends
#SBATCH --mail-type=ALL
# e-mail address specification
#SBATCH --mail-user=<HE_USER_ID>@hs-esslingen.de

echo "Starting at "
date

# specify the STAR-CCM+ version to load (available on the cluster)
VERSION="2021.3"

# specify sim case file name
INPUT="test_case.sim"

# specify java macro file name if any
JAVA_FILE=""

# load the available STAR-CCM+ module
module load cae/starccm+/${VERSION}

# set the number of parallel processes (Slurm tasks)
np=${SLURM_NTASKS}
echo "number of processes: $np"

# set license variables: server address and POD key string
export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export LM_PROJECT=<POD_KEY>

# start parallel star-ccm+ job
starccm+ -power -np $np -batch ${JAVA_FILE} ${INPUT}

echo "Run completed at "
date

Download STAR-CCM+ Script


Running a STAR-CCM+ Job on Multi-Nodes with a POD-Key License

Description: Discover how to execute a parallel STAR-CCM+ job across multiple nodes with a POD-key license. This guide provides a batch script and step-by-step instructions to run the job on 4 nodes, each equipped with 40 processors, totaling 160 tasks. The STAR-CCM+ version used is v.2021.3.

Running STAR-CCM+ jobs on multi-nodes with a POD-key license is an efficient way to utilize computing resources and achieve faster simulations.

Batch script. The script below starts the STAR-CCM+ job on 4 nodes with 40 processors each, for a total of 160 tasks, using the POD-key license.

Note: assuming the job is started by an Esslingen user with the following placeholders:

  • HE User ID - <HE_USER_ID>
  • POD key - the actual key string in place of <POD_KEY>
#!/bin/bash
# Allocate nodes
#SBATCH --nodes=4
# Number of program instances to be executed
#SBATCH --ntasks-per-node=40
# Queue class wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=multiple
# Maximum run time of job
#SBATCH --time=4:00:00
# Give job a reasonable name
#SBATCH --job-name=starccm-multi
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=logs-%j.out
# File name for error output
#SBATCH --error=logs-%j.out
# send an e-mail when a job begins, aborts or ends
#SBATCH --mail-type=ALL
# e-mail address specification
#SBATCH --mail-user=<HE_USER_ID>@hs-esslingen.de

echo "Starting at "
date

# specify the STAR-CCM+ version to load (available on the cluster)
VERSION="2021.3"

# specify sim case file name
INPUT="test_case.sim"

# specify java macro file name if any
JAVA_FILE=""

# create machinefile
machinefile=hosts.star
scontrol show hostname ${SLURM_JOB_NODELIST} > ${machinefile}

# load the available STAR-CCM+ module
module load cae/starccm+/${VERSION}

# set the number of parallel processes (Slurm tasks)
np=${SLURM_NTASKS}
echo "number of procs: $np"

# set license variables: server address and POD key string
export CDLMD_LICENSE_FILE=1999@flex.cd-adapco.com
export LM_PROJECT=<POD_KEY>

# start parallel star-ccm+ job
starccm+ -power -np ${np} -rsh ssh -mpi openmpi -machinefile ${machinefile} -batch ${JAVA_FILE} ${INPUT}

[[ -f ${machinefile} ]] && rm -f ${machinefile}

echo "Run completed at "
date

Download STAR-CCM+ Script

Jupyter on bwUniCluster 2.0

Accessing

To view which Jupyter modules are available on the cluster, run the following command:

$ module avail jupyter


To use Jupyter on the HPC resources of SCC, the access requirements for bwUniCluster 2.0 apply, and registration is required. Please note that you should have completed the registration and tested your login once using Secure Shell (SSH).

Logging in to JupyterHub requires your username, your password and two-factor authentication.

If the maximum selectable time of 4 hours in JupyterHub is not sufficient, an interactive session can also be started on the compute nodes (salloc command). Within the interactive session, the corresponding Jupyter module must then be loaded using the module load command, as sketched below.
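
A minimal sketch of such an interactive session (partition, time limit and the module name are placeholders; pick a module listed by module avail jupyter):

# request an interactive session on a compute node for two hours
salloc --partition=single --ntasks=1 --time=2:00:00

# inside the session, load one of the available Jupyter modules
module load <JUPYTER_MODULE>

# start a notebook server without a browser (access requires port forwarding)
jupyter notebook --no-browser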

For further information, visit the Jupyter Wiki Page.
