CIIRC Computational Cluster Description
The CIIRC computational cluster serves CIIRC researchers, students, and their collaborators for intensive batch computations. It also offers GPU acceleration. The CIIRC cluster is being continuously extended and improved.
The CIIRC cluster is an XSEDE-compatible basic cluster based on the OpenHPC project. It runs the CentOS 7 operating system and uses Slurm as its workload manager and job scheduler. Cluster management and orchestration are handled by the Warewulf toolkit.
User software is managed by EasyBuild, a software build and installation framework, together with the Lmod environment module system. The container platform on the CIIRC cluster is Singularity.
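To illustrate how these components fit together, here is a minimal sketch of a Slurm batch script that requests a GPU, loads software via Lmod, and runs a program inside a Singularity container. The partition name, module name, image path, and script name are placeholders for illustration, not actual CIIRC cluster values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script; partition, module, and file
# names below are assumed examples, not documented cluster values.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu          # assumed partition name
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=01:00:00

# Load software through the Lmod module system (module name is an example)
module load Python

# Run the workload inside a Singularity container;
# --nv makes the host NVIDIA GPU devices and drivers available
singularity exec --nv my_image.sif python train.py
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.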
The core of the cluster consists of five NVIDIA DGX-1 nodes supplemented by other GPU and CPU compute nodes. See the table below for further details:
| # of nodes | 26 |
| CPU cores total | 1872 |
| RAM total | 10.5 TB |
| # of V100-32-MaxQ GPUs (32 GB) | 40 |
| # of A40 GPUs (48 GB) | 24 |
| # of GTX 1080 Ti GPUs (11 GB) | 18 |
| Theoretical CPU performance | 37.5 TFLOPS |
| Theoretical GPU performance (single precision) | 1517 TFLOPS |
Nodes are interconnected by 10 Gbps Ethernet and 100 Gbps EDR InfiniBand. The whole cluster is connected to a 600 TB Isilon NAS storage.
Cluster storages:
- 600 TB storage exported via NFS over 10 Gbps Ethernet, serving home directories and project data.
- 17 TB all-flash storage exported as a BeeGFS parallel filesystem over 100 Gbps EDR InfiniBand, serving as shared scratch for nodes with fast GPUs.
- All nodes contain a local SSD scratch disk.
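A typical pattern for using the node-local SSD scratch is to stage input data from the NFS home storage to the local disk, compute against it, and copy results back before the job ends. The sketch below shows this inside a Slurm job; the scratch directory path and data layout are assumptions, not documented cluster paths.

```shell
#!/bin/bash
# Sketch of staging data through node-local SSD scratch in a Slurm job;
# the scratch path and dataset/result names are assumed examples.
SCRATCH=/tmp/$SLURM_JOB_ID     # assumed local scratch location
mkdir -p "$SCRATCH"

# Stage input from the NFS home storage to the fast local disk
cp -r "$HOME/dataset" "$SCRATCH/"

# ... run the computation against $SCRATCH/dataset,
#     writing its output to $SCRATCH/results ...

# Copy results back to persistent storage and clean up local scratch
cp -r "$SCRATCH/results" "$HOME/project/"
rm -rf "$SCRATCH"
```

Reading and writing on the local SSD avoids repeated round trips to the shared NFS storage during the computation.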
Funding acknowledgment:
Building and continuously extending the CIIRC computational cluster requires substantial financial resources. Funding comes mainly from national and European Commission projects, e.g. the Czech Government investment that created CIIRC, Josef Urban's ERC Consolidator project AI4REASON and his project AI & Reasoning, Josef Šivic's project IMPACT, Robert Babuška's project R4I, etc.
Responsible: Jan Kreps; Last update 2022-11-28