HPC in Germany - Who? What? Where?
Center for Scientific Computing
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_gufam_loewecsc.jpg)
823 TFlop/s (Peak Performance), 69.00 TiB (Main Memory)
848 Nodes (QDR-Infiniband, FDR-Infiniband), 18,960 CPU Cores (Intel, AMD)
700 GPGPUs (AMD)
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_gufam_fuchs.jpg)
41 TFlop/s (Peak Performance), 18.00 TiB (Main Memory)
358 Nodes (QDR-Infiniband), 6,456 CPU Cores (AMD)
Deutscher Wetterdienst
![HPC-System LU](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-LU_240513052031.jpg)
11,220 TFlop/s (Peak Performance), 216.00 TiB (Main Memory)
508 Nodes, 36,864 CPU Cores (NEC)
8,630 TFlop/s (Peak Performance), 166.00 TiB (Main Memory)
391 Nodes, 28,352 CPU Cores (NEC)
Deutsches Elektronen Synchrotron
2,321 TFlop/s (Peak Performance), 393.00 TiB (Main Memory)
764 Nodes, 26,732 CPU Cores (Intel, AMD)
282 GPGPUs (Nvidia)
Deutsches Klimarechenzentrum GmbH
![HPC-System Levante](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_Levante_240503104836.jpg)
16,600 TFlop/s (Peak Performance), 863.00 TiB (Main Memory)
3,042 Nodes, 389,376 CPU Cores (AMD)
240 GPGPUs (Nvidia)
Competence Center High Performance Computing (CC-HPC)
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_cchpc_beehive_copyright_thomas_brenner.jpg)
67 TFlop/s (Peak Performance), 14.00 TiB (Main Memory)
198 Nodes (FDR-Infiniband), 3,224 CPU Cores (Intel)
2 GPGPUs (Nvidia)
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_cchpc_seislab_copyright_fraunhofer_itwm.jpg)
35 TFlop/s (Peak Performance), 6.00 TiB (Main Memory)
90 Nodes (FDR-Infiniband, QDR-Infiniband), 1,584 CPU Cores (Intel)
3 GPGPUs (Nvidia), 2 Many-Core Processors (Intel)
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_gwdg_hpccluster.jpg)
2,883 TFlop/s (Peak Performance), 90.00 TiB (Main Memory)
402 Nodes (QDR-Infiniband, FDR-Infiniband, Intel Omnipath), 16,640 CPU Cores (Intel, AMD)
278 GPGPUs (Nvidia), 302 Applications (454 Versions)
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HLRNIV_Emmy_Phase_1_190829162006.jpg)
8,261 TFlop/s (Peak Performance), 487.00 TiB (Main Memory)
1,473 Nodes (Intel Omnipath), 116,152 CPU Cores (Intel)
12 GPGPUs (Nvidia)
Hochschulrechenzentrum
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/Lichtenberg_2_220330153758.jpg)
8,500 TFlop/s (Peak Performance), 549.00 TiB (Main Memory)
1,229 Nodes (Mellanox HDR100-Infiniband), 122,816 CPU Cores (Intel, AMD)
84 GPGPUs (Nvidia, Intel)
Höchstleistungsrechenzentrum Stuttgart
![HPC-System Hawk](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HPE_Apollo_system_Hawk__200512155945.jpg)
26,000 TFlop/s (Peak Performance), 1.00 PiB (Main Memory)
5,656 Nodes (Mellanox HDR200-Infiniband), 723,968 CPU Cores (AMD)
192 GPGPUs (Nvidia)
![HPC-System vulcan](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_vulcan_240513071228.jpg)
120.00 TiB (Main Memory)
476 Nodes, 14,480 CPU Cores (Intel, NEC)
16 GPGPUs (AMD, Nvidia)
IT Center of RWTH Aachen University
![HPC-System CLAIX-2018](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/CLAIX2018_200507182330.jpg)
4,965 TFlop/s (Peak Performance), 245.00 TiB (Main Memory)
1,307 Nodes (Intel Omnipath), 62,736 CPU Cores (Intel)
96 GPGPUs (Nvidia)
![HPC-System CLAIX-2023](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_CLAIX-2023_240513070921.jpeg)
11,484 TFlop/s (Peak Performance), 226.00 TiB (Main Memory)
684 Nodes, 65,664 CPU Cores (Intel)
208 GPGPUs (Nvidia)
Jülich Supercomputing Centre (JSC)
![Supercomputer JUWELS am Jülich Supercomputing Centre](https://gauss-allianz.de/media/cache/cphase_image/images/misc/JUWELS_Supercomputer_am_JSC_190212173451.jpg)
85,000 TFlop/s (Peak Performance), 749.00 TiB (Main Memory)
3,515 Nodes (Mellanox EDR-Infiniband, HDR200-Infiniband), 168,208 CPU Cores (Intel, AMD)
3,956 GPGPUs (Nvidia)
![HPC-System JURECA](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_JURECA_240507120851.png)
18,520 TFlop/s (Peak Performance), 444.00 TiB (Main Memory)
780 Nodes (Mellanox HDR100-Infiniband), 99,840 CPU Cores (AMD)
768 GPGPUs (Nvidia)
Zuse-Institut Berlin
![Foto HPC-System Lise](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HPCSystem_Lise_200416131716.jpg)
7,907 TFlop/s (Peak Performance), 444.00 TiB (Main Memory)
1,146 Nodes (Intel Omnipath), 110,016 CPU Cores (Intel)
Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_SuperMUC-NG_240507105441.png)
54,860 TFlop/s (Peak Performance), 822.00 TiB (Main Memory)
6,720 Nodes (Intel Omnipath, Mellanox HDR200-Infiniband), 337,920 CPU Cores (Intel)
960 GPGPUs (Intel)
Max Planck Computing & Data Facility
![HPC System COBRA](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HPC_System_COBRA_190925141851.jpg)
12,720 TFlop/s (Peak Performance), 518.00 TiB (Main Memory)
3,424 Nodes (Intel Omnipath), 136,960 CPU Cores (Intel)
368 GPGPUs (Nvidia)
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//Raven_240503101217.jpg)
24,800 TFlop/s (Peak Performance), 517.00 TiB (Main Memory)
1,784 Nodes, 128,448 CPU Cores (Intel)
768 GPGPUs (Nvidia)
Paderborn Center for Parallel Computing
![](https://gauss-allianz.de/media/cache/cphase_image/images/misc/Noctua_System_Paderborn_190121081628.jpg)
835 TFlop/s (Peak Performance), 51.00 TiB (Main Memory)
274 Nodes (Intel Omnipath), 10,960 CPU Cores (Intel)
18 GPGPUs (Nvidia)
![](https://gauss-allianz.de/media/cache/cphase_image/images/misc/Noctua_2_-_Paderborn_Center_for_Parallel_Computing_221017095322.jpg)
7,100 TFlop/s (Peak Performance), 347.00 TiB (Main Memory)
1,121 Nodes (Mellanox HDR100-Infiniband), 143,488 CPU Cores (AMD)
136 GPGPUs (Nvidia), 80 FPGAs (Bittware, AMD Xilinx)
Regionales Hochschulrechenzentrum Kaiserslautern-Landau (RHRZ)
3,072 TFlop/s (Peak Performance), 52.00 TiB (Main Memory)
489 Nodes (QDR-Infiniband, Intel Omnipath), 10,520 CPU Cores (Intel, AMD)
56 GPGPUs (Nvidia), 56 Applications (228 Versions)
Regionales Rechenzentrum der Universität zu Köln
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_rrzk_cheops.jpg)
100 TFlop/s (Peak Performance), 35.00 TiB (Main Memory)
841 Nodes (QDR-Infiniband), 9,712 CPU Cores (Intel)
Scientific Computing Center
![bwUniCluster 2.0 Stufe 1](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/bwUniCluster_2.0_Stufe_1_230105142021.jpg)
155.00 TiB (Main Memory)
837 Nodes (HDR200-Infiniband), 40,608 CPU Cores (Intel)
196 GPGPUs (Nvidia)
![HoreKa](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HoreKa__230105145347.jpg)
242.00 TiB (Main Memory)
769 Nodes (HDR200-Infiniband), 58,444 CPU Cores (Intel)
668 GPGPUs (Nvidia)
Zentrum für Datenverarbeitung
![](https://gauss-allianz.de/media/cache/cphase_image/images/profile/cluster_zdv_mogon.jpg)
379 TFlop/s (Peak Performance), 88.00 TiB (Main Memory)
570 Nodes (QDR-Infiniband), 35,760 CPU Cores (Intel, AMD)
52 GPGPUs (Nvidia), 8 Many-Core Processors (Intel)
106 TFlop/s (Peak Performance), 10.00 TiB (Main Memory)
320 Nodes (QDR-Infiniband), 5,120 CPU Cores (Intel)
3,125 TFlop/s (Peak Performance), 190.00 TiB (Main Memory)
1,948 Nodes (Intel Omnipath), 52,248 CPU Cores (Intel)
188 GPGPUs (Nvidia, NEC)
Center for Information Services and High Performance Computing
5,443 TFlop/s (Peak Performance), 34.00 TiB (Main Memory)
34 Nodes (HDR200-Infiniband), 1,632 CPU Cores (AMD)
272 GPGPUs (Nvidia)
![HPC-System Barnard](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_Barnard_240513044952.jpg)
4,050 TFlop/s (Peak Performance), 315.00 TiB (Main Memory)
630 Nodes (HDR100-Infiniband), 65,520 CPU Cores (Intel)
Erlangen National Center for High Performance Computing (NHR@FAU)
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HPCCluster_Meggie_RRZE_170322105607.jpg)
511 TFlop/s (Peak Performance), 46.00 TiB (Main Memory)
728 Nodes (Intel Omnipath), 14,560 CPU Cores (Intel)
21 Applications (61 Versions)
![](https://gauss-allianz.de/media/cache/cphase_image/images/cluster/HPCCluster_TinyGPU_RRZE_170322105223.jpg)
5.00 TiB (Main Memory)
45 Nodes, 1,392 CPU Cores (Intel, AMD)
208 GPGPUs (Nvidia), 3 Applications (8 Versions)
![HPC-System Alex](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_Alex_240507131708.jpg)
6,080 TFlop/s (Peak Performance), 78.00 TiB (Main Memory)
82 Nodes (Mellanox HDR200-Infiniband), 10,496 CPU Cores (AMD)
656 GPGPUs (Nvidia)
![HPC-System Fritz](https://gauss-allianz.de/media/cache/cphase_image/images/cluster//HPC-System_Fritz_240507130900.jpg)
5,450 TFlop/s (Peak Performance), 248.00 TiB (Main Memory)
992 Nodes (Mellanox HDR100-Infiniband), 71,424 CPU Cores (Intel)