HPC in Germany - Who? What? Where?
Center for Scientific Computing

823 TFlop/s (Peak Performance), 70 TB (Main Memory)
848 Nodes (QDR-InfiniBand, FDR-InfiniBand), 18,960 CPU Cores (Intel, AMD)
700 GPGPUs (AMD)

41 TFlop/s (Peak Performance), 18 TB (Main Memory)
358 Nodes (QDR-InfiniBand), 6,456 CPU Cores (AMD)
Deutscher Wetterdienst

1,073 TFlop/s (Peak Performance), 125 TB (Main Memory)
976 Nodes (Cray Aries), 29,952 CPU Cores (Intel)

1,073 TFlop/s (Peak Performance), 125 TB (Main Memory)
976 Nodes (Cray Aries), 29,952 CPU Cores (Intel)
Deutsches Elektronen-Synchrotron
German Climate Computing Center

3,590 TFlop/s (Peak Performance), 266 TB (Main Memory)
3,336 Nodes (FDR-InfiniBand, Mellanox FDR-InfiniBand), 101,196 CPU Cores (Intel)
42 GPGPUs (Nvidia)
Competence Center High Performance Computing (CC-HPC)

67 TFlop/s (Peak Performance), 14 TB (Main Memory)
198 Nodes (FDR-InfiniBand), 3,224 CPU Cores (Intel)
2 GPGPUs (Nvidia)

35 TFlop/s (Peak Performance), 6 TB (Main Memory)
90 Nodes (FDR-InfiniBand, QDR-InfiniBand), 1,584 CPU Cores (Intel)
3 GPGPUs (Nvidia), 2 Many Core Processors (Intel)
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen

2,883 TFlop/s (Peak Performance), 92 TB (Main Memory)
402 Nodes (QDR-InfiniBand, FDR-InfiniBand, Intel Omni-Path), 16,640 CPU Cores (Intel, AMD)
278 GPGPUs (Nvidia), 302 Applications (454 Versions)

8,261 TFlop/s (Peak Performance), 498 TB (Main Memory)
1,473 Nodes (Intel Omni-Path), 116,152 CPU Cores (Intel)
12 GPGPUs (Nvidia)
Hochschulrechenzentrum

3,148 TFlop/s (Peak Performance), 251 TB (Main Memory)
643 Nodes (HDR100-InfiniBand), 61,824 CPU Cores (Intel, AMD)
56 GPGPUs (Nvidia)
Höchstleistungsrechenzentrum Stuttgart

26,000 TFlop/s (Peak Performance), 1,440 TB (Main Memory)
5,632 Nodes (HDR200-InfiniBand), 720,896 CPU Cores (AMD)
IT Center of RWTH Aachen University

678 TFlop/s (Peak Performance), 88 TB (Main Memory)
633 Nodes (Intel Omni-Path), 16,152 CPU Cores (Intel)
20 GPGPUs (Nvidia)

4,965 TFlop/s (Peak Performance), 251 TB (Main Memory)
1,307 Nodes (Intel Omni-Path), 62,736 CPU Cores (Intel)
96 GPGPUs (Nvidia)
Jülich Supercomputing Centre (JSC)

12,000 TFlop/s (Peak Performance), 286 TB (Main Memory)
2,575 Nodes, 123,088 CPU Cores (Intel)
196 GPGPUs (Nvidia)
Konrad-Zuse-Zentrum für Informationstechnik Berlin

7,907 TFlop/s (Peak Performance), 455 TB (Main Memory)
1,146 Nodes (Intel Omni-Path), 110,016 CPU Cores (Intel)
Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften

3,580 TFlop/s (Peak Performance), 197 TB (Main Memory)
3,072 Nodes (FDR-InfiniBand), 86,016 CPU Cores (Intel)

26,900 TFlop/s (Peak Performance), 719 TB (Main Memory)
6,480 Nodes (Intel Omni-Path), 311,040 CPU Cores (Intel)
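
As a sanity check, the peak figures quoted throughout this list are simply the product of core count, clock rate, and FLOPs per core per cycle. A minimal worked example against the 26,900 TFlop/s system above, assuming AVX-512-capable cores (32 double-precision FLOPs per cycle) and a sustained AVX clock of roughly 2.7 GHz (neither value is stated in the listing):

\[
R_{\text{peak}} = N_{\text{cores}} \times f \times \frac{\text{FLOPs}}{\text{cycle}} = 311{,}040 \times 2.7\,\text{GHz} \times 32 \approx 26{,}900\,\text{TFlop/s}
\]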
Max Planck Computing & Data Facility

12,720 TFlop/s (Peak Performance), 530 TB (Main Memory)
3,424 Nodes (Intel Omni-Path), 136,960 CPU Cores (Intel)
368 GPGPUs (Nvidia)
Paderborn Center for Parallel Computing

240 TFlop/s (Peak Performance), 47 TB (Main Memory)
616 Nodes (Mellanox QDR-InfiniBand), 9,920 CPU Cores (Intel)
48 GPGPUs (Nvidia), 75 Applications (173 Versions)

835 TFlop/s (Peak Performance), 52 TB (Main Memory)
272 Nodes (Intel Omni-Path), 10,880 CPU Cores (Intel)
32 FPGAs (BittWare)
Regionales Hochschulrechenzentrum Kaiserslautern (RHRK)

134 TFlop/s (Peak Performance), 17 TB (Main Memory)
319 Nodes (QDR-InfiniBand), 5,624 CPU Cores (Intel)
29 GPGPUs (Nvidia), 6 Many Core Processors (Intel), 56 Applications (228 Versions)

53 TB (Main Memory)
489 Nodes (QDR-InfiniBand, Intel Omni-Path), 10,520 CPU Cores (Intel, AMD)
56 GPGPUs (Nvidia), 56 Applications (228 Versions)
Regionales Rechenzentrum der Universität zu Köln

100 TFlop/s (Peak Performance), 36 TB (Main Memory)
841 Nodes (QDR-InfiniBand), 9,712 CPU Cores (Intel)
Steinbuch Centre for Computing

1,171 TFlop/s (Peak Performance), 136 TB (Main Memory)
1,701 Nodes (FDR-InfiniBand, EDR-InfiniBand), 34,800 CPU Cores (Intel)
84 GPGPUs (Nvidia)

444 TFlop/s (Peak Performance), 86 TB (Main Memory)
872 Nodes (FDR-InfiniBand), 18,304 CPU Cores (Intel)
Zentrum für Datenverarbeitung

379 TFlop/s (Peak Performance), 90 TB (Main Memory)
570 Nodes (QDR-InfiniBand), 35,760 CPU Cores (Intel, AMD)
52 GPGPUs (Nvidia), 8 Many Core Processors (Intel)

106 TFlop/s (Peak Performance), 10 TB (Main Memory)
320 Nodes (QDR-InfiniBand), 5,120 CPU Cores (Intel)

3,125 TFlop/s (Peak Performance), 194 TB (Main Memory)
1,948 Nodes (Intel Omni-Path), 52,248 CPU Cores (Intel)
188 GPGPUs (Nvidia, NEC)
Center for Information Services and High Performance Computing

2,621 TFlop/s (Peak Performance), 279 TB (Main Memory)
1,782 Nodes (FDR-InfiniBand, Mellanox HDR100-InfiniBand), 64,536 CPU Cores (Intel, IBM, AMD)
320 GPGPUs (Nvidia), 415 Applications (848 Versions)

35 TB (Main Memory)
34 Nodes (HDR200-InfiniBand), 1,632 CPU Cores (AMD)
272 GPGPUs (Nvidia)
Erlangen National Center for High Performance Computing (NHR@FAU)

232 TFlop/s (Peak Performance), 36 TB (Main Memory)
556 Nodes (Mellanox QDR-InfiniBand), 11,088 CPU Cores (Intel)
20 GPGPUs (Nvidia), 44 Applications (131 Versions)

511 TFlop/s (Peak Performance), 47 TB (Main Memory)
728 Nodes (Intel Omni-Path), 14,560 CPU Cores (Intel)
21 Applications (61 Versions)

5 TB (Main Memory)
45 Nodes, 1,392 CPU Cores (Intel, AMD)
208 GPGPUs (Nvidia), 3 Applications (8 Versions)

65 TB (Main Memory)
70 Nodes (Mellanox HDR200-InfiniBand, HDR200-InfiniBand), 8,960 CPU Cores (AMD)
560 GPGPUs (Nvidia)