
HPC in Germany - Who? What? Where?

43 HPC systems, 38 file systems, 4 archives at 21 locations

Center for Scientific Computing

Frankfurt am Main
Megware
GPU nodes, MPP system, Tier 2 HPC system

823 TFlop/s (Peak Performance), 69.00 TiB (Main Memory)

848 Nodes (Unknown QDR-Infiniband, FDR-Infiniband), 18,960 CPU Cores (Intel, AMD)

700 GPGPUs (AMD)

Filesystems
CSC-LOEWE (FhGFS)
764 TB
Clustervision
MPP system, Tier 3 HPC system

41 TFlop/s (Peak Performance), 18.00 TiB (Main Memory)

358 Nodes (Unknown QDR-Infiniband), 6,456 CPU Cores (AMD)

Filesystems
FUCHS (FhGFS)
600 TB

Deutscher Wetterdienst

Offenbach
HPC-System LU
NEC SX-Aurora TSUBASA A412, SX-Aurora TSUBASA C401
Tier 2 HPC system, VE nodes

11,220 TFlop/s (Peak Performance), 216.00 TiB (Main Memory)

508 Nodes, 36,864 CPU Cores (NEC)

HPC-System LU
NEC SX-Aurora TSUBASA A412, SX-Aurora TSUBASA C401
Tier 2 HPC system, VE nodes

8,630 TFlop/s (Peak Performance), 166.00 TiB (Main Memory)

391 Nodes, 28,352 CPU Cores (NEC)

Deutsches Elektronen-Synchrotron

Hamburg
Tier 3 HPC system

16 TFlop/s (Peak Performance)

143 Nodes, 2,288 CPU Cores (Intel)

GPU nodes, MPP system, Tier 3 HPC system

2,321 TFlop/s (Peak Performance), 393.00 TiB (Main Memory)

764 Nodes, 26,732 CPU Cores (Intel, AMD)

282 GPGPUs (Nvidia)

Deutsches Klimarechenzentrum GmbH

Hamburg
HPC-System Levante
Atos BullSequana XH2000
GPU nodes, MPP system, SMP nodes, Tier 2 HPC system

16,600 TFlop/s (Peak Performance), 863.00 TiB (Main Memory)

3,042 Nodes, 389,376 CPU Cores (AMD)

240 GPGPUs (Nvidia)

Competence Center High Performance Computing (CC-HPC)

Kaiserslautern
Dell PowerEdge
GPU nodes, MPP system, SMP nodes, Tier 3 HPC system

67 TFlop/s (Peak Performance), 14.00 TiB (Main Memory)

198 Nodes (Unknown FDR-Infiniband), 3,224 CPU Cores (Intel)

2 GPGPUs (Nvidia)

Filesystems
Beehive (BeeGFS)
240 TB
GPU nodes, MPP system, Tier 3 HPC system, Xeon-Phi nodes

35 TFlop/s (Peak Performance), 6.00 TiB (Main Memory)

90 Nodes (Unknown FDR-Infiniband, QDR-Infiniband), 1,584 CPU Cores (Intel)

3 GPGPUs (Nvidia), 2 Many Core Processors (Intel)

Filesystems
Seislab (BeeGFS)
600 TB

Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen

Göttingen
Sysgen GPU Cluster, Compute Cluster, Transtec Compute Cluster, DELTA Computer Compute Cluster, Atos Intel PCSD, Clustervision Compute Cluster
GPU nodes, MPP system, Tier 3 HPC system

2,883 TFlop/s (Peak Performance), 90.00 TiB (Main Memory)

402 Nodes (Unknown QDR-Infiniband, FDR-Infiniband, Intel Omnipath), 16,640 CPU Cores (Intel, AMD)

278 GPGPUs (Nvidia), 302 Applications (454 Versions)

Filesystems
Bioinformatics WORK (BeeGFS-Storage-Server)
243 TB
MPS WORK (Storage Server)
140 TB
SCC HOME (StorNext - LAN Client)
1900 TB
SCC HOME (StorNext - NFS)
900 TB
SCC scratch2 (Storage Server)
110 TB
SCC scratch (MDC) (Storage Server)
1865 TB
Archives
SCC archive storage (Scalar i6000)
22000 TB
MPP system, Tier 2 HPC system

8,261 TFlop/s (Peak Performance), 487.00 TiB (Main Memory)

1,473 Nodes (Intel Omnipath), 116,152 CPU Cores (Intel)

12 GPGPUs (Nvidia)

Filesystems
HLRN-IV Work (EXAScaler)
9412 TB
HLRN-IV Home (GRIDScaler)
340 TB
Archives
HLRN archive storage (Scalar i6000)
7037 TB

Hochschulrechenzentrum

Darmstadt
Intel Cascadelake, Lenovo ThinkSystem SD650 V3
GPU nodes, SMP nodes, Tier 2 HPC system

8,500 TFlop/s (Peak Performance), 549.00 TiB (Main Memory)

1,229 Nodes (HDR100-Infiniband, Mellanox HDR100-Infiniband), 122,816 CPU Cores (Intel, AMD)

84 GPGPUs (Nvidia, Intel)

Filesystems
GLOBAL (DSS)
4000 TB

Höchstleistungsrechenzentrum Stuttgart

Stuttgart
HPC-System Hawk
Hewlett Packard Enterprise (HPE) Apollo
GPU nodes, MPP system, Tier 1 HPC system

26,000 TFlop/s (Peak Performance), 1.00 PiB (Main Memory)

5,656 Nodes (Mellanox HDR200-Infiniband), 723,968 CPU Cores (AMD)

192 GPGPUs (Nvidia)

HPC-System vulcan
NEC
GPU system, Tier 3 HPC system, VE nodes

120.00 TiB (Main Memory)

476 Nodes, 14,480 CPU Cores (Intel, NEC)

16 GPGPUs (AMD, Nvidia)

IT Center of RWTH Aachen University

Aachen
HPC-System CLAIX-2018
NEC HNS2600BPB
GPU nodes, MPP system, SSD storage, Tier 2 HPC system

4,965 TFlop/s (Peak Performance), 245.00 TiB (Main Memory)

1,307 Nodes (Intel Omnipath), 62,736 CPU Cores (Intel)

96 GPGPUs (Nvidia)

Filesystems
HPCWORK (Claix 16) (Lustre)
3000 TB
HOME / WORK (NFS)
1500 TB
HPCWORK (Claix 18) (Lustre)
10000 TB
HPC-System CLAIX-2023
NEC HPC1808Rk-2
GPU nodes, MPP system, SMP nodes, Tier 2 HPC system, Tier 3 HPC system

11,484 TFlop/s (Peak Performance), 226.00 TiB (Main Memory)

684 Nodes, 65,664 CPU Cores (Intel)

208 GPGPUs (Nvidia)

Jülich Supercomputing Centre (JSC)

Jülich
Supercomputer JUWELS at the Jülich Supercomputing Centre
Atos BullSequana X1000
GPU nodes, MPP system, SMP nodes, Tier 1 HPC system

85,000 TFlop/s (Peak Performance), 749.00 TiB (Main Memory)

3,515 Nodes (Mellanox EDR-Infiniband, HDR200-Infiniband), 168,208 CPU Cores (Intel, AMD)

3,956 GPGPUs (Nvidia)

HPC-System JURECA
Atos BullSequana XH2000
GPU nodes, MPP system, SMP nodes

18,520 TFlop/s (Peak Performance), 444.00 TiB (Main Memory)

780 Nodes (Mellanox HDR100-Infiniband), 99,840 CPU Cores (AMD)

768 GPGPUs (Nvidia)

Zuse-Institut Berlin

Berlin-Dahlem
HPC-System Lise
Atos Bull Intel cluster
MPP system, Tier 2 HPC system

7,907 TFlop/s (Peak Performance), 444.00 TiB (Main Memory)

1,146 Nodes (Intel Omnipath), 110,016 CPU Cores (Intel)

Filesystems
HLRN-IV WORK (DDN EXAScaler)
8192 TB
HLRN-IV HOME (DDN GRIDScaler)
340 TB
Archives
Archive storage (StorageTEK SL8500)
25000 TB

Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften

Garching bei München
Lenovo ThinkSystem SD650 DWC, ThinkSystem SD650-I V3
GPU nodes, MPP system, SMP nodes, Tier 1 HPC system

54,860 TFlop/s (Peak Performance), 822.00 TiB (Main Memory)

6,720 Nodes (Intel Omnipath, Mellanox HDR200-Infiniband), 337,920 CPU Cores (Intel)

960 GPGPUs (Intel)

Max Planck Computing & Data Facility

Garching
Lenovo ThinkSystem SD650 V2
GPU nodes, MPP system, SMP nodes, Tier 2 HPC system

24,800 TFlop/s (Peak Performance), 517.00 TiB (Main Memory)

1,784 Nodes, 128,448 CPU Cores (Intel)

768 GPGPUs (Nvidia)

Eviden XH3000
MPP system, SMP nodes, Tier 2 HPC system

4,900 TFlop/s (Peak Performance), 445.00 TiB (Main Memory)

768 Nodes, 98,304 CPU Cores (AMD)

Paderborn Center for Parallel Computing

Paderborn
Arbor Workstation 2HE C612 INTEL
FPGA nodes, Tier 3 HPC system

512.00 GiB (Main Memory)

8 Nodes, 32 CPU Cores (Intel)

16 FPGAs (Alpha Data)

Cray CS500
GPU nodes, MPP system, Tier 3 HPC system

835 TFlop/s (Peak Performance), 51.00 TiB (Main Memory)

274 Nodes (Intel Omnipath), 10,960 CPU Cores (Intel)

18 GPGPUs (Nvidia)

Filesystems
Scratch (Cray ClusterStor)
720 TB
Atos BullSequana XH2000, Nvidia DGX A100
FPGA nodes, GPU nodes, MPP system, Tier 2 HPC system

7,100 TFlop/s (Peak Performance), 347.00 TiB (Main Memory)

1,121 Nodes (Mellanox HDR100-Infiniband), 143,488 CPU Cores (AMD)

136 GPGPUs (Nvidia), 80 FPGAs (Bittware, AMD XILINX)

Filesystems
Scratch (DDN Exascaler 7990X with NVMe accelerator)
6000 TB

Regionales Hochschulrechenzentrum Kaiserslautern-Landau (RHRZ)

Kaiserslautern
HPC-Cluster Elwetritsch II of the TUK
Dell PowerEdge R, NEC HPC1812Rh-1, Nvidia DGX-2, DGX-1
GPU nodes, GPU system, MPP system, SMP nodes, SSD storage, Tier 3 HPC system, Xeon-Phi nodes

3,072 TFlop/s (Peak Performance), 52.00 TiB (Main Memory)

489 Nodes (Unknown QDR-Infiniband, Intel Omnipath), 10,520 CPU Cores (Intel, AMD)

56 GPGPUs (Nvidia), 56 Applications (228 Versions)

NEC HPC2824Ri-2, DELTA Computer Products GmbH D22z-M2-ZG

66.00 TiB (Main Memory)

243 Nodes (Omnipath), 4,128 CPU Cores (AMD)

48 GPGPUs (Nvidia)

Regionales Rechenzentrum der Universität zu Köln

Köln
Bull bullx S
MPP system, SMP nodes, Tier 3 HPC system

100 TFlop/s (Peak Performance), 35.00 TiB (Main Memory)

841 Nodes (Unknown QDR-Infiniband), 9,712 CPU Cores (Intel)

Filesystems
CHEOPS (Lustre)
500 TB

Scientific Computing Center

Karlsruhe
bwUniCluster 2.0 Stage 1
Hewlett Packard Enterprise (HPE) ProLiant, Lenovo ThinkSystem
GPU nodes, MPP system, SMP nodes, Tier 3 HPC system

155.00 TiB (Main Memory)

837 Nodes (HDR200-Infiniband), 40,608 CPU Cores (Intel)

196 GPGPUs (Nvidia)

HoreKa
Lenovo ThinkSystem
GPU nodes, MPP system, Tier 2 HPC system

242.00 TiB (Main Memory)

769 Nodes (HDR200-Infiniband), 58,444 CPU Cores (Intel)

668 GPGPUs (Nvidia)

Eggenstein-Leopoldshafen

Zentrum für Datenverarbeitung

Mainz
Megware
GPU nodes, MPP system, Tier 2 HPC system

379 TFlop/s (Peak Performance), 88.00 TiB (Main Memory)

570 Nodes (Unknown QDR-Infiniband), 35,760 CPU Cores (Intel, AMD)

52 GPGPUs (Nvidia), 8 Many Core Processors (Intel)

Filesystems
Lustre (Lustre)
7394 TB
Transtec
MPP system, Tier 3 HPC system

106 TFlop/s (Peak Performance), 10.00 TiB (Main Memory)

320 Nodes (Unknown QDR-Infiniband), 5,120 CPU Cores (Intel)

Megware MiriQuid, NEC Cluster, Intel NEC Cluster
MPP system, SSD storage, Tier 2 HPC system

3,125 TFlop/s (Peak Performance), 190.00 TiB (Main Memory)

1,948 Nodes (Intel Omnipath), 52,248 CPU Cores (Intel)

188 GPGPUs (Nvidia, NEC)

Filesystems
Mogon (GPFS)
1024 TB
Lustre (Lustre)
7394 TB

Center for Information Services and High Performance Computing

Dresden
NEC HPC 22S8Ri-4

5,443 TFlop/s (Peak Performance), 34.00 TiB (Main Memory)

34 Nodes (HDR200-Infiniband), 1,632 CPU Cores (AMD)

272 GPGPUs (Nvidia)

Filesystems
Taurus (Lustre)
44 TB
home + interact (ES200NVX2)
1000 TB
HPC-System Barnard
Eviden XH2000

4,050 TFlop/s (Peak Performance), 315.00 TiB (Main Memory)

630 Nodes (HDR100-Infiniband), 65,520 CPU Cores (Intel)

Filesystems
horse (ES400NVX2)
20000 TB
home + interact (ES200NVX2)
1000 TB
NEC Gigabyte

94.00 TiB (Main Memory)

188 Nodes, 24,064 CPU Cores (AMD)

Filesystems
home + interact (ES200NVX2)
1000 TB
IBM Power9

13.00 TiB (Main Memory)

30 Nodes, 1,320 CPU Cores (IBM)

180 GPGPUs (Nvidia)

Filesystems
horse (ES400NVX2)
20000 TB
home + interact (ES200NVX2)
1000 TB
Hewlett Packard Enterprise (HPE) Superdome Flex

48.00 TiB (Main Memory)

25,088 CPU Cores (Intel)


Filesystems
horse (ES400NVX2)
20000 TB
home + interact (ES200NVX2)
1000 TB

Erlangen National Center for High Performance Computing (NHR@FAU)

Erlangen
Megware Linux-Cluster
MPP system, Tier 3 HPC system

511 TFlop/s (Peak Performance), 46.00 TiB (Main Memory)

728 Nodes (Intel Omnipath), 14,560 CPU Cores (Intel)

21 Applications (61 Versions)

Filesystems
Global (GPFS/NFS)
50 TB
Saturn (HPE Apollo 4510 Gen10)
450 TB
Titan (HPE Apollo 4510 Gen10)
450 TB
Fundus-Disk (GPFS)
6000 TB
Atuin (PowerEdge XE7100/XE7440)
1000 TB
Archives
Fundus-Tape (IBM TS4500)
15000 TB
Sysgen GPU-Nodes II, GPU-Nodes III, GPU-Nodes V, Supermicro 2124GQ-NART, Nvidia GPU-Nodes IV, TRADEX SYSTEMS Sp. z o.o. Gigabyte G481-H80
GPU nodes, GPU system, Tier 3 HPC system

5.00 TiB (Main Memory)

45 Nodes, 1,392 CPU Cores (Intel, AMD)

208 GPGPUs (Nvidia), 3 Applications (8 Versions)

Filesystems
Global (GPFS/NFS)
50 TB
Saturn (HPE Apollo 4510 Gen10)
450 TB
Titan (HPE Apollo 4510 Gen10)
450 TB
Fundus-Disk (GPFS)
6000 TB
Atuin (PowerEdge XE7100/XE7440)
1000 TB
Archives
Fundus-Tape (IBM TS4500)
15000 TB
Megware Supermicro X10DRH, RaidMedia Gigabyte H262-Z66

22.00 TiB (Main Memory)

47 Nodes, 2,484 CPU Cores (Intel, AMD)

Filesystems
Global (GPFS/NFS)
50 TB
Saturn (HPE Apollo 4510 Gen10)
450 TB
Titan (HPE Apollo 4510 Gen10)
450 TB
Fundus-Disk (GPFS)
6000 TB
Atuin (PowerEdge XE7100/XE7440)
1000 TB
Archives
Fundus-Tape (IBM TS4500)
15000 TB
HPC-System Alex
Megware GPGPU Cluster, GPU-Cluster, NF5488A5
GPU nodes, GPU system, Tier 2 HPC system, Tier 3 HPC system

6,080 TFlop/s (Peak Performance), 78.00 TiB (Main Memory)

82 Nodes (Mellanox HDR200-Infiniband, HDR200-Infiniband), 10,496 CPU Cores (AMD)

656 GPGPUs (Nvidia)

Filesystems
Global (GPFS/NFS)
50 TB
Saturn (HPE Apollo 4510 Gen10)
450 TB
Titan (HPE Apollo 4510 Gen10)
450 TB
Fundus-Disk (GPFS)
6000 TB
Atuin (PowerEdge XE7100/XE7440)
1000 TB
Fritz (Lustre)
3700 TB
Archives
Fundus-Tape (IBM TS4500)
15000 TB
HPC-System Fritz
Megware D50TNP
MPP system, Tier 2 HPC system, Tier 3 HPC system

5,450 TFlop/s (Peak Performance), 248.00 TiB (Main Memory)

992 Nodes (Mellanox HDR100-Infiniband), 71,424 CPU Cores (Intel)

Filesystems
Global (GPFS/NFS)
50 TB
Saturn (HPE Apollo 4510 Gen10)
450 TB
Titan (HPE Apollo 4510 Gen10)
450 TB
Fundus-Disk (GPFS)
6000 TB
Atuin (PowerEdge XE7100/XE7440)
1000 TB
Fritz (Lustre)
3700 TB
Archives
Fundus-Tape (IBM TS4500)
15000 TB