
HPC Service Overview

Introduction

The IT-Services Group has operated high-performance cluster computers for scientific calculations (numerical simulation experiments) since 1993.

The current cluster, an IBM / Lenovo NeXtScale based system, was installed during the summer of 2015 after an EU-wide competitive bidding and selection process (Wettbewerblicher Dialog mit Teilnahmewettbewerb). The system was co-funded by the German federal government, the Land Brandenburg and the European Union.

According to the national supercomputer classification of the Deutsche Forschungsgemeinschaft (DFG), it is a tier-3 system. Its main purpose is to serve as a base for model development and for capacity-computing production runs. It entered the official June 2015 TOP500 list of the fastest supercomputers worldwide at rank 354, while installation was still in its final phase, and will remain on the list at least until summer 2016.

 

Rank:    354
Site:    Potsdam Institute for Climate Impact Research, Germany
System:  HLR2015 - Lenovo NeXtScale nx360M5, Xeon E5-2667v3 8C 3.2GHz, Infiniband FDR14 (Lenovo/IBM)
Cores:   5,040
Rmax:    212.8 TFlop/s
Rpeak:   258.0 TFlop/s
Power:   128 kW
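The Rpeak figure in the table can be reproduced from first principles: theoretical peak performance is core count times clock frequency times floating-point operations per cycle. The sketch below assumes the standard figure of 16 double-precision FLOPs per cycle for a Haswell core (two 4-wide AVX2 FMA units); core count and clock are taken from the table.

```python
# Sanity check of the table's Rpeak: cores x clock x FLOPs/cycle.
cores = 5040
clock_hz = 3.2e9
flops_per_cycle = 16  # Haswell AVX2: 2 FMA units x 4 doubles x 2 ops (assumed)

rpeak_tflops = cores * clock_hz * flops_per_cycle / 1e12
efficiency = 212.8 / rpeak_tflops  # Rmax / Rpeak from the table

print(f"Rpeak = {rpeak_tflops:.1f} TFlop/s")      # 258.0, matching the table
print(f"LINPACK efficiency = {efficiency:.1%}")   # roughly 82.5%
```

The Rmax/Rpeak ratio of about 82% is typical for an FDR InfiniBand cluster running the LINPACK benchmark.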

 

The cluster computer is available to all scientists of the institute and to external scientists affiliated with the institute through co-operation agreements. Registration with IT-Services is required prior to accessing the system.

 

The IBM / Lenovo NextScale Cluster (2015)

Five racks of the direct water cooled IBM NeXtScale Cluster installed at PIK in summer 2015 viewed from the back.

Photo courtesy of Lothar Lindenhan (PIK)

Cluster Highlights

  • State-of-the-art Intel Haswell processors with scalar frequencies of up to 3.4 GHz and 4 GByte of DDR4 memory per core,
  • A set of graphical coprocessors to support the development of new applications,
  • A highly available parallel file system with 2 PByte capacity and 20 GByte/s read/write bandwidth,
  • A non-blocking high-performance FDR InfiniBand network,
  • Direct water-cooled processors and memory, with the waste heat used to heat office building(s) during the winter season.

 

Cluster Basic Metrics

Hardware

  • 12 support servers in high-availability pairs,
  • 312 direct water-cooled Lenovo nx360 M5 compute servers, each equipped with:
    • two Intel Xeon E5-2667 v3 8C, 3.2GHz, 20MB, 2133MHz, 135W processors,
    • 64 GByte DDR4 RAM 2133 MHz
    • Mellanox Connect-IB FDR port (56 Gb/s)
  • 6 air-cooled compute servers, each equipped with:
    • two Intel Xeon E5-2667 v3 8C, 3.2GHz, 20MB, 2133MHz, 135W processors,
    • 256 GByte DDR4 RAM 2133 MHz
    • Mellanox Connect-IB FDR port (56 Gb/s)
    • an NVIDIA Kepler K40 accelerator (two of the six systems only).
  • A total of 5,088 CPU cores, primarily to be used for batch processing.
  • A Mellanox SX6536 648-Port FDR InfiniBand Director Switch
  • 2 PByte total net file system capacity.
    Based on two IBM X Series / Lenovo GSS-24 systems, equipped with a total of 464 6-TByte disk drives and attached to the FDR InfiniBand network.
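The aggregate figures above follow directly from the per-server numbers in the list. A quick cross-check (server, drive and capacity counts taken from the list; the net-vs-raw storage gap is attributed, as an assumption, to the redundancy overhead of the GSS declustered RAID):

```python
# Cross-check of the aggregate hardware metrics listed above.
water_cooled, air_cooled = 312, 6
cores_per_server = 2 * 8           # two 8-core Xeon E5-2667 v3 per server

total_cores = (water_cooled + air_cooled) * cores_per_server
print(total_cores)                 # 5088 -- the quoted total of 5,088 CPU cores

mem_per_core_gb = 64 / cores_per_server
print(mem_per_core_gb)             # 4.0 -- the "4 GByte per core" highlight

raw_tb = 464 * 6                   # drive count x capacity per drive
print(raw_tb)                      # 2784 TByte raw, of which 2 PByte remain
                                   # net after redundancy (assumed GSS overhead)
```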

System Software

 

Acknowledgments

Authors are encouraged to acknowledge the funding agencies if their papers rely on numerical experiments conducted on the high-performance computer of the institute. An example text is provided here [log-in required].

 

Cluster Access

Log in via the secure shell (ssh) command using public-key authentication to the interactive login host(s): cluster.pik-potsdam.de [log-in required].

 

Cluster Documentation

The complete set of user documentation is available here.

 

Cluster Support

Questions and comments should be sent to: this e-mail address [log-in required].


Cluster Utilization

A set of cluster statistics by user / group, month and year is available here [log-in required].

 

Cost of Computing and Storage Capacity

Estimates for the cost of computing per CPU/h and for storage capacity per TByte are provided here [log-in required].

 
