The IAP has an excellent computing infrastructure. Its core is built of backbone switches from Cisco, which supply all desktops at the IAP with 1 Gbit/s and some servers with 10 Gbit/s Ethernet network connections.
The IAP is redundantly connected to the internet via the Deutsches Forschungsnetz (DFN); each of the two links has a bandwidth of 500 Mbit/s. Dedicated equipment secures access to the IAP network, and several technologies are used to prevent unauthorized access, virus infections, and spam.
Demanding scientific tasks require correspondingly powerful computing equipment, housed at the IAP in the High Performance Computing (HPC) segment. The central resource is a hybrid computer system from SGI. It consists, on the one hand, of 36 cluster nodes, each with 12 Intel Nehalem EP cores at 2.67 GHz and 48 GB of main memory, and, on the other hand, of a shared-memory machine (UltraViolet 2000) with 460 Intel Ivy Bridge EP cores and 2.8 TB of main memory. Altogether, the IAP thus has a computing system with 892 cores and 4.8 TB of main memory at its disposal. With hyper-threading (2 logical CPUs per core), there are even 1,784 logical CPUs. The batch system PBSpro from Altair ensures an efficient distribution of jobs across the compute nodes.
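To illustrate how users typically hand work to a PBSpro batch system, here is a minimal job-script sketch. The job name, resource request, and walltime are illustrative assumptions, not the IAP's actual configuration; such a script would be submitted with `qsub`.

```shell
#!/bin/bash
#PBS -N example_job           # job name (hypothetical)
#PBS -l select=1:ncpus=12     # request one cluster node with 12 cores (assumption)
#PBS -l walltime=01:00:00     # one hour wall-clock limit

# PBSpro sets PBS_O_WORKDIR to the directory qsub was called from;
# fall back to the current directory when run outside the scheduler.
cd "${PBS_O_WORKDIR:-.}"

CORES=$(nproc)                # number of cores visible on this node
echo "Running on $(hostname) with ${CORES} cores"
```

Queue names, resource syntax, and defaults depend on the local PBSpro setup; `qsub script.pbs` would enqueue the job, and the scheduler then picks a suitable node.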
Not least because of these HPC systems, the demand for storage capacity at the IAP is immense. Scientific experiments and measurements produce and consume enormous amounts of data. To cope with them, the IAP operates a magnetic tape robot system, a so-called tape library, from Quantum. Its capacity currently amounts to 2.8 PB, controlled by a computer system from SGI.
In addition, each of the three departments of the IAP has a smaller server system as well as a multitude of desktop and experiment PCs available for further important tasks.