The ITU National High Performance Computing Application and Research Center (UHeM, formerly UYBHM), supported by the Presidency's Strategy and Budget Department, has been providing high performance computing (HPC) and data storage services to both academia and industry since 2006.
1. What is High Performance Computing?
High Performance Computing (HPC), in the simplest terms, means distributing a computing job across multiple processors instead of running it sequentially on a single processor for a long duration. Running simulations to measure the impact of an earthquake of a certain intensity in Istanbul is an example of such a task. These simulations are very large computations that, depending on the model's details, can take months or even years on a single computer (or single processor). To tackle this problem, the computation can be simplified by partitioning the city into small chunks of land and assigning the calculations for each chunk to a separate computer/processor. By running all the calculations simultaneously, one can greatly reduce the time to solution and attempt problems that simply cannot be solved using just one processor.

It is important to note that not all problems can be easily divided into pieces, and in cases where they can, accurately modeling the interactions between the pieces can be crucial. For instance, in the earthquake simulation above, one must carefully model the interactions between the separate land chunks, a task that requires a fast communication network between the processors. That is, HPC requires not only high-performance computing devices (processors, memory, accelerators, etc.) but also high-performance networks. This whole assembly is referred to as an HPC system or, simply, a supercomputer. It is also worth pointing out that an essential ingredient in HPC, alongside the hardware, is the complex software that makes this specialized hardware usable, together with user applications that can run in parallel.
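To make the idea of splitting a problem into chunks and exchanging information between them more concrete, here is a minimal, illustrative sketch in Python using the mpi4py library. The library choice, problem size, and update rule are assumptions for illustration only, not a description of any particular UHeM application. Each process owns one piece of a one-dimensional grid, updates it locally, and exchanges boundary ("halo") values with its neighbors at every step; this neighbor communication is exactly the kind of traffic that makes a fast interconnect essential.

```python
# Illustrative 1-D domain-decomposition sketch (assumed setup, not UHeM's code).
# Requires an MPI installation and mpi4py; run with e.g.:
#   mpiexec -n 4 python halo_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # index of this process
size = comm.Get_size()       # total number of processes

N_LOCAL = 1000               # grid points owned by each process (hypothetical)
STEPS = 100

# Local chunk plus one ghost cell on each side to hold neighbor data.
u = np.zeros(N_LOCAL + 2)
u[1:-1] = rank               # simple rank-dependent initial condition

# Neighbors; PROC_NULL turns boundary communication into a no-op.
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(STEPS):
    # Halo exchange: the "interaction between chunks" mentioned above.
    # Send leftmost interior value left, receive right ghost from the right.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    # Send rightmost interior value right, receive left ghost from the left.
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1], source=left)
    # Local update: a simple three-point averaging stencil on the interior.
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

print(f"rank {rank}: mean of local chunk = {u[1:-1].mean():.4f}")
```

In this pattern, the local update scales with the number of processes, while the halo exchange is the part whose cost depends on network speed; a real earthquake or weather code follows the same structure in two or three dimensions with far more elaborate physics.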
Today, HPC is used in many different areas to save time and energy, and to conduct “virtual experiments” that are simply not possible or practical to do in the lab. It enables research in a variety of fields such as weather and economic forecasting, drug and vaccine development, defense, and aerodynamics. HPC also has a potential role in managing the effects of natural disasters: by running simulations, various disaster scenarios can be analyzed in a short period of time and preventive measures taken. For example, when a typhoon similar to the one that caused 10,000 deaths in 1999 was recently forecast, simulations performed at the C-DAC supercomputing center in India led to the evacuation of an area inhabited by 700,000 people, preventing a potential loss of thousands of lives. Another example of the impact of HPC comes from the automotive industry, where the production planning of vehicles is guided by the results of crash tests and safety simulations run on HPC resources.
As the importance and use of HPC in R&D activities, services, and basic science grows, so does investment in the field, resulting in the rapid development of state-of-the-art computational facilities. UHeM strives to make such resources available to the academic and industrial research community, to provide support and guidance to its users, and to raise awareness of the importance of HPC for scientific, technological, and economic growth.