You can find useful information about the ITU - National High Performance Computing Center below. In addition to notes on the available resources and the application process, you will also find a short explanation of what a supercomputer is and where it is used.

ITU - National High Performance Computing Center

The ITU - National High Performance Computing Application and Research Center (UHeM, formerly UYBHM), supported by the Presidency of Strategy and Budget, has been providing high performance computing (HPC) and data storage services to both academia and industry since 2006.

1. What is High Performance Computing?

High Performance Computing (HPC), in the simplest terms, is the practice of distributing a computing job across multiple processors instead of running it sequentially on a single processor for a long time. Running simulations to measure the impact of an earthquake of a certain intensity in Istanbul is an example of such a task. Simulations like these are very large computations which, depending on the model's details, can take months or even years on a single computer (or single processor). To make the problem tractable, the city can be partitioned into small chunks of land, and the calculations needed for each chunk assigned to a separate computer/processor. By running all the calculations simultaneously, one can greatly reduce the time to solution and attempt problems that simply cannot be solved with a single processor. It is important to note that not all problems can be divided into pieces easily, and where they can, accurately modeling the interactions between the pieces can be crucial. For instance, in the earthquake simulation above, one must carefully model the interactions between the separate land chunks, a task that requires a fast communication network between the processors. That is, HPC requires not only high-performing computing devices (processors, memory, accelerators, etc.), but also high performance networks. This whole assembly is referred to as an HPC system or, simply, a supercomputer. It is worth pointing out that an important ingredient of HPC, alongside the hardware, is the complex software that makes all this specialized hardware usable, together with user applications that can run in a parallel fashion.
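In practice, the chunk-and-communicate pattern described above is usually implemented with a message passing library such as MPI. The following is a minimal sketch of a boundary ("halo") exchange between neighboring chunks, written in Python and assuming the mpi4py package and an MPI library are available; the array sizes and the 3-point update are invented purely for illustration:

    # Each MPI rank owns one chunk of a 1D domain, padded with one
    # "ghost" cell on each side to hold the neighbors' boundary values.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    chunk = np.full(10 + 2, float(rank))  # interior of 10 cells + 2 ghosts

    # Neighbors; MPI.PROC_NULL turns communication at the edges into a no-op.
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Trade edge values with the neighbors (fills the ghost cells).
    comm.Sendrecv(sendbuf=chunk[1:2], dest=left, recvbuf=chunk[-1:], source=right)
    comm.Sendrecv(sendbuf=chunk[-2:-1], dest=right, recvbuf=chunk[0:1], source=left)

    # A toy update that needs neighbor data: a 3-point average.
    chunk[1:-1] = (chunk[:-2] + chunk[1:-1] + chunk[2:]) / 3.0

Run, for example, with "mpirun -np 4 python halo.py"; each of the four processes then updates its own chunk simultaneously.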

Today, HPC is used in many different areas to save time and energy, and to conduct "virtual experiments" that are simply not possible or practical in a laboratory. It enables research in a variety of fields such as weather and economic forecasting, drug/vaccine development, defense, aerodynamics, etc. HPC also has potential uses in managing the effects of natural disasters: preventative measures can be taken by simulating various disaster scenarios in a short period of time. For example, when a cyclone was recently forecast to approach India, similar to the one that caused 10,000 deaths in 1999, simulations performed at India's C-DAC supercomputing center led to the evacuation of an area inhabited by 700,000 people, preventing a potential loss of thousands of lives. Another example of the impact of HPC comes from the automotive industry, where production planning of vehicles is guided by the results of crash tests and safety simulations performed on HPC resources.

As the importance and usage of HPC in R&D activities, services, and basic science grow, so does investment in the field, resulting in the rapid development of state-of-the-art computational facilities. UHeM strives to make such resources available to the academic and industrial research community, to provide support and guidance to its users, and to spread awareness of the importance of HPC for scientific, technological, and economic growth.

1.1 Hardware

The first supercomputers were highly specialized devices in terms of hardware and software; they were usually built to solve very specific problems. As computing became part of many different disciplines, the trend shifted towards building more "generic" supercomputers which, though not perfect for every task, suited the needs of most. Today, most HPC systems can be thought of as scalable machines consisting of many more or less ordinary workstations connected by a very fast communication network. For such a system, the theoretical performance, or the "size" of the computer, is roughly the sum of the performances of its individual workstations, or "compute nodes".
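As a back-of-the-envelope illustration of "summing" the compute nodes, the theoretical peak performance of such a machine can be estimated by multiplying per-core throughput across the whole system; all of the figures below are hypothetical and do not describe any actual UHeM system:

    # Rough theoretical peak of a hypothetical cluster (illustrative numbers only).
    nodes = 60                 # compute nodes in the system
    sockets_per_node = 2       # dual-processor nodes
    cores_per_socket = 64
    clock_ghz = 2.0            # base clock frequency
    flops_per_cycle = 16       # e.g., two 256-bit FMA units on doubles

    peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
    print(f"Theoretical peak: {peak_gflops / 1000:.1f} TFLOP/s")  # ~245.8 TFLOP/s

Real sustained performance is, of course, lower; memory and network speeds usually decide how much of this theoretical peak an application can actually use.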

The components of a supercomputer, described in terms of an ordinary personal computer, can be summarized as follows:

  • CPU and RAM

  • storage (hard disk)

  • performance network

Usually, the above components are selected with high performance in mind, and this is what separates supercomputers from ordinary computers. For example, while a personal computer such as a notebook used for e-mail and office work contains a processor with two to four cores, the latest system acquired by UHeM consists of dual-processor nodes, each processor having 64 cores (128 cores per compute node). When a calculation generates or needs data that does not fit into volatile memory, special file systems are used, again with high performance in mind, that allow data to be written to a collection of hard disks in parallel. Similarly, whenever communication (data exchange) is needed between two processors located on separate machines, specialized networks allowing fast data transfers, such as 200 gigabits per second, are used. The figures below show example configurations of high performance systems, both belonging to UHeM.
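To see why such network speeds matter, consider a simple comparison of moving one hypothetical 10-gigabyte data exchange between two nodes over an ordinary office network versus a 200 gigabit per second interconnect, ignoring latency and protocol overhead:

    # Transfer time = data volume / link speed (latency and overhead ignored).
    snapshot_gigabits = 10 * 8  # 10 gigabytes expressed in gigabits

    for name, gbps in [("1 Gb/s Ethernet", 1), ("200 Gb/s interconnect", 200)]:
        print(f"{name}: {snapshot_gigabits / gbps:.2f} s")

    # Output:
    # 1 Gb/s Ethernet: 80.00 s
    # 200 Gb/s interconnect: 0.40 s

When such exchanges happen at every step of a simulation, this two-orders-of-magnitude difference decides whether the processors spend their time computing or merely waiting for data.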

1.2 Software

In HPC systems, just like the hardware, both system software and user applications play a critical role. A given program does not necessarily run faster just because it runs on a supercomputer; in fact, a serial program may often run faster on a new notebook, simply because the notebook probably has a newer-generation processor. To be useful, applications that run on a supercomputer should be written using parallel programming techniques, so that the calculations can be distributed to many processors working simultaneously. Partitioning the problem at hand into smaller chunks, each to be run on a single processor, is the responsibility of the programmer, and is the most important step in ensuring a program's efficiency.
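As an illustration of such partitioning, the toy sketch below splits a large summation into equal chunks and hands each chunk to a separate worker process; Python's standard multiprocessing module stands in for a real HPC programming model, and all sizes are arbitrary:

    # The programmer decides how to split the work into chunks.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum the integers in [start, stop) -- one chunk of the full problem."""
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        # Partition [0, n) into `workers` equal, non-overlapping chunks.
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        assert total == n * (n - 1) // 2  # agrees with the closed-form answer
        print(total)

Here the decomposition is trivial because the chunks are independent; as the earthquake example in Section 1 shows, the hard part in real applications is handling chunks that must exchange data.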


Figure 1: UHeM's Altay computing system


Figure 2: UHeM's Sarıyer computing system

2. How can I use UHeM’s resources in my academic projects?

UHeM is open to project-based Research & Development activities. Even though most of these are academic projects funded by The Scientific and Technological Research Council of Turkey (TÜBİTAK) or by universities, UHeM can also be used in industrial R&D projects; academic projects are heavily subsidized by UHeM. To encourage research and grant applications to various agencies, UHeM also offers an "Academic Preparation Program", which provides a free start-up allocation to all faculty affiliated with a university in the country as a means of obtaining preliminary results. Similarly, certain amounts of computing resources are allocated free of charge to graduate and undergraduate students so that they can take advantage of computational research facilities in their theses, homework, and course projects. The current sizes of these allocations are listed on our web site (see the links at the end of this document).

Our web site also includes information on the project/account application process. Once the process is complete and a user account is established, UHeM's users can connect to our facilities remotely and perform their computations. Doing so requires minimal knowledge of the Linux operating system, which can also be gained from our Wiki pages. The Wiki also contains a wealth of useful information on submitting jobs to the system, checking the status and efficiency of running calculations, visualizing results, frequently used programs, etc.
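As an illustration only (the authoritative instructions are on the Wiki), a batch job on a cluster managed by the SLURM scheduler, a common choice at academic centers which is assumed here rather than confirmed for UHeM, might be written as a Python script whose #SBATCH directives SLURM reads as options and Python treats as comments:

    #!/usr/bin/env python3
    #SBATCH --job-name=demo
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --time=00:10:00

    # Everything below executes on the allocated compute node.
    import platform
    print("Job is running on:", platform.node())

Such a script would typically be submitted with "sbatch job.py" and monitored with "squeue"; the directive names, queue/partition names, and resource limits valid at UHeM are documented on the Wiki pages mentioned above.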

In addition to providing computational resources, UHeM organizes free introductory lectures and workshops on topics such as parallel programming and Python, as well as Linux seminars and UHeM orientations aimed at first-time users. These are usually advertised on the university's activities page, and individuals can also receive announcements of our trainings by e-mail by providing an e-mail address via our training portal (see the bottom of https://training.uhem.itu.edu.tr).

Links:

Main page: https://www.uhem.itu.edu.tr
Academic Preparation Program: https://www.uhem.itu.edu.tr/akademik-hazirlik.html 
Project application portal: https://portal.uhem.itu.edu.tr/
Wiki site with useful information on system use, Linux, parallel programming, etc.: https://wiki.uhem.itu.edu.tr/w/index.php?title=English
Information related to academic and student accounts: https://www.uhem.itu.edu.tr/kullanim-turleri.html

İTÜ Informatics Institute


The ITU Informatics Institute provides graduate-level education and conducts research in applied informatics, computer sciences, computational science and engineering, and communication systems through its graduate programs.

Faculty members and students conduct research supported by national and international organizations in the fields of electromagnetic fields, communication systems/regulations, computational materials design, computational chemistry/biology, cryptography, signal/data processing/visualization, big data management, and climate and ocean sciences. Selected awards and achievements include:

  • Inclusion in the List of Most Influential Scientists: Assoc. Prof. B. Uğur Töreyin (list compiled by J. P. A. Ioannidis, K. W. Boyack, and J. Baas, published in the journal PLOS Biology)
  • Best Project Award in the Science category at the 2020 International Students Project Competition: Beltus Nkwawir Wiysobunri
  • "National-International Supports" First Prize of ITU ARI Teknokent: the Argenit company, of which Dr. Abdulkerim Çapar is among the founding partners
  • First place, Istanbul region, in the TÜBİTAK 2242 University Students Project Competition in Priority Areas: Ahmet Burak Özyurt
  • Best Presentation Award at the ICAT'18 conference: Sena Efsun Cebeci, 2018
  • TÜBİTAK Incentive Award: Assoc. Prof. Adem Tekin, 2016
  • "Technical Paper" and "Willis H. Carrier" Awards from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE): Asst. Prof. H. Salih Erden, 2016
  • Science Heroes Association Young Scientist of the Year Award: Assoc. Prof. B. Uğur Töreyin, 2016
  • Best Poster Award at the PRACEdays 2016 conference: Samet Demir
  • ITU Most Successful Thesis Award: Hatice Gokcan, 2016

A High Performance Computing Laboratory, established with the support of the State Planning Organization, also operates within the Institute.