On the use of federal scientific telecommunication infrastructure for high performance computing
Vestnik Ûžno-Uralʹskogo gosudarstvennogo universiteta. Seriâ Vyčislitelʹnaâ matematika i informatika, Volume 9 (2020) no. 1, pp. 20-35. This article was harvested from the Math-Net.Ru source.


The article is devoted to the prospects for developing a scientific telecommunications infrastructure based on the new-generation National Research Computer Network (NRCN), formed by integrating the departmental research and education networks RUNNet and RASNet. The new network's capabilities for combining supercomputer resources and providing barrier-free access to them are shown. Drawing on world experience, it is shown that supercomputer infrastructures impose special requirements on the data transmission network and need a number of additional services. These requirements go far beyond the offerings of commercial telecom providers and, as a rule, can be satisfied only by the combined efforts of national research and education networks. The key elements of the federal telecommunications infrastructure needed for combining high-performance computing resources are considered: high-performance communication channels with a specified quality of service, their automatic allocation on demand and on schedule, a trusted network environment, federated authentication and authorization, reliability and security, and end-to-end monitoring of the data transmission path between end users. Based on an analysis of the life cycle of a supercomputer job migrating within the distributed network, requirements for the NRCN telecommunications infrastructure and the services built on it are formulated.
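To make the job life-cycle analysis concrete, below is a minimal Python sketch (not from the paper; the center names, bandwidth figures, and the simple cost model are all illustrative assumptions) of the placement decision such a distributed network must support: the target center for a job is chosen by an estimated turnaround that counts input-data transfer over the network channel alongside queue wait and run time.

from dataclasses import dataclass


@dataclass
class Center:
    name: str
    queue_wait_s: float    # estimated wait in the local job queue, seconds
    link_gbps: float       # usable bandwidth of the channel to this center


@dataclass
class Job:
    name: str
    input_gb: float        # input data to stage in before the run
    run_estimate_s: float  # user-supplied runtime estimate, seconds


def turnaround_estimate(job: Job, center: Center) -> float:
    """Stage-in transfer time over the network link, plus queue wait, plus run time."""
    transfer_s = job.input_gb * 8 / center.link_gbps  # GB -> Gbit, then Gbit / (Gbit/s)
    return transfer_s + center.queue_wait_s + job.run_estimate_s


def dispatch(job: Job, centers: list[Center]) -> Center:
    """Route the job to the center with the smallest estimated turnaround."""
    return min(centers, key=lambda c: turnaround_estimate(job, c))


if __name__ == "__main__":
    centers = [
        Center("center-A", queue_wait_s=3600.0, link_gbps=10.0),
        Center("center-B", queue_wait_s=600.0, link_gbps=1.0),
    ]
    job = Job("cfd-run", input_gb=500.0, run_estimate_s=7200.0)
    best = dispatch(job, centers)
    print(best.name, turnaround_estimate(job, best))

For this data-heavy job the sketch picks center-A (11200 s vs. 11800 s for center-B) despite its longer queue, which is precisely the trade-off that motivates the article's requirements for high-performance channels with guaranteed quality of service and their allocation on demand.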
Keywords: national science and education network, supercomputer center, shared research facilities, distributed computing, telecommunications infrastructure
@article{VYURV_2020_9_1_a1,
     author = {G. I. Savin and B. M. Shabanov and A. V. Baranov and A. P. Ovsyannikov and A. A. Gonchar},
     title = {On the use of federal scientific telecommunication infrastructure for high performance computing},
     journal = {Vestnik \^U\v{z}no-Uralʹskogo gosudarstvennogo universiteta. Seri\^a Vy\v{c}islitelʹna\^a matematika i informatika},
     pages = {20--35},
     year = {2020},
     volume = {9},
     number = {1},
     language = {ru},
     url = {http://geodesic.mathdoc.fr/item/VYURV_2020_9_1_a1/}
}
TY  - JOUR
AU  - G. I. Savin
AU  - B. M. Shabanov
AU  - A. V. Baranov
AU  - A. P. Ovsyannikov
AU  - A. A. Gonchar
TI  - On the use of federal scientific telecommunication infrastructure for high performance computing
JO  - Vestnik Ûžno-Uralʹskogo gosudarstvennogo universiteta. Seriâ Vyčislitelʹnaâ matematika i informatika
PY  - 2020
SP  - 20
EP  - 35
VL  - 9
IS  - 1
UR  - http://geodesic.mathdoc.fr/item/VYURV_2020_9_1_a1/
LA  - ru
ID  - VYURV_2020_9_1_a1
ER  - 
%0 Journal Article
%A G. I. Savin
%A B. M. Shabanov
%A A. V. Baranov
%A A. P. Ovsyannikov
%A A. A. Gonchar
%T On the use of federal scientific telecommunication infrastructure for high performance computing
%J Vestnik Ûžno-Uralʹskogo gosudarstvennogo universiteta. Seriâ Vyčislitelʹnaâ matematika i informatika
%D 2020
%P 20-35
%V 9
%N 1
%U http://geodesic.mathdoc.fr/item/VYURV_2020_9_1_a1/
%G ru
%F VYURV_2020_9_1_a1
G. I. Savin; B. M. Shabanov; A. V. Baranov; A. P. Ovsyannikov; A. A. Gonchar. On the use of federal scientific telecommunication infrastructure for high performance computing. Vestnik Ûžno-Uralʹskogo gosudarstvennogo universiteta. Seriâ Vyčislitelʹnaâ matematika i informatika, Tome 9 (2020) no. 1, pp. 20-35. http://geodesic.mathdoc.fr/item/VYURV_2020_9_1_a1/

[1] V. E. Fortov, G. I. Savin, V. K. Levin, A. V. Zabrodin, B. M. Shabanov, “Creation and application of a high-performance computing system based on high-speed network technologies”, Journal of Information Technologies and Computing, 2002, no. 1, 3

[2] Deutsches Forschungsnetz, https://www.dfn.de

[3] CANARIE, https://www.canarie.ca

[4] Internet2, https://www.internet2.edu

[5] SURFnet, https://www.surf.nl/en

[6] AARNET, https://www.aarnet.edu.au

[7] China Education and Research Network, http://www.edu.cn/english

[8] NORDUnet. Nordic Gateway for Research and Education, https://www.nordu.net

[9] GEANT, https://www.geant.org

[10] Asi@Connect, http://www.tein.asia

[11] Asia Pacific Advanced Network, https://apan.net

[12] RedCLARA. Latin American Cooperation of Advanced Networks, https://www.redclara.net

[13] AfricaConnect2, https://www.africaconnect2.net

[14] C. Catlett, “The philosophy of TeraGrid: building an open, extensible, distributed TeraScale facility”, 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID 2002), 2002 | DOI

[15] XSEDE — The Extreme Science and Engineering Discovery Environment, https://www.xsede.org

[16] S. Bassini, C. Cavazzoni, C. Gheller, “European actions for High-Performance Computing: PRACE, DEISA and HPC-Europa”, Il Nuovo Cimento C, 2009, 93–97

[17] PRACE — Partnership for Advanced Computing in Europe, http://www.prace-ri.eu

[18] S. Matsuoka, S. Shimojo, M. Aoyagi, S. Sekiguchi, H. Usami, K. Miura, “Japanese Computational Grid Research Project: NAREGI”, Proceedings of the IEEE, 93:3 (2005), 522–533 | DOI

[19] PRACE: Europe’s supercomputing infrastructure relies on GEANT, https://impact.geant.org/portfolio/prace

[20] MD-VPN Product Description, https://wiki.geant.org/display/PLMTES/MD-VPN+Product+Description

[21] XSEDE System Requirements Specification v3.1, http://hdl.handle.net/2142/45102

[22] B. Shabanov, A. Ovsiannikov, A. Baranov, S. Leshchev, B. Dolgov, D. Derbyshev, “The distributed network of the supercomputer centers for collaborative research”, Program Systems: Theory and Applications, 8:4 (2017), 245–262 | DOI | MR

[23] B. M. Shabanov, P. N. Telegin, A. P. Ovsyannikov, A. V. Baranov, A. I. Tikhomirov, D. S. Lyakhovets, “The Jobs Management System for the Distributed Network of the Supercomputer Centers”, Proceedings of the Scientific Research Institute for System Analysis of the Russian Academy of Sciences, 8:6 (2018), 65–73 | DOI

[24] A. V. Baranov, A. I. Tikhomirov, “Methods and Tools for Organizing the Global Job Queue in the Geographically Distributed Computing System”, Bulletin of the South Ural State University. Series: Computational Mathematics and Software Engineering, 6:4 (2017), 28–42 | DOI

[25] B. M. Shabanov, P. N. Telegin, A. V. Baranov, D. V. Semenov, A. V. Chuvaev, “Dynamic Configurator for Virtual Distributed Computing Environment”, Software Journal: Theory and Applications, 2017, no. 4 | DOI | MR

[26] A. V. Baranov, G. I. Savin, B. M. Shabanov, et al., “Methods of Jobs Containerization for Supercomputer Workload Managers”, Lobachevskii Journal of Mathematics, 40:5 (2019), 525–534 | DOI | MR

[27] B. M. Shabanov, O. I. Samovarov, “Building the Software Defined Data Center”, Proceedings of the Institute for System Programming, 30:6 (2018), 7–24 | DOI

[28] A. Baranov, P. Telegin, A. Tikhomirov, “Comparison of Auction Methods for Job Scheduling with Absolute Priorities”, Parallel Computing Technologies (PaCT 2017), Lecture Notes in Computer Science, 2017, 387–395 | DOI

[29] A. P. Ovsyannikov, G. I. Savin, B. M. Shabanov, “Identity federation of the research and educational networks”, Software Systems, 2012, no. 4, 3–7 | MR

[30] A. V. Baranov, B. M. Shabanov, A. P. Ovsyannikov, “Federative Identity for the Distributed Infrastructure of the Supercomputer Centers”, Proceedings of the Scientific Research Institute for System Analysis of the Russian Academy of Sciences, 8:6 (2018), 79–83 | DOI | MR

[31] S. Koulouzis, A. Belloum, M. Bubak, P. Lamata, D. Nolte, D. Vasyunin, C. de Laat, “Distributed Data Management Service for VPH Applications”, IEEE Internet Computing, 20:2 (2016), 34–41 | DOI

[32] A. Kapadia, S. Varma, K. Rajana, Implementing Cloud Storage with OpenStack Swift, Packt Publishing, 2014, 105 pp.

[33] M. Jones, “Anatomy of a cloud storage infrastructure: Models, features, and internals”, 2010, https://www.ibm.com/developerworks/ru/library/cl-cloudstorage/cl-cloudstorage-pdf.pdf

[34] A. V. Baranov, D. Y. Derbyshev, B. V. Dolgov, S. A. Leshchev, A. P. Ovsyannikov, B. M. Shabanov, D. V. Vershinin, “Effective usage of the link between geographically distributed supercomputer centers”, Proceedings of the Scientific Research Institute for System Analysis of the Russian Academy of Sciences, 7:4 (2017), 137–142

[35] A. Hanemann, et al., “PerfSONAR: A Service Oriented Architecture for Multi-domain Network Monitoring”, Lecture Notes in Computer Science, 3826, 2005, 241–254 | DOI