Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems
Russian journal of nonlinear dynamics, Vol. 17 (2021), no. 1, pp. 5-21.


We suggest a universal map capable of recovering the behavior of a wide range of dynamical systems given by ODEs. The map is built as an artificial neural network whose weights encode the modeled system. We assume that the ODEs are known and prepare training datasets using the equations directly, without computing numerical time series. Parameter variations are taken into account in the course of training, so that the network model captures bifurcation scenarios of the modeled system. The theoretical benefit of this approach is that the universal model admits the application of common mathematical methods without the need to develop a separate theory for each particular set of dynamical equations. From the practical point of view, the developed method can be considered an alternative numerical method for solving dynamical ODEs, suitable for running on contemporary neural-network-specific hardware. We consider the Lorenz system, the Rössler system, and the Hindmarsh–Rose model. For these three examples the network model is created and its dynamics are compared with ordinary numerical solutions. High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
Keywords: neural network, dynamical system, numerical solution, universal approximation theorem, Lyapunov exponents.
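The abstract only outlines the approach, so the following is a purely illustrative sketch of the general idea in Python with TensorFlow/Keras (the tools cited in Refs. [12] and [13]). It is not the authors' architecture or training protocol: a small feedforward network is trained to reproduce a short fixed-step map of the Lorenz system, with training pairs generated directly from the equations at random phase-space points (no long time series) and with the parameter r supplied as an extra input so that parameter variation is learned as well. The layer sizes, sampling ranges, step size dt, and the use of a single Runge–Kutta step as the training target are all assumptions made for this sketch.

# Minimal illustrative sketch (not the authors' exact model): train a small
# network to approximate the time-dt map of the Lorenz system, with the
# parameter r as an extra input. All hyperparameters below are assumptions.
import numpy as np
import tensorflow as tf

def lorenz_rhs(state, r, sigma=10.0, b=8.0 / 3.0):
    # Right-hand side of the Lorenz system; works on a batch of states.
    x, y, z = state[..., 0], state[..., 1], state[..., 2]
    return np.stack([sigma * (y - x), r * x - y - x * z, x * y - b * z], axis=-1)

def rk4_step(state, r, dt):
    # One classical Runge-Kutta step, evaluated directly from the equations.
    k1 = lorenz_rhs(state, r)
    k2 = lorenz_rhs(state + 0.5 * dt * k1, r)
    k3 = lorenz_rhs(state + 0.5 * dt * k2, r)
    k4 = lorenz_rhs(state + dt * k3, r)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Training data: random phase-space points and parameter values, no trajectories.
rng = np.random.default_rng(0)
N, dt = 200_000, 0.01
states = rng.uniform([-25.0, -25.0, 0.0], [25.0, 25.0, 50.0], size=(N, 3))
r = rng.uniform(20.0, 30.0, N)
inputs = np.concatenate([states, r[:, None]], axis=1)   # (x, y, z, r)
targets = rk4_step(states, r, dt)                       # state after time dt

# Small multilayer perceptron: (state, parameter) -> state after one step.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(3),
])
model.compile(optimizer="adam", loss="mse")   # Adam optimizer, cf. Ref. [12]
model.fit(inputs, targets, epochs=20, batch_size=1024, verbose=0)

# The trained network is then iterated as a map to generate dynamics.
state, r_run = np.array([1.0, 1.0, 1.0], dtype=np.float32), 28.0
for _ in range(5000):
    inp = np.concatenate([state, [r_run]]).reshape(1, 4).astype(np.float32)
    state = model(inp, training=False).numpy()[0]
print(state)   # should lie close to the Lorenz attractor for r = 28

From a trajectory generated by iterating such a network one can, in principle, compute the quantities compared in the paper (attractor portraits, power spectra, bifurcation diagrams, Lyapunov exponents); the sketch above makes no claim about matching the authors' reported accuracy.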
@article{ND_2021_17_1_a1,
     author = {P. V. Kuptsov and A. V. Kuptsova and N. V. Stankevich},
     title = {Artificial {Neural} {Network} as a {Universal} {Model} of {Nonlinear} {Dynamical} {Systems}},
     journal = {Russian journal of nonlinear dynamics},
     pages = {5--21},
     publisher = {mathdoc},
     volume = {17},
     number = {1},
     year = {2021},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/ND_2021_17_1_a1/}
}
P. V. Kuptsov; A. V. Kuptsova; N. V. Stankevich. Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems. Russian journal of nonlinear dynamics, Vol. 17 (2021), no. 1, pp. 5-21. http://geodesic.mathdoc.fr/item/ND_2021_17_1_a1/

[1] Kolmogorov, A. N., “On the Representation of Continuous Functions of Several Variables by Superpositions of Continuous Functions of a Smaller Number of Variables”, Dokl. Akad. Nauk SSSR (N.S.), 108 (1956), 179–182 (Russian) | MR | Zbl

[2] Kolmogorov, A. N., “On the Representation of Continuous Functions of Many Variables by Superposition of Continuous Functions of One Variable and Addition”, Dokl. Akad. Nauk SSSR, 114 (1957), 953–956 (Russian) | MR | Zbl

[3] Arnold, V. I., “On Functions of Three Variables”, Dokl. Akad. Nauk SSSR, 114 (1957), 679–681 (Russian) | MR | Zbl

[4] Kůrková, V., “Kolmogorov's Theorem and Multilayer Neural Networks”, Neural Netw., 5:3 (1992), 501–506 | DOI | MR

[5] Hornik, K., Stinchcombe, M., and White, H., “Multilayer Feedforward Networks Are Universal Approximators”, Neural Netw., 2:5 (1989), 359–366 | DOI | MR | Zbl

[6] Cybenko, G., “Approximation by Superpositions of a Sigmoidal Function”, Math. Control Signals Syst., 2:4 (1989), 303–314 | DOI | MR | Zbl

[7] Hecht-Nielsen, R., “Kolmogorov's Mapping Neural Network Existence Theorem”, Proc. of the IEEE 1st Internat. Conf. on Neural Networks (San Diego, Calif.): Vol. 3, IEEE, Piscataway, N.J., 1987, 11–13

[8] Funahashi, K., “On the Approximate Realization of Continuous Mappings by Neural Networks”, Neural Netw., 2:3 (1989), 183–192 | DOI

[9] Light, W., “Ridge Functions, Sigmoidal Functions and Neural Networks”, Approximation Theory VII, eds. E. W. Cheney, C. K. Chui, L. L. Schumaker, Acad. Press, Boston, 1992, 158–201 | MR

[10] Haykin, S., Neural Networks and Learning Machines, 3rd ed., Pearson, New York, 2009, 936 pp.

[11] Kline, M., Mathematical Thought from Ancient to Modern Times, v. 2, Oxford Univ. Press, New York, 1990, 480 pp. | MR

[12] Kingma, D. P. and Ba, J., Adam: A Method for Stochastic Optimization, 2014, arXiv: 1412.6980 [cs.LG] | Zbl

[13] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Zh., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mane, D., Monga, R., Moore, Sh., Murray, D., Olah, Ch., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viegas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X., TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2015, https://www.tensorflow.org/

[14] Nickolls, J., Buck, I., Garland, M., and Skadron, K., “Scalable Parallel Programming with CUDA”, ACM Queue, 6:2 (2008), 42–53 | DOI

[15] Benettin, G., Galgani, L., Giorgilli, A., and Strelcyn, J.-M., “Lyapunov Characteristic Exponents for Smooth Dynamical Systems and for Hamiltonian Systems: A Method for Computing All of Them. Part 1: Theory”, Meccanica, 15 (1980), 9–20 | DOI | Zbl

[16] Shimada, I. and Nagashima, T., “A Numerical Approach to Ergodic Problem of Dissipative Dynamical Systems”, Prog. Theor. Phys., 61:6 (1979), 1605–1616 | DOI | MR | Zbl

[17] Zhang, G. P., “Neural Networks for Time-Series Forecasting”, Handbook of Natural Computing, eds. G. Rozenberg, Th. Bäck, J. N. Kok, Springer, Berlin, 2012, 461–477 | DOI | MR

[18] Lewis, N. D., Deep Time Series Forecasting with Python: An Intuitive Introduction to Deep Learning for Applied Time Series Modeling, Createspace, Scotts Valley, Calif., 2016, 212 pp.

[19] Brownlee, J., Deep Learning for Time Series Forecasting: Predict the Future with MLPs, CNNs and LSTMs in Python, Machine Learning Mastery, San Francisco, Calif., 2018, 557 pp.

[20] Wei, Y., Zhou, J., Wang, Y., Liu, Y., Liu, Q., Luo, J., Wang, C., Ren, F., and Huang, L., “A Review of Algorithm Hardware Design for AI-Based Biomedical Applications”, IEEE Trans. Biomed. Circuits Syst., 14:2 (2020), 145–163 | DOI | MR

[21] Talib, M. A., Majzoub, S., Nasir, Q., and Jamal, D., “A Systematic Literature Review on Hardware Implementation of Artificial Intelligence Algorithms”, J. Supercomput., 77 (2021), 1897–1938 | DOI

[22] Lorenz, E. N., “Deterministic Nonperiodic Flow”, J. Atmos. Sci., 20:2 (1963), 130–141 | DOI | MR | Zbl

[23] Sparrow, C., The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors, Springer, Berlin, 1982, xii, 270 pp. | MR | Zbl

[24] Schuster, H. G. and Just, W., Deterministic Chaos: An Introduction, rev. and enl., 4th ed., Wiley-VCH, Weinheim, 2005, 312 pp. | MR | Zbl

[25] Rössler, O. E., “An Equation for Continuous Chaos”, Phys. Lett. A, 57:5 (1976), 397–398 | DOI | MR | Zbl

[26] Kuznetsov, S. P., Dynamical Chaos, 2nd ed., Fizmatlit, Moscow, 2006, 356 pp. (Russian)

[27] Hindmarsh, J. L. and Rose, R. M., “A Model of Neuronal Bursting Using Three Coupled First Order Differential Equations”, Proc. R. Soc. Lond. Ser. B Biol. Sci., 221:1222 (1984), 87–102 | DOI

[28] Wang, X.-J., “Genesis of Bursting Oscillations in the Hindmarsh – Rose Model and Homoclinicity to a Chaotic Saddle”, Phys. D, 62:1 (1993), 263–274 | DOI | MR | Zbl

[29] Petzold, L., “Automatic Selection of Methods for Solving Stiff and Nonstiff Systems of Ordinary Differential Equations”, SIAM J. Sci. Statist. Comput., 4:1 (1983), 136–148 | DOI | MR | Zbl

[30] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A., “Automatic Differentiation in PyTorch”, Proc. of the 31st Conf. on Neural Information Processing Systems (NIPS, Long Beach, Calif., USA, 2017), 4 pp.