On the gradient of neuronetwork function
Vestnik rossijskih universitetov. Matematika, Tome 22 (2017) no. 3, pp. 552-557
This article was harvested from the Math-Net.Ru source
The paper proposes a matrix formula for the gradient $\nabla_W f(X;W)$ of the neuronetwork function with respect to the parameter vector $W$.
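The paper itself gives its matrix formula in the Russian original; as a minimal illustrative sketch (not the authors' exact formula), the standard matrix form of the backpropagation gradient for a one-layer network $f(X;W)=\sigma(WX)$ can be written with the Hadamard product, and checked against finite differences:

```python
import numpy as np

# Sketch only: gradient of L(sigma(W X)) w.r.t. the weight matrix W,
# in matrix form with Hadamard products (*), as in standard backprop:
#   dL/dW = (dL/dY ⊙ sigma'(W X)) X^T

def sigma(z):
    """Elementwise logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-z))

def grad_W(X, W, dL_dY):
    """Matrix gradient (dL/dY ⊙ sigma'(Z)) X^T with Z = W X."""
    Z = W @ X
    S = sigma(Z)
    delta = dL_dY * S * (1.0 - S)   # Hadamard products
    return delta @ X.T

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))         # 3 inputs, 5 samples
W = rng.normal(size=(2, 3))         # 2 output neurons
dL_dY = np.ones((2, 5))             # loss L = sum(Y), so dL/dY = 1

G = grad_W(X, W, dL_dY)

# Central finite-difference check of every entry of the gradient.
eps = 1e-6
G_num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        G_num[i, j] = (sigma(Wp @ X).sum() - sigma(Wm @ X).sum()) / (2 * eps)

print(np.allclose(G, G_num, atol=1e-6))
```

The Hadamard product appears because the activation $\sigma$ acts elementwise, so its Jacobian is diagonal and multiplies the upstream gradient entrywise.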
Keywords:
neuronetwork function, neural network, backpropagation algorithm
Keywords: Hadamard product.
@article{VTAMU_2017_22_3_a6,
author = {N. M. Mishachev and A. M. Shmyrin},
title = {On the gradient of neuronetwork function},
journal = {Vestnik rossijskih universitetov. Matematika},
pages = {552--557},
year = {2017},
volume = {22},
number = {3},
language = {ru},
url = {http://geodesic.mathdoc.fr/item/VTAMU_2017_22_3_a6/}
}
N. M. Mishachev; A. M. Shmyrin. On the gradient of neuronetwork function. Vestnik rossijskih universitetov. Matematika, Tome 22 (2017) no. 3, pp. 552-557. http://geodesic.mathdoc.fr/item/VTAMU_2017_22_3_a6/
[1] S. Haykin, Neural Networks: A Comprehensive Foundation, Williams, Moscow, 2006 (in Russian)
[2] D. E. Rumelhart, G. E. Hinton, R. J. Williams, “Learning Internal Representations by Error Propagation”, Parallel Distributed Processing, Chapter 8, MIT Press, Cambridge, Massachusetts, 1986, pp. 318–362