The UD RLS Algorithm for Training Feedforward Neural Networks
International Journal of Applied Mathematics and Computer Science, Volume 15 (2005) no. 1, pp. 115-123.

A new algorithm for training feedforward multilayer neural networks is proposed. It is based on recursive least squares (RLS) procedures and U-D factorization, a well-known technique in filter theory. It is shown that, owing to the U-D factorization, the algorithm requires fewer computations than the classical RLS algorithm applied to feedforward multilayer neural network training.
Keywords: neural networks, learning algorithms, recursive least squares method, UD factorization
Keywords (Polish): neural network, learning algorithm, least squares method
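
The core of any U-D RLS scheme is Bierman's rank-one update of the factored inverse input-correlation matrix P = U D U^T, where U is unit upper triangular and D is diagonal and positive. The Python sketch below shows that update for a single linear neuron trained with a forgetting factor. It is a minimal illustration of the underlying technique under stated assumptions, not the paper's multilayer algorithm; the name ud_rls_step, its interface, and the constants are choices made for this example.

import numpy as np

def ud_rls_step(U, D, w, x, target, lam=0.98):
    # One exponentially weighted RLS step with P = U diag(D) U^T kept
    # in factored form (Bierman's measurement update). P itself is
    # never formed, so symmetry and positive definiteness cannot be
    # lost to round-off, and fewer operations are needed per step.
    n = x.size
    f = U.T @ x                # f = U^T x
    v = D * f                  # v[j] = D[j] * f[j]
    K = np.zeros(n)            # accumulates the unscaled gain P @ x
    alpha = lam                # the forgetting factor plays the role
                               # of the measurement-noise variance
    for j in range(n):
        alpha_new = alpha + f[j] * v[j]
        # the extra 1/lam implements P(k) = (1/lam)(P - g x^T P)
        D[j] *= alpha / (alpha_new * lam)
        for i in range(j):
            Uij = U[i, j]
            U[i, j] = Uij - (f[j] / alpha) * K[i]
            K[i] += v[j] * Uij
        K[j] = v[j]
        alpha = alpha_new
    e = target - w @ x         # a-priori output error
    w += (K / alpha) * e       # gain = P x / (lam + x^T P x)
    return U, D, w

A quick self-check on a noiseless linear target:

rng = np.random.default_rng(0)
n = 4
U, D, w = np.eye(n), np.full(n, 100.0), np.zeros(n)
w_true = rng.normal(size=n)
for _ in range(300):
    x = rng.normal(size=n)
    U, D, w = ud_rls_step(U, D, w, x, w_true @ x)
print(np.max(np.abs(w - w_true)))   # should be near zero

In a multilayer network the same update would run per neuron on its local input vector, with backpropagated errors forming each neuron's target; this is the general pattern of RLS-type training, and the paper's exact formulation may differ.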
@article{IJAMCS_2005_15_1_a8,
     author = {Bilski, J.},
     title = {The {UD} {RLS} {Algorithm} for {Training} {Feedforward} {Neural} {Networks}},
     journal = {International Journal of Applied Mathematics and Computer Science},
     pages = {115--123},
     publisher = {mathdoc},
     volume = {15},
     number = {1},
     year = {2005},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/IJAMCS_2005_15_1_a8/}
}
Bilski, J. The UD RLS Algorithm for Training Feedforward Neural Networks. International Journal of Applied Mathematics and Computer Science, Volume 15 (2005) no. 1, pp. 115-123. http://geodesic.mathdoc.fr/item/IJAMCS_2005_15_1_a8/
