Research Journal of Applied Sciences, Engineering and Technology


A Novel Strategy to Speed up Training of the Back Propagation Algorithm via Dynamic Adaptive Weight Training in Artificial Neural Networks

Mohameed Sarhan Al_Duais, AbdRazak Yaakub, Nooraini Yusoff and Faudziah Ahmed
Department of Computer Science, Universiti Utara Malaysia, 06010 Sintok, Kedah, Malaysia
Research Journal of Applied Sciences, Engineering and Technology  2015  9(3): 189-200
http://dx.doi.org/10.19026/rjaset.9.1394  |  © The Author(s) 2015
Received: June 25, 2014  |  Accepted: July 19, 2014  |  Published: January 25, 2015

Abstract

The main drawbacks of the Back Propagation (BP) algorithm are slow training, easy convergence to local minima and saturation during training. To overcome these problems, we created a new dynamic function for each of the training rate and the momentum term. In this study, we present the BPDRM algorithm, which trains with a dynamic training rate and a dynamic momentum term. We also propose a new strategy, consisting of multiple steps, to avoid inflation of the gross weight when the training rate and the momentum term are each added as a dynamic function. In this proposed strategy, fitting is done by establishing a relationship between the dynamic training rate and the dynamic momentum: the dynamic momentum term is placed implicitly inside the dynamic training rate, $$\alpha_{dmic} = f\left(\frac{1}{\eta_{dmic}}\right)$$. This procedure keeps the weights as moderate as possible (neither too small nor too large). The 2-dimensional XOR problem and the BUPA dataset were used as benchmarks to test the effects of the new strategy. All experiments were performed in MATLAB (R2012a). The experimental results show that the dynamic BPDRM algorithm provides superior training performance and trains faster than the BP algorithm at the same error limit.
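The strategy described above amounts to the standard BP-with-momentum weight update, $$\Delta w(t) = -\eta_{dmic}\,\frac{\partial E}{\partial w} + \alpha_{dmic}\,\Delta w(t-1)$$, with both coefficients made dynamic and the momentum tied to the training rate through $$\alpha_{dmic} = f\left(\frac{1}{\eta_{dmic}}\right)$$. The following Python sketch illustrates this idea on the 2-dimensional XOR benchmark. It is a minimal illustration, not the authors' BPDRM implementation: the schedule dynamic_eta and the coupling alpha = min(0.9, 0.09/eta) are assumed stand-ins for the paper's dynamic functions.

# Minimal sketch of back propagation on 2-D XOR with a dynamic training
# rate and an implicit dynamic momentum alpha = f(1/eta). The concrete
# schedule and coupling below are illustrative assumptions, not the
# paper's formulas.
import numpy as np

rng = np.random.default_rng(0)

# XOR benchmark: 2 inputs -> 1 output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-3-1 feed-forward network
W1 = rng.uniform(-1, 1, (2, 3)); b1 = np.zeros((1, 3))
W2 = rng.uniform(-1, 1, (3, 1)); b2 = np.zeros((1, 1))
dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)

def dynamic_eta(epoch, eta0=0.9, decay=1e-3):
    # Assumed schedule: the training rate adapts (here, decays) over time.
    return eta0 / (1.0 + decay * epoch)

for epoch in range(20000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E = Y - T
    mse = float(np.mean(E ** 2))
    if mse < 1e-3:          # limited-error stopping criterion
        break

    # Backward pass (error signal through the sigmoid derivatives)
    d2 = E * Y * (1 - Y)
    d1 = (d2 @ W2.T) * H * (1 - H)

    # Momentum placed implicitly in the training rate: alpha grows as
    # eta shrinks, capped below 1 (an assumed form of f(1/eta)).
    eta = dynamic_eta(epoch)
    alpha = min(0.9, 0.09 / eta)

    # Weight updates with momentum
    dW2 = -eta * (H.T @ d2) + alpha * dW2; W2 += dW2
    db2 = -eta * d2.sum(axis=0, keepdims=True) + alpha * db2; b2 += db2
    dW1 = -eta * (X.T @ d1) + alpha * dW1; W1 += dW1
    db1 = -eta * d1.sum(axis=0, keepdims=True) + alpha * db1; b1 += db1

print(f"stopped at epoch {epoch}, MSE = {mse:.5f}")

Capping alpha below 1 keeps the combined update bounded even as eta shrinks, which mirrors the paper's stated goal of keeping the gross weights moderate while both coefficients vary during training.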

Keywords:

Artificial neural network, dynamic back propagation algorithm, dynamic momentum term, dynamic training rate, speed up training



Competing interests

The authors have no competing interests.

Open Access Policy

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Copyright

© The Author(s) 2015.

ISSN (Online):  2040-7467
ISSN (Print):   2040-7459