Acceleration Strategies For The Backpropagation Neural Network Learning Algorithm

Zainuddin, Zarita (2001) Acceleration Strategies For The Backpropagation Neural Network Learning Algorithm. PhD thesis, Universiti Sains Malaysia.


Abstract

The backpropagation algorithm has proven to be one of the most successful neural network learning algorithms. However, as with many gradient-based optimization methods, it converges slowly and scales up poorly as tasks become larger and more complex. In this thesis, the factors that govern the learning speed of the backpropagation algorithm are investigated and analyzed mathematically in order to develop strategies that improve the performance of this neural network learning algorithm. These factors include the choice of initial weights, the choice of activation function and target values, and the two backpropagation parameters: the learning rate and the momentum factor.
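For reference, the two backpropagation parameters named in the abstract appear in the conventional gradient-descent weight-update rule with momentum; the following is a standard textbook form, not necessarily the exact formulation analyzed in the thesis:

\Delta w_{ij}(t) = -\eta \, \frac{\partial E}{\partial w_{ij}} + \alpha \, \Delta w_{ij}(t-1)

where \eta is the learning rate, \alpha is the momentum factor, E is the network error function, and w_{ij} are the connection weights, whose starting values are set by the initial-weight scheme chosen before training.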

Item Type: Thesis (PhD)
Subjects: Q Science > QA Mathematics > QA1-939 Mathematics
Divisions: Pusat Pengajian Sains Matematik (School of Mathematical Sciences)
Depositing User: HJ Hazwani Jamaluddin
Date Deposited: 06 Jan 2017 07:49
Last Modified: 06 Jan 2017 07:49
URI: http://eprints.usm.my/id/eprint/31464
