Zainuddin, Zarita (2001) Acceleration Strategies for the Backpropagation Neural Network Learning Algorithm. PhD thesis, Universiti Sains Malaysia.
Abstract
The backpropagation algorithm has proven to be one of the most successful neural network learning algorithms. However, as with many gradient-based optimization methods, it converges slowly and scales poorly as tasks become larger and more complex.

In this thesis, the factors that govern the learning speed of the backpropagation algorithm are investigated and analyzed mathematically in order to develop strategies for improving the performance of this neural network learning algorithm. These factors include the choice of initial weights, the choice of activation function and target values, and the two backpropagation parameters: the learning rate and the momentum factor.
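To make the abstract's named factors concrete, the following is a minimal sketch (not the thesis's own method or code) of plain batch backpropagation on the XOR task. It exposes each factor the abstract lists: the initial-weight range, a sigmoid activation with targets pulled inside its output range, a learning rate, and a momentum factor. All numeric values and the network size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task; targets are pulled to 0.1/0.9, inside the sigmoid's (0, 1) range,
# one of the choices the thesis abstract identifies as affecting learning speed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.1], [0.9], [0.9], [0.1]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights -- another of the factors under study.
W1 = rng.uniform(-0.5, 0.5, (2, 3)); b1 = np.zeros(3)
W2 = rng.uniform(-0.5, 0.5, (3, 1)); b2 = np.zeros(1)

eta, mu = 0.5, 0.9  # learning rate and momentum factor (arbitrary values)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

def mse():
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return float(((Y - T) ** 2).mean())

loss_before = mse()

for _ in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: error deltas through the sigmoid derivative y(1 - y).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # Momentum update: v <- mu*v - eta*grad, then w <- w + v.
    vW2 = mu * vW2 - eta * (H.T @ dY) / len(X); W2 += vW2
    vb2 = mu * vb2 - eta * dY.mean(axis=0);     b2 += vb2
    vW1 = mu * vW1 - eta * (X.T @ dH) / len(X); W1 += vW1
    vb1 = mu * vb1 - eta * dH.mean(axis=0);     b1 += vb1

loss_after = mse()
```

Raising `eta` or `mu` speeds convergence up to a point but risks oscillation, which is precisely the trade-off that motivates analyzing these parameters mathematically rather than tuning them by hand.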