Sathasivam, Saratha (2003) Optimization Methods In Training Neural Networks. Masters thesis, Universiti Sains Malaysia.
Abstract
There are a number of extremizing techniques for solving linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires the calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For such problems, optimization methods that require only first derivatives but still have quadratic termination are preferred.
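
For illustration only (this sketch is not part of the repository record), the trade-off the abstract describes can be written out for a twice-differentiable objective F(w), with w in R^n:

% Newton's step: minimizes a quadratic F exactly in one iteration, but each
% step needs the full n-by-n Hessian, roughly n(n+1)/2 distinct second derivatives.
\[
  w_{k+1} = w_k - \bigl[\nabla^2 F(w_k)\bigr]^{-1} \nabla F(w_k)
\]
% A first-derivative method with quadratic termination, for example linear
% conjugate gradient applied to F(w) = \tfrac{1}{2} w^\top A w - b^\top w
% with A symmetric positive definite, instead updates
\[
  w_{k+1} = w_k + \alpha_k d_k, \qquad d_{k+1} = -\nabla F(w_{k+1}) + \beta_k d_k,
\]
% and reaches the exact minimizer in at most n iterations using gradients only.

Conjugate gradient is named here only as a representative first-derivative method with quadratic termination; it is an assumption about which methods the thesis treats, since the thesis text itself is not quoted.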