forum.alglib.net http://forum.alglib.net/

A bug for neural networks functions? http://forum.alglib.net/viewtopic.php?f=2&t=133
Author: thanhtuan [ Fri Dec 03, 2010 7:28 am ]
Post subject: A bug for neural networks functions?
Hi, I am trying to use mlpbase and mlptrain properly. I'm not sure if this is a bug, but when I run mlptrain in a loop over the same dataset with the same parameters, with all variables local to each iteration, the results are VERY different across iterations, even though they should be exactly the same. Furthermore, when I run a single iteration in separate compile-and-run cycles, the results are exactly the same every time. So I suspect there are static variables in mlpbase or mlptrain that are modified each time some function is called, and that this stale state is then reused incorrectly on the next call. Please help. Thanks, Tuan
Author: Sergey.Bochkanov [ Fri Dec 03, 2010 10:00 am ]
Post subject: Re: A bug for neural networks functions?
Neural networks are randomized before training - this is common practice in machine learning. The behavior you've described is just a consequence of that randomization.
Author: thanhtuan [ Fri Dec 03, 2010 10:28 am ]
Post subject: Re: A bug for neural networks functions?
So even if I don't use mlprandomize() or mlprandomizefull(), are the weights still randomized?
Author: Sergey.Bochkanov [ Fri Dec 03, 2010 10:40 am ]
Post subject: Re: A bug for neural networks functions?
Yes, they are randomized internally, inside mlptrain().
Author: thanhtuan [ Fri Dec 03, 2010 10:48 am ]
Post subject: Re: A bug for neural networks functions?
If so, I believe the randomization is not very good, because the results are so different: the errors tend to be much higher in later iterations, and the first iterations always perform much better than the later ones.
Author: Sergey.Bochkanov [ Fri Dec 03, 2010 11:57 am ]
Post subject: Re: A bug for neural networks functions?
Can you give me some compact code that demonstrates this pattern - "first iterations always perform much better than the later ones"? mlptrain() uses a really simple RNG - the one included in the standard library. That may lead to correlations between adjacent weights, but not to the pattern you mentioned.