forum.alglib.net

ALGLIB forum




 Post subject: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 7:28 am 

Joined: Fri Dec 03, 2010 7:22 am
Posts: 3
Hi,
I am trying to use mlpbase and mlptrain properly. I'm not sure if this is a bug but when I run mlptrain in a loop with same dataset, same parameters, all variables are local to each iteration, the results are VERY different across different iterations, though they are supposed to be exactly the same. Furthermore, when I run one iteration on different compile and run, they show exactly the same result.
So I guess there are some static variables in mlpbase or mlptrain that is modified after each time some function is called, and the next time the function is called, maybe it is re-used wrongly.
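Roughly, my loop looks like this (a minimal sketch, assuming the ALGLIB 3.x C++ interface; the tiny XOR dataset and network sizes are placeholders, not my real data):

Code:
    #include <cstdio>
    #include "dataanalysis.h"
    using namespace alglib;

    int main()
    {
        // same dataset on every iteration (placeholder XOR data)
        real_2d_array xy = "[[0,0,0],[0,1,1],[1,0,1],[1,1,0]]";

        for (int i = 0; i < 10; i++)
        {
            // everything below is local to the iteration
            multilayerperceptron net;
            mlpcreate1(2, 5, 1, net);      // 2 inputs, 5 hidden neurons, 1 output

            ae_int_t info;
            mlpreport rep;
            mlptrainlm(net, xy, 4, 0.001, 3, info, rep);

            printf("iteration %d: rms = %f\n", i, mlprmserror(net, xy, 4));
        }
        return 0;
    }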
Please help,
Thanks,
Tuan


 Post subject: Re: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 10:00 am 
Site Admin

Joined: Fri May 07, 2010 7:06 am
Posts: 927
Neural networks are randomized before training - this is common practice in machine learning. The behavior you've described is simply a consequence of that randomization.
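For example, two back-to-back training runs with identical inputs are expected to give different weights and different errors (a sketch, assuming the C++ interface, with xy set up as in your code; dataset and network sizes are illustrative):

Code:
    multilayerperceptron net1, net2;
    mlpcreate1(2, 5, 1, net1);
    mlpcreate1(2, 5, 1, net2);

    ae_int_t info;
    mlpreport rep;
    mlptrainlm(net1, xy, 4, 0.001, 3, info, rep);   // starts from random weights
    mlptrainlm(net2, xy, 4, 0.001, 3, info, rep);   // different random weights

    // the two errors will generally differ - expected behavior, not a bug
    printf("%f vs %f\n", mlprmserror(net1, xy, 4), mlprmserror(net2, xy, 4));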


 Post subject: Re: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 10:28 am 

Joined: Fri Dec 03, 2010 7:22 am
Posts: 3
So even if I don't call mlprandomize() or mlprandomizefull(), the weights are still randomized?


 Post subject: Re: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 10:40 am 
Site Admin

Joined: Fri May 07, 2010 7:06 am
Posts: 927
Yes, they are randomized internally, inside mlptrain().
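The restarts parameter of the training functions controls this: training is started from several random initializations and the best network found is kept. Increasing it reduces run-to-run variance at the cost of training time (a fragment, with net and xy set up as before; the values are illustrative):

Code:
    ae_int_t info;
    mlpreport rep;

    // restarts = 10: train from 10 random starting points, keep the best
    mlptrainlm(net, xy, npoints, 0.001, 10, info, rep);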


 Post subject: Re: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 10:48 am 

Joined: Fri Dec 03, 2010 7:22 am
Posts: 3
If so, I believe the randomization is not very good, because the results differ so much. The errors tend to be much higher in later iterations: the first iterations always perform much better than the ones that follow.


 Post subject: Re: A bug for neural networks functions?
PostPosted: Fri Dec 03, 2010 11:57 am 
Site Admin

Joined: Fri May 07, 2010 7:06 am
Posts: 927
Can you give me some compact code that demonstrates this pattern - "the first iterations always perform much better than the next ones"?

mlptrain() uses a really simple RNG - the one included in the standard library. It may lead to correlations between adjacent weights, but not to the pattern you mentioned.
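Something along these lines would do (a sketch, assuming the C++ interface, with xy standing in for your training set) - it compares the average error of the first ten iterations against the next ten:

Code:
    double first = 0, second = 0;
    for (int i = 0; i < 20; i++)
    {
        multilayerperceptron net;
        mlpcreate1(2, 5, 1, net);

        ae_int_t info;
        mlpreport rep;
        mlptrainlm(net, xy, 4, 0.001, 1, info, rep);   // restarts = 1: one random start per iteration

        (i < 10 ? first : second) += mlprmserror(net, xy, 4) / 10;
    }
    // with independent random initializations the two averages should be close
    printf("mean rms 0-9: %f, mean rms 10-19: %f\n", first, second);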

