forum.alglib.net

ALGLIB forum

All times are UTC




Post subject: Convergence property of Multinomial Logit training "mnltrainh"
PostPosted: Fri Mar 08, 2013 2:48 pm 

Joined: Wed Jan 23, 2013 1:43 pm
Posts: 4
For ALGLIB's Multinomial Logistic Regression:

Looking at the source code of the function "mnltrainh", I see that training ends only when both of the following conditions are met:
a) the function "spdmatrixcholeskysolve" returns true, and
b) the function "logit_mnlmcsrch" returns the desired values (related to the line-search tolerances, etc.)

In general, the "training loop" inside "mnltrainh" consists of:
i) calculating the Hessian
ii) calculating the gradient
iii) a multidimensional line search to move toward the optimal parameter values
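To make the loop concrete, here is a minimal sketch of that Newton-type iteration for the K=2 special case (binary logistic regression). This is not ALGLIB's actual "mnltrainh" code: the simple backtracking line search (in place of "logit_mnlmcsrch") and the ridge fallback when the Cholesky factorization fails are assumptions made for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, X, y):
    # Objective minimized by the training loop.
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def newton_logit(X, y, iters=20):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        g = X.T @ (p - y)                      # ii) gradient
        s = p * (1.0 - p)
        H = X.T @ (X * s[:, None])             # i) Hessian
        try:
            # Analogue of spdmatrixcholeskysolve: factor the SPD Hessian...
            L = np.linalg.cholesky(H)
        except np.linalg.LinAlgError:
            # ...with a ridge fallback if it is not SPD (assumption, not ALGLIB's policy).
            L = np.linalg.cholesky(H + 1e-6 * np.eye(H.shape[0]))
        # ...and solve H @ step = g via the two triangular systems.
        step = np.linalg.solve(L.T, np.linalg.solve(L, g))
        # iii) backtracking line search on the negative log-likelihood.
        t, f0 = 1.0, neg_log_likelihood(w, X, y)
        while neg_log_likelihood(w - t * step, X, y) > f0 and t > 1e-8:
            t *= 0.5
        w = w - t * step
    return w
```

Note that the line search only guarantees the objective (negative log-likelihood) does not increase per step; it says nothing directly about the Euclidean distance to the optimal weights, which is what the questions below ask about.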

Questions:
1) Is it true that after each iteration of the above loop, the network weights move closer to the "optimal" weights? That is, is the distance between the current weights and the optimal weights strictly decreasing across iterations?
2) Does the answer to question 1) depend in any way on the return values of "spdmatrixcholeskysolve" or "logit_mnlmcsrch"? That is, if I terminate the training loop at an arbitrary iteration, regardless of those return values, am I guaranteed that the current weights are "better" than the previous ones?

Thank you!

