forum.alglib.net

ALGLIB forum

All times are UTC


Forum rules


1. This forum can be used for discussion of both ALGLIB-related and general numerical analysis questions.
2. This forum is English-only; postings in other languages will be removed.



 Post subject: Accessing Hessian or unscaled covariance of LM fit?
PostPosted: Fri Apr 05, 2024 8:40 pm 

Joined: Fri Apr 05, 2024 8:18 pm
Posts: 4
Hello,

I have a question about accessing the Hessian or the unscaled covariance matrix after a Levenberg-Marquardt fit. I have the functional form of the equation as well as the derivatives of that functional form with respect to the model parameters. However, I do not have an analytical expression for the Hessian. As a result, I am using lsfitcreatefg (as opposed to the related "f" or "fgh" versions of lsfitcreate) like this:
Code:
double epsx = 0.000001;
int maxits = 5000;
int info = 0;
alglib.lsfitstate state;
alglib.lsfitreport report;

alglib.lsfitcreatefg (x, y, c, false, out state);
alglib.lsfitsetcond (state, epsx, maxits);
alglib.lsfitfit (state, function_myEq_func, function_myEq_grad, null, null);
alglib.lsfitresults (state, out info, out c, out report);

The fit works really well. It returns the same coefficients that I get when I perform the fit in Python. That said, how can I obtain the Hessian evaluated at the minimizing point? It looks like the Hessian should be available as state.innerobject.h: h is the nomenclature for the Hessian in the calls to lsfit (as noted above) and it is a 2D array of doubles, but it has a size of 0x0. Is there a flag that I need to set to ask that the Hessian be reported?

Related to this is the covariance matrix that is reported in report.covpar. This value is equivalent to the scaled covariance matrix that is reported by a call to the Python lmfit minimize function with the parameter scale_covar = True (its default value). With scale_covar = False, the unscaled covariance matrix is returned by Python. How can I either "unscale" the covariance matrix reported as report.covpar, or ask ALGLIB to return the unscaled version of the covariance matrix?

This question looks similar to this one: http://forum.alglib.net/viewtopic.php?f=2&t=293&hilit=covariance


 Post subject: Re: Accessing Hessian or unscaled covariance of LM fit?
PostPosted: Mon Apr 08, 2024 2:58 am 

Joined: Fri Apr 05, 2024 8:18 pm
Posts: 4
I'm following up here with an implemented example.

Below is an image of a functioning Python example. For a single dataset, a fit to a second-order polynomial (y = a0 + a1*x + a2*x^2) is performed and either the scaled or the unscaled covariance matrix is returned.
Attachment: Python_scale_covar_question.png


Below is code that uses ALGLIB to perform a fit of the same functional form (i.e., a second-order polynomial) to the same dataset. The same coefficients are found. Based on a check of the diagonal elements (see the output at the bottom of the image), the covariance matrix calculated by ALGLIB is the scaled covariance matrix: its diagonal values agree with the diagonal elements of the scaled covariance matrix returned by Python, not with the diagonal elements of the unscaled covariance matrix returned by Python.
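As an independent cross-check (outside both ALGLIB and lmfit; a sketch assuming unit measurement uncertainties and the same x values), the unscaled covariance matrix for a linear-in-parameters model such as this polynomial is simply the inverse of JᵀJ, where J is the design matrix:

```python
import numpy as np

# Design matrix (Jacobian) for y = a0 + a1*x + a2*x^2 at x = 0..4
x = np.arange(5.0)
J = np.vander(x, 3, increasing=True)  # columns: 1, x, x^2

# With unit uncertainties, the unscaled covariance is (J^T J)^-1
unscaled_cov = np.linalg.inv(J.T @ J)
print(np.diag(unscaled_cov))  # ~ [0.8857, 1.2429, 0.0714]
```

The diagonal values agree with the unscaled covariance diagonal reported by Python with scale_covar = False, so for this model the unscaled matrix can be recovered analytically.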

How can I get the unscaled covariance matrix using ALGLIB?

Additionally, how can I get the Hessian evaluated at the minimizing point?

Code:
      [TestMethod]
      public void TestAlgLibSecondOrderModel()
      {

         var x      = new double[,] { { 0 },          { 1 },               { 2 },           { 3 },            { 4 }             };
        //var y      = new double[]  { 3.9 + 0.0 + 0.0, 3.9 + 2.0 + (-0.0), 3.9 + 8.0 + 0.0, 3.9 + 18.0 + 0.0, 3.9 + 32.0 + -0.0 };   // No noise
          var y      = new double[]  { 3.9 + 0.0 + 0.3, 3.9 + 2.0 + (-0.2), 3.9 + 8.0 + 0.4, 3.9 + 18.0 + 0.5, 3.9 + 32.0 + -0.4 };
         var c      = new double[]  { 10.0, 0.0, 0.0 };
         var epsx   = 0.000001;
         var maxits   = 5000;
         var info   = 0;
         alglib.lsfitstate state;
         alglib.lsfitreport rep;

         // Fitting without weights
         alglib.lsfitcreatefg   (x, y, c, false, out state);
         alglib.lsfitsetcond      (state, epsx, maxits);
         alglib.lsfitfit         (state, function_SecondOrder_func, function_SecondOrder_grad, null, null);
         alglib.lsfitresults      (state, out info, out c, out rep);

         // Assertions
         // Expected values come from Python & are also found in TestMathnetLMSolver.TestLMSolverResultsForEquation25aFitUnconstrainedCoefficients()
         Assert.AreEqual(3.97, c[0], 0.01);
         Assert.AreEqual(0.30, c[1], 0.01);
         Assert.AreEqual(1.91, c[2], 0.01);
       
         // Check the solver's scaled covariance results
         var cov = rep.covpar;
         Assert.AreEqual(0.20295512, cov[0, 0], 1.00);  // From Python (these are scaled covariance values).
         Assert.AreEqual(0.28479186, cov[1, 1], 0.01);  // From Python (these are scaled covariance values).
         Assert.AreEqual(0.01636735, cov[2, 2], 0.01);  // From Python (these are scaled covariance values).
      }


For completeness, here is the Python code:
Code:
from lmfit   import minimize, Parameters
import numpy as np

# The Method that encapsulates the second order polynomial Model that will be fit to measured pressure data
def SecondOrderModel(params, x):
    a0      = params['a0']
    a1      = params['a1']
    a2      = params['a2']

    returnValue = a0 + a1 * x + a2 * pow(x, 2)

    return(returnValue)

# This Method returns the difference between the measured and fitted values and is needed by the LM fitting routine
def residualSecondOrderModel(params, x, y, uncertainty):

    theModel    = SecondOrderModel(params, x)
    returnValue = (y - theModel)/(uncertainty)

    return(returnValue)

# --- Main Routine ---
# Initial Guesses (IGs).
IG_a0 = 10.0
IG_a1 =  0.0
IG_a2 =  0.0

x_ToFit = np.array([0.0,             1.0,                2.0,             3.0,              4.0              ])
#y_ToFit = np.array([3.9 + 0.0 + 0.0, 3.9 + 2.0 + (-0.0), 3.9 + 8.0 + 0.0, 3.9 + 18.0 + 0.0, 3.9 + 32.0 + -0.0]) # 3.9 + 0x + 2x^2 with no noise
y_ToFit = np.array([3.9 + 0.0 + 0.3, 3.9 + 2.0 + (-0.2), 3.9 + 8.0 + 0.4, 3.9 + 18.0 + 0.5, 3.9 + 32.0 + -0.4]) # 3.9 + 0x + 2x^2 with a bit of hand-made noise

# Set all uncertainties to 1.0 - Numerical Recipes, Third Edition, Section 15.4 (page 788) & initialize parameters
uncertainty     = np.ones(x_ToFit.size)
params2ndOrder  = Parameters()
params2ndOrder.add("a0",       value = IG_a0)
params2ndOrder.add("a1",       value = IG_a1)
params2ndOrder.add("a2",       value = IG_a2)

out2ndOrder_lmfit_T = minimize(residualSecondOrderModel, params2ndOrder, args = (x_ToFit, y_ToFit, uncertainty), scale_covar=True)
print("--- SCALED COVARIANCE (scale_covar = True) ---")
print(out2ndOrder_lmfit_T.params["a0"])
print(out2ndOrder_lmfit_T.params["a1"])
print(out2ndOrder_lmfit_T.params["a2"])
print(out2ndOrder_lmfit_T.covar)

out2ndOrder_lmfit_F = minimize(residualSecondOrderModel, params2ndOrder, args = (x_ToFit, y_ToFit, uncertainty), scale_covar=False)
print("--- UNSCALED COVARIANCE (scale_covar = False) ---")
print(str(out2ndOrder_lmfit_F.params["a0"]) + "   a0_T/a0_F = " + str(out2ndOrder_lmfit_T.covar[0][0] / out2ndOrder_lmfit_F.covar[0][0]))
print(str(out2ndOrder_lmfit_F.params["a1"]) + "   a1_T/a1_F = " + str(out2ndOrder_lmfit_T.covar[1][1] / out2ndOrder_lmfit_F.covar[1][1]))
print(str(out2ndOrder_lmfit_F.params["a2"]) + "   a2_T/a2_F = " + str(out2ndOrder_lmfit_T.covar[2][2] / out2ndOrder_lmfit_F.covar[2][2]))
print(out2ndOrder_lmfit_F.covar)


 Post subject: Re: Accessing Hessian or unscaled covariance of LM fit?
PostPosted: Mon Apr 08, 2024 9:16 pm 

Joined: Fri Apr 05, 2024 8:18 pm
Posts: 4
After a bit more investigation I've found that the scaling factor is the reduced chi-squared statistic. The rep.noise variable is an array of doubles whose size is dictated by the number of elements in the dataset. In my example I have 5 data points (one each at x = 0, 1, 2, 3, 4), and the square root of the reduced chi-squared statistic is found in the rep.noise variable, all 5 elements of which are equal to 0.47868868499564171. The square of this value is 0.229142857, which is the reduced chi-squared statistic reported by Python.
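This relationship can be verified with a quick NumPy sketch (same dataset; for this linear-in-parameters model the ordinary least-squares solution coincides with the LM fit):

```python
import numpy as np

x = np.arange(5.0)
y = np.array([4.2, 5.7, 12.3, 22.4, 35.5])  # 3.9 + 2x^2 plus the hand-made noise
J = np.vander(x, 3, increasing=True)        # columns: 1, x, x^2

c, *_ = np.linalg.lstsq(J, y, rcond=None)   # ~ [3.974, 0.301, 1.907]
resid = y - J @ c
redchi = resid @ resid / (len(y) - len(c))  # chi-squared / (N - M) degrees of freedom
print(redchi)                               # ~ 0.2291428... = 0.47868868...**2

unscaled_cov = np.linalg.inv(J.T @ J)
scaled_cov = redchi * unscaled_cov          # ~ rep.covpar (lmfit's scale_covar=True)
```

So rep.covpar divided by rep.noise[0]² recovers the unscaled covariance, exactly as the updated C# test below does.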

This raises the question: why is the reduced chi-squared statistic reported once per data point?

My C# code that uses ALGLIB has been updated to reflect this scaling factor (see below).

Code:
      [TestMethod]
      public void TestAlgLibSecondOrderModel()
      {

         var x      = new double[,] { { 0 },          { 1 },               { 2 },           { 3 },            { 4 }             };
       //var y      = new double[]  { 3.9 + 0.0 + 0.0, 3.9 + 2.0 + (-0.0), 3.9 + 8.0 + 0.0, 3.9 + 18.0 + 0.0, 3.9 + 32.0 + -0.0 };   // No noise
         var y      = new double[]  { 3.9 + 0.0 + 0.3, 3.9 + 2.0 + (-0.2), 3.9 + 8.0 + 0.4, 3.9 + 18.0 + 0.5, 3.9 + 32.0 + -0.4 };
         var c      = new double[]  { 10.0, 0.0, 0.0 };
         var epsx   = 0.000001;
         var maxits   = 5000;
         var info   = 0;
         alglib.lsfitstate state;
         alglib.lsfitreport rep;

         // Fitting without weights
         alglib.lsfitcreatefg   (x, y, c, false, out state);
         alglib.lsfitsetcond      (state, epsx, maxits);
         alglib.lsfitfit         (state, function_SecondOrder_func, function_SecondOrder_grad, null, null);
         alglib.lsfitresults      (state, out info, out c, out rep);

         // Assertions
         // Expected values come from Python & are also found in TestMathnetLMSolver.TestLMSolverResultsForEquation25aFitUnconstrainedCoefficients()
         Assert.AreEqual(3.97, c[0], 0.01);
         Assert.AreEqual(0.30, c[1], 0.01);
         Assert.AreEqual(1.91, c[2], 0.01);
       
         // Check the solver's scaled covariance results
         var cov = rep.covpar;
         Assert.AreEqual(0.20295512, cov[0, 0], 0.001);  // From Python (these are scaled covariance values).
         Assert.AreEqual(0.28479186, cov[1, 1], 0.001);  // From Python (these are scaled covariance values).
         Assert.AreEqual(0.01636735, cov[2, 2], 0.001);  // From Python (these are scaled covariance values).

         // Check the solver's unscaled covariance results
         var scalingFactor = Math.Pow(rep.noise[0], 2);   // Noise has 5 elements (the same number of data points defined above) that all appear to be identical. Will the noise in every cell always be identical?
         Assert.AreEqual(0.88571438, cov[0, 0] / scalingFactor, 0.001);  // From Python (these are unscaled covariance values).
         Assert.AreEqual(1.24285726, cov[1, 1] / scalingFactor, 0.001);  // From Python (these are unscaled covariance values).
         Assert.AreEqual(0.07142857, cov[2, 2] / scalingFactor, 0.001);  // From Python (these are unscaled covariance values).
      }


Aside - the lmfit-py fitting documentation on GitHub (shown below) is a useful reference for the scaled covariance matrix.
Attachment: 2024-04-08 13_30_39-lmfit-py_doc_fitting_rst at master_lmfit_lmfit-py.png


 Post subject: Re: Accessing Hessian or unscaled covariance of LM fit?
PostPosted: Sat Apr 13, 2024 3:11 pm 
Site Admin

Joined: Fri May 07, 2010 7:06 am
Posts: 927
Hi!

Sorry for the delay in replying! The lsfitstate.h field is intended for communication with the user, but this particular protocol is no longer publicly supported, so you can't get the Hessian from it. It is possible to include the Hessian in the reports; I will add it to the issues tracker. However, I am not sure whether it will be in 4.02, which is scheduled for April-May, or in 4.03. I will reply with my decision later.


 Post subject: Re: Accessing Hessian or unscaled covariance of LM fit?
PostPosted: Mon Apr 15, 2024 3:46 am 

Joined: Fri Apr 05, 2024 8:18 pm
Posts: 4
Understood. Thanks for the response!

Related to this, I have been looking at the noise field of the report class. In the simple example that I described in my April 08 post, I was fitting 5 data points; the noise field was an array of 5 doubles, all equal to the same value, namely 0.47868868499564171. Is it fair to assume that if I fit N data points, the noise field will be an array of N doubles, all equal to the same value?

