RcdMathLib_doc
Open Source Library for Linear and Non-linear Algebra
optimization_test.c File Reference

Examples of optimization algorithms.

#include <stdio.h>
#include <math.h>
#include "matrix.h"
#include "vector.h"
#include "levenberg_marquardt.h"
#include "modified_gauss_newton.h"


Functions

void optimization_get_J (vector_t x0_vec[], matrix_t J[][3])
 Calculate the Jacobian matrix of the function optimization_get_f_error.
 
void optimization_get_f_error (vector_t x0_vec[], vector_t measured_data_vec[], vector_t f_vec[])
 Calculate the error vector of the approximation.
 
void optimization_test (void)
 Examples of optimization algorithms using the LVM and modified GN algorithms.
 
void optimization_get_exp_f (vector_t x_vec[], vector_t data_vec[], vector_t f_vec[])
 Calculate the error vector using exponential data.
 
void optimization_get_exp_Jacobian (vector_t x_vec[], matrix_t J[][2])
 Calculate the Jacobian matrix using exponential data.
 
void optimization_exponential_data_test (void)
 Examples of optimization algorithms using exponential data.
 
void optimization_get_sin_f (vector_t x_vec[], vector_t data_vec[], vector_t f_vec[])
 Calculate the error vector using sinusoidal data.
 
void optimization_get_sin_Jacobian (vector_t x_vec[], matrix_t J[][4])
 Calculate the Jacobian matrix using sinusoidal data.
 
void optimization_sinusoidal_data_test (void)
 Examples of optimization algorithms using sinusoidal data.
 

Detailed Description

Examples of optimization algorithms.

Examples of optimization algorithms (see the modified Gauss-Newton (GN) and Levenberg-Marquardt (LVM) optimization methods).

Author
Zakaria Kasmi zkasmi@inf.fu-berlin.de

Definition in file optimization_test.c.
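
Both methods fit model parameters by minimizing a sum of squared residuals. As a textbook-form orientation (not taken from the library source): Gauss-Newton iterates $ \vec{x}_{k+1} = \vec{x}_k - \left( J_f^T J_f \right)^{-1} J_f^T \vec{f}(\vec{x}_k), $ while Levenberg-Marquardt damps the normal equations with a parameter $ \mu_k \ge 0 $, i.e. $ \vec{x}_{k+1} = \vec{x}_k - \left( J_f^T J_f + \mu_k I \right)^{-1} J_f^T \vec{f}(\vec{x}_k). $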

Function Documentation

◆ optimization_exponential_data_test()

void optimization_exponential_data_test ( void  )

Examples of optimization algorithms using exponential data.

The model function is: $ g(\vec{x}, t) = x_1 \mathrm{e}^{x_2 t}, $ where $\vec{x} = [x_1, x_2]^T$ and $ \vec{x}_0 = [6, 0.3]^T $ is the initial guess. The data set is $ d(t_i, y_i) $, where $ t_i $ is equal to $ \lbrace 1, \dots, 8 \rbrace $ and $ y_i $ is equal to $ \lbrace 8.3, 11.0, 14.7, 19.7, 26.7, 35.2, 44.4, 55.9 \rbrace $.

Definition at line 264 of file optimization_test.c.

References matrix_t, modified_gauss_newton(), opt_levenberg_marquardt(), optimization_get_exp_f(), optimization_get_exp_Jacobian(), vector_clear(), and vector_t.
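
For illustration, the fixture for this test could be set up as follows. This is a minimal sketch consistent with the description above, not the library source; the array names are hypothetical and vector_t is assumed to be a scalar element type:

    #include "vector.h"   /* vector_t */

    #define EXP_DATA_LEN 8

    /* Data set d(t_i, y_i); t_i = 1..8 is implicit in the array index. */
    static vector_t exp_y_data[EXP_DATA_LEN] = {
        8.3, 11.0, 14.7, 19.7, 26.7, 35.2, 44.4, 55.9
    };

    /* Initial guess x0 = [6, 0.3]^T for the model g(x, t) = x1 * e^(x2 * t). */
    static vector_t exp_x0_vec[2] = { 6.0, 0.3 };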

◆ optimization_get_exp_f()

void optimization_get_exp_f ( vector_t  x_vec[],
vector_t  data_vec[],
vector_t  f_vec[] 
)

Calculate the error vector using exponential data.

The error function is: $ \vec{f}(x_1, x_2) = \begin{bmatrix} x_1 \mathrm{e}^{x_2} - y_1, \dots, x_1 \mathrm{e}^{8 x_2} - y_8 \end{bmatrix}^{T}. $

Parameters
    [in]   x_vec[]      start vector.
    [in]   data_vec[]   data vector.
    [out]  f_vec[]      calculated error vector.

Definition at line 215 of file optimization_test.c.

References vector_t.

Referenced by optimization_exponential_data_test().
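
A sketch of how this error function can be realized from the formula above; the actual definition lives at line 215 of optimization_test.c, and vector_t is assumed to be a scalar element type:

    #include <math.h>
    #include "vector.h"

    /* Residuals f_i = x1 * e^(x2 * t_i) - y_i for t_i = 1..8. */
    void optimization_get_exp_f(vector_t x_vec[], vector_t data_vec[],
                                vector_t f_vec[])
    {
        for (int i = 0; i < 8; i++) {
            vector_t t = (vector_t)(i + 1);
            f_vec[i] = x_vec[0] * exp(x_vec[1] * t) - data_vec[i];
        }
    }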

◆ optimization_get_exp_Jacobian()

void optimization_get_exp_Jacobian ( vector_t  x_vec[],
matrix_t  J[][2] 
)

Calculate the Jacobian matrix using exponential data.

The Jacobian matrix is: $ J_f = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \\ \vdots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} \end{bmatrix} = \begin{bmatrix} \mathrm{e}^{x_2} & x_1 \mathrm{e}^{x_2} \\ \mathrm{e}^{2 x_2} & 2 x_1 \mathrm{e}^{2 x_2} \\ \vdots & \vdots \\ \mathrm{e}^{8 x_2} & 8 x_1 \mathrm{e}^{8 x_2} \end{bmatrix}. $

Parameters
    [in]   x_vec[]  start vector.
    [out]  J[][2]   calculated Jacobian matrix.

Definition at line 251 of file optimization_test.c.

References vector_t.

Referenced by optimization_exponential_data_test().
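
A sketch consistent with the matrix above (not the library source); each row i holds the partial derivatives of the i-th residual, and vector_t/matrix_t are assumed to be scalar element types:

    #include <math.h>
    #include "matrix.h"
    #include "vector.h"

    /* Row i: [ e^(x2 * t_i), t_i * x1 * e^(x2 * t_i) ] for t_i = 1..8. */
    void optimization_get_exp_Jacobian(vector_t x_vec[], matrix_t J[][2])
    {
        for (int i = 0; i < 8; i++) {
            vector_t t = (vector_t)(i + 1);
            J[i][0] = exp(x_vec[1] * t);
            J[i][1] = t * x_vec[0] * exp(x_vec[1] * t);
        }
    }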

◆ optimization_get_f_error()

void optimization_get_f_error ( vector_t  x0_vec[],
vector_t  measured_data_vec[],
vector_t  f_vec[] 
)

Calculate the error vector of the approximation.

Parameters
    [in]   x0_vec[]             start values.
    [in]   measured_data_vec[]  measured data vector.
    [out]  f_vec[]              calculated error vector.

Definition at line 70 of file optimization_test.c.

References matrix_t.

Referenced by optimization_test().

◆ optimization_get_J()

void optimization_get_J ( vector_t  x0_vec[],
matrix_t  J[][3] 
)

Calculate the Jacobian matrix of the function optimization_get_f_error.

Parameters
    [in]   x0_vec[]  start values.
    [out]  J[][3]    calculated Jacobian matrix.

Definition at line 39 of file optimization_test.c.

References matrix_t.

Referenced by optimization_test().

◆ optimization_get_sin_f()

void optimization_get_sin_f ( vector_t  x_vec[],
vector_t  data_vec[],
vector_t  f_vec[] 
)

Calculate the error vector using sinusoidal data.

The error function is: $ \vec{f}(x_1, x_2, x_3, x_4)= \begin{bmatrix} x_1 \sin\left( x_2 +x_3\right) +x_4 - y_1 \\ \vdots \\ x_1 \sin\left( 12 x_2 +x_3\right) +x_4 - y_{12} \end{bmatrix}. $

Parameters
    [in]   x_vec[]      start vector.
    [in]   data_vec[]   data vector.
    [out]  f_vec[]      calculated error vector.

Definition at line 325 of file optimization_test.c.

References vector_t.

Referenced by optimization_sinusoidal_data_test().
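
A sketch of how this error function can be realized from the formula above; the actual definition lives at line 325 of optimization_test.c, and vector_t is assumed to be a scalar element type:

    #include <math.h>
    #include "vector.h"

    /* Residuals f_i = x1 * sin(x2 * t_i + x3) + x4 - y_i for t_i = 1..12. */
    void optimization_get_sin_f(vector_t x_vec[], vector_t data_vec[],
                                vector_t f_vec[])
    {
        for (int i = 0; i < 12; i++) {
            vector_t t = (vector_t)(i + 1);
            f_vec[i] = x_vec[0] * sin(x_vec[1] * t + x_vec[2]) + x_vec[3]
                       - data_vec[i];
        }
    }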

◆ optimization_get_sin_Jacobian()

void optimization_get_sin_Jacobian ( vector_t  x_vec[],
matrix_t  J[][4] 
)

Calculate the Jacobian matrix using sinusoidal data.

The Jacobian matrix is: $ J_f = \begin{bmatrix} \sin\left( x_2 + x_3 \right) & x_1 \cos\left( x_2 + x_3 \right) & x_1 \cos\left( x_2 + x_3 \right) & 1 \\ \sin\left( 2 x_2 + x_3 \right) & 2 x_1 \cos\left( 2 x_2 + x_3 \right) & x_1 \cos\left( 2 x_2 + x_3 \right) & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \sin\left( 12 x_2 + x_3 \right) & 12 x_1 \cos\left( 12 x_2 + x_3 \right) & x_1 \cos\left( 12 x_2 + x_3 \right) & 1 \end{bmatrix}, $ where the fourth column is the partial derivative with respect to the offset $ x_4 $.

Parameters
    [in]   x_vec[]  start vector.
    [out]  J[][4]   calculated Jacobian matrix.

Definition at line 360 of file optimization_test.c.

References vector_t.

Referenced by optimization_sinusoidal_data_test().
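
A sketch consistent with the matrix above (not the library source); vector_t/matrix_t are assumed to be scalar element types:

    #include <math.h>
    #include "matrix.h"
    #include "vector.h"

    /* Row i: partial derivatives of f_i = x1*sin(x2*t_i + x3) + x4 - y_i,
     * including the constant fourth column df_i/dx4 = 1. */
    void optimization_get_sin_Jacobian(vector_t x_vec[], matrix_t J[][4])
    {
        for (int i = 0; i < 12; i++) {
            vector_t t = (vector_t)(i + 1);
            J[i][0] = sin(x_vec[1] * t + x_vec[2]);
            J[i][1] = t * x_vec[0] * cos(x_vec[1] * t + x_vec[2]);
            J[i][2] = x_vec[0] * cos(x_vec[1] * t + x_vec[2]);
            J[i][3] = 1.0;
        }
    }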

◆ optimization_sinusoidal_data_test()

void optimization_sinusoidal_data_test ( void  )

Examples of optimization algorithms using sinusoidal data.

The model function is: $ g(\vec{x}, t) = x_1 \sin\left( x_2 t + x_3 \right) + x_4, $ where $\vec{x} = [x_1, x_2, x_3, x_4]^T$ and $\vec{x}_0 = [17, 0.5, 10.5, 77]^T$ is the initial guess. The data set is $ d(t_i, y_i) $, where $ t_i $ is equal to $ \lbrace 1, \dots, 12 \rbrace $ and $ y_i $ is equal to $ \lbrace 61, 65, 72, 78, 85, 90, 92, 92, 88, 81, 72, 63 \rbrace $.

Definition at line 376 of file optimization_test.c.

References matrix_t, modified_gauss_newton(), opt_levenberg_marquardt(), optimization_get_sin_f(), optimization_get_sin_Jacobian(), vector_clear(), and vector_t.
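
As with the exponential example, the fixture for this test could be set up as follows; a minimal sketch with hypothetical array names, assuming vector_t is a scalar element type:

    #include "vector.h"   /* vector_t */

    #define SIN_DATA_LEN 12

    /* Data set d(t_i, y_i); t_i = 1..12 is implicit in the array index. */
    static vector_t sin_y_data[SIN_DATA_LEN] = {
        61, 65, 72, 78, 85, 90, 92, 92, 88, 81, 72, 63
    };

    /* Initial guess x0 = [17, 0.5, 10.5, 77]^T for
     * g(x, t) = x1 * sin(x2 * t + x3) + x4. */
    static vector_t sin_x0_vec[4] = { 17.0, 0.5, 10.5, 77.0 };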