The Fletcher-Reeves variant of the steepest descent method is used to find a minimum of a function of one or more variables. The function must have continuous partial derivatives with respect to all variables. The starting point of the search can be specified; if it is not, random values are used instead. Note that steepest descent algorithms in general find only local minima.
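For orientation (this is general background on the method, not taken from this manual): the Fletcher-Reeves scheme augments plain steepest descent with conjugate search directions. Starting from \(d_0 = -\nabla f(x_0)\), each step performs a line search along \(d_k\) and then updates

\[ d_{k+1} = -\nabla f(x_{k+1}) + \beta_k\, d_k, \qquad \beta_k = \frac{\nabla f(x_{k+1})^{T}\,\nabla f(x_{k+1})}{\nabla f(x_k)^{T}\,\nabla f(x_k)} . \]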
Syntax:

num_min(⟨exp⟩, ⟨var1⟩[=⟨val1⟩] [,⟨var2⟩[=⟨val2⟩] …]
        [,accuracy=⟨a⟩] [,iterations=⟨i⟩])

or

num_min(⟨exp⟩, {⟨var1⟩[=⟨val1⟩] [,⟨var2⟩[=⟨val2⟩] …]}
        [,accuracy=⟨a⟩] [,iterations=⟨i⟩])

where ⟨exp⟩ is a function expression, ⟨var1⟩, ⟨var2⟩, … are the variables in ⟨exp⟩ and ⟨val1⟩, ⟨val2⟩, … are the (optional) start values.
num_min tries to find the next local minimum along the descending path starting at the given point. The result is a list with the minimum function value as first element, followed by a list of equations in which the variables are equated to the coordinates of the minimum point.
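The two components of the result can be separated with REDUCE's list operators; the following is a sketch, assuming the result is stored in a (hypothetical) variable res:

res := num_min(sin(x)+x/5, x=0);
minval := first res;    % the minimum function value
coords := second res;   % the list {x=...} of coordinate equations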
Examples:
num_min(sin(x)+x/5, x);

   {-0.0775896851944,{x=4.51103102502}}

num_min(sin(x)+x/5, x=0);

   {-1.33422674662,{x=-1.77215826714}}

% Rosenbrock function (well known as hard to minimize).
fktn := 100*(x1**2-x2)**2 + (1-x1)**2;

num_min(fktn, x1=-1.2, x2=1, iterations=200);

   {0.000000218702254529,{x1=0.999532844959,x2=0.99906807243}}
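The variables may also be passed as a list, and the accuracy and iterations options can be combined, as in the syntax above. A hypothetical call illustrating this form (output omitted):

num_min(x1**2 + x2**2 + x1*x2, {x1=1, x2=1}, accuracy=8, iterations=100);

The exact minimum of this expression is 0 at x1 = x2 = 0, so the returned values should approximate that point.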