docs/source/algorithms.md
@@ -86,6 +86,7 @@ install optimagic.
  f in the stopping criterion.
- **stopping.maxiter** (int): If the maximum number of iterations is reached,
  the optimization stops, but we do not count this as convergence.
- **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```
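As a minimal sketch (not part of the original docs), this is how the two options above map onto the underlying `scipy.optimize.minimize` call that optimagic wraps; the toy objective and starting values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    """Toy objective for illustration: sum of squares."""
    return np.sum(x**2)


res = minimize(
    sphere,
    x0=np.array([2.0, -1.5]),
    method="BFGS",
    options={
        "maxiter": 200,  # stopping.maxiter: stop (without convergence) after 200 iterations
        "disp": False,   # display: set to True to print convergence messages
    },
)
```

Under optimagic's naming, the same options would be passed as `stopping_maxiter` and `display` through the algorithm options rather than SciPy's `options` dict.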
@@ -122,6 +123,7 @@ install optimagic.
- **convergence.ftol_abs** (float): Absolute difference in the criterion value between
  iterations that is tolerated to declare convergence. As no relative tolerances can be
  passed to Nelder-Mead, optimagic sets a nonzero default for this.
- **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.
- **adaptive** (bool): Adapt algorithm parameters to the dimensionality of the problem.
  Useful for high-dimensional minimization (:cite:`Gao2012`, p. 259-277). SciPy's default is False.
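A hedged sketch of the corresponding SciPy Nelder-Mead call (the Rosenbrock objective and starting point are illustrative assumptions, not from the docs); `fatol` is SciPy's name for the absolute criterion tolerance described above:

```python
import numpy as np
from scipy.optimize import minimize


def rosen(x):
    """Illustrative 2-D Rosenbrock function with minimum at (1, 1)."""
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2


res = minimize(
    rosen,
    x0=np.array([-1.2, 1.0]),
    method="Nelder-Mead",
    options={
        "fatol": 1e-8,     # convergence.ftol_abs: absolute tolerance on criterion changes
        "adaptive": True,  # adapt simplex parameters to the problem's dimension
        "disp": False,     # display: print convergence messages when True
        "maxiter": 5000,
    },
)
```

Note that SciPy's Nelder-Mead terminates only when both the parameter tolerance (`xatol`) and the criterion tolerance (`fatol`) are met, so tightening `fatol` alone may not change the result.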
@@ -165,6 +167,7 @@ install optimagic.
  the optimization stops, but we do not count this as convergence.
- **stopping.maxiter** (int): If the maximum number of iterations is reached, the optimization stops,
  but we do not count this as convergence.
- **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.

```
@@ -190,7 +193,22 @@ install optimagic.
- **norm** (float): Order of the vector norm that is used to calculate the gradient's "score" that
  is compared to the gradient tolerance to determine convergence. Default is infinite, which means that
  the largest entry of the gradient vector is compared to the gradient tolerance.
- **display** (bool): Set to True to print convergence messages. Default is False. SciPy name: **disp**.
- **convergence_xtol_rel** (float): Relative tolerance for `x`. Terminate successfully if the step size is
  less than `xk * xrtol`, where `xk` is the current parameter vector. Default is 1e-5. SciPy name: **xrtol**.
- **armijo_condition** (float): Parameter for the Armijo condition rule. Default is 1e-4. Ensures
  that each step yields at least a fraction **armijo_condition** of the predicted decrease.
  Smaller ⇒ more aggressive steps, larger ⇒ more conservative ones. SciPy name: **c1**.
- **curvature_condition** (float): Parameter for the curvature condition rule. Default is 0.9. Ensures
  that the slope has been reduced sufficiently at the accepted point. SciPy name: **c2**.
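A minimal sketch of passing these line-search options through SciPy's BFGS (the objective and start point are illustrative assumptions; `xrtol`, `c1`, and `c2` require a reasonably recent SciPy, and older versions ignore unknown solver options with a warning):

```python
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    """Toy objective for illustration: sum of squares."""
    return np.sum(x**2)


res = minimize(
    sphere,
    x0=np.ones(3),
    method="BFGS",
    options={
        "norm": np.inf,  # norm: compare the largest gradient entry to the gradient tolerance
        "xrtol": 1e-5,   # convergence_xtol_rel: relative step-size tolerance on x
        "c1": 1e-4,      # armijo_condition: required fraction of the predicted decrease
        "c2": 0.9,       # curvature_condition: required slope reduction in the line search
    },
)
```

The Wolfe conditions require `0 < c1 < c2 < 1`; the defaults shown (1e-4 and 0.9) are the standard choices for quasi-Newton methods.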