DGM is a Fortran implementation of the discrete gradient method for derivative-free optimization. To apply DGM, one only needs to compute the value of the objective function at each point; subgradients are approximated internally. The software is free for academic teaching and research purposes, but please cite the reference given below if you use it.
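The core idea described above, replacing exact subgradients with approximations built purely from objective-function values, can be illustrated with a minimal sketch. This is not the DGM Fortran code or Bagirov's full discrete-gradient construction; it is a simplified Python illustration (the function names `discrete_gradient` and `dgm_sketch` are made up here) that approximates a gradient by finite differences of function values and uses it in a backtracking descent loop:

```python
import numpy as np

def discrete_gradient(f, x, h=1e-6):
    """Forward-difference approximation of a (sub)gradient of f at x,
    using only function values. DGM proper uses a more elaborate
    discrete-gradient construction suited to nonsmooth f; this is the
    simplest smooth-case analogue."""
    n = len(x)
    g = np.zeros(n)
    fx = f(x)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def dgm_sketch(f, x0, step=0.1, tol=1e-6, max_iter=500):
    """Descend along the approximated (sub)gradient with Armijo-style
    backtracking. Only f-values are ever evaluated, matching the
    derivative-free setting described in the text."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = discrete_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        t = step
        # halve the step until sufficient decrease is achieved
        while t > 1e-12 and f(x - t * g) >= f(x) - 1e-4 * t * np.dot(g, g):
            t *= 0.5
        x = x - t * g
    return x
```

For example, minimizing the smooth test function f(x) = (x1 - 1)^2 + (x2 + 2)^2 from the origin drives the iterates toward (1, -2) without ever evaluating a derivative.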

References in zbMATH (referenced in 39 articles, 1 standard article)

Showing results 1 to 20 of 39, sorted by year (citations).


  1. Christof, Constantin; De los Reyes, Juan Carlos; Meyer, Christian: A nonsmooth trust-region method for locally Lipschitz functions with application to optimization problems constrained by variational inequalities (2020)
  2. Gaudioso, Manlio; Giallombardo, Giovanni; Miglionico, Giovanna: Essentials of numerical nonsmooth optimization (2020)
  3. Hare, Warren; Planiden, Chayne; Sagastizábal, Claudia: A derivative-free (\mathcal{VU})-algorithm for convex finite-max problems (2020)
  4. Bagirov, Adil; Taheri, Sona; Asadi, Soodabeh: A difference of convex optimization algorithm for piecewise linear regression (2019)
  5. Jüngel, Ansgar; Stefanelli, Ulisse; Trussardi, Lara: Two structure-preserving time discretizations for gradient flows (2019)
  6. Larson, Jeffrey; Menickelly, Matt; Wild, Stefan M.: Derivative-free optimization methods (2019)
  7. Liu, Shuai: A simple version of bundle method with linear programming (2019)
  8. Liuzzi, Giampaolo; Lucidi, Stefano; Rinaldi, Francesco; Vicente, Luis Nunes: Trust-region methods for the derivative-free optimization of nonsmooth black-box functions (2019)
  9. Audet, Charles; Hare, Warren: Algorithmic construction of the subdifferential from directional derivatives (2018)
  10. Bagirov, A. M.; Ugon, J.: Nonsmooth DC programming approach to clusterwise linear regression: optimality conditions and algorithms (2018)
  11. Dolgopolik, M. V.: A convergence analysis of the method of codifferential descent (2018)
  12. Gaudioso, Manlio; Giallombardo, Giovanni; Mukhametzhanov, Marat: Numerical infinitesimals in a variable metric method for convex nonsmooth optimization (2018)
  13. Golestani, M.; Sadeghi, H.; Tavan, Y.: Nonsmooth multiobjective problems and generalized vector variational inequalities using quasi-efficiency (2018)
  14. Hare, W.; Planiden, C.: Computing proximal points of convex functions with inexact subgradients (2018)
  15. Khan, Kamil A.; Larson, Jeffrey; Wild, Stefan M.: Manifold sampling for optimization of nonconvex functions that are piecewise linear compositions of smooth components (2018)
  16. Hare, W.; Sagastizábal, C.; Solodov, M.: A proximal bundle method for nonsmooth nonconvex functions with inexact information (2016)
  17. Yousefpour, Rohollah: Combination of steepest descent and BFGS methods for nonconvex nonsmooth optimization (2016)
  18. Akbari, Z.; Yousefpour, R.; Reza Peyghami, M.: A new nonsmooth trust region algorithm for locally Lipschitz unconstrained optimization problems (2015)
  19. Bagirov, Adil M.; Ugon, Julien; Mirzayeva, Hijran G.: Nonsmooth optimization algorithm for solving clusterwise linear regression problems (2015)
  20. Ozturk, Gurkan; Bagirov, Adil M.; Kasimbeyli, Refail: An incremental piecewise linear classifier based on polyhedral conic separation (2015)
