CMA-ES
CMA-ES stands for Covariance Matrix Adaptation Evolution Strategy. Evolution strategies (ES) are stochastic, derivative-free methods for the numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via mutation and recombination) and selection: in each generation (iteration), new individuals (candidate solutions, denoted x) are generated by variation, usually stochastically, and then some individuals are selected for the next generation based on their fitness or objective function value f(x). In this way, individuals with better and better f-values are generated over the generation sequence.
In an evolution strategy, new candidate solutions are sampled according to a multivariate normal distribution in R^n. Pairwise dependencies between the variables in this distribution are represented by a covariance matrix. The covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution. This is particularly useful if the function f is ill-conditioned. Adapting the covariance matrix amounts to learning a second-order model of the underlying objective function, similar to the approximation of the inverse Hessian matrix in quasi-Newton methods in classical optimization. In contrast to most classical methods, fewer assumptions are made about the nature of the underlying objective function: only the ranking between candidate solutions is exploited for learning the sample distribution, and neither derivatives nor even the function values themselves are required by the method.
(Source: http://plato.asu.edu)
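The sampling-selection-adaptation loop described above can be sketched in a few lines. The following is a deliberately simplified illustration, not the full CMA-ES: it keeps only the rank-based selection, weighted recombination, and a rank-mu covariance update, and omits the evolution paths and cumulative step-size adaptation of the complete algorithm. The learning rate `c_mu` and the log-decreasing weights are common choices, used here as assumptions.

```python
import numpy as np

def simple_cma_es(f, x0, sigma=0.5, lam=20, iters=200, seed=0):
    """Simplified CMA-ES sketch: sample candidates from a multivariate
    normal distribution N(m, sigma^2 * C), select by f-ranking only,
    and adapt C with a rank-mu update. Evolution paths and step-size
    control of the full algorithm are omitted for brevity."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    # log-decreasing recombination weights for the mu best samples
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    mu_eff = 1.0 / np.sum(w**2)               # variance-effective selection mass
    c_mu = mu_eff / ((n + 2)**2 + mu_eff)     # crude learning rate (assumption)
    m = np.asarray(x0, dtype=float)
    C = np.eye(n)
    for _ in range(iters):
        # sample lambda candidates: x = m + sigma * A z with C = A A^T
        A = np.linalg.cholesky(C)
        z = rng.standard_normal((lam, n))
        y = z @ A.T
        x = m + sigma * y
        # selection uses only the ranking of the f-values
        idx = np.argsort([f(xi) for xi in x])[:mu]
        y_sel = y[idx]
        # weighted recombination of the selected steps
        m = m + sigma * (w @ y_sel)
        # rank-mu covariance update: sum_i w_i y_i y_i^T
        C = (1 - c_mu) * C + c_mu * (y_sel.T * w) @ y_sel
        C = (C + C.T) / 2  # keep numerically symmetric
    return m

# Example: an ill-conditioned quadratic, where covariance adaptation helps
f = lambda x: x[0]**2 + 100 * x[1]**2
xbest = simple_cma_es(f, [3.0, 2.0])
```

Note that without step-size adaptation this sketch relies entirely on the shrinking covariance matrix to reduce the search scale, so it improves the objective but does not reach the precision of a full CMA-ES implementation.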
References in zbMATH (referenced in 75 articles)
Showing results 1 to 20 of 75.
Sorted by year:
- MacAlpine, Patrick; Stone, Peter: Overlapping layered learning (2018)
- Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.: On a smoothed penalty-based algorithm for global optimization (2017)
- Samothrakis, Spyridon; Fasli, Maria; Perez, Diego; Lucas, Simon: Default policies for global optimisation of noisy functions with severe noise (2017)
- Amaran, Satyajith; Sahinidis, Nikolaos V.; Sharda, Bikram; Bury, Scott J.: Simulation optimization: a review of algorithms and applications (2016)
- Balesdent, Mathieu; Morio, Jérôme; Brevault, Loïc: Rare event probability estimation in the presence of epistemic uncertainty on input probability distribution parameters (2016)
- Breunig, U.; Schmid, V.; Hartl, R. F.; Vidal, T.: A large neighbourhood based heuristic for two-echelon routing problems (2016)
- Di Pillo, G.; Liuzzi, G.; Lucidi, S.; Piccialli, V.; Rinaldi, F.: A DIRECT-type approach for derivative-free constrained global optimization (2016)
- Durantin, Cédric; Marzat, Julien; Balesdent, Mathieu: Analysis of multi-objective Kriging-based methods for constrained global optimization (2016)
- Gouvêa, Érica J. C.; Regis, Rommel G.; Soterroni, Aline C.; Scarabello, Marluce C.; Ramos, Fernando M.: Global optimization using $q$-gradients (2016)
- Lazar, Markus; Jarre, Florian: Calibration by optimization without using derivatives (2016)
- Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.: An optimal transport approach for seismic tomography: application to 3D full waveform inversion (2016)
- Regis, Rommel G.: On the convergence of adaptive stochastic search methods for constrained and multi-objective black-box optimization (2016)
- Gil, Debora; Roche, David; Borràs, Agnés; Giraldo, Jesús: Terminating evolutionary algorithms at their steady state (2015)
- Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.: $\Pi$4U: a high performance computing framework for Bayesian uncertainty quantification of complex models (2015)
- Schmidhuber, Jürgen: Deep learning in neural networks: an overview (2015)
- Amaran, Satyajith; Sahinidis, Nikolaos V.; Sharda, Bikram; Bury, Scott J.: Simulation optimization: a review of algorithms and applications (2014)
- Caraffini, Fabio; Neri, Ferrante; Picinali, Lorenzo: An analysis on separability for memetic computing automatic design (2014)
- Gong, Wenyin; Cai, Zhihua; Liang, Dingwen: Engineering optimization by means of an improved constrained differential evolution (2014)
- Hansen, Nikolaus; Auger, Anne: Principled design of continuous stochastic search: from theory to practice (2014)
- Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk: Automated parameterization of intermolecular pair potentials using global optimization techniques (2014)
Further publications can be found at: https://www.lri.fr/~hansen/publications.html#hansenReview2006