PyEC provides an implementation of several well-known optimization methods with a particular focus on methods of evolutionary computation or natural computation. If you are just interested in optimizing functions, see the section on Basic Usage below.
For the time being, this code is in an experimental stage and will likely change substantially with the next few minor versions.
PyEC is intended for three main groups of people:
Anyone who needs to optimize functions. PyEC can be used as a drop-in module to perform optimization quickly and accurately. The most common optimization methods have been packaged in an easily accessible form for this purpose, as described in Basic Usage below.
Researchers in Evolutionary Computation. PyEC contains tools to construct, modify, and parameterize optimization methods. It can be used as a toolbox for the researcher in evolutionary computation to construct evolutionary algorithms on complex search domains, or to test new genetic operators or new combinations of existing evolutionary components. These features are not well documented yet, but the class-specific Documentation should help you get started.
Anyone interested in Stochastic Optimization. PyEC efficiently implements the most common methods in stochastic optimization. It can be used to test and explore how different algorithms work and what effects different parameter settings have.
PyEC is distributed on PyPI and can be installed with setuptools using the name pyec, like so:
$ easy_install pyec
PyEC depends on the linear algebra package NumPy (<http://numpy.scipy.org/>) and the scientific computing package SciPy (<http://www.scipy.org/>).
PyEC provides various optimization methods. If you just want to optimize, the following examples should help. Easily accessible optimization routines are provided for Evolutionary Annealing, Differential Evolution, Nelder-Mead, Generating Set Search, CMA-ES, Particle Swarm Optimization, and Simulated Annealing.
Suppose we start a Python terminal session as follows:
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyec.optimize
>>> from numpy import *
>>> def branin(x):
...     return (x[1] - (5.1/(4*(pi**2)))*(x[0]**2) + (5./pi)*x[0] - 6.)**2 + 10 * (1. - 1./(8.*pi))*cos(x[0]) + 10.
...
To start with, we are going to optimize Branin's function, a real-valued function of two variables defined as
\(f(x, y) = \left(y - \frac{5.1}{4\pi^2}x^2 + \frac{5}{\pi}x - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x) + 10.\)
It has three global minima, with \(f(-\pi, 12.275) = f(\pi, 2.275) = f(9.42478, 2.475) = 0.39788735\). Usually, the optimization is constrained by \(-5 < x < 10\) and \(0 < y < 15\).
Here are some examples of PyEC on Branin's function with no constraints, using about 2,500 function evaluations (the default):
>>> pyec.optimize.evolutionary_annealing(branin, dimension=2)
(array([ 3.14159266, 2.27500002]), 0.39788735772973816)
>>> pyec.optimize.differential_evolution(branin, dimension=2)
(array([ 3.14100091, 2.25785439]), 0.39819904998361366)
>>> pyec.optimize.cmaes(branin, dimension=2)
(array([ 3.14159266, 2.27500001]), 0.39788735772973816)
>>> pyec.optimize.nelder_mead(branin, dimension=2)
(array([ 3.14159308, 2.27499873]), 0.39788735773149675)
>>> pyec.optimize.generating_set_search(branin, dimension=2)
(array([ 3.12552433, 2.29660443]), 0.39920864246465015)
>>> pyec.optimize.particle_swarm_optimization(branin, dimension=2)
(array([ 3.1282098 , 2.26735578]), 0.39907497646047574)
In these examples, all we had to do was pass our function to one of PyEC's optimizers along with the dimension (Branin's function has two input variables), and these methods used default configurations to locate the global minimum (evolutionary_annealing() additionally requires the space_scale parameter for unconstrained optimization). The return value is a tuple with two items: the first element is the best solution found so far, and the second element is the function value at that solution. In this case, several of the optimizers were not particularly accurate. Some methods missed the global minimum by an error on the order of 0.01, which is much larger than we would prefer, especially for this simple problem. Most of these methods can be made more accurate by replacing some of the default parameters:
>>> pyec.optimize.differential_evolution(branin, dimension=2, generations=250, population=50, CR=.2, F=.2)
(array([ 3.14159085, 2.27498739]), 0.39788735794177654)
>>> pyec.optimize.generating_set_search(branin, dimension=2, generations=5000, expansion_factor=1., initial_step=0.5)
(array([ 3.1386125 , 2.27810758]), 0.3979306095569175)
>>> pyec.optimize.particle_swarm_optimization(branin, dimension=2, omega=-0.5, phi_p=0.0)
(array([ 3.14159258, 2.27500003]), 0.39788735772976125)
Now all of the methods except generating set search have located the global minimum accurately to about eight decimal places.
If we had wished to find the maximum values rather than the minimum values of the function, we could have passed the parameter minimize=False to any of these optimizers:
>>> pyec.optimize.differential_evolution(branin, dimension=2, minimize=False)
In general, a function \(f(x)\) can be maximized by minimizing the alternate function \(-f(x)\) instead, which is what PyEC does internally when minimize=False.
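For example, passing the negated function directly performs the same search; only the sign of the reported best value differs (output omitted here, since results vary from run to run):
>>> pyec.optimize.differential_evolution(lambda x: -branin(x), dimension=2)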
Branin’s function is relatively easy to optimize; we would like to try a harder function. PyEC ships with several benchmark multimodal functions, many of which are defined in higher dimensions as well. These benchmarks can be referenced by name when calling PyEC’s optimizers. One example is Rastrigin’s function (see <http://en.wikipedia.org/wiki/Rastrigin_function>):
def rastrigin(x):
    return 10 * len(x) + ((x ** 2) - 10 * cos(2 * pi * x)).sum()
Rastrigin’s has a minimum at the origin, where its function value is 0. PyEC’s optimizers can find this minimum with a little tweaking:
>>> pyec.optimize.differential_evolution("rastrigin", dimension=10, generations=2500, population=100, CR=.2, F=.2)
(array([ -3.09226981e-05, 2.19169568e-05, 4.46486498e-06,
-1.50452001e-05, 6.03987807e-05, -6.17905562e-06,
4.82476074e-05, -1.02580314e-05, -2.07212921e-05,
9.15748483e-06]), 1.6496982624403245e-06)
PyEC’s optimizers are stochastic; that is, they search the space randomly. No two runs of any optimizer are the same. You can get widely varying results from different runs of the same algorithm, so it’s best to run an algorithm a few times if you’re not satisfied with the results:
>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500,population=250)
(array([ 1.48258949e-03, 2.25335429e-04, -5.35427662e-04,
-2.74244483e-03, 3.20044246e-03, -4.59549462e-03,
-2.09654701e-03, 9.93491865e-01, 8.95951435e-04,
-9.95219709e-01]), 1.9996060669315057)
>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500,population=250)
(array([ -4.26366209e-04, -7.29513508e-04, 5.97365406e-04,
-9.93842635e-01, 4.47482962e-04, -3.32484925e-03,
-3.98886672e-03, 4.06692711e-04, -1.49134732e-03,
3.80257643e-03]), 1.0041502969750979)
>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500,population=250)
(array([ -4.24651080e-04, 7.78200373e-04, 4.80037528e-03,
1.06188871e-03, 2.50392639e-04, -3.00255770e-03,
-9.91998151e-01, 5.52063421e-03, -3.44827888e-03,
-9.97582491e-01]), 2.0081777356006967)
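When results vary like this, a simple restart loop can run an optimizer several times and keep the best result. The sketch below uses only the pyec.optimize calls demonstrated above; best_of is a hypothetical helper, not part of PyEC:
import pyec.optimize

def best_of(optimizer, f, runs=5, **kwargs):
    # run the optimizer several times and keep the (solution, value) pair
    # with the lowest function value
    results = [optimizer(f, **kwargs) for _ in range(runs)]
    return min(results, key=lambda r: r[1])

best_solution, best_value = best_of(pyec.optimize.cmaes, "rastrigin",
                                    dimension=10, generations=2500, population=250)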
Every optimizer performs well on some problems and worse on others. Here, differential_evolution() performs well on rastrigin, whereas CMA-ES is less reliable on this function. The opposite situation can hold for other optimization problems.
PyEC supports constrained optimization as well. Hyperrectangle boundaries can be imposed with pyec.optimize.Hyperrectangle:
>>> pyec.optimize.evolutionary_annealing(lambda x: x[0]-x[1], dimension=2, constraint=pyec.optimize.Hyperrectangle(center=5.0,scale=2.5))
(array([ 2.5, 7.5]), -5.0)
Other constraints can be implemented by creating a subclass of pyec.optimize.Constraint and passing it to the optimizer via the keyword argument constraint.
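Here is a hedged sketch of such a subclass. The method names follow the Constraint description later in this document (__call__ tests whether a point satisfies the constraints; extent specifies a bounding hypercube), but the constructor and the exact return format of extent are assumptions; check pyec.optimize.Constraint in the source before relying on this:
import pyec.optimize
from numpy import zeros

class UnitDisk(pyec.optimize.Constraint):
    def __call__(self, x):
        # accept only points inside the unit disk
        return (x ** 2).sum() <= 1.0

    def extent(self):
        # a hypercube containing the region; the (center, scale)
        # return format is an assumption
        return zeros(2), 1.0

# usage (sketch):
# pyec.optimize.differential_evolution(lambda x: x[0] - x[1], dimension=2,
#                                      constraint=UnitDisk())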
Version 0.2 has changed substantially from version 0.1, and further changes are planned for Version 0.3. Release 0.2 is a transitional release, and substantial changes will be made to the internal components over the next few releases.
A constraint region for optimization. The __call__ method of the object should return whether an object satisfies the constraints, and the extent method should specify parameters for a hypercube completely containing the constraint region.
A Hyperrectangle constraint region.
Apply the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to optimize a function. See <http://en.wikipedia.org/wiki/CMA-ES>.
Calls optimize().
Extra keyword arguments:
Apply differential evolution (DE) to optimize a function. See <http://en.wikipedia.org/wiki/Differential_evolution>.
Calls optimize().
Extra keyword arguments:
Apply evolutionary annealing to optimize a function. See Chapter 11 of <http://nn.cs.utexas.edu/downloads/papers/lockett.thesis.pdf>.
Calls optimize().
Extra keyword arguments:
Apply a basic generating set search to optimize a function. See <http://smartfields.stanford.edu/documents/080403_kolda.pdf>.
Uses no search heuristic and a basis of size d+1 in dimension d.
Calls optimize().
Extra keyword arguments:
Apply the Nelder-Mead method to optimize a function. See <http://en.wikipedia.org/wiki/Nelder-Mead_method>.
Calls optimize().
Extra keyword arguments:
Configure and run an optimizer on a function, using a ConfigBuilder object to instantiate the optimizer.
By default the function will be minimized; maximization can be performed by setting the keyword argument minimize to False.
Benchmark functions can be optimized by name (for example, "branin" or "rastrigin", as used above).
Returns: A tuple (best solution, best value) where the first element is the best solution located during optimization and best value is the value of the function at that solution.
Keyword arguments:
Apply particle swarm optimization to optimize a function. See <http://en.wikipedia.org/wiki/Particle_swarm_optimization>.
Calls optimize().
Extra keyword arguments:
Apply simulated annealing to optimize a function. See <http://en.wikipedia.org/wiki/Simulated_annealing>.
Calls optimize().
Extra keyword arguments:
A configuration object for an optimization algorithm. Each optimizer is created with a configuration object that is used to parameterize the optimizer instance. A configuration object may have arbitrary properties and methods. Default versions of methods used by several optimizers are provided.
A builder to generate configuration objects with certain parameters.
A builder creates a specific configuration object for a specific optimization method. Its __init__ method is used to set the optimization class and the default parameters. The cfg property can then be modified to replace the defaults, and configure can be called to generate an optimizer with the desired configuration. When an optimizer is instantiated, a copy of the default configuration is used, so the builder can be reused.
Several default training parameters are placed into the Config object; view the source for details.
Parameters: algcls (class) – A class object or other generator that produces a PopulationDistribution instance when called.
Creates an optimizer instance and applies the built configuration to the optimizer.
Returns: The configured optimizer instance, usually a PopulationDistribution instance.
Called by configure to install any final properties into the Config object after it has been copied. Properties that are changed by postConfigure are not shared among different optimizer instances. This method should be used to install any objects that contain state that is specific to an optimizer.
Parameters: cfg (Config) – The copied Config object.
An Exception to indicate that an optimizer provided as an argument cannot be run by Trainer.
A simple stats recording tool that can be used to aggregate the amount of time spent in various methods of an optimizer. Keeps track of variables by key names and outputs the time spent between start and stop for each key. For a recorded key, the average time spent between start and stop can be retrieved by [], like so:
def checkStats():
    stats = RunStats()
    # start the timer
    stats.start("test")
    # ... code to be timed ...
    # stop the timer
    stats.stop("test")
    # print just the key "test"
    print stats["test"]
    # print all
    print stats
Tool to run a population-based optimizer on a problem.
An int indicating how to group the data.
A Bernoulli distribution over the binary cube. Returns a numpy.ndarray with values of 0.0 or 1.0.
Uses config.dim to determine the number of bits. The probability of a bit being on is given by config.bitFlipProb, which defaults to 0.5.
Params config: The configuration parameters.
A Bernoulli distribution that allows for missing variables. Generates a TernaryString with all bits filled in randomly. The variable config.dim is used to determine the number of bits. The probability of a bit being on is 0.50. More complicated distributions over TernaryString are available in pyec.distribution.ec.mutators.
Params config: The configuration parameters.
A Distribution that can be sampled.
Parameters: config (Config) – A set of configuration parameters for the distribution.
A uniform distribution over a fixed cube in Euclidean space. Used as an initial distribution for constrained optimization problems.
Uses config.dim to determine the dimension.
Checks config.in_bounds to verify that the produced points are inside of the constrained region.
Uses config.in_bounds.extent to determine the size of the hypercube to generate points for. If this function does not exist, config.center and config.scale are used instead.
Params config: The configuration parameters.
A Gaussian Proposal Distribution. Used as an initial distribution by several algorithms, and as a proposal distribution by Simulated Annealing.
Params config: The configuration parameters.
A distribution governing a population-based optimizer.
This is the parent class for optimizers in PyEC. Its core methods are batch, inherited from Distribution, and update, which reports the population and the scores after a generation is complete.
Params config: The configuration parameters.
Return a Config object
Convert a point to a scorable representation. Deprecated. Use Config.convert instead.
Params x: The candidate solution.
Create a Trainer object and run this PopulationDistribution instance to maximize a fitness function.
After running this method, the property PopulationDistribution.trainer will contain the Trainer object used to optimize the function.
Returns: The maximum score discovered during optimization.
Update the state of the PopulationDistribution with the latest population and its fitness scores.
Params population: The previous population with its fitness scores. If self.config.sorted is True, the list will be sorted by descending score by the Trainer. Outside of a Trainer instance, the caller must ensure that the population is sorted (or not sorted) as necessary.
A proposal distribution for e.g. simulated annealing.
A ConfigBuilder for a standard genetic algorithm (binary encoding) to search in real spaces.
See source code for defaults (16-bit representation, mutation rate .05).
A genetic algorithm for optimization of real functions on Euclidean spaces.
Supports various selection, crossover, and mutation methods, and both binary and real encodings.
Config parameters:
* elitist – One of (True, False); whether to keep the best solution so far in the population.
* selection – One of (proportional, tournament, ranking); different selection methods.
* rankingMethod – One of (linear, nonlinear); used with ranking selection.
* crossover – One of (none, uniform, onePoint, twoPoint, intermediate, onePointDual); the type of crossover to use.
* crossoverOrder – Integer; the number of parents for recombination (default 2).
* space – One of (real, binary); the type of encoding.
* mutation – One of (gauss, cauchyOne); the type of mutation in real encodings (binary uses a standard Bernoulli mutation).
* varInit – float or float array; standard deviation (for real space).
* bitFlipProbs – float or float array; mutation probability (for binary space).

Binary encoding uses additional parameters to encode/decode the real parameters:
* rawdim – The number of real dimensions.
* rawscale – The scale of the space in real dimensions.
* rawcenter – The center of the space in real dimensions.
* binaryDepth – The number of bits to use for each real parameter.

Generic config parameters:
* dim – The dimension of the search domain (for binary, the total number of bits).
* center – The center of the search domain (.5 for binary).
* scale – The scale of the search domain (.5 for binary).
* bounded – Whether the optimization is constrained.
See pyec.distribution.ec.selectors for selection methods and pyec.distribution.ec.mutators for mutation distributions.
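As an illustration of how these parameters might be set, here is a hedged sketch using a Config object; the import path and no-argument construction are assumptions (see pyec.config and the class source), while the attribute names come from the lists above:
from pyec.config import Config   # assumed import path for the Config class

cfg = Config()                   # assumed no-argument construction
cfg.dim = 10                     # dimension of the search domain
cfg.center = 0.0                 # center of the search domain
cfg.scale = 5.0                  # scale of the search domain
cfg.space = "real"               # real encoding
cfg.selection = "tournament"     # selection method
cfg.crossover = "uniform"        # crossover type
cfg.mutation = "gauss"           # mutation type for real encodings
cfg.varInit = 1.0                # standard deviation for Gaussian mutation
cfg.elitist = True               # keep the best solution found so far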
Params config: The configuration parameters.
A ConfigBuilder for a real-coded Genetic Algorithm.
See the source code for defaults (uniform crossover, ranking selection, gaussian mutation).
A ConfigBuilder for a standard (mu/rho +, lambda)–ES.
See the source code for the default parameters.
Implements a configurable Evolution Strategy.
See <http://en.wikipedia.org/wiki/Evolution_strategy> for details and references.
Config parameters:
The parameter “lambda” is determined by the populationSize and mu, along with the choice of selection (plus or comma).
In real space, initial distribution is either a FixedCube if the search is constrained, or a Gaussian if not. In binary space, the initial distribution is a random Bernoulli.
Standard mutation (“es”) adapts the mutation parameters for each member of the population. If mutation is “cma”, then the algorithm of Hansen and Ostermeier (1996) is used. Note that the 1996 algorithm for CMA differs from the modern version (2001) and maintains separate mutation parameters for each solution.
Adaptive parameters are not implemented for binary spaces.
Extra parameters for CMA:
See pyec.distribution.ec.selectors for selection methods and pyec.distribution.ec.mutators for mutation distributions.
A ConfigBuilder for a binary-encoded evolutionary annealing instance.
See source for defaults (tournament annealing selection, uniform crossover, 16-bit conversions, area-sensitive Bernoulli mutation).
A ConfigBuilder for an evolutionary annealing structure search for a Bayesian network.
See source for defaults (Tournament annealing, DAGs, area-sensitive structure proposals).
The evolutionary annealing algorithm, as described in:
Lockett, Alan. General-Purpose Optimization Through Information Maximization. Ph.D. Thesis (2011). <http://nn.cs.utexas.edu/downloads/papers/lockett.thesis.pdf>
Config parameters:
* learningRate – A multiplier for the annealed cooling schedule.
* selection – One of (proportional, tournament); the type of annealed selection to use.
* crossover – One of (none, uniform, onePoint, twoPoint, intermediate, merge, crossbayes); the type of crossover to use.
* mutation – One of (gauss, uniform, uniformBinary, cauchy, bernoulli, structure); the type of mutation to use. Must match the space being searched.
* passArea – Whether to use area-sensitive mutation distributions.
* varInit – Scaling factor for standard deviation in Gaussian spaces. Defaults to 1.0.
* initialDistribution – The initial distribution to sample.
See pyec.distribution.ec.selectors for selection methods and pyec.distribution.ec.mutators for mutation distributions.
Base class for annealed selection. See e.g.
Lockett, A. and Miikkulainen, R. Real-space Evolutionary Annealing, 2011.
For a more up-to-date version, see Chapters 11-13 of
<http://nn.cs.utexas.edu/downloads/papers/lockett.thesis.pdf>.
Config parameters:
* taylorCenter – Center for the Taylor approximation to the annealing densities.
* taylorDepth – The number of coefficients to use for the Taylor approximation of the annealing density.
* activeField – One of (point, binary, bayes, other); refers to the properties of pyec.util.partitions.Point and differs according to the space being searched. Future versions will deprecate this aspect.
* initialDistribution – A distribution used to select central points in the first generation.
* anneal – Whether to apply annealing or use a constant temperature of 1.0; defaults to True.
* segment – The name of the pyec.util.partitions.Segment object that refers to the points and partitions.
Parameters: config (Config) – The configuration parameters.
Child classes should override this method in order to select a point from the active segment in pyec.util.partitions.Point.
Select the best member of the population.
Parameters: config (Config) – The configuration parameters.
Implements elitism by replacing the worst member of the population with the best solution seen so far. If the best solution is a member of the current population, nothing is changed.
Elitism can be applied to an algorithm by convolving elitism with the algorithm.
Parameters: config (Config) – The configuration parameters.
Implements standard selection for evolution strategies.
The property config.selection determines the type of selection, either “plus” or “comma”.
Parameters: config (Config) – The configuration parameters.
Computes the probability of selecting a solution from a population using linear ranking selection.
See e.g. <http://www.pohlheim.com/Papers/mpga_gal95/gal2_3.html>
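As a concrete illustration of the idea (not the PyEC API), linear ranking weights in the style of the Pohlheim reference above can be computed as follows, with selection pressure SP in [1, 2] and rank 1 assigned to the worst individual:
import numpy as np

def linear_ranking_probabilities(K, SP=1.5):
    pos = np.arange(1, K + 1, dtype=float)                          # ranks 1..K
    fitness = 2.0 - SP + 2.0 * (SP - 1.0) * (pos - 1.0) / (K - 1.0)
    return fitness / fitness.sum()                                  # selection probabilities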
Computes the probability of selecting a solution from a population using nonlinear ranking selection.
See e.g. <http://www.pohlheim.com/Papers/mpga_gal95/gal2_3.html>
Fitness proportional selection (roulette wheel selection).
See <http://en.wikipedia.org/wiki/Fitness_proportionate_selection>.
Fitness values must be nonnegative.
Parameters: config (Config) – The configuration parameters.
Proportional Annealing, as described in
Lockett, A. and Miikkulainen, R. Real-space Evolutionary Annealing (2011).
See Annealing for more details.
A policy to allocate selection probabilities under ranking selection.
Ranking selection, e.g. <http://www.pohlheim.com/Papers/mpga_gal95/gal2_3.html>
Takes a ranking function that weights individuals according to rank. Rank is 1 for the lowest and K for the highest in a population of size K.
Config parameters:
* ranker – A Ranker instance for the ranking policy.
Parameters: config (Config) – The configuration parameters.
Tournament selection on the entire population.
See <http://en.wikipedia.org/wiki/Tournament_selection>.
Config parameters:
* selectionPressure – The probability of choosing the best member of the population.
Parameters: config (Config) – The configuration parameters.
Tournament Annealing, as described in Chapter 11 of
<http://nn.cs.utexas.edu/downloads/papers/lockett.thesis.pdf>
See Annealing for more details.
Like AreaSensitiveGaussian, but with a Bernoulli distribution. The bit flipping probability decays with the logarithm of the area of a partition region, i.e.:
bf = self.config.varInit / (-log(area))
where area is the volume of a partition region.
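For example, a partition region of volume 1e-4 gives -log(area) ≈ 9.2, so with varInit = 1.0 the bit flipping probability is roughly 0.11; larger regions receive larger flip probabilities.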
Config parameters:
* varInit – The scaling factor for the bit flipping probability.
Parameters: config (Config) – The configuration parameters.
Gaussian mutation with a variance that depends on the size of a partition of the space (see pyec.util.partitions.Partition).
Config parameters:
* varInit – Scaling factor for the standard deviation; defaults to 1.0.
* scale – The scale of the space (half the width).
* spaceScale – A scaling factor for the space size.
Standard deviation is determined by:
sd = varInit * scale * (area ** (1/dim)) / sqrt(generation)
where dim is the dimension of the space (config.dim) and area is the volume of the partition region for the object being mutated.
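For example, with varInit = 1.0, scale = 5.0, a two-dimensional partition region of volume 1e-4, and generation 100, this gives sd = 1.0 * 5.0 * (1e-4)^(1/2) / sqrt(100) = 0.005.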
Parameters: config (Config) – The configuration parameters.
Bernoulli mutation on a TernaryString object. Randomly flips bits on a fully specified binary string.
Config parameters:
* bitFlipProbs – The probability of flipping a bit.
Parameters: config (Config) – The configuration parameters.
Cauchy mutation.
Config parameters:
* stddev – The width parameter of the Cauchy (pi).
Parameters: config (Config) – The configuration parameters.
Implements an old (1996) version of mutation for the Covariance Matrix Adaptation method of Hansen and Ostermeier.
This method is obsolete and is very different from the modern version.
Take an N x N matrix and make an N(N-1)/2 array.
Take an N(N-1)/2 array and make an N x N matrix.
Base class to perform recombination on a given list of organisms.
Parameters: config (Config) – The configuration parameters.
Performs recombination using a crossover policy given by a Crosser instance.
Like DecayedGaussian, but with a Bernoulli distribution instead of a Gaussian; the bit flipping probability decays to zero.
Parameters: config (Config) – The configuration parameters.
Like DecayedGaussian, except with a Cauchy distribution.
Parameters: config (Config) – The configuration parameters.
Gaussian mutation with a variance that decreases over time according to fixed formula.
Config parameters:
* varInit – A scale factor on the standard deviation.
* varDecay – A multiplicative decay factor.
* varExp – An exponential decay factor.
Standard deviation is determined by:
sd = varInit * exp(-(generation * varDecay) ** varExp)
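For example, with varInit = 1.0, varDecay = 0.1, and varExp = 1.0, the standard deviation is exp(-1) ≈ 0.37 at generation 10 and exp(-10) ≈ 4.5e-5 at generation 100.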
Parameters: config (Config) – The configuration parameters.
Cross multiple organisms using generalized uniform crossover.
Gaussian mutation for evolution strategies, where the adaptive mutation parameters are encoded with the organism.
Parameters: config (Config) – The configuration parameters.
Gaussian mutation. Randomly mutates a random selection of components of a real vector.
Config parameters:
* stddev – The standard deviation to use.
* mutationProb – The probability of mutating each component; defaults to 1.0.
Parameters: config (Config) – The configuration parameters.
Cross multiple organisms by averaging their values component-wise.
Normally used by evolution strategies.
Base class for mutation operators. Provides a method mutate that performs mutation on a single member of a population.
Parameters: config (Config) – The configuration parameters.
Mutates just one random component of an organism with a Cauchy distribution.
Config parameters:
* stddev – The width parameter of the Cauchy (pi).
* mutationProb – The probability of applying the mutation.
Parameters: config (Config) – The configuration parameters.
Cross two organisms at one point and return only one crossed version.
Unlike OnePointDualCrosser, this discards the second variant.
See <http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#One-point_crossover>.
One-point crossover; mixes two individuals to create two new individuals by splicing them together at a random index.
So:
xxxxxx
oooooo
might become:
xxoooo
ooxxxx
And both items are returned.
See <http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#One-point_crossover>.
Cross two organisms by wrapping them into a circle, picking two indices, and combining the two rings.
See <http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#Two-point_crossover>.
Uniform mutation within a partition region. The region is determined by the organism being mutated.
Parameters: config (Config) – The configuration parameters.
Like UniformArea, but in a binary space. Flips the bits on an organism so that after the mutation, the object remains in the same partition region, and is uniformly chosen within the partition region. See pyec.util.partitions.Partition.
Config parameters:
* varInit – Scaling factor for the bit flipping probability.
Parameters: config (Config) – The configuration parameters.
Uniform crossover.
See <http://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)#Uniform_Crossover_and_Half_Uniform_Crossover>
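As an illustration of the idea (not the PyEC Crosser API), uniform crossover of two real-valued parents can be sketched as:
import numpy as np

def uniform_crossover(parent_a, parent_b, p=0.5):
    # choose each component from parent_a with probability p, else from parent_b
    mask = np.random.random(parent_a.shape) < p
    return np.where(mask, parent_a, parent_b)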
A ConfigBuilder object to create a DifferentialEvolution instance.
Default parameters are CR = .2, F = .5; these can be changed by editing the cfg property of this object.
Implements Differential Evolution (DE) as described by:
Storn, Rainer and Price, Kenneth. Differential Evolution – A simple and efficient adaptive scheme for global optimization over continuous spaces. 1995.
See <http://en.wikipedia.org/wiki/Differential_evolution> for algorithm details.
See <http://www.hvass-labs.org/people/magnus/publications/pedersen10good-de.pdf> for a good discussion of parameter settings for DE.
Config parameters:
Parameters: cfg (Config) – The configuration object for Differential Evolution.
The Covariance Matrix Adaptation Evolution Strategy algorithm as described by:
Hansen, N. and Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. In Evolutionary Computation 9(2), pp 159–195, 2001.
See <http://en.wikipedia.org/wiki/CMA-ES> for details.
A fast-converging population-based optimizer that finds reasonably good solutions with relatively few resources. In general, each population is sampled from a multi-variate Gaussian whose parameters are altered to optimize the probability of finding good solutions.
Akimoto (2010) proved that CMA-ES is an instance of Natural Evolution Strategies (Wierstra, 2008), which implements a gradient-following method at the population level.
Config parameters:
* muProportion – The ratio of mu to lambda, i.e. the proportion of solutions from each generation used as parents for the next; defaults to 0.5.
* initialDistribution – The distribution used to sample the initial mean.
* populationSize – The size of the population for each generation.
* scale – The scale of the space, used to initialize the variance of the Gaussian.
* dim – The dimension of the real vector space being searched.
* restart – Whether to restart when the distribution collapses; defaults to True (see e.g. Auger and Hansen, 2005).
Params cfg: The configuration object for CMA-ES.
Based on Kolda, Lewis, and Torczon, Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods (2003)
Primarily implemented to enable MADS (Audet and Dennis, 2006)