.. _BayesianOptimization:

================================================
BayesianOptimization
================================================

Contents
=========

:`Purpose`_: The purpose of the driver.
:`Tutorials`_: Tutorials demonstrating the application of this driver.
:`Driver Interface`_: Driver-specific methods of the Python interface.
:`Configuration`_: Configuration of the driver.

Purpose
=======

The driver is based on the :ref:`ActiveLearning` driver and runs a standard
Bayesian optimization (i.e. minimization) of an expensive scalar function.

Tutorials
=========

.. toctree::

   ../tutorials/vanilla_bayesian_optimization
   ../tutorials/benchmark

Driver Interface
================

The driver instance can be obtained by :attr:`.Study.driver`.

.. currentmodule:: jcmoptimizer

.. autoclass:: BayesianOptimization
   :members:
   :inherited-members:

Configuration
=============

The configuration parameters can be set by calling, e.g.

.. code-block:: python

    study.configure(example_parameter1=[1, 2, 3], example_parameter2=True)

.. _BayesianOptimization.max_iter:

max_iter (int)
""""""""""""""
Maximum number of evaluations of the studied system.
Default: Infinite number of evaluations.

.. _BayesianOptimization.max_time:

max_time (float)
""""""""""""""""
Maximum run time of the study in seconds. The time is counted from the
moment the parameter is set or reset.
Default: ``inf``

.. _BayesianOptimization.num_parallel:

num_parallel (int)
""""""""""""""""""
Number of parallel evaluations of the studied system.
Default: ``1``

.. _BayesianOptimization.scaling:

scaling (float)
"""""""""""""""
Scaling parameter of the model uncertainty. For scaling :math:`\gg 1.0`
(e.g. ``scaling=10.0``) the search is more explorative. For scaling
:math:`\ll 1.0` (e.g. ``scaling=0.1``) the search becomes more greedy
(e.g. any local minimum is intensively exploited).
Default: ``1.0``

.. _BayesianOptimization.vary_scaling:

vary_scaling (bool)
"""""""""""""""""""
If true, the scaling parameter is randomly varied between 0.1 and 10.
Default: ``True``

.. _BayesianOptimization.parameter_distribution:

parameter_distribution (dict)
"""""""""""""""""""""""""""""
Probability distribution of design and environment parameters.
Default: ``{'include_study_constraints': False, 'distributions': [], 'constraints': []}``

Probability distribution of design and environment parameters defined by
distribution functions and constraints. The definition of the parameter
distribution can have several effects:

* In a call to the method ``get_statistics`` of the driver interface, the
  value of interest is averaged over samples drawn from the space
  distribution.
* In a call to the method ``run_mcmc`` of the driver interface, the space
  distribution acts as a prior distribution.
* In a call to the method ``get_sobol_indices`` of the driver interface, the
  space distribution acts as a weighting factor for determining expectation
  values.
* In an :ref:`ActiveLearning` driver, one can access the value of the
  log-probability density (up to an additive constant) by the name
  ``'log_prob'`` in any expression, e.g. in :ref:`ExpressionVariable` or
  :ref:`LinearCombinationVariable`.

See :ref:`parameter_distribution configuration <SpaceDistribution1>` for
details.

.. toctree::
   :maxdepth: 100
   :hidden:

   SpaceDistribution1

.. _BayesianOptimization.detect_noise:

detect_noise (bool)
"""""""""""""""""""
If true, the noise of the function values and of the function value
derivatives is estimated by means of two hyperparameters.
Default: ``False``
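As an illustration of the exploration and noise parameters above, the
following minimal sketch configures a more explorative, noise-aware search.
It uses only keyword names documented on this page and assumes that a
``study`` object has already been created:

.. code-block:: python

    # Minimal sketch; the study object and its creation are assumed.
    study.configure(
        scaling=5.0,         # inflate model uncertainty -> more exploration
        vary_scaling=False,  # keep the scaling fixed instead of varying it randomly
        detect_noise=True,   # estimate the noise of values and derivatives
    )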
.. _BayesianOptimization.warping_function:

warping_function (str)
""""""""""""""""""""""
The name of the warping function :math:`w`. A warping function performs a
transformation :math:`y \to w(y,\mathbf{b})` of any function value
:math:`y = f(x)` by means of a strictly monotonically increasing function
:math:`w(y,\mathbf{b})` that depends on a set of hyperparameters
:math:`\mathbf{b}`. The hyperparameters are chosen automatically by a
maximum likelihood estimate. The choice ``identity`` leads to no warping of
the function values, while ``sinh`` uses a hyperbolic sine function for
warping. Using ``sinh`` can result in better predictions at the cost of an
increased computational effort. It should therefore only be applied to more
expensive black-box functions.
Default: ``'identity'``
Choices: ``'identity'``, ``'sinh'``.

.. _BayesianOptimization.min_val:

min_val (float)
"""""""""""""""
The minimization of the objective is stopped when the observed objective
value falls below the specified minimum value.
Default: ``-inf``

.. _BayesianOptimization.min_PoI:

min_PoI (float)
"""""""""""""""
The study is stopped if the maximum probability of improvement (PoI) of the
last 5 iterations is below ``min_PoI``.
Default: ``1e-16``

.. _BayesianOptimization.min_acq_val:

min_acq_val (float)
"""""""""""""""""""
The study is stopped if the maximum acquisition value (usually the expected
improvement) of the last 5 iterations is below ``min_acq_val``.
Default: ``-inf``

.. _BayesianOptimization.strategy:

strategy (str)
""""""""""""""
Acquisition strategy for choosing the next sample. The choices are expected
improvement (EI), lower confidence bound (LCB), and probability of
improvement (PoI); their textbook definitions are sketched at the end of
this page.
Default: ``'EI'``
Choices: ``'EI'``, ``'LCB'``, ``'PoI'``.

.. _BayesianOptimization.localize:

localize (bool)
"""""""""""""""
If true, a local search is performed, i.e. samples are not drawn in regions
with large uncertainty.
Default: ``False``

.. _BayesianOptimization.num_training_samples:

num_training_samples (int)
""""""""""""""""""""""""""
Number of pseudo-random initial samples before samples are drawn according
to the acquisition function.
Default: Automatic choice depending on the dimensionality of the design
space.
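For orientation, the textbook definitions of the three acquisition
strategies for a minimization problem are given below, assuming a Gaussian
process posterior with mean :math:`\mu(x)`, standard deviation
:math:`\sigma(x)`, and best observed value :math:`y_\mathrm{min}`. The
driver's internal definitions may differ in details such as scaling:

.. math::

   z(x) &= \frac{y_\mathrm{min} - \mu(x)}{\sigma(x)},\\
   \mathrm{PoI}(x) &= \Phi\bigl(z(x)\bigr),\\
   \mathrm{EI}(x) &= \sigma(x)\,\bigl[z(x)\,\Phi\bigl(z(x)\bigr)
       + \varphi\bigl(z(x)\bigr)\bigr],\\
   \mathrm{LCB}(x) &= \mu(x) - \kappa\,\sigma(x),

where :math:`\Phi` and :math:`\varphi` denote the cumulative distribution
function and the density of the standard normal distribution, and
:math:`\kappa > 0` is a confidence parameter.

Since each of the criteria ``max_iter``, ``max_time``, ``min_val``,
``min_PoI``, and ``min_acq_val`` stops the study independently, they can be
combined. The following minimal sketch uses only keyword names documented on
this page and assumes that a ``study`` object has already been created:

.. code-block:: python

    # Minimal sketch; the study object and its creation are assumed.
    study.configure(
        max_iter=200,     # at most 200 evaluations of the studied system
        max_time=3600.0,  # at most one hour of run time
        min_PoI=1e-10,    # stop once further improvement is very unlikely
        strategy='EI',    # acquire new samples by expected improvement
    )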