.. _active_surrogate_training:

Active learning of a global surrogate model
===========================================

:Driver: :ref:`ActiveLearning`
:Download script: :download:`active_surrogate_training.py`

The target of the study is to train a surrogate model of a vectorial
function. We consider an active training loop: in each iteration, training
data is generated by evaluating the vectorial function at the point of
maximal prediction uncertainty.

As an example problem we consider the frequency-dependent transmission
function of a Fabry-Pérot etalon (angular frequency :math:`\omega`). The
etalon consists of a resonator of length :math:`l` formed by a pair of
mirrors with reflectivities :math:`R_1` and :math:`R_2`. The propagation
losses inside the resonator are quantified by the intensity-loss coefficient
:math:`\alpha = 0.05`. The transmission function is given as (see also the
`Wikipedia entry on the Fabry-Pérot etalon
<https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_interferometer>`_)

.. math::
   A_\text{trans}(\omega, R_1, R_2, l) =
   \frac{(1 - R_1)(1 - R_2)\, e^{-\alpha l}}
        {\left(1 - \sqrt{R_1 R_2}\, e^{-\alpha l}\right)^2
         + 4 \sqrt{R_1 R_2}\, e^{-\alpha l} \sin^2(\phi)}.

The round-trip phase shift of the light field inside the resonator is
:math:`\phi(\omega, l) = \omega \cdot l`.

We consider the case that we wish to learn the vectorial mapping from the
etalon parameters :math:`(R_1, R_2, l)` to transmission spectra

.. math::
   \mathbf{f}(R_1, R_2, l) =
   \begin{bmatrix}
   A_\text{trans}(\omega_1, R_1, R_2, l) \\
   A_\text{trans}(\omega_2, R_1, R_2, l) \\
   \vdots \\
   A_\text{trans}(\omega_{50}, R_1, R_2, l)
   \end{bmatrix}

with :math:`\omega_k = 2\pi k / 50`.

It would be possible to learn this vectorial mapping using a multi-output
Gaussian process or a multi-output Bayesian neural network. Here, we present
the approach of learning instead the 4-dimensional scalar function
:math:`A_\text{trans}(\omega, R_1, R_2, l)` using a single-output Bayesian
neural network. This has the advantage that the vector entries are not
learned independently, but that correlations between similar frequencies are
taken into account. Moreover, after training one can obtain predictions for
arbitrarily fine :math:`\omega` scans. Minimal sketches of the transmission
formula and of such an active-learning loop are given at the end of this
section.

.. literalinclude:: ./active_surrogate_training.py
   :language: python
   :linenos:

.. figure:: images/active_surrogate_training/etalon_prediction.svg
   :alt: Prediction of etalon transmission

   The figure shows, for different parameters :math:`R_1`, :math:`R_2` and
   :math:`l`, the predicted transmission function (solid lines, shading
   indicates the uncertainty of the prediction) in comparison to the
   analytical transmission values (dashed lines). The blue line corresponds
   to the prediction with the largest average uncertainty. The other lines
   correspond to the etalon parameters :math:`R_1 = 0.1, R_2 = 0.1, l = 0.5`
   (orange), :math:`R_1 = 0.1, R_2 = 0.7, l = 0.75` (green) and
   :math:`R_1 = 0.7, R_2 = 0.7, l = 1.0` (red). Considering the small number
   of 50 training data points, the agreement between the predictions and the
   analytical values is very good.
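For orientation, the analytical transmission function defined above is
straightforward to evaluate numerically. The following is a minimal NumPy
sketch of that formula; the helper name ``transmission`` and the parameter
names are illustrative and are not taken from the downloadable script.

.. code-block:: python

   import numpy as np

   ALPHA = 0.05  # intensity-loss coefficient alpha, as given in the text


   def transmission(omega, r1, r2, length):
       """Analytical etalon transmission A_trans(omega, R1, R2, l)."""
       phi = omega * length               # round-trip phase shift phi = omega * l
       damping = np.exp(-ALPHA * length)  # loss factor e^(-alpha * l)
       rr = np.sqrt(r1 * r2) * damping    # sqrt(R1 * R2) * e^(-alpha * l)
       numerator = (1.0 - r1) * (1.0 - r2) * damping
       denominator = (1.0 - rr) ** 2 + 4.0 * rr * np.sin(phi) ** 2
       return numerator / denominator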
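Building on this helper, the 50-component spectrum vector and the flattened
scalar training samples for the single-output surrogate could be assembled as
follows. Again, the name ``spectrum``, the example parameter tuple, and the
layout of the sample array are assumptions of this sketch, not the script's
actual data handling.

.. code-block:: python

   # Frequency grid omega_k = 2*pi*k/50 for k = 1, ..., 50.
   omegas = 2.0 * np.pi * np.arange(1, 51) / 50.0


   def spectrum(r1, r2, length):
       """50-component spectrum vector f(R1, R2, l)."""
       return transmission(omegas, r1, r2, length)


   # For the single-output surrogate, every vector entry becomes one scalar
   # training sample with the 4-d input (omega_k, R1, R2, l).
   r1, r2, length = 0.7, 0.7, 1.0  # example etalon parameters
   inputs = np.column_stack(
       [omegas, np.full(50, r1), np.full(50, r2), np.full(50, length)]
   )
   targets = spectrum(r1, r2, length)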
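Finally, the active selection of training points can be sketched
schematically. Note that this example substitutes a scikit-learn Gaussian
process for the single-output Bayesian neural network used by the
:ref:`ActiveLearning` driver; the candidate pool, its parameter bounds, and
the kernel settings are illustrative choices for this sketch only.

.. code-block:: python

   from sklearn.gaussian_process import GaussianProcessRegressor
   from sklearn.gaussian_process.kernels import RBF

   rng = np.random.default_rng(0)

   # Candidate pool in the 4-d input space (omega, R1, R2, l); the bounds
   # for R1, R2 and l are assumptions of this sketch.
   lower = np.array([2.0 * np.pi / 50.0, 0.1, 0.1, 0.5])
   upper = np.array([2.0 * np.pi, 0.9, 0.9, 1.0])
   candidates = lower + (upper - lower) * rng.random((2000, 4))

   # Seed with a few random samples, then repeatedly label the candidate
   # with the largest predictive standard deviation.
   train_x = candidates[:5].copy()
   train_y = transmission(*train_x.T)

   for _ in range(45):  # 50 training points in total, as in the example
       surrogate = GaussianProcessRegressor(
           kernel=RBF(length_scale=0.5), normalize_y=True
       )
       surrogate.fit(train_x, train_y)
       _, std = surrogate.predict(candidates, return_std=True)
       best = candidates[np.argmax(std)]  # point of maximal uncertainty
       train_x = np.vstack([train_x, best])
       train_y = np.append(train_y, transmission(*best))

The acquisition rule in the loop, labeling the candidate with the largest
predictive standard deviation, is the "point of maximal prediction
uncertainty" criterion described at the beginning of this section.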