Benchmarking different studies against each other
Download script: benchmark.m
In some cases it is not clear which driver or driver configuration minimizes an objective function with the smallest number of iterations. If similar minimizations are performed on a regular basis, it can be worthwhile to find the best choice by benchmarking several studies with different drivers or configurations against each other.
As an example, the usual 2D Rastrigin function on a circular domain shall be minimized,
f(x1, x2) = 20 + (x1^2 - 10 cos(2 pi x1)) + (x2^2 - 10 cos(2 pi x2)),
subject to the constraint sqrt(x1^2 + x2^2) <= radius.
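For reference, the objective can be written as a MATLAB anonymous function (a minimal sketch for illustration only; in the script below the objective is evaluated through the evaluate callback):

rastrigin = @(x1, x2) 20 + (x1.^2 - 10*cos(2*pi*x1)) + (x2.^2 - 10*cos(2*pi*x2));
rastrigin(0, 0) % global minimum at the origin, returns 0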
We compare the heuristic minimization methods CMAES and DifferentialEvolution, the multi-start local minimization method ScipyMinimizer, and Bayesian optimization. For the latter, we use the standard BayesianOptimization driver as well as the ActiveLearning driver, configured to learn the objective function with a neural network instead of a Gaussian process.
We assume that we can evaluate the function two times in parallel and that each evaluation takes 2 seconds. Hence, the driver has to compute new samples in less than a second in order to avoid significant overhead. This is an edge case for the Bayesian optimization approach, which has a considerable overhead, typically larger than a second. In such cases, the other methods with much faster sample computation times can be more appropriate. Nevertheless, for this multi-modal objective, Bayesian optimization still reaches the global minimum well before the other optimization methods. Gaussian process regression is computationally efficient for this small number of iterations. In comparison, the neural network training of the ActiveLearning driver leads to a larger overhead.
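The sampling budget behind this statement follows from simple arithmetic (a rough estimate, not part of the downloadable script):

num_parallel = 2; % parallel evaluations
eval_time = 2;    % seconds per evaluation
% On average, one evaluation finishes every eval_time/num_parallel seconds,
% so the driver must propose a new sample within this budget to keep
% both workers busy.
sample_budget = eval_time / num_parallel % 1 second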
server = jcmoptimizer.Server();
client = jcmoptimizer.Client('host', server.host);

% Definition of the search domain
design_space = {...
    struct('name', 'x1', 'type', 'continuous', 'domain', [-1.5,1.5]), ...
    struct('name', 'x2', 'type', 'continuous', 'domain', [-1.5,1.5]), ...
};

% Definition of a fixed environment parameter
environment = {...
    struct('name', 'radius', 'type', 'fixed', 'domain', 1.5) ...
};

% Definition of a constraint on the search domain
constraints = {...
    struct('name', 'circle', 'expression', 'sqrt(x1^2 + x2^2) <= radius')...
};

% Creation of studies to benchmark against each other
studies = struct();
drivers = {"BayesianOptimization", "ActiveLearning", "CMAES", ...
    "DifferentialEvolution", "ScipyMinimizer"};
for i = 1:length(drivers)
    studies.(drivers{i}) = client.create_study( ...
        'design_space', design_space, ...
        'environment', environment, ...
        'constraints', constraints, ...
        'driver', drivers{i}, ...
        'study_name', drivers{i}, ...
        'study_id', sprintf("benchmark_%s", drivers{i}), ...
        'open_browser', false ...
    );
end

% Configuration of all studies
config = {'num_parallel', 2, 'max_iter', 250};
min_val = 1e-3; % Stop a study when this value has been observed
for i = 1:length(drivers)
    driver = drivers{i};
    study = studies.(driver);
    if strcmp(driver, "ActiveLearning")
        study.configure( ...
            'surrogates', {struct('type', "NN")}, ...
            'objectives', {struct('type', "Minimizer", 'min_val', min_val)}, ...
            config{:});
    elseif strcmp(driver, "ScipyMinimizer")
        % For a more global search, 3 initial gradient-free Nelder-Mead
        % minimizations are started
        study.configure( ...
            'method', "Nelder-Mead", 'num_initial', 3, ...
            'min_val', min_val, config{:});
    else
        study.configure('min_val', min_val, config{:});
    end
end
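Note that config{:} expands the cell array into a comma-separated list, so the shared settings are appended to each driver-specific call as ordinary name-value pairs. For example, the last branch of the loop is equivalent to:

study.configure('min_val', 1e-3, 'num_parallel', 2, 'max_iter', 250);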
% Evaluation of the black-box function for specified design parameters
% (when benchmark.m is run as a script, this local function must be
% placed at the end of the file)
function observation = evaluate(study, sample)

    pause(2); % make the objective expensive (2 seconds per evaluation)
    observation = study.new_observation();
    x1 = sample.x1;
    x2 = sample.x2;
    % 2D Rastrigin function: 10*d + sum_i (x_i^2 - 10*cos(2*pi*x_i)), d = 2
    observation.add(10*2 ...
        + (x1.^2 - 10*cos(2*pi*x1)) ...
        + (x2.^2 - 10*cos(2*pi*x2)) ...
    );
end

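As a quick sanity check, the evaluator can also be called by hand (a hypothetical snippet; it assumes that samples are passed as structs with fields x1 and x2, as done by the benchmark):

sample = struct('x1', 0.0, 'x2', 0.0);
obs = evaluate(studies.CMAES, sample); % Rastrigin value 0 at the global minimum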
% Creation of a benchmark with 6 repetitions and addition of all 5 studies
benchmark = client.create_benchmark('num_average', 6);
for i = 1:length(drivers)
    benchmark.add_study(studies.(drivers{i}));
end

% Run the benchmark - this will take a while
benchmark.set_evaluator(@evaluate);
benchmark.run();

% Plot the cumulative-minimum convergence w.r.t. number of evaluations and time
fig = figure('Position', [0, 0, 1000, 500]);
subplot(1, 2, 1)
data = benchmark.get_data('x_type', "num_evaluations");

for i = 1:length(data.names)
    % standard error of the mean over the 6 repetitions
    std_error = data.sdev(i,:) ./ sqrt(6);
    p = plot(data.X(i,:), data.Y(i,:), 'LineWidth', 2.0);
    hold on
    % shaded band of +/- one standard error around the mean
    patch([data.X(i,:), fliplr(data.X(i,:))], ...
        [data.Y(i,:) - std_error, fliplr(data.Y(i,:) + std_error)], ...
        p(1).Color, 'FaceAlpha', 0.2, 'EdgeAlpha', 0.2);
end
grid()
xlabel("Number of Evaluations", 'FontSize', 12)
ylabel("Average Cumulative Minimum", 'FontSize', 12)
ylim([-0.4, 10])

subplot(1, 2, 2)
data = benchmark.get_data('x_type', "time");
plots = gobjects(1, length(data.names)); % preallocate graphics handles
for i = 1:length(data.names)
    std_error = data.sdev(i,:) ./ sqrt(6);
    p = plot(data.X(i,:), data.Y(i,:), 'LineWidth', 2.0, 'DisplayName', data.names{i});
    plots(i) = p;
    hold on
    patch([data.X(i,:), fliplr(data.X(i,:))], ...
        [data.Y(i,:) - std_error, fliplr(data.Y(i,:) + std_error)], ...
        p(1).Color, 'FaceAlpha', 0.2, 'EdgeAlpha', 0.2);
end
legend(plots, data.names{:});
grid()
xlabel("Time (sec)", 'FontSize', 12)
ylim([-0.4, 10])
saveas(fig, "benchmark.png")
Left: The left graph shows the cumulative minimum as a function of the number
of evaluations of the objective function. The results are averaged over six
independent runs of each driver. The shaded area shows the uncertainty of the mean
value. Clearly, the BayesianOptimization driver performs best and finds
the global minimum of zero after about 70 iterations, while the other methods have
not converged to zero even after 250 iterations.
Right: The convergence of the cumulative minimum as a function of the total
optimization time looks slightly different. As expected, the ActiveLearning
and BayesianOptimization drivers show an overhead for computing new samples.
For an optimization budget of less than about 50 seconds, the multi-start
local ScipyMinimizer performs better. It can also be seen that training the
neural networks leads to a larger overhead than the Gaussian process
regression used by the BayesianOptimization driver.