Multi-objective optimization
- Driver: ActiveLearning
Download script: multi_objective_optimization.m
Many optimization tasks do not consist of a single objective \(f(\mathbf{x})\) to be minimized over a domain \(\mathbf{x}\in \mathcal{X}\subset \mathbb{R}^d\), but rather of a set of objectives \(\mathbf{f}(\mathbf{x}) = [f_1(\mathbf{x}), f_2(\mathbf{x}),\dots,f_k(\mathbf{x})]^{\rm T}\) to be minimized together.
A solution \(\mathbf{a} = \mathbf{f}(\mathbf{x}_1)\) is said to dominate another solution \(\mathbf{b} = \mathbf{f}(\mathbf{x}_2)\) if \(a_i \le b_i\) for all \(i \in [1, \dots, k]\) and there exists at least one entry \(j \in [1, \dots, k]\) such that \(a_j < b_j\).
A solution is called Pareto optimal if it is not dominated by any other solution. The set of Pareto optimal solutions is called the Pareto front. The aim of a multi-objective optimization is to find the Pareto front for the set of objectives \(\mathbf{f}(\mathbf{x})\).
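As a concrete illustration of these definitions, the dominance check and the resulting nondominated filter can be sketched in a few lines (Python for brevity; `dominates` and `pareto_front` are illustrative helpers, not part of the optimizer API):

```python
def dominates(a, b):
    """a dominates b (minimization): a is nowhere worse and
    strictly better in at least one objective."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)])
print(front)  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

Here \((3, 3)\) is dominated by \((2, 2)\) and is filtered out, while the remaining three points are mutually nondominated.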
We consider an analytic multi-objective optimization problem with \(k=2\) objectives
\[
f_1(\mathbf{x}) = x_1, \qquad f_2(\mathbf{x}) = \frac{g(x_2)}{x_1}.
\]
It can be shown that the problem has Pareto-optimal solutions \((x_1, x_2^*)\), where \(x_2^*\) is the global minimum of \(g(x_2)\) and \(x_1\) can take any value [DEB1999].
As an example, we consider the multi-modal function
\[
g(x_2) = 20 + x_2^2 - 10\cos(2\pi x_2)
\]
with local minima approximately located at integer values of \(x_2\) and the global minimum \(g(x_2^*) = 10\) at \(x_2^* = 0\). Hence, the Pareto front is given by \(f_2 = \frac{10}{f_1}\).
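The analytic test problem is small enough to check these properties directly. A minimal Python sketch (the helper names `g` and `f` are local to this snippet; `g` matches the expression evaluated in the MATLAB listing below):

```python
import math

def g(x2):
    """Multi-modal factor with local minima near integer x2
    and the global minimum g(0) = 10."""
    return 20.0 + x2**2 - 10.0 * math.cos(2.0 * math.pi * x2)

def f(x1, x2):
    """Objective vector (f1, f2) = (x1, g(x2)/x1)."""
    return (x1, g(x2) / x1)

# At the global minimum x2* = 0 the front obeys f2 = 10 / f1:
print(g(0.0))        # 10.0
print(f(2.0, 0.0))   # (2.0, 5.0)
```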
[DEB1999] Deb, K., “Multi-objective genetic algorithms: Problem difficulties and construction of test problems”. Evolutionary Computation, 7(3), pp. 205-230 (1999)
jcm_optimizer_path = '<JCM_OPTIMIZER_PATH>';
addpath(fullfile(jcm_optimizer_path, 'interface', 'matlab'));

server = jcmoptimizer.Server();
client = jcmoptimizer.Client(server.port);

% Definition of the search domain
design_space = { ...
    struct('name', 'x1', 'type', 'continuous', 'domain', [1, 4]), ...
    struct('name', 'x2', 'type', 'continuous', 'domain', [1, 4]), ...
};
% Creation of the study object with study_id 'multi_objective_optimization'
study = client.create_study( ...
    'design_space', design_space, ...
    'driver', 'ActiveLearning', ...
    'name', 'Multi-objective optimization', ...
    'study_id', 'multi_objective_optimization');

% Lower and upper reference point for the hypervolume definition (see figure)
lower_ref = [0, 0];
upper_ref = [5, 50];
study.configure( ...
    'max_iter', 50, ...
    'surrogates', { ...
        ... A multi-output Gaussian process that learns the dependence of
        ... the model on the design parameters.
        struct('type', "GP", 'name', "model_vector", 'output_dim', 2, ...
               'correlate_outputs', false) ...
    }, ...
    'variables', { ...
        ... Selectors for the values of f1 and f2
        struct('type', "SingleSelector", 'name', "f1", ...
               'input', "model_vector", 'select_by_name', "model_vector0"), ...
        struct('type', "SingleSelector", 'name', "f2", ...
               'input', "model_vector", 'select_by_name', "model_vector1") ...
    }, ...
    'objectives', { ...
        ... Multi-minimization objective for f1, f2
        struct('type', "MultiMinimizer", 'variables', ["f1", "f2"], ...
               'name', "objective", ...
               'lower_reference', lower_ref, 'upper_reference', upper_ref) ...
    } ...
);

% Evaluation of the black-box function for specified design parameters
function observation = evaluate(study, sample)
    pause(2); % make the objective expensive
    observation = study.new_observation();
    % Determine the two objective values f1 = x1 and f2 = g(x2)/x1
    g = 20 + (sample.x2^2 - 10*cos(2*pi*sample.x2));
    observation.add([sample.x1, g/sample.x1]);
end

% Run the minimization
study.set_evaluator(@evaluate);
study.run();

fig = figure('Position', [0, 0, 1000, 500]);

% Plot all observations of (f1, f2)
f1 = study.driver.get_observed_values('object_type', "variable", 'name', "f1");
f2 = study.driver.get_observed_values('object_type', "variable", 'name', "f2");
plot(f1.means, f2.means, "o", 'DisplayName', "Dominated observations");
hold on;

% Plot the estimate of the Pareto front, i.e. the nondominated observations
pf = study.driver.get_state("pareto_front");
nondom_f1 = pf.f1;
nondom_f2 = pf.f2;
plot(nondom_f1, nondom_f2, "o", 'DisplayName', "Pareto front estimate");

% Plot the analytic Pareto front
X = linspace(1, 4, 100);
plot(X, 10./X, "k--", 'DisplayName', "Analytic Pareto front");

% Plot the nondominated hypervolume
plot(lower_ref(1), lower_ref(2), "rx", 'DisplayName', "Lower reference");
plot(upper_ref(1), upper_ref(2), "bx", 'DisplayName', "Upper reference");
X = [lower_ref(1), repelem(nondom_f1, 2), upper_ref(1)];
nondom_lower = repelem(lower_ref(2), length(X));
nondom_f2_aug = [upper_ref(2), nondom_f2];
nondom_upper = repelem(nondom_f2_aug, 2);
patch([X, fliplr(X)], [nondom_lower, fliplr(nondom_upper)], 'b', ...
      'FaceAlpha', 0.2, 'EdgeAlpha', 0.2, ...
      'DisplayName', "Nondominated hypervolume");

xlabel("f1");
ylabel("f2");
legend('Location', "north");
grid();
saveas(fig, "multi_objective_minimization.png");
Result of the multi-objective minimization in the \((f_1, f_2)\)-plane.
The blue dots show all observations, whereas the orange dots show the nondominated solutions, i.e. an estimate of the Pareto front. The dashed line shows the true analytic Pareto front within the domain of \((x_1, x_2)\).
The progress of the study is measured by the nondominated hypervolume (blue shaded area), which is defined by the nondominated observations together with a lower and an upper reference point (red and blue cross).
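For the two-objective case, a hypervolume of this kind can be computed by a simple sweep over the front sorted by \(f_1\). The sketch below (Python for illustration, not the JCMoptimizer-internal implementation) computes the area dominated by a minimization front up to an upper reference point; the shaded nondominated region in the figure is the complement of this area within the box spanned by the two reference points:

```python
def hypervolume_2d(front, upper_ref):
    """Area dominated by a 2-D minimization front, bounded above by
    the upper reference point; sweeps the points in ascending f1."""
    hv, prev_f2 = 0.0, upper_ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:  # skip points dominated within the front
            hv += (upper_ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, (5.0, 50.0)))  # 191.0
```

As the study progresses and the estimated front moves toward the true Pareto front, this dominated area grows monotonically, which makes it a convenient scalar progress measure.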