Multi-objective optimization
- Driver:
- Download script:
Many optimization tasks do not consist of only a single objective \(f(\mathbf{x})\) to be minimized over some domain \(\mathbf{x}\in \mathcal{X}\subset \mathbb{R}^d\), but rather of a set of objectives \(\mathbf{f}(\mathbf{x}) = [f_1(\mathbf{x}), f_2(\mathbf{x}),\dots,f_k(\mathbf{x})]^{\rm T}\) to be minimized together.
A solution \(\mathbf{a} = \mathbf{f}(\mathbf{x}_1)\) is said to dominate another solution \(\mathbf{b} = \mathbf{f}(\mathbf{x}_2)\) if \(a_i \le b_i\) for all \(i \in [1, \dots, k]\) and there exists at least one entry \(j \in [1, \dots, k]\) such that \(a_j < b_j\).
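As an illustration (not part of the example script below), the dominance relation can be written as a small helper function:

```python
import numpy as np

def dominates(a, b):
    """Return True if solution a dominates solution b, i.e. a is
    no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

# (1, 2) dominates (2, 2): equal in the second entry, better in the first
print(dominates([1, 2], [2, 2]))  # True
# (1, 3) and (2, 2) are mutually nondominated
print(dominates([1, 3], [2, 2]))  # False
```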
A solution is called Pareto optimal if it is not dominated by any other solution. The set of Pareto optimal solutions is called the Pareto front. The aim of a multi-objective optimization is to find the Pareto front for the set of objectives \(\mathbf{f}(\mathbf{x})\).
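For a finite set of observed solutions, the Pareto-front estimate is simply the nondominated subset. A minimal sketch of this filtering (illustrative only, the study object below does this internally):

```python
import numpy as np

def pareto_front(points):
    """Return the nondominated subset of a set of objective
    vectors, assuming minimization in every component."""
    points = np.asarray(points)
    keep = []
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        # p is dominated if some other point is no worse everywhere
        # and strictly better somewhere
        dominated = np.any(
            np.all(others <= p, axis=1) & np.any(others < p, axis=1))
        if not dominated:
            keep.append(p)
    return np.array(keep)

pts = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]]
# keeps [1, 5], [2, 3] and [4, 1]; drops [3, 4], which is
# dominated by [2, 3]
print(pareto_front(pts))
```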
We consider an analytic multi-objective optimization problem with \(k=2\) objectives,
\(f_1(x_1, x_2) = x_1\) and \(f_2(x_1, x_2) = g(x_2)/x_1\),
with \(x_1 \in [1, 4]\) and \(x_2 \in [-1.5, 1.5]\).
It can be shown that the problem has Pareto-optimal solutions \((x_1, x_2^*)\), where \(x_2^*\) is the global minimum of \(g(x_2)\) and \(x_1\) can take any value [DEB1999].
As an example, we consider the multi-modal function
\(g(x_2) = 20 + x_2^2 - 10\cos(2\pi x_2)\)
with local minima approximately located at integer values of \(x_2\) and the global minimum at \(g(x_2^* = 0)=10\). That is, the Pareto front is defined as \(f_2 = \frac{10}{f_1}\).
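These properties of \(g\) can be checked numerically (a quick sanity check, not part of the example script):

```python
import numpy as np

def g(x2):
    # multi-modal function with local minima near integer x2
    return 20 + x2**2 - 10 * np.cos(2 * np.pi * x2)

# global minimum at x2 = 0 with g(0) = 10
print(g(0.0))  # 10.0
# a nearby point between the integer-valued local minima is much larger
print(g(0.5))  # 30.25
# hence the Pareto front is f2 = g(0)/x1 = 10/x1 for x1 in [1, 4]
```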
[DEB1999] Deb, K., "Multi-objective genetic algorithms: Problem difficulties and construction of test problems". Evolutionary Computation, 7(3), pp. 205-230 (1999)
import time

import numpy as np
import matplotlib.pyplot as plt

from jcmoptimizer import Client, Study, Observation
client = Client()


# Definition of the search domain
design_space = [
    {'name': 'x1', 'type': 'continuous', 'domain': (1, 4)},
    {'name': 'x2', 'type': 'continuous', 'domain': (-1.5, 1.5)},
]

# Creation of the study object with study_id 'multi_objective_optimization'
study = client.create_study(
    design_space=design_space,
    driver="ActiveLearning",
    study_name="Multi-objective optimization",
    study_id="multi_objective_optimization"
)

# Lower and upper reference point for hypervolume definition (see figure)
lower_ref = [0, 0]
upper_ref = [5, 50]

study.configure(
    max_iter=50,
    surrogates=[
        # A multi-output Gaussian process that learns the dependence of
        # the vectorial model on the design parameters.
        dict(type="GP", name="model_vector", output_dim=2,
             correlate_outputs=False)
    ],
    variables=[
        # Selectors for the values of f1 and f2
        dict(type="SingleSelector", name="f1",
             input="model_vector", select_by_name="model_vector0"),
        dict(type="SingleSelector", name="f2",
             input="model_vector", select_by_name="model_vector1")
    ],
    objectives=[
        # Multi-minimization objective for f1, f2
        dict(type="MultiMinimizer", variables=["f1", "f2"], name="objective",
             lower_reference=lower_ref, upper_reference=upper_ref)
    ]
)

# Evaluation of the black-box function for specified design parameters
def evaluate(study: Study, x1: float, x2: float) -> Observation:
    time.sleep(2)  # make the objective expensive
    observation = study.new_observation()
    # Determine the list of the two objective values
    g = 20 + (x2**2 - 10*np.cos(2*np.pi*x2))
    observation.add([x1, g/x1])
    return observation

# Run the minimization
study.set_evaluator(evaluate)
study.run()

plt.figure(figsize=(10, 5))

# Plot all observations of (f1, f2)
f1 = study.driver.get_observed_values("variable", "f1")
f2 = study.driver.get_observed_values("variable", "f2")
plt.plot(f1["means"], f2["means"], "o", label="Dominated observations")

# Plot the estimate of the Pareto front, i.e. nondominated observations
pf = study.driver.get_state("pareto_front")
nondom_f1, nondom_f2 = np.array(pf["f1"]), np.array(pf["f2"])
plt.plot(nondom_f1, nondom_f2, "o", label="Pareto front estimate")

# Plot the analytic Pareto front
X = np.linspace(1, 4, 100)
plt.plot(X, 10/X, "k--", label="Analytic Pareto front")

# Plot the nondominated hypervolume
plt.plot(*lower_ref, "rx", label="Lower reference")
plt.plot(*upper_ref, "bx", label="Upper reference")
nondom_f2_aug = np.hstack((upper_ref[1], nondom_f2))
X = [lower_ref[0]] + np.repeat(nondom_f1, 2).tolist() + [upper_ref[0]]
nondom_lower = [lower_ref[1]]*len(X)
nondom_upper = np.repeat(nondom_f2_aug, 2).tolist()
plt.fill_between(X, nondom_lower, nondom_upper, alpha=0.2, color="blue",
                 label="Nondominated hypervolume")

plt.xlabel("f1")
plt.ylabel("f2")
plt.legend(loc="upper center")
plt.grid()
plt.savefig("multi_objective_minimization.svg", transparent=True)

client.shutdown_server()
Result of the multi-objective minimization in the \((f_1, f_2)\)-plane.
The blue dots show all observations, whereas the orange dots show the nondominated solutions, i.e. an estimate of the Pareto front. The dashed line shows the true analytic Pareto front within the domain of \((x_1, x_2)\).
The progress of the study is shown in terms of the nondominated hypervolume (blue shaded area), which is defined by the nondominated observations together with a lower and an upper reference point (red and blue crosses).
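For two objectives, the nondominated hypervolume reduces to the area of a staircase-shaped region, as shaded in the figure. A minimal sketch of how this area can be computed (an illustrative helper, not part of the jcmoptimizer API):

```python
def nondominated_area(front, lower_ref, upper_ref):
    """Area of the region inside the reference box that is not
    dominated by any point of a 2D Pareto front (minimization
    of both objectives assumed)."""
    # sort by increasing f1, so f2 decreases along the front
    front = sorted(front)
    area = 0.0
    x_prev, y_prev = lower_ref[0], upper_ref[1]
    for x, y in front:
        # rectangle between the previous front point and this one
        area += (x - x_prev) * (y_prev - lower_ref[1])
        x_prev, y_prev = x, y
    # closing rectangle up to the upper reference in f1
    area += (upper_ref[0] - x_prev) * (y_prev - lower_ref[1])
    return area

# three points on the analytic front f2 = 10/f1, with the
# reference points used in the script above
front = [(1.0, 10.0), (2.0, 5.0), (4.0, 2.5)]
print(nondominated_area(front, [0, 0], [5, 50]))  # 72.5
```

As the optimization progresses and the front estimate improves, this area shrinks toward the area under the true Pareto front.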