Multi-objective optimization
Many optimization tasks do not consist of a single objective \(f(\mathbf{x})\) to be minimized over some domain \(\mathbf{x}\in \mathcal{X}\subset \mathbb{R}^d\), but rather of a set of objectives \(\mathbf{f}(\mathbf{x}) = [f_1(\mathbf{x}), f_2(\mathbf{x}),\dots,f_k(\mathbf{x})]^{\rm T}\) to be minimized together.
A solution \(\mathbf{a} = \mathbf{f}(\mathbf{x}_1)\) is said to dominate another solution \(\mathbf{b} = \mathbf{f}(\mathbf{x}_2)\) if \(a_i \le b_i\) for all \(i \in [1, \dots, k]\) and there exists at least one index \(j \in [1, \dots, k]\) such that \(a_j < b_j\).
A solution is called Pareto optimal if it is not dominated by any other solution. The set of Pareto optimal solutions is called the Pareto front. The aim of multi-objective optimization is to find the Pareto front for the set of objectives \(\mathbf{f}(\mathbf{x})\).
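The dominance relation above translates directly into a few lines of code. The following is a minimal illustration (not part of the driver script below); the helper names `dominates` and `pareto_front` are chosen here for illustration:

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if a dominates b (minimization): a is nowhere worse
    than b and strictly better in at least one objective."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return the nondominated subset of a set of objective vectors."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if mask[i]:
            # discard every point that is dominated by p
            for j, q in enumerate(points):
                if mask[j] and i != j and dominates(p, q):
                    mask[j] = False
    return points[mask]
```

For example, among the solutions \((1,2)\), \((2,1)\), \((2,2)\), and \((3,3)\), only the first two are nondominated.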
We consider an analytic multi-objective optimization problem with \(k=2\) objectives:
\[
f_1(\mathbf{x}) = x_1, \qquad f_2(\mathbf{x}) = \frac{g(x_2)}{x_1}.
\]
It can be shown that the problem has Pareto-optimal solutions \((x_1, x_2^*)\), where \(x_2^*\) is the global minimizer of \(g(x_2)\) and \(x_1\) can take any value [DEB1999].
As an example, we consider the multi-modal function
\[
g(x_2) = 20 + x_2^2 - 10\cos(2\pi x_2)
\]
with local minima approximately located at integer values of \(x_2\) and the global minimum \(g(x_2^* = 0)=10\). That is, the Pareto front is given by \(f_2 = \frac{10}{f_1}\).
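As a quick sanity check of the test function (not part of the driver script), one can verify the global minimum of \(g\) and the resulting Pareto front values:

```python
import numpy as np

def g(x2: float) -> float:
    """One-dimensional multi-modal test function with global minimum g(0) = 10."""
    return 20 + x2**2 - 10*np.cos(2*np.pi*x2)

assert np.isclose(g(0.0), 10.0)   # global minimum at x2 = 0

# For fixed x1, f2 = g(x2)/x1 is minimized at x2 = 0, so the
# Pareto front is f2 = 10/f1 with f1 = x1 in the domain [1, 4].
for x1 in (1.0, 2.0, 4.0):
    f1, f2 = x1, g(0.0)/x1
    print(f1, f2)
```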
[DEB1999] Deb, K., "Multi-objective genetic algorithms: Problem difficulties and construction of test problems". Evolutionary Computation, 7(3), pp. 205-230 (1999)
import sys, os
import time

import numpy as np
import matplotlib.pyplot as plt

jcm_optimizer_path = r"<JCM_OPTIMIZER_PATH>"
sys.path.insert(0, os.path.join(jcm_optimizer_path, "interface", "python"))
from jcmoptimizer import Server, Client, Study, Observation
server = Server()
client = Client(server.host)

# Definition of the search domain
design_space = [
    {'name': 'x1', 'type': 'continuous', 'domain': (1, 4)},
    {'name': 'x2', 'type': 'continuous', 'domain': (-1.5, 1.5)},
]

# Creation of the study object with study_id 'multi_objective_optimization'
study = client.create_study(
    design_space=design_space,
    driver="ActiveLearning",
    name="Multi-objective optimization",
    study_id="multi_objective_optimization"
)

# Lower and upper reference points for the hypervolume definition (see figure)
lower_ref = [0, 0]
upper_ref = [5, 50]

study.configure(
    max_iter=50,
    surrogates=[
        # A multi-output Gaussian process that learns the dependence of
        # the vectorial model on the design parameters.
        dict(type="GP", name="model_vector", output_dim=2,
             correlate_outputs=False)
    ],
    variables=[
        # Selectors for the values of f1 and f2
        dict(type="SingleSelector", name="f1",
             input="model_vector", select_by_name="model_vector0"),
        dict(type="SingleSelector", name="f2",
             input="model_vector", select_by_name="model_vector1")
    ],
    objectives=[
        # Multi-minimization objective for f1, f2
        dict(type="MultiMinimizer", variables=["f1", "f2"], name="objective",
             lower_reference=lower_ref, upper_reference=upper_ref)
    ]
)

# Evaluation of the black-box function for specified design parameters
def evaluate(study: Study, x1: float, x2: float) -> Observation:
    time.sleep(2)  # make the objective expensive
    observation = study.new_observation()
    # Determine the two objective values f1 = x1 and f2 = g(x2)/x1
    g = 20 + (x2**2 - 10*np.cos(2*np.pi*x2))
    observation.add([x1, g/x1])
    return observation

# Run the minimization
study.set_evaluator(evaluate)
study.run()

plt.figure(figsize=(10, 5))

# Plot all observations of (f1, f2)
f1 = study.driver.get_observed_values("variable", "f1")
f2 = study.driver.get_observed_values("variable", "f2")
plt.plot(f1["means"], f2["means"], "o", label="Dominated observations")

# Plot the estimate of the Pareto front, i.e. the nondominated observations
pf = study.driver.get_state("pareto_front")
nondom_f1, nondom_f2 = np.array(pf["f1"]), np.array(pf["f2"])
plt.plot(nondom_f1, nondom_f2, "o", label="Pareto front estimate")

# Plot the analytic Pareto front
X = np.linspace(1, 4, 100)
plt.plot(X, 10/X, "k--", label="Analytic Pareto front")

# Plot the nondominated hypervolume
plt.plot(*lower_ref, "rx", label="Lower reference")
plt.plot(*upper_ref, "bx", label="Upper reference")
nondom_f2_aug = np.hstack((upper_ref[1], nondom_f2))
X = [lower_ref[0]] + np.repeat(nondom_f1, 2).tolist() + [upper_ref[0]]
nondom_lower = [lower_ref[1]]*len(X)
nondom_upper = np.repeat(nondom_f2_aug, 2).tolist()
plt.fill_between(X, nondom_lower, nondom_upper, alpha=0.2, color="blue",
                 label="Nondominated hypervolume")

plt.xlabel("f1")
plt.ylabel("f2")
plt.legend(loc="upper center")
plt.grid()
plt.savefig("multi_objective_minimization.svg", transparent=True)
Result of the multi-objective minimization in the \((f_1, f_2)\)-plane.
The blue dots show all observations, whereas the orange dots show the nondominated solutions, i.e. an estimate of the Pareto front. The dashed line shows the true analytic Pareto front within the domain of \((x_1, x_2)\).
The progress of the study is shown in terms of the nondominated hypervolume (blue shaded area) that is defined by the nondominated observations as well as a lower and upper reference point (red and blue crosses).
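The shaded area in the figure can also be computed directly. The following is a minimal sketch (not part of the driver script) of the staircase area between the lower reference point and a 2-D front estimate; the function name is chosen here for illustration:

```python
import numpy as np

def nondominated_hypervolume_2d(front, lower_ref, upper_ref):
    """Area between the lower reference point and the staircase of a
    2-D Pareto front estimate (minimization), matching the shaded
    region in the figure."""
    f1 = np.asarray([p[0] for p in front], dtype=float)
    f2 = np.asarray([p[1] for p in front], dtype=float)
    # sort the front by increasing f1 (so f2 decreases along the front)
    order = np.argsort(f1)
    f1, f2 = f1[order], f2[order]
    # widths of the staircase segments along the f1 axis
    edges = np.concatenate(([lower_ref[0]], f1, [upper_ref[0]]))
    widths = np.diff(edges)
    # heights: upper_ref[1] before the first front point, then the
    # front values, each measured from the lower reference in f2
    heights = np.concatenate(([upper_ref[1]], f2)) - lower_ref[1]
    return float(np.sum(widths * heights))
```

For the three front points \((1, 10)\), \((2, 5)\), \((4, 2.5)\) with the reference points of the script, this gives \(1\cdot 50 + 1\cdot 10 + 2\cdot 5 + 1\cdot 2.5 = 72.5\).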