Benchmarking different studies against each other

Drivers:

BayesianOptimization, ActiveLearning, CMAES, DifferentialEvolution, ScipyMinimizer

Download script:

benchmark.py

In some cases, it is not clear which driver or which driver configuration minimizes an objective function with the smallest number of iterations. If similar minimizations are performed on a regular basis, it can be worthwhile to find the best choice by benchmarking several studies with different drivers or configurations against each other.

As an example, we minimize the 2D Rastrigin function on a circular domain:

\[
\begin{aligned}
\text{min.} \quad & f(x_1,x_2) = 2\cdot 10 + \sum_{i=1,2} \left(x_i^2 - 10\cos(2\pi x_i)\right) \\
\text{s.t.} \quad & \sqrt{x_1^2 + x_2^2} \leq 1.5.
\end{aligned}
\]
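For reference, a minimal stand-alone sketch of this objective (using only NumPy, independent of the benchmark script below) confirms that the global minimum f(0,0) = 0 lies inside the circular domain:

import numpy as np

def rastrigin_2d(x1: float, x2: float) -> float:
    # 2D Rastrigin function as defined above
    return (10*2
            + (x1**2 - 10*np.cos(2*np.pi*x1))
            + (x2**2 - 10*np.cos(2*np.pi*x2)))

assert np.isclose(rastrigin_2d(0.0, 0.0), 0.0)  # global minimum at the origin
assert np.sqrt(0.0**2 + 0.0**2) <= 1.5          # the origin satisfies the circular constraint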

We compare the heuristic minimization methods CMAES and DifferentialEvolution, the multi-start local minimization method ScipyMinimizer, and Bayesian optimization. For the latter, we use both the standard BayesianOptimization driver and the ActiveLearning driver, which is configured to use neural networks instead of a Gaussian process to learn the objective function.

We assume that we can evaluate the function two times in parallel and that each evaluation takes 2 seconds. Hence, the driver has to compute new samples in less than a second to avoid significant overhead. This is an edge case for the Bayesian optimization approach, which typically has a considerable overhead of more than a second. In such cases, the other methods, with their much faster sample computation times, can be more appropriate. Nevertheless, for this multi-modal objective, Bayesian optimization still reaches the global minimum well before the other optimization methods. Gaussian process regression is computationally efficient for this small number of iterations; in comparison, the neural-network training time of the ActiveLearning driver leads to a larger overhead.
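To see why sub-second sampling matters here, a purely illustrative back-of-the-envelope estimate (the per-batch sampling times of 0.1 s and 1 s below are hypothetical) compares the resulting total wall-clock times:

# 250 evaluations with 2 running in parallel -> 125 sequential batches of 2 s each
eval_time_per_batch = 2.0           # seconds per batch of two parallel evaluations
num_batches = 250 // 2
for driver_overhead in (0.1, 1.0):  # hypothetical per-batch sample-computation times
    total = num_batches * (eval_time_per_batch + driver_overhead)
    print(f"{driver_overhead:.1f} s overhead per batch -> "
          f"{total:.0f} s in total ({driver_overhead / eval_time_per_batch:.0%} slower)")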

import sys, os
import numpy as np
import time
import matplotlib.pyplot as plt

jcm_optimizer_path = r"<JCM_OPTIMIZER_PATH>"
sys.path.insert(0, os.path.join(jcm_optimizer_path, "interface", "python"))
from jcmoptimizer import Server, Client, Study, Observation
server = Server()
client = Client(server.host)

# Definition of the search domain
design_space = [
    {'name': 'x1', 'type': 'continuous', 'domain': (-1.5, 1.5)},
    {'name': 'x2', 'type': 'continuous', 'domain': (-1.5, 1.5)},
]

# Definition of a fixed environment parameter
environment = [
    {'name': 'radius', 'type': 'fixed', 'domain': 1.5},
]

# Definition of a constraint on the search domain
constraints = [
    {'name': 'circle', 'expression': 'sqrt(x1^2 + x2^2) <= radius'}
]

# Creation of the studies to benchmark against each other
studies: dict[str, Study] = {}
drivers: list[str] = ["BayesianOptimization", "ActiveLearning", "CMAES",
                      "DifferentialEvolution", "ScipyMinimizer"]
for driver in drivers:
    studies[driver] = client.create_study(
        design_space=design_space,
        environment=environment,
        constraints=constraints,
        driver=driver,
        name=driver,
        study_id=f"benchmark_{driver}",
        open_browser=False
    )

# Configuration of all studies
config_kwargs = dict(max_iter=250, num_parallel=2)
min_val = 1e-3  # stop a study once this objective value has been observed
for driver, study in studies.items():
    if driver == "ActiveLearning":
        study.configure(
            surrogates=[dict(type="NN")],
            objectives=[dict(type="Minimizer", min_val=min_val)],
            **config_kwargs
        )
    elif driver == "ScipyMinimizer":
        # For a more global search, 3 initial gradient-free Nelder-Mead runs are started
        study.configure(method="Nelder-Mead", num_initial=3,
                        min_val=min_val, **config_kwargs)
    else:
        study.configure(min_val=min_val, **config_kwargs)

# Evaluation of the black-box function for specified design parameters
def evaluate(study: Study, x1: float, x2: float, radius: float) -> Observation:
    time.sleep(2)  # make the objective expensive
    observation = study.new_observation()
    observation.add(10*2
                + (x1**2 - 10*np.cos(2*np.pi*x1))
                + (x2**2 - 10*np.cos(2*np.pi*x2))
            )
    return observation

# Creation of a benchmark with 6 repetitions and addition of all five studies
benchmark = client.create_benchmark(num_average=6)
for study in studies.values():
    benchmark.add_study(study)

# Run the benchmark - this will take a while
benchmark.set_evaluator(evaluate)
benchmark.run()

# Plot the cumulative-minimum convergence w.r.t. number of evaluations and time
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True, figsize=(10, 5))
plt.rc("font", family="serif")
plt.subplots_adjust(wspace=0.0)

data = benchmark.get_data(x_type="num_evaluations")
for X, Y, sdev in zip(data["X"], data["Y"], data["sdev"]):
    std_error = np.array(sdev) / np.sqrt(6)
    p = ax1.plot(X, Y, linewidth=2.0)
    ax1.fill_between(X, Y - std_error, Y + std_error, alpha=0.2, color=p[0].get_color())
ax1.grid()
ax1.set_xlabel("Number of Evaluations", fontsize=12)
ax1.set_ylabel("Average Cumulative Minimum", fontsize=12)
ax1.set_ylim(-0.4, 10)

data = benchmark.get_data(x_type="time")
for name, X, Y, sdev in zip(data["names"], data["X"], data["Y"], data["sdev"]):
    std_error = np.array(sdev) / np.sqrt(6)
    p = ax2.plot(X, Y, linewidth=2.0, label=name)
    ax2.fill_between(X, Y - std_error, Y + std_error, alpha=0.2, color=p[0].get_color())
ax2.legend()
ax2.grid()
ax2.set_xlabel("Time (sec)", fontsize=12)
ax2.set_ylim(-0.4, 10)
plt.savefig("benchmark.svg", transparent=True)

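The curves and shaded bands plotted above are the mean cumulative minimum and the standard error over the num_average=6 repetitions. A minimal NumPy sketch of the same statistics, using a placeholder array in place of the recorded objective values, reads:

import numpy as np

# hypothetical raw data: one row of recorded objective values per repeated run
histories = np.random.uniform(0.0, 10.0, size=(6, 250))
cum_min = np.minimum.accumulate(histories, axis=1)  # cumulative minimum of each run
mean_cum_min = cum_min.mean(axis=0)                 # averaged curve shown in the plots
std_error = cum_min.std(axis=0) / np.sqrt(6)        # uncertainty of the mean (shaded band)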
Benchmark results

Left: The cumulative minimum as a function of the number of evaluations of the objective function. The results are averaged over six independent runs of each driver; the shaded area shows the uncertainty of the mean value. Clearly, the BayesianOptimization driver performs best and finds the global minimum of zero after about 70 iterations, while the other methods have not converged to zero even after 250 iterations. Right: The convergence of the cumulative minimum as a function of the total optimization time looks slightly different. As expected, the ActiveLearning and BayesianOptimization drivers show an overhead for computing new samples. With an optimization budget of less than 50 seconds, the multi-start local ScipyMinimizer performs better. It can also be seen that the training of neural networks leads to a larger overhead compared to the Gaussian process regression used by the BayesianOptimization driver.