resibots/limbo

Poor results for different objective functions in Limbo

langongjin opened this issue · 12 comments

Hi, I tried different objective functions. Here is an example that does not produce a good result.
In my understanding, we have to set the parameters (including the kernel model, etc.) and the acquisition function (which is part of the parameters) to find suitable points, plus an objective function to be fitted.

Here are my parameters:

```cpp
struct Params {
    struct bayes_opt_boptimizer : public defaults::bayes_opt_boptimizer {
    };

    // depending on which internal optimizer we use, we need to import different parameters
#ifdef USE_NLOPT
    struct opt_nloptnograd : public defaults::opt_nloptnograd {
    };
#elif defined(USE_LIBCMAES)
    struct opt_cmaes : public defaults::opt_cmaes {
    };
#else
    struct opt_gridsearch : public defaults::opt_gridsearch {
    };
#endif

    struct kernel : public defaults::kernel {
        BO_PARAM(double, noise, 0.000001);
    };

    struct bayes_opt_bobase : public defaults::bayes_opt_bobase {
    };

    struct kernel_maternfivehalves : public defaults::kernel_maternfivehalves {
        BO_PARAM(double, sigma_sq, 1);
        BO_PARAM(double, l, 1);
    };

    struct init_randomsampling : public defaults::init_randomsampling {
        BO_PARAM(int, samples, 5);
    };

    struct stop_maxiterations : public defaults::stop_maxiterations {
        BO_PARAM(int, iterations, 50); // we stop after 50 iterations
    };

    // we use the default parameters for acqui_ucb
    struct acqui_ucb : public defaults::acqui_ucb {
        // UCB(x) = \mu(x) + \alpha \sigma(x)
        BO_PARAM(double, alpha, 0.5); // default alpha = 0.5
    };
};
```

Here is the evaluation function:

```cpp
struct Eval {
    // number of input dimensions (x.size())
    BO_PARAM(size_t, dim_in, 1);
    // number of dimensions of the result (res.size())
    BO_PARAM(size_t, dim_out, 1);

    // the function to be optimized
    Eigen::VectorXd operator()(const Eigen::VectorXd& x) const
    {
        //double y = -((5 * x(0) - 2.5) * (5 * x(0) - 2.5)) + 5;
        double y = (x(0) - 0.5) * sin(15 * (x(0) - 0.4));
        //double y = 0.1 - (0.4 * x(0) - 0.3) * (0.4 * x(0) - 0.3);

        // we return a 1-dimensional vector
        return tools::make_vector(y);
    }
};
```

Here is the main function:

```cpp
int main()
{
    // we use the default acquisition function / model / stat / etc.
    bayes_opt::BOptimizer<Params> boptimizer;
    // run the optimization
    boptimizer.optimize(Eval());
    // the best sample found
    std::cout << "Best sample: " << boptimizer.best_sample()(0)
              << " - Best observation: " << boptimizer.best_observation()(0) << std::endl;
    return 0;
}
```

Here is the output:

```text
/Users/lan/projects/bayesian/LimboTest/cmake-build-debug/LimboTest
0 new point: 1 value: 0.206059 best:0.336748
1 new point: 1 value: 0.206059 best:0.336748
2 new point: 1 value: 0.206059 best:0.336748
3 new point: 1 value: 0.206059 best:0.336748
4 new point: 1 value: 0.206059 best:0.336748
5 new point: 1 value: 0.206059 best:0.336748
6 new point: 1 value: 0.206059 best:0.336748
7 new point: 1 value: 0.206059 best:0.336748
8 new point: 1 value: 0.206059 best:0.336748
9 new point: 1 value: 0.206059 best:0.336748
10 new point: 1 value: 0.206059 best:0.336748
11 new point: 1 value: 0.206059 best:0.336748
12 new point: 1 value: 0.206059 best:0.336748
13 new point: 1 value: 0.206059 best:0.336748
14 new point: 1 value: 0.206059 best:0.336748
15 new point: 1 value: 0.206059 best:0.336748
16 new point: 1 value: 0.206059 best:0.336748
17 new point: 1 value: 0.206059 best:0.336748
18 new point: 1 value: 0.206059 best:0.336748
19 new point: 1 value: 0.206059 best:0.336748
20 new point: 1 value: 0.206059 best:0.336748
21 new point: 1 value: 0.206059 best:0.336748
22 new point: 1 value: 0.206059 best:0.336748
23 new point: 1 value: 0.206059 best:0.336748
24 new point: 1 value: 0.206059 best:0.336748
25 new point: 1 value: 0.206059 best:0.336748
26 new point: 1 value: 0.206059 best:0.336748
27 new point: 1 value: 0.206059 best:0.336748
28 new point: 1 value: 0.206059 best:0.336748
29 new point: 1 value: 0.206059 best:0.336748
30 new point: 1 value: 0.206059 best:0.336748
31 new point: 1 value: 0.206059 best:0.336748
32 new point: 1 value: 0.206059 best:0.336748
33 new point: 1 value: 0.206059 best:0.336748
34 new point: 1 value: 0.206059 best:0.336748
35 new point: 1 value: 0.206059 best:0.336748
36 new point: 1 value: 0.206059 best:0.336748
37 new point: 1 value: 0.206059 best:0.336748
38 new point: 1 value: 0.206059 best:0.336748
39 new point: 1 value: 0.206059 best:0.336748
40 new point: 1 value: 0.206059 best:0.336748
41 new point: 1 value: 0.206059 best:0.336748
42 new point: 1 value: 0.206059 best:0.336748
43 new point: 1 value: 0.206059 best:0.336748
44 new point: 1 value: 0.206059 best:0.336748
45 new point: 1 value: 0.206059 best:0.336748
46 new point: 1 value: 0.206059 best:0.336748
47 new point: 1 value: 0.206059 best:0.336748
48 new point: 1 value: 0.206059 best:0.336748
49 new point: 1 value: 0.206059 best:0.336748
Best sample: 0.888727 - Best observation: 0.336748
```

The correct result should be around (0.934, 0.429).
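As a quick sanity check of that optimum:

```math
f(0.934) = (0.934 - 0.5)\,\sin\!\big(15\,(0.934 - 0.4)\big)
\approx 0.434 \cdot \sin(8.01) \approx 0.434 \cdot 0.988 \approx 0.429
```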

Here is the figure of the objective function:

*[figure of the objective function]*

I tried setting different values of alpha to get different amounts of exploration, but the new points still show no more diversity.
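For example, this is what I mean by changing alpha (a sketch; the exact values I tried are illustrative):

```cpp
// Larger alpha weights sigma(x) more heavily in UCB(x) = mu(x) + alpha * sigma(x),
// i.e., more exploration; the default above is 0.5.
struct acqui_ucb : public defaults::acqui_ucb {
    BO_PARAM(double, alpha, 2.0);
};
```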
I have also tried the following function:
`double y = 0.1 - (0.2 * x(0) - 0.1) * (0.2 * x(0) - 0.1);`
The results have more diversity:
```text
0 new point: 0 value: 0.09 best:0.0992763
1 new point: 0.6 value: 0.0996 best:0.0996
2 new point: 0.4 value: 0.0996 best:0.0996
3 new point: 0.2 value: 0.0964 best:0.0996
4 new point: 0.6 value: 0.0996 best:0.0996
5 new point: 0.4 value: 0.0996 best:0.0996
6 new point: 0.6 value: 0.0996 best:0.0996
7 new point: 0.4 value: 0.0996 best:0.0996
8 new point: 0.6 value: 0.0996 best:0.0996
9 new point: 0.4 value: 0.0996 best:0.0996
10 new point: 0.6 value: 0.0996 best:0.0996
11 new point: 0.4 value: 0.0996 best:0.0996
12 new point: 0.6 value: 0.0996 best:0.0996
13 new point: 0.4 value: 0.0996 best:0.0996
14 new point: 0.6 value: 0.0996 best:0.0996
15 new point: 0.4 value: 0.0996 best:0.0996
16 new point: 0.6 value: 0.0996 best:0.0996
17 new point: 0.4 value: 0.0996 best:0.0996
18 new point: 0.6 value: 0.0996 best:0.0996
19 new point: 0.4 value: 0.0996 best:0.0996
20 new point: 0.6 value: 0.0996 best:0.0996
21 new point: 0.4 value: 0.0996 best:0.0996
22 new point: 0.6 value: 0.0996 best:0.0996
23 new point: 0.4 value: 0.0996 best:0.0996
24 new point: 0.6 value: 0.0996 best:0.0996
25 new point: 0.4 value: 0.0996 best:0.0996
26 new point: 0.6 value: 0.0996 best:0.0996
27 new point: 0.4 value: 0.0996 best:0.0996
28 new point: 0.6 value: 0.0996 best:0.0996
29 new point: 0.4 value: 0.0996 best:0.0996
30 new point: 0.6 value: 0.0996 best:0.0996
31 new point: 0.4 value: 0.0996 best:0.0996
32 new point: 0.6 value: 0.0996 best:0.0996
33 new point: 0.4 value: 0.0996 best:0.0996
34 new point: 0.6 value: 0.0996 best:0.0996
35 new point: 0.4 value: 0.0996 best:0.0996
36 new point: 0.6 value: 0.0996 best:0.0996
37 new point: 0.4 value: 0.0996 best:0.0996
38 new point: 0.6 value: 0.0996 best:0.0996
39 new point: 0.4 value: 0.0996 best:0.0996
40 new point: 0.6 value: 0.0996 best:0.0996
41 new point: 0.4 value: 0.0996 best:0.0996
42 new point: 0.6 value: 0.0996 best:0.0996
43 new point: 0.4 value: 0.0996 best:0.0996
44 new point: 0.6 value: 0.0996 best:0.0996
45 new point: 0.4 value: 0.0996 best:0.0996
46 new point: 0.6 value: 0.0996 best:0.0996
47 new point: 0.4 value: 0.0996 best:0.0996
48 new point: 0.6 value: 0.0996 best:0.0996
49 new point: 0.4 value: 0.0996 best:0.0996
Best sample: 0.6 - Best observation: 0.0996
```

I am not very familiar with Limbo, so what is the problem in this example?

Hi, could anybody give me a hand? I tried to fix this problem; I think it is related to the acquisition function (UCB), but the problem is still there. Thanks!

Hi @langongjin,
There is definitely something weird in your experiment here. I don't see why Limbo would not be able to find the optimal solution of this objective function.

Could you try reducing the "l" parameter of the kernel function (`BO_PARAM(double, l, 1);`) to something small, like 0.1?
This parameter controls the radius of influence of your evaluations on the surrogate model; if it is too large, the model will be flattened by your initial sampling.

Note that your sigma_sq is quite large too. You can try several orders of magnitude smaller, like 0.001.
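For example, something like this (the values are just starting points to try, not tuned):

```cpp
struct kernel_maternfivehalves : public defaults::kernel_maternfivehalves {
    BO_PARAM(double, sigma_sq, 0.001); // was 1
    BO_PARAM(double, l, 0.1);          // was 1
};
```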

Let me know how it goes.

@langongjin I believe @jbmouret is right to wonder if nlopt and/or libcmaes are installed. Even if they are installed, they need to be found via CMake (since you are using CMake) and linked to limbo. Are you sure you are doing this?

Thanks all, I think you are right. The ./waf build outputs good results:
```text
build/exp/test/test
0 new point: 1 value: 0.5 best:0.5
1 new point: 1 value: 0.5 best:0.5
2 new point: 0.556872 value: 0.498036 best:0.5
3 new point: 0.895507 value: 0.499891 best:0.5
4 new point: 1 value: 0.5 best:0.5
5 new point: 3.98928e-15 value: 0.49 best:0.5
6 new point: 0.882944 value: 0.499863 best:0.5
7 new point: 1 value: 0.5 best:0.5
8 new point: 0.574912 value: 0.498193 best:0.5
9 new point: 1 value: 0.5 best:0.5
10 new point: 0.872881 value: 0.499838 best:0.5
11 new point: 0.691326 value: 0.499047 best:0.5
12 new point: 1 value: 0.5 best:0.5
13 new point: 0.876253 value: 0.499847 best:0.5
14 new point: 0.491411 value: 0.497413 best:0.5
15 new point: 0.879074 value: 0.499854 best:0.5
16 new point: 1 value: 0.5 best:0.5
.....
```
And I think I have installed NLOpt and libcmaes, as we can see:
```text
./waf configure --exp test
WARNING: simplejson not found some function may not work
WARNING: brewer2mpl (colors) not found
YELLOW: Could not import plot_bo_benchmarks! Will not plot anything!
WARNING: simplejson not found some function may not work
Setting top to : /Users/lan/projects/bayesian/limbo-master
Setting out to : /Users/lan/projects/bayesian/limbo-master/build
Checking for 'clang++' (C++ compiler) : /usr/bin/clang++
Checking for 'clang' (C compiler) : /usr/bin/clang
Checking for compiler flags "-march=native" : yes
Checking boost includes : 1_65_1
Checking boost libs : ok
Checking for Eigen : /usr/local/include/eigen3
Checking Intel TBB includes (optional) : /usr/local/include
Checking Intel TBB libs (optional) : /usr/local/lib
Checking for compiler option to support OpenMP : Not supported
Checking Intel MKL includes (optional) : Not found
Checking for NLOpt C++ includes (optional) : /usr/local/include
Checking for NLOpt C++ libs (optional) : /usr/local/lib
Checking for libcmaes includes (optional) : /usr/local/include
Checking for libcmaes libs (optional) : /usr/local/lib
CXXFLAGS: ['-Wall', '-std=c++11', '-fdiagnostics-color', '-O3', '-g', '-march=native']
configuring for exp: test
.................
'configure' finished successfully (0.504s)
```
So the problem may be in my CMakeLists.txt. I added

```cmake
find_package(NLOpt REQUIRED)
find_package(libcmaes REQUIRED)
```

to the CMakeLists.txt, but NLOpt cannot be found; CMake fails with these errors:
```text
CMake Error at CMakeLists.txt:12 (find_package):
By not providing "FindNLOpt.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "NLOpt", but
CMake did not find one.
Could not find a package configuration file provided by "NLOpt" with any of
the following names:
NLOptConfig.cmake
nlopt-config.cmake
Add the installation prefix of "NLOpt" to CMAKE_PREFIX_PATH or set
"NLOpt_DIR" to a directory containing one of the above files. If "NLOpt"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
```

My CMakeLists.txt is as follows:

```cmake
cmake_minimum_required(VERSION 3.8)
project(LimboTest)

find_package(Eigen3 REQUIRED)
find_package(Boost REQUIRED COMPONENTS
    system
    filesystem
    thread
    regex
    unit_test_framework)
find_package(NLOpt REQUIRED)
find_package(libcmaes REQUIRED)

if(Boost_FOUND)
    include_directories(${Boost_INCLUDE_DIRS})
else(Boost_FOUND)
    find_library(Boost boost PATHS /opt/local/lib)
    include_directories(${Boost_LIBRARY_PATH})
endif()

#set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

set(SOURCE_FILES main.cpp)
include_directories(${EIGEN3_INCLUDE_DIR})
add_executable(LimboTest ${SOURCE_FILES})
target_link_libraries(LimboTest ${TBB_LIBRARIES} ${Boost_LIBRARIES})
```

How should I set up the CMakeLists.txt?
Thanks!

I do not know about CMake (I do not know whether NLOpt and libcmaes provide CMake files), but you need (see the sketch after this list):

  • to provide the path to the NLOpt and/or CMA-ES includes (this might be done by the CMake file)
  • to link against NLOpt and/or CMA-ES
  • to define USE_NLOPT and/or USE_LIBCMAES
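Something along these lines might work (untested, since I do not use CMake; the paths are assumptions based on your configure output above):

```cmake
# Locate NLOpt by hand instead of relying on a FindNLOpt module
find_path(NLOPT_INCLUDE_DIR nlopt.hpp PATHS /usr/local/include)
find_library(NLOPT_LIBRARY nlopt PATHS /usr/local/lib)

if(NLOPT_INCLUDE_DIR AND NLOPT_LIBRARY)
    include_directories(${NLOPT_INCLUDE_DIR})
    target_link_libraries(LimboTest ${NLOPT_LIBRARY})
    # limbo picks its internal optimizer based on this compile definition
    target_compile_definitions(LimboTest PRIVATE USE_NLOPT)
endif()
```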

@jbmouret yes, thanks for the advice.

  • I included the NLOpt wrapper header directly (#include "limbo/opt/nlopt_no_grad.hpp").
  • I added "find_package(NLOpt REQUIRED)" to CMakeLists.txt.
  • I defined "#define USE_NLOPT"; note that I only want to test NLOpt.

The CMake build now works, and NLOpt is being used. I modified the bounds to (0, 1.5). The results:
```text
0 new point: 3.13613e-07 value: -0.13971 best:-0.13971
1 new point: 0.0364741 value: -0.342143 best:-0.13971
2 new point: 3.48459e-08 value: -0.139708 best:-0.139708
3 new point: 2.82251e-06 value: -0.139727 best:-0.139708
4 new point: 1 value: 0.206059 best:0.206059
5 new point: 1.15265 value: -0.624611 best:0.206059
6 new point: 0.978548 value: 0.324989 best:0.324989
7 new point: 1.32411 value: 0.793014 best:0.793014
8 new point: 0.945605 value: 0.421548 best:0.793014
9 new point: 0.934987 value: 0.428656 best:0.793014
10 new point: 1.40424 value: 0.543167 best:0.793014
11 new point: 0.93312 value: 0.42871 best:0.793014
12 new point: 1.23655 value: -0.0133982 best:0.793014
13 new point: 0.933145 value: 0.428712 best:0.793014
14 new point: 0.933079 value: 0.428708 best:0.793014
15 new point: 1.15702 value: -0.614975 best:0.793014
16 new point: 0.933253 value: 0.428718 best:0.793014
17 new point: 0.933178 value: 0.428714 best:0.793014
18 new point: 0.933111 value: 0.42871 best:0.793014
19 n[NLOptNoGrad]: nlopt invalid argument
[NLOptNoGrad]: nlopt invalid argument
[NLOptNoGrad]: nlopt invalid argument
[NLOptNoGrad]: nlopt invalid argument
[NLOptNoGrad]: nlopt invalid argument
ew point: 0.933049 value: 0.428705 best:0.793014
20 new point: 0.932993 value: 0.428701 best:0.793014
21 new point: 0.932942 value: 0.428697 best:0.793014
22 new point: 0.932895 value: 0.428693 best:0.793014
23 new point: 0.932852 value: 0.428689 best:0.793014
24 new point: 0.932813 value: 0.428686 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
25 new point: 1.38828 value: 0.686732 best:0.793014
26 new point: 0.932663 value: 0.42867 best:0.793014
27 new point: 0.932639 value: 0.428667 best:0.793014
28 new point: 0.932618 value: 0.428665 best:0.793014
29 new point: 0.932598 value: 0.428663 best:0.793014
30 new point: 0.932579 value: 0.42866 best:0.793014
31 new point: 0.932561 value: 0.428658 best:0.793014
32 new point: 0.932545 value: 0.428656 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
33 new point: 1.00231 value: 0.191 best:0.793014
34 new point: 0.932545 value: 0.428656 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
35 new point: 1.38514 value: 0.710037 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
36 new point: 1.28334 value: 0.494823 best:0.793014
37 new point: 0.932413 value: 0.428639 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
38 new point: 1.23662 value: -0.0125679 best:0.793014
[NLOptNoGrad]: nlopt invalid argument
39 new point: 1.35698 value: 0.836784 best:0.836784
40 new point: 0.932368 value: 0.428633 best:0.836784
41 new point: 0.932369 value: 0.428633 best:0.836784
[NLOptNoGrad]: nlopt invalid argument
42 new point: 1.24422 value: 0.0720583 best:0.836784
43 new point: 0.932408 value: 0.428638 best:0.836784
44 new point: 0.932405 value: 0.428638 best:0.836784
45 new point: 0.932402 value: 0.428638 best:0.836784
46 new point: 0.932398 value: 0.428637 best:0.836784
47 new point: 0.932395 value: 0.428637 best:0.836784
48 new point: 0.932392 value: 0.428636 best:0.836784
49 new point: 0.932388 value: 0.428636 best:0.836784
Best sample: 1.35698 - Best observation: 0.836784
```

The warning "[NLOptNoGrad]: nlopt invalid argument" shows NLOpt is working. But why the results show it is still not learning? I really have no idea. I really really like Limbo because it is a clear BOA framework, and have fast learning process that is improtant for my robot learning. Is it possible anyboy help me to fix this problem? here is my CMake project of Limbo. Thank you very much!
LimboTest.zip

Here is what I get with your code (I think libcmaes is not installed, but I have NLOpt).

```text
0 new point: 1 value: 0.206059 best:0.206059
1 new point: 0.990143 value: 0.265585 best:0.265585
2 new point: 0.92996 value: 0.428004 best:0.428004
3 new point: 0.923993 value: 0.423986 best:0.428004
4 new point: 0.932191 value: 0.428606 best:0.428606
5 new point: 0.93306 value: 0.428706 best:0.428706
6 new point: 0.93338 value: 0.428724 best:0.428724
7 new point: 0.933531 value: 0.428729 best:0.428729
8 new point: 0.933615 value: 0.42873 best:0.42873
9 new point: 0.933668 value: 0.428731 best:0.428731
10 new point: 0.933703 value: 0.428731 best:0.428731
11 new point: 0.933729 value: 0.428731 best:0.428731
12 new point: 0.933748 value: 0.428731 best:0.428731
13 new point: 0.933762 value: 0.428731 best:0.428731
14 new point: 0.933774 value: 0.428731 best:0.428731
15 new point: 0.933783 value: 0.428731 best:0.428731
16 new point: 0.93379 value: 0.428731 best:0.428731
17 new point: 0.933796 value: 0.428731 best:0.428731
18 new point: 0.933801 value: 0.428731 best:0.428731
19 new point: 0.933805 value: 0.428731 best:0.428731
20 new point: 0.933808 value: 0.428731 best:0.428731
21 new point: 0.933811 value: 0.428731 best:0.428731
22 new point: 0.933814 value: 0.428731 best:0.428731
23 new point: 0.933816 value: 0.428731 best:0.428731
24 new point: 0.933818 value: 0.428731 best:0.428731
25 new point: 0.933819 value: 0.428731 best:0.428731
26 new point: 0.933821 value: 0.428731 best:0.428731
27 new point: 0.933822 value: 0.428731 best:0.428731
28 new point: 0.933823 value: 0.428731 best:0.428731
29 new point: 0.933824 value: 0.428731 best:0.428731
30 new point: 0.933824 value: 0.428731 best:0.428731
31 new point: 0.933825 value: 0.428731 best:0.428731
32 new point: 0.933825 value: 0.428731 best:0.428731
33 new point: 0.933826 value: 0.428731 best:0.428731
34 new point: 0.933826 value: 0.428731 best:0.428731
35 new point: 0.933827 value: 0.428731 best:0.428731
36 new point: 0.933827 value: 0.428731 best:0.428731
37 new point: 0.933827 value: 0.428731 best:0.428731
38 new point: 0.933827 value: 0.428731 best:0.428731
39 new point: 0.933827 value: 0.428731 best:0.428731
40 new point: 0.933827 value: 0.428731 best:0.428731
41 new point: 0.933827 value: 0.428731 best:0.428731
42 new point: 0.933827 value: 0.428731 best:0.428731
43 new point: 0.933827 value: 0.428731 best:0.428731
44 new point: 0.933827 value: 0.428731 best:0.428731
45 new point: 0.933827 value: 0.428731 best:0.428731
46 new point: 0.933827 value: 0.428731 best:0.428731
47 new point: 0.933827 value: 0.428731 best:0.428731
48 new point: 0.933827 value: 0.428731 best:0.428731
49 new point: 0.933826 value: 0.428731 best:0.428731
Best sample: 0.933762 - Best observation: 0.428731
```

Another run:

```text
0 new point: 0.0555556 value: -0.399368 best:0.369683
1 new point: 0.967518 value: 0.369683 best:0.369683
2 new point: 0.967518 value: 0.369683 best:0.369683
3 new point: 0.967518 value: 0.369683 best:0.369683
4 new point: 0.967518 value: 0.369683 best:0.369683
5 new point: 0.967518 value: 0.369683 best:0.369683
6 new point: 0.967518 value: 0.369683 best:0.369683
7 new point: 0.967518 value: 0.369683 best:0.369683
8 new point: 0.967518 value: 0.369683 best:0.369683
9 new point: 0.967518 value: 0.369683 best:0.369683
10 new point: 0.967518 value: 0.369683 best:0.369683
11 new point: 0.967518 value: 0.369683 best:0.369683
12 new point: 0.967518 value: 0.369683 best:0.369683
13 new point: 0.967518 value: 0.369683 best:0.369683
14 new point: 0.967518 value: 0.369683 best:0.369683
15 new point: 0.967518 value: 0.369683 best:0.369683
16 new point: 0.967518 value: 0.369682 best:0.369683
17 new point: 0.967518 value: 0.369682 best:0.369683
18 new point: 0.967518 value: 0.369682 best:0.369683
19 new point: 0.967518 value: 0.369682 best:0.369683
20 new point: 0.967518 value: 0.369682 best:0.369683
21 new point: 0.967518 value: 0.369681 best:0.369683
22 new point: 0.967518 value: 0.369681 best:0.369683
23 new point: 0.967519 value: 0.36968 best:0.369683
24 new point: 0.96752 value: 0.369677 best:0.369683
25 new point: 0.968269 value: 0.367024 best:0.369683
26 new point: 0.953967 value: 0.407677 best:0.407677
27 new point: 0.940488 value: 0.426428 best:0.426428
28 new point: 0.926948 value: 0.426409 best:0.426428
29 new point: 0.915588 value: 0.412591 best:0.426428
30 new point: 0.90781 value: 0.396427 best:0.426428
31 new point: 0.903255 value: 0.384625 best:0.426428
32 new point: 0.900843 value: 0.377717 best:0.426428
33 new point: 0.899655 value: 0.374155 best:0.426428
34 new point: 0.899109 value: 0.372482 best:0.426428
35 new point: 0.898881 value: 0.371776 best:0.426428
36 new point: 0.898802 value: 0.371531 best:0.426428
37 new point: 0.898791 value: 0.371497 best:0.426428
38 new point: 0.898808 value: 0.371552 best:0.426428
39 new point: 0.898837 value: 0.371641 best:0.426428
40 new point: 0.898869 value: 0.371741 best:0.426428
41 new point: 0.898902 value: 0.371842 best:0.426428
42 new point: 0.898933 value: 0.371939 best:0.426428
43 new point: 0.898964 value: 0.372033 best:0.426428
44 new point: 0.898993 value: 0.372122 best:0.426428
45 new point: 0.89902 value: 0.372208 best:0.426428
46 new point: 0.899046 value: 0.372289 best:0.426428
47 new point: 0.899072 value: 0.372366 best:0.426428
48 new point: 0.899096 value: 0.372441 best:0.426428
49 new point: 0.899119 value: 0.372512 best:0.426428
Best sample: 0.940488 - Best observation: 0.426428
```

This seems to make sense, as the maximum in [0, 1] appears to be around 0.93:
*[screenshot of the objective function plot, 2018-04-18]*

Please also note that since this is a stochastic algorithm, it can sometimes fail (e.g., 1 run out of 10).

@langongjin the `#define USE_NLOPT` should come before any limbo header include.
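For instance (alternatively, define it from the build system, as in the CMake sketch above):

```cpp
// USE_NLOPT must be visible before limbo chooses its internal optimizer
#define USE_NLOPT
#include <limbo/limbo.hpp> // or any other limbo header
```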

Thanks all, you are right. Actually, it is working; I confirmed this by comparing different objective functions. The poor diversity was due to this particular objective function, whose extreme maximum makes the "mean" term dominate the acquisition function.
It also works in my CMake project now (see the previous attachment), which may be useful for anybody who wants to use Limbo with CMake.
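In terms of the UCB acquisition defined in the parameters above: once the mean μ(x) at the current best dominates μ(x) + ασ(x) everywhere else, the optimizer keeps proposing (nearly) the same point:

```math
\mathrm{UCB}(x) = \mu(x) + \alpha\,\sigma(x), \qquad
x_{t+1} = \arg\max_x \mathrm{UCB}(x)
```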

I am going to test high-dimensional (10-30, or more) objective functions, because I have many parameters to optimize.
Do you have any empirical advice for me? From my review of Bayesian optimization, some research reports that it does not learn quickly in high dimensions and proposes methods to improve this. Can you recommend some papers about how the learning behavior of Bayesian optimization changes with dimensionality?

Thanks!

I am going to close this issue and post a new one about the question I raised at the end.