Use `num_search_each_probe` for parallel evaluation of the objective
taku-y opened this issue · 0 comments
Hi developers of combo,
Thanks for providing a great library. I want to use it for my projects.
I have a question about multiple evaluations at each probe. For a function f(x) that takes a long time to run, I want to compute f(x) for a number of x values in parallel. To test this, I ran tutorial.ipynb with the following modifications:
```python
res = policy.random_search(max_num_probes=5, simulator=simulator(),
                           num_search_each_probe=4)
# Originally, max_num_probes=80 and num_search_each_probe=1.
# I expect the total amount of computation to be unchanged.
res = policy.bayes_search(max_num_probes=20, simulator=simulator(), score='TS',
                          interval=5, num_rand_basis=5000, num_search_each_probe=4)
```
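Concretely, this is the kind of batch evaluation I have in mind on my side (just a sketch: `ParallelSimulator` and `expensive_f` are placeholder names of mine, and I'm assuming that with `num_search_each_probe > 1` the simulator's `__call__` receives an array of action indices into the candidate matrix `X` and should return one objective value per action):

```python
from multiprocessing import Pool

import numpy as np

def expensive_f(x):
    # Stand-in for a costly objective; replace with the real f(x).
    return float(np.sum(np.asarray(x) ** 2))

class ParallelSimulator:
    """Evaluate a batch of suggested candidates in parallel.

    Assumes __call__ is given an array of actions (row indices into
    the candidate matrix X) and must return an array of f values.
    """

    def __init__(self, X, n_workers=4):
        self.X = X
        self.n_workers = n_workers

    def __call__(self, actions):
        actions = np.atleast_1d(actions)
        # Farm the batch out to worker processes, one candidate each.
        with Pool(self.n_workers) as pool:
            fx = pool.map(expensive_f, [self.X[a] for a in actions])
        return np.array(fx)
```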
Is this the right way to do the parallel computation described above?
I'm also concerned about the actual computation time. The modified code seems much slower than the original, specifically in policy.bayes_search():

Original: 99.50355625152588 [sec]
Modified: 1856.6796779632568 [sec]
I expected the computation time to be proportional to the number of evaluations of f(x), so I don't understand the significant difference. That's why I'm wondering whether I'm using parallel evaluation correctly.