Pre-compute loss function in `finite_horizon`
LarrySnyder opened this issue · 0 comments
LarrySnyder commented
I tried working on this, but it didn't seem to speed things up much. Why?

I added this code right before the main loop:
```python
# Pre-calculate standard normal loss function values on a uniform grid.
min_z, max_z, step_z = -4, 4, 0.01
loss_table = lf.standard_normal_loss_dict(start=min_z, stop=max_z, step=step_z)
comp_table = lf.standard_normal_loss_dict(start=min_z, stop=max_z, step=step_z, complementary=True)
```
And this code where $n(y)$ and $\bar{n}(y)$ are calculated:
```python
if [...some option is set...]:
    # Standardize y, then look up the tabulated loss values.
    z = (y - demand_mean[t]) / demand_sd[t]
    if z < min_z:
        # Far left tail: n(y) ~ mu - y, n_bar(y) ~ 0.
        n = demand_mean[t] - y
        n_bar = 0
    elif z > max_z:
        # Far right tail: n(y) ~ 0, n_bar(y) ~ y - mu.
        n = 0
        n_bar = y - demand_mean[t]
    else:
        # Scale the standard normal loss values by sigma.
        n = nearest_dict_value(z, loss_table) * demand_sd[t]
        n_bar = nearest_dict_value(z, comp_table) * demand_sd[t]
else:
    n, n_bar = lf.normal_loss(y, demand_mean[t], demand_sd[t])
```
Maybe `nearest_dict_value` is too slow?
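That seems plausible: if `nearest_dict_value` searches the dict's keys for the closest one, each lookup costs O(n) (or O(log n) with bisection), which could easily eat the savings over computing the loss directly. Since the grid is uniform, the nearest entry can instead be found in O(1) by index arithmetic. A minimal sketch of that idea, assuming the table is stored as a NumPy array (the `nearest_loss` helper and array tables are hypothetical, not part of the library; `scipy.stats.norm` is used here to build the table):

```python
import numpy as np
from scipy.stats import norm

# Tabulate n(z) = phi(z) - z * (1 - Phi(z)) on a uniform grid,
# and the complementary loss n_bar(z) = n(z) + z.
min_z, max_z, step_z = -4.0, 4.0, 0.01
z_grid = np.arange(min_z, max_z + step_z / 2, step_z)
loss_table = norm.pdf(z_grid) - z_grid * norm.sf(z_grid)  # n(z)
comp_table = loss_table + z_grid                          # n_bar(z)

def nearest_loss(z):
    """Return (n(z), n_bar(z)) from the nearest grid point, in O(1)."""
    i = int(round((z - min_z) / step_z))      # direct index, no key search
    i = max(0, min(i, len(z_grid) - 1))       # clamp to the table's range
    return loss_table[i], comp_table[i]
```

This keeps the same pre-computation cost but makes each lookup a constant-time array access, so it should be easy to time against the current `nearest_dict_value` version and see whether the lookup really is the bottleneck.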