hytest-org/workflow-hodson-2022-objective-benchmark

Bug in gradient


There is a bug in the gradient code.
The current implementation seems fine as long as the gradient is given as a vector of length n, but it fails when the gradient is passed as a scalar.

For example, to apply a scalar gradient term (as in a change of units), you currently have to expand it into a vector of length n:

```python
l1 = normal_ll(y, y_hat, transform=lambda x: x * 5, gradient=np.repeat(5, n))
l2 = normal_ll(y, y_hat)
```
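
A possible fix would be to broadcast the gradient inside `normal_ll` so that scalars and length-n vectors are both accepted. The sketch below is only illustrative: the internals (how the transform and the Jacobian/gradient correction enter the log-likelihood) are my assumption, not the actual implementation, and only the call signature mirrors the example above.

```python
import numpy as np

def normal_ll(y, y_hat, transform=None, gradient=1.0):
    """Sketch only: normal log-likelihood with an optional transform and a
    Jacobian (gradient) correction. Internals are assumed, not taken from
    the real implementation."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)

    if transform is not None:
        y = transform(y)
        y_hat = transform(y_hat)

    # Broadcast the gradient so a scalar (e.g. a unit-conversion factor)
    # behaves the same as a vector of length n.
    gradient = np.broadcast_to(np.asarray(gradient, dtype=float), y.shape)

    n = y.size
    sigma2 = np.mean((y - y_hat) ** 2)
    ll = -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * n
    # Jacobian correction for the change of variables.
    return ll + np.sum(np.log(np.abs(gradient)))
```

With a broadcast like this, `normal_ll(y, y_hat, transform=lambda x: x * 5, gradient=5)` would work directly, without the `np.repeat(5, n)` workaround.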