not an issue/just a bit of discussion
PaulSoderlind opened this issue · 6 comments
PaulSoderlind commented
Hi,

I took a quick look at your MLE tutorial. It looks nice. I just have one question and then a suggestion:

- Why not `optimize(β -> -log_likelihood(X, y, β)...)`? Is it because you want to reuse the `nll`?
- I also have some tutorials on MLE. They include a simple example of both traditional and robust standard errors. Maybe of interest.
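For concreteness, a minimal sketch of the closure-based call being suggested, assuming Optim.jl; the `log_likelihood` function, model, and data below are hypothetical stand-ins for whatever the tutorial defines.

```julia
# Sketch only: minimize the negative log-likelihood through an anonymous
# closure, rather than defining a named nll function first.
using Optim

# Hypothetical log-likelihood for a linear model with unit-variance
# Gaussian errors.
log_likelihood(X, y, β) = -0.5 * sum(abs2, y .- X * β)

X = [ones(100) randn(100)]                 # intercept + one regressor
y = X * [1.0, 2.0] .+ randn(100)           # synthetic data

res = optimize(β -> -log_likelihood(X, y, β), zeros(2), BFGS())
β̂ = Optim.minimizer(res)
```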
johnmyleswhite commented
- Yes, I wanted to reuse `nll` in other places.
- I'll add a pointer to your tutorial.
PaulSoderlind commented
Thanks. I was merely suggesting adding something on the Hessian/gradients.

Best, Paul
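As a hedged illustration of what such an addition might cover: classical ML standard errors come from the inverse Hessian of the negative log-likelihood at the optimum, and robust (sandwich) errors combine it with the outer product of per-observation scores. This sketch reuses the hypothetical `log_likelihood`, `X`, `y`, and `β̂` from the example above.

```julia
# Sketch only: standard errors from the Hessian, for the hypothetical
# Gaussian model above. ForwardDiff supplies the Hessian automatically.
using ForwardDiff, LinearAlgebra

nll(β) = -log_likelihood(X, y, β)

H = ForwardDiff.hessian(nll, β̂)            # observed information matrix
se_classical = sqrt.(diag(inv(H)))          # traditional ML standard errors

# Robust (sandwich) variance: H⁻¹ S H⁻¹, with S the sum of outer products
# of per-observation scores. For this model the score of observation i is
# xᵢ * (yᵢ - xᵢ'β̂).
resid = y .- X * β̂
S = (X .* resid)' * (X .* resid)
se_robust = sqrt.(diag(inv(H) * S * inv(H)))
```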
johnmyleswhite commented
No worries: I think it's a great idea to link people to a discussion of robust standard errors.
johnmyleswhite commented
Closed by e8fc116
davidxiaoyuxu commented
This tutorial looks nice!

I am running an MLE, but the likelihood function is based on numerical integration, so I don't have analytical expressions for the Hessian. It would be great if there were an example covering this case.
johnmyleswhite commented
Sounds like you need finite differences. Have you tried https://github.com/JuliaDiff/FiniteDiff.jl?
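A hedged sketch of what that could look like, using FiniteDiff.jl to take the Hessian when the likelihood is only available through quadrature; the latent-variable model below is invented for illustration.

```julia
# Sketch: MLE where each likelihood evaluation requires numerical
# integration (here via QuadGK), so the Hessian is taken by finite
# differences with FiniteDiff.jl. The model is hypothetical.
using FiniteDiff, Optim, QuadGK, LinearAlgebra

φ(z) = exp(-0.5 * z^2) / sqrt(2π)           # standard normal pdf

# yᵢ = θ₁ + θ₂uᵢ + εᵢ with latent u ~ N(0,1): the density of each yᵢ
# integrates u out numerically.
function loglik(θ, y)
    sum(y) do yᵢ
        p, _ = quadgk(u -> φ(yᵢ - θ[1] - θ[2] * u) * φ(u), -8, 8)
        log(p)
    end
end

y = 0.5 .+ randn(200) .+ randn(200)         # synthetic data, θ_true ≈ (0.5, 1)

res = optimize(θ -> -loglik(θ, y), [0.0, 0.5], NelderMead())
θ̂ = Optim.minimizer(res)

# Finite-difference Hessian of the negative log-likelihood at the optimum;
# its inverse approximates the asymptotic covariance of θ̂.
H = FiniteDiff.finite_difference_hessian(θ -> -loglik(θ, y), θ̂)
se = sqrt.(diag(inv(H)))
```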