tsmodels/tsgarch

performance

Closed this issue · 5 comments

Could you please explain whether tsgarch improves rugarch in terms of performance? If yes, it would be interesting to rerun the benchmarks in this paper: https://www.zora.uzh.ch/id/eprint/186901/7/SSRN-id3551503.pdf.

For the dmbp benchmark it does not, by default, improve speed relative to rugarch; in fact it is roughly twice as slow (though we are already talking about sub-second timings). Profiling the code shows that the most expensive part is the evaluation of the inequality for the persistence constraint, which on closer inspection is based on an index subset operation in data.table. Setting options(datatable.optimize = 2L) brings the timings in line with rugarch. Reproducible example:

  • rugarch
library(microbenchmark)
library(rugarch)
data(dmbp)
spec <- ugarchspec(variance.model=list(model="sGARCH", garchOrder=c(1, 1)), mean.model=list(armaOrder=c(0, 0), include.mean=TRUE))
microbenchmark(ugarchfit(data=dmbp, spec=spec), times=10, unit = "seconds")
Unit: seconds
                                expr       min        lq     mean    median        uq       max neval
 ugarchfit(data = dmbp, spec = spec) 0.1352751 0.1487088 0.152324 0.1510626 0.1546141 0.1834863    10
  • tsgarch (without optimization)
library(microbenchmark)
library(tsgarch)
library(xts)
data("dmbp", package = "rugarch") # the dmbp data set ships with rugarch
dmbp <- xts(dmbp[,1], as.Date(1:nrow(dmbp)))
colnames(dmbp) <- "dmbp"
spec <- garch_modelspec(dmbp, constant = TRUE)
microbenchmark(estimate(object = spec, control = nloptr_fast_options()), times=10, unit = "seconds")
Unit: seconds
                                                     expr       min        lq     mean    median        uq       max neval
 estimate(object = spec, control = nloptr_fast_options()) 0.4084726 0.4758601 0.544552 0.5502915 0.5908789 0.7016986    10
  • tsgarch (with optimization)
library(microbenchmark)
library(tsgarch)
library(xts)
options(datatable.optimize = 2L)
data("dmbp", package = "rugarch") # the dmbp data set ships with rugarch
dmbp <- xts(dmbp[,1], as.Date(1:nrow(dmbp)))
colnames(dmbp) <- "dmbp"
spec <- garch_modelspec(dmbp, constant = TRUE)
microbenchmark(estimate(object = spec, control = nloptr_fast_options()), times=10, unit = "seconds")
Unit: seconds
                                                     expr       min        lq      mean    median        uq       max neval
 estimate(object = spec, control = nloptr_fast_options()) 0.1373453 0.1579862 0.1703421 0.1671855 0.1815243 0.2215674    10
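For context, the inequality being profiled above is, in the sGARCH(1,1) case, just the usual stationarity condition alpha1 + beta1 < 1. The sketch below is a hypothetical illustration of the kind of keyed data.table subset involved (the table and column names are made up, not tsgarch's internals); it is this style of index-based subsetting whose query optimization is controlled by options(datatable.optimize):

```r
library(data.table)

# hypothetical parameter table, keyed by parameter name
pars <- data.table(parameter = c("omega", "alpha1", "beta1"),
                   value = c(0.01, 0.05, 0.90))
setkey(pars, parameter)

# index-based subset with an aggregate in j; repeated calls like this
# inside an optimizer loop are sensitive to datatable.optimize
persistence <- pars[c("alpha1", "beta1"), sum(value)]
persistence < 1  # stationarity (persistence) constraint for GARCH(1,1)
```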

So the new package does not improve rugarch at all in terms of speed?

The primary objective was not speed improvements. Details on some of the differences and reasons for the re-implementation are here.

I thought the use of CppAD would make estimations faster.

There are a couple of key differences between rugarch and tsgarch:

rugarch uses a derivative-free solver (Rsolnp), which has worked quite well for these types of problems. tsgarch makes use of TMB (which is built on CppAD) for automatic differentiation, and also evaluates the Jacobian of the constraints. Additionally, to handle parameter-scaling issues which often crop up (particularly with the GARCH intercept, which is usually much smaller than the other parameters), there is a second pass through the solver using a scaling matrix based on the Hessian of the first-pass solution. All of this adds overhead for the sake of higher accuracy/correctness.
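The two-pass scaling idea can be illustrated with base R: run the solver once on the raw parameters, then rescale using the curvature (Hessian) at the first-pass solution and solve again. This is only a sketch of the general idea using stats::optim and a made-up badly scaled objective, not tsgarch's actual TMB/nloptr implementation:

```r
# objective with deliberately mismatched parameter scales:
# minimum at p = c(1e-4, 1)
badly_scaled <- function(p) (1e4 * p[1] - 1)^2 + (p[2] - 1)^2

# first pass on raw parameters
pass1 <- optim(c(0, 0), badly_scaled, method = "BFGS")

# scaling derived from the Hessian at the first-pass solution:
# larger curvature -> smaller step scale for that parameter
H <- optimHess(pass1$par, badly_scaled)
scale <- 1 / sqrt(diag(H))

# second pass with the Hessian-based scaling (here via optim's parscale)
pass2 <- optim(pass1$par, badly_scaled, method = "BFGS",
               control = list(parscale = scale))
```

In tsgarch the scaling targets the GARCH intercept in particular, which sits on a much smaller scale than the persistence parameters.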

With regard to comparisons with Julia, it's hard to get comparable performance without digging much deeper. My initial profiling shows that the actual likelihood evaluation in C++ is quite fast (not as fast as Julia's, but not far off); the overhead is in the R code and in the calls to the persistence constraint and other checks. It would probably be possible to improve performance, but not without significant time and effort.