Slow query performance with loom crosscat vs. baseline (lovcat) crosscat
versar opened this issue · 4 comments
I compared the performance of queries between loom crosscat and the default baseline (lovcat) crosscat. The queries ESTIMATE SIMILARITY and ESTIMATE DEPENDENCE take longer using loom. The difference is on the order of n_models (a.k.a. the number of ANALYSES), but not exactly.
For example, below are runtimes from identical workflows run with loom and the default crosscat. The runtime differences are similar with multiprocessing on and with it off.
n_models = 32:
- ESTIMATE SIMILARITY with 300 variables: 3.91 seconds default crosscat, 69.9 seconds loom
- ESTIMATE DEPENDENCE with 300 variables: 300 seconds default crosscat; loom interrupted after a long wait
- ESTIMATE DEPENDENCE with 30 variables: 0.22 seconds default crosscat, 9.8 seconds loom

n_models = 16:
- ESTIMATE SIMILARITY with 300 variables: 0.63 seconds default crosscat, 29 seconds loom
- ESTIMATE DEPENDENCE with 300 variables: 148 seconds default crosscat; loom interrupted after a long wait
- ESTIMATE SIMILARITY with 30 variables: 0.62 seconds default crosscat, 5.3 seconds loom
- ESTIMATE DEPENDENCE with 30 variables: 0.17 seconds default crosscat, 5.0 seconds loom

n_models = 2:
- ESTIMATE DEPENDENCE with 30 variables: 0.10 seconds default crosscat, 0.84 seconds loom
- ESTIMATE SIMILARITY with 30 variables: 0.23 seconds default crosscat, 0.93 seconds loom
- ESTIMATE DEPENDENCE with 300 variables: 0.10 seconds default crosscat, 0.84 seconds loom
- ESTIMATE SIMILARITY with 300 variables: 0.23 seconds default crosscat, 0.93 seconds loom
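The roughly n_models-sized gap in the benchmarks above is what you would expect from a per-model Python loop: each model adds one more round of work. A toy sketch (the function and per-model cost are purely illustrative, not bayeslite code):

```python
# Hypothetical sketch: when a query does fixed work once per model,
# total runtime grows linearly with n_models, so doubling the number
# of ANALYSES roughly doubles query latency.
def query_per_model(n_models, per_model_cost=10_000):
    total = 0.0
    for _ in range(n_models):                # one pass per model
        total += sum(range(per_model_cost))  # stand-in for a per-model Loom call
    return total / n_models                  # averaged answer across models

# Work done scales with n_models even though the returned average does not.
```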
We need to convert the Python loop over models to a single SQL lookup:
dependence probability:
bayeslite/src/metamodels/loom_metamodel.py
Lines 500 to 502 in 39e96af
row similarity:
bayeslite/src/metamodels/loom_metamodel.py
Lines 556 to 570 in 39e96af
kind lookup:
bayeslite/src/metamodels/loom_metamodel.py
Lines 507 to 515 in 39e96af
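The shape of the fix can be sketched with sqlite3. The schema, table, and column names below are illustrative stand-ins, not bayeslite's actual Loom storage layout; the point is replacing N per-model round trips with one aggregated query:

```python
import sqlite3

# Hypothetical table: one row per (model, column) kind assignment.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kind_assignments (model INTEGER, colno INTEGER, kind INTEGER)")
conn.executemany(
    "INSERT INTO kind_assignments VALUES (?, ?, ?)",
    [(0, 1, 7), (0, 2, 7), (1, 1, 3), (1, 2, 4)])

def dep_prob_loop(conn, col0, col1, n_models):
    # Slow pattern: a Python loop issuing two lookups per model.
    hits = 0
    for m in range(n_models):
        k0 = conn.execute(
            "SELECT kind FROM kind_assignments WHERE model=? AND colno=?",
            (m, col0)).fetchone()[0]
        k1 = conn.execute(
            "SELECT kind FROM kind_assignments WHERE model=? AND colno=?",
            (m, col1)).fetchone()[0]
        hits += (k0 == k1)
    return hits / n_models

def dep_prob_single(conn, col0, col1):
    # Fast pattern: one self-join aggregating over all models at once.
    return conn.execute("""
        SELECT AVG(a.kind = b.kind)
        FROM kind_assignments a JOIN kind_assignments b
          ON a.model = b.model
        WHERE a.colno = ? AND b.colno = ?""", (col0, col1)).fetchone()[0]

print(dep_prob_loop(conn, 1, 2, 2), dep_prob_single(conn, 1, 2))  # both 0.5
```

Both functions agree on the dependence probability; the single-query version avoids the per-model Python round trips that dominate the runtimes reported above.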
After re-running the benchmarks, @versar reports that the candidate fix in feb1e22 appears not to have improved the runtime. Loom's overhead might come from reading data from disk rather than from memory, so we may consider caching results if that turns out to be the case.
- Retrieve the benchmark test from @versar.
- Re-profile the dependence queries and confirm.
- Decide on next steps, e.g. caching or otherwise.
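If profiling confirms the disk-read hypothesis, the caching option could be as small as memoizing the loader keyed by model path. A sketch (the loader name and counter are illustrative stand-ins, not bayeslite's API):

```python
from functools import lru_cache

# Counter standing in for actual disk I/O, so the cache behavior is visible.
calls = {"disk_reads": 0}

@lru_cache(maxsize=None)
def load_samples(model_path):
    # Stand-in for an expensive Loom read from disk; with lru_cache the
    # read happens once per model path and later queries hit memory.
    calls["disk_reads"] += 1
    return f"samples for {model_path}"
```

A trade-off to note: an unbounded cache pins every model's samples in memory, so a real fix might bound maxsize or invalidate entries after a new ANALYZE.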
Confirmed the issue for both cases (and the same will hold for all Loom queries): the culprit is the per-query invocation of _check_loom_initialized. This check produces good error messages but adds unreasonable computational overhead.
bayeslite/src/metamodels/loom_metamodel.py
Lines 238 to 253 in 39e96af
Resolution plan:
- Fix #586 to significantly reduce the chance a user will encounter this error.
- Remove all invocations of _check_loom_initialized.
- Add a ticket for comprehensible error messages, perhaps using a cached boolean flag or a better SQL query for the check.
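The "cached boolean flag" idea in the last bullet could look roughly like this. Class and method names are hypothetical, not bayeslite's actual API; the point is running the expensive initialization check at most once per generator rather than on every query:

```python
class LoomBackend:
    """Illustrative sketch: memoize the initialization check per generator."""

    def __init__(self):
        self._initialized = {}  # generator_id -> bool, populated lazily

    def _expensive_check(self, generator_id):
        # Stand-in for the SQL query _check_loom_initialized runs today.
        return True

    def check_initialized(self, generator_id):
        # Pay the expensive check once; subsequent queries read the flag.
        if generator_id not in self._initialized:
            self._initialized[generator_id] = self._expensive_check(generator_id)
        if not self._initialized[generator_id]:
            raise ValueError("Loom not initialized; run ANALYZE first.")
```

This keeps the comprehensible error message while removing the per-query overhead; the flag would need invalidation if a generator can be dropped and recreated.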