denis-stepanov/advent

Calculate Dejavu deadband at runtime

denis-stepanov opened this issue · 1 comments

Issue #5 introduced the notion of a Dejavu deadband: the time during which Dejavu is busy with recognition and is not listening to input. This deadband is currently hardcoded as 0.4s following measurements. In reality, this parameter depends on machine performance as well as on the number of hashes in the Dejavu database. It should be fairly straightforward to evaluate it at run-time by running three recognition attempts, measuring their effective time, and averaging.
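The averaging step itself is simple; here is a hedged sketch (the `recognize` callable is a placeholder for whatever Dejavu entry point Advent would call, not an actual Advent function):

```python
import time
from statistics import mean
from typing import Callable

def estimate_deadband(recognize: Callable[[], object], attempts: int = 3) -> float:
    """Average the wall-clock duration of a few recognition attempts.

    `recognize` is assumed to perform one full Dejavu recognition pass;
    its return value is ignored, only its duration matters.
    """
    durations = []
    for _ in range(attempts):
        start = time.monotonic()   # monotonic clock: immune to wall-clock jumps
        recognize()
        durations.append(time.monotonic() - start)
    return mean(durations)
```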

To be accurate, the estimation should run with the same number of threads as regular operation (e.g., 4). There is also SQL database warmup to take into account: on a cold start it can be quite significant (on the scale of seconds). This would complicate the code and delay app startup by at least 3-4s (up to ~10s in the worst case: 3 consecutive attempts x 3.4s each), all to calculate a parameter with little impact on the final result. A better behavior (especially given potential daemonization, during which the database could be updated) might be to make this a floating parameter, adjusted as the recognition threads run.
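One way such a floating parameter could work is an exponential moving average over recognition times observed by the running threads. This is only a sketch under that assumption, not Advent code; the class and parameter names are made up:

```python
import threading

class FloatingDeadband:
    """Deadband estimate refined from live recognition timings (EMA sketch)."""

    def __init__(self, initial: float = 0.4, alpha: float = 0.2):
        self._value = initial   # seed with the current hardcoded 0.4s default
        self._alpha = alpha     # smoothing factor: higher adapts faster
        self._lock = threading.Lock()

    def update(self, observed: float) -> None:
        """Fold one observed recognition duration into the estimate."""
        with self._lock:
            self._value = (1 - self._alpha) * self._value + self._alpha * observed

    @property
    def value(self) -> float:
        with self._lock:
            return self._value
```

Each recognition thread would time its Dejavu call and feed the duration to `update()`; a database grown while daemonized would then be absorbed gradually, with no startup delay and no separate calibration phase.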

Remarkably, Dejavu documentation recognizes the deadband and even offers a formula to calculate it:

1.364757 * record_time - 0.034373 = time_to_match

Unfortunately, the constants here depend on the hardware and on the database content. Applying this formula to Advent's default record interval of 3s gives a ~1s deadband, while the measured value is about half of that. So an empirical formula would not work here.
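For reference, plugging in the 3s record interval (under the assumption that `time_to_match` in the formula counts from the start of recording, so the deadband is the part beyond the recording itself):

```python
RECORD_TIME = 3.0  # Advent default record interval, seconds

def time_to_match(record_time: float) -> float:
    """Dejavu documentation formula; constants are hardware/DB dependent."""
    return 1.364757 * record_time - 0.034373

# Deadband = total matching time minus the recording itself.
deadband = time_to_match(RECORD_TIME) - RECORD_TIME  # ~1.06s vs ~0.4s measured
```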