Concatenating precomputed hashes of consecutive parts of a file
Closed this issue · 2 comments
moishy90 commented
Is there a way to concatenate precomputed hashes so that the times align properly?
I have recorded consecutive mp3 files and precomputed hashes for each of them. I would like to concatenate the precomputed hash files without having to join the original mp3 files and re-run precomputation on the joined file.
I'm using your code as a library, so if you can share a code sample that would be great.
I've tried offsetting the hash times by 430 for each file I add, but it's not exact. Even when I get the length of the original file using audio_read, it's still not exact.
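One plausible reason a fixed per-file offset drifts: landmark times are stored in analysis frames, so the correct offset for each file is that file's own length in frames, which varies from file to file. A minimal sketch of accumulating exact per-file offsets (the hop size and sample rate below are assumptions for illustration, not values read from the library):

```python
# Sketch: accumulate per-file offsets in analysis frames instead of
# adding a fixed constant (e.g. 430) per file.
HOP = 256    # assumed analysis hop in samples
SR = 11025   # assumed fingerprinting sample rate

def frames_in(n_samples):
    """Number of whole analysis frames covering n_samples samples."""
    return n_samples // HOP

def cumulative_offsets(sample_lengths):
    """Frame offset to add to each file's hash times, in file order."""
    offsets, total = [], 0
    for n in sample_lengths:
        offsets.append(total)
        total += frames_in(n)
    return offsets

# Three consecutive files of slightly different lengths (in samples):
print(cumulative_offsets([1_102_500, 1_102_756, 1_102_244]))
```

Because the per-file frame counts differ slightly, any single constant offset accumulates error across many files.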
moishy90 commented
@dpwe this seems to be impossible, considering that the information is lost when the file is cut.
dpwe commented
I’m not completely sure what you’re asking. The hash representation stores
track IDs and time offset within the track, indexed by the landmark hash.
If you merge hash tables, each track is a separate entity. If the
individual tracks are actually segmented parts of a longer recording, you
could imagine post-processing that converts offsets within a segment to the
corresponding time in the unsegmented recording. I don’t think this
functionality belongs inside the fingerprint engine. But maybe you mean
something else?
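The post-processing described above could be sketched roughly like this, assuming each precomputed segment reduces to a list of (frame_time, hash) pairs plus a known segment length in frames (this tuple layout is an assumption for illustration, not audfprint's actual on-disk format):

```python
def merge_segment_hashes(segments):
    """Merge per-segment landmark lists into one recording's hash list.

    segments: list of (segment_length_in_frames, [(frame_time, hash), ...])
    Returns one list of (frame_time, hash) pairs with times re-based to
    the start of the unsegmented recording.
    """
    merged, offset = [], 0
    for length, hashes in segments:
        # Shift this segment's times by the total length of all
        # segments that precede it.
        merged.extend((t + offset, h) for t, h in hashes)
        offset += length
    return merged

# Two consecutive segments, each 430 frames long:
segs = [(430, [(0, 0x1234), (200, 0x5678)]),
        (430, [(10, 0x9ABC)])]
print(merge_segment_hashes(segs))
```

The key point matches Dan's remark: each segment's offset must come from that segment's measured length, and the conversion lives outside the fingerprint engine.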
DAn.
…On Tue, Oct 23, 2018 at 16:13 Moshe Shwarzberg ***@***.***> wrote:
@dpwe <https://github.com/dpwe> this seems to be impossible considering
that the info is lost when cutting the file