jina-ai/late-chunking

How to implement late chunking with the Jina API?

rickythink opened this issue · 3 comments

Thank you, Jina team, for sharing this method.

I am currently trying to implement late chunking in my own workflow.

I noticed the following example:

# chunk afterwards (context-sensitive chunked pooling)
inputs = tokenizer(input_text, return_tensors='pt')
model_output = model(**inputs)
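
For context, here is a minimal self-contained sketch of what that local pipeline looks like end to end. It assumes the Hugging Face jinaai/jina-embeddings-v2-base-en model; the span values and the mean-pooling step are illustrative, not the repo's exact helpers.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)

input_text = 'Berlin is the capital of Germany. It has 3.85 million inhabitants.'

# Encode the full text once, so every token embedding sees the whole document's context.
inputs = tokenizer(input_text, return_tensors='pt')
with torch.no_grad():
    model_output = model(**inputs)

# Late chunking: mean-pool the contextualized token embeddings per chunk,
# given (start, end) token spans, e.g. from a sentence splitter.
token_embeddings = model_output[0].squeeze(0)  # shape: (seq_len, hidden_dim)
span_annotations = [(1, 10), (10, 18)]         # illustrative token spans
chunk_embeddings = [token_embeddings[start:end].mean(dim=0) for start, end in span_annotations]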

Is it possible to use Jina’s API for this?

From what I’ve observed, the segmenter API only returns each chunk’s start and end token positions, and it doesn’t seem to support this use case.
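
For reference, calling the segmenter looks roughly like the sketch below. The endpoint and field names (return_chunks, chunks, chunk_positions) are my reading of the public docs, so treat them as assumptions and verify against the current documentation.

import requests

resp = requests.post(
    'https://segment.jina.ai/',
    headers={'Authorization': 'Bearer <JINA_API_KEY>'},  # replace with your own key
    json={'content': input_text, 'return_chunks': True},
)
data = resp.json()
# The response carries chunk texts and boundary positions, but no embeddings,
# which is why the segmenter alone cannot perform late chunking.
print(data.get('chunks'), data.get('chunk_positions'))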

Let me know if my understanding needs any adjustments!

At the moment it is not possible with the API, but we are already working on an integration that will make it possible.

The API now supports late chunking for our new model. Since we also want to extend the evaluation to jina-embeddings-v3, I implemented support for the new model and added a test that compares locally produced embeddings with the embeddings produced by the API. In general, they can differ slightly (not only when using late chunking) because of different optimizations applied during inference (e.g. flash attention, CUDA optimizations for bf16, ...). Neither change is merged into the main branch yet, but if you want to see how to do late chunking with the API, take a look at the test case in this PR: https://github.com/jina-ai/late-chunking/pull/8/files

@rickythink with the v3 release, our API now supports late chunking out of the box; details can be found here
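
For anyone landing here later, enabling it via the API looks roughly like this. The late_chunking flag and the OpenAI-style response shape follow the v3 release announcement, but verify the parameter names against the current API docs; the key placeholder and example sentences are mine.

import requests

resp = requests.post(
    'https://api.jina.ai/v1/embeddings',
    headers={'Authorization': 'Bearer <JINA_API_KEY>'},
    json={
        'model': 'jina-embeddings-v3',
        # Send the chunks of ONE document in a single request; with
        # late_chunking enabled they are encoded jointly, so each chunk
        # embedding is conditioned on the full document context.
        'input': [
            'Berlin is the capital of Germany.',
            'It has 3.85 million inhabitants.',
        ],
        'late_chunking': True,
    },
)
embeddings = [item['embedding'] for item in resp.json()['data']]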