[v1.11.0] Apply changes (breaking or not) about AI-powered search
curquiza opened this issue · 1 comment
curquiza commented
Related to meilisearch/integration-guides#303
Explanation of the feature
Usage:
- https://meilisearch.notion.site/v1-11-AI-search-changes-0e37727193884a70999f254fa953ce6e?pvs=74
- binary quantization sub settings: https://meilisearch.notion.site/Binary-quantization-usage-v1-11-2a9c9559461a4a9d9fa3e0ea5149ad68?pvs=74
Breaking:
- When using the semantic or the hybrid search, `hybrid.embedder` is now a mandatory parameter in `GET` and `POST /indexes/{:indexUid}/search` (see the request sketch after this list).
  - As a consequence, it is now mandatory to pass `hybrid` even for full-vector search (with only `vector` and not `q`).
- `embedder` is now a mandatory parameter in `GET` and `POST /indexes/{:indexUid}/similar`.
- Ignore non-zero `semanticRatio` when `vector` is passed but not `q`: a semantic search will be performed.
- The default model for OpenAI is now `text-embedding-3-small` instead of `text-embedding-ada-002`.
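For reference, a minimal sketch of the new mandatory parameters against the raw HTTP API. The URL, API key, index name (`movies`), embedder name (`default`), document id and vector values are placeholder assumptions, not part of this issue:

```python
import requests

MEILI_URL = "http://localhost:7700"                      # assumption: local dev instance
HEADERS = {"Authorization": "Bearer aSampleMasterKey"}    # assumption: placeholder key

# Hybrid or semantic search: `hybrid.embedder` is now required whenever `hybrid` is sent.
search = requests.post(
    f"{MEILI_URL}/indexes/movies/search",
    headers=HEADERS,
    json={
        "q": "space opera",
        "hybrid": {"embedder": "default", "semanticRatio": 0.5},
    },
)
print(search.json())

# Full-vector search: `hybrid` (and therefore `hybrid.embedder`) must now be passed
# even when only `vector` is given and `q` is omitted.
vector_search = requests.post(
    f"{MEILI_URL}/indexes/movies/search",
    headers=HEADERS,
    json={
        "vector": [0.1, 0.2, 0.3],                        # placeholder embedding
        "hybrid": {"embedder": "default"},
    },
)
print(vector_search.json())

# Similar documents: `embedder` is now a mandatory parameter of the /similar route.
similar = requests.post(
    f"{MEILI_URL}/indexes/movies/similar",
    headers=HEADERS,
    json={"id": "143", "embedder": "default"},
)
print(similar.json())
```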
Changes:
- A new sub setting in the `embedders` setting to enable binary quantization and speed up indexing.
- Limit the maximum length of a rendered document template: when the source of an embedder is set to `huggingFace`, `openAi`, `rest` or `ollama`, then `documentTemplateMaxBytes` is now available as an optional parameter. This parameter describes the number of bytes in which the rendered document template text should fit when trying to embed a document. Longer texts are truncated to fit. A settings sketch with both new fields follows this list.
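A minimal sketch of an embedders settings update carrying both new fields, sent through the generic settings route. The embedder name, API key and document template are placeholder assumptions; refer to the usage pages above for the exact semantics of each field:

```python
import requests

MEILI_URL = "http://localhost:7700"                      # assumption: local dev instance
HEADERS = {"Authorization": "Bearer aSampleMasterKey"}    # assumption: placeholder key

# Update the `embedders` setting with the two new v1.11 fields.
resp = requests.patch(
    f"{MEILI_URL}/indexes/movies/settings",
    headers=HEADERS,
    json={
        "embedders": {
            "default": {
                "source": "openAi",
                "apiKey": "<your-openai-api-key>",                   # placeholder
                "model": "text-embedding-3-small",                   # new OpenAI default in v1.11
                "documentTemplate": "A movie titled {{doc.title}}",  # placeholder template
                "documentTemplateMaxBytes": 400,   # rendered template is truncated to fit 400 bytes
                "binaryQuantized": True,           # enable binary quantization for this embedder
            }
        }
    },
)
print(resp.json())  # returns a settings task; wait for it to succeed before searching
```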
TODO
- Breaking changes section (see above)
- Ensure the breaking changes are applied in the code base
- Fix tests failing due to the breaking changes
- Ensure we can enable binary quantization: add the `binaryQuantized` field to the `embedders` settings (refer to usage page)
- Ensure the `documentTemplateMaxBytes` parameter can be used with `huggingFace`, `openAi`, `rest` or `ollama` models
- Add tests for the newly added features (a possible test sketch follows this list)
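As a starting point for the test item, a hedged pytest-style sketch against a running instance. It assumes an index `movies` with an embedder named `default` already configured, and that the server rejects a hybrid search missing `hybrid.embedder` with an HTTP 400; adapt it to this repository's test harness:

```python
import requests

MEILI_URL = "http://localhost:7700"                      # assumption: local dev instance
HEADERS = {"Authorization": "Bearer aSampleMasterKey"}    # assumption: placeholder key


def test_hybrid_search_requires_embedder():
    # Without `hybrid.embedder`, the search request should now be rejected.
    resp = requests.post(
        f"{MEILI_URL}/indexes/movies/search",
        headers=HEADERS,
        json={"q": "space opera", "hybrid": {"semanticRatio": 0.5}},
    )
    assert resp.status_code == 400


def test_hybrid_search_with_embedder():
    # With the embedder specified, the same request should succeed.
    resp = requests.post(
        f"{MEILI_URL}/indexes/movies/search",
        headers=HEADERS,
        json={"q": "space opera", "hybrid": {"embedder": "default", "semanticRatio": 0.5}},
    )
    assert resp.status_code == 200
    assert "hits" in resp.json()
```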
Target the `bump-meilisearch-v1.11.0` branch and NOT `main`. Please do 1 PR for all of these changes, and not several.