qdrant/qdrant

Performance degradation in response time using range filters from Qdrant version 1.6 onwards

Opened this issue · 4 comments

Current Behavior

Since upgrading to the latest version of Qdrant, we have noticed that response times increase when range filters are used during vector searches. To confirm the problem, we prepared test code. In these experiments, response times were about 1 millisecond in version 1.5 and earlier, but from version 1.6 onwards they increased to about 3 milliseconds.
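
For reference, here is a minimal sketch of the kind of query we are measuring, using the Python qdrant-client; the collection name "bench", the 768-dimensional vectors, and the payload field "timestamp" are placeholders rather than the benchmark's exact values:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

# Vector search combined with a numeric range filter on a payload field.
hits = client.search(
    collection_name="bench",          # placeholder collection name
    query_vector=[0.1] * 768,         # placeholder query vector
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="timestamp",      # placeholder payload field
                range=models.Range(gte=1_000, lte=50_000),
            )
        ]
    ),
    limit=100,  # the benchmark retrieves the top 100 results
)
```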

Steps to Reproduce

To reproduce this issue, we created test code in the codelibs/search-ann-benchmark repository. The test involves the following steps:

  1. Changing the Qdrant version in the notebook.
  2. Indexing 100,000 documents (a rough indexing sketch follows this list).
  3. Performing 10,000 vector searches, each retrieving the top 100 results, and measuring the average response time.
  4. Collecting and comparing response times across different versions of Qdrant:
    • Version 1.4.1: 0.8823 msec
    • Version 1.5.1: 1.4941 msec
    • Version 1.6.1: 3.1262 msec
    • Version 1.7.4: 2.6125 msec
    • Version 1.8.4: 3.0016 msec
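
For step 2, the indexing side looks roughly like the sketch below; the collection name, vector size and distance, and the payload field are again placeholders, not the benchmark's exact configuration:

```python
import random

from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

client.recreate_collection(
    collection_name="bench",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
)

# Upsert documents in batches; each point carries the integer payload
# field that the range filter later targets.
points = [
    models.PointStruct(
        id=i,
        vector=[random.random() for _ in range(768)],
        payload={"timestamp": i},
    )
    for i in range(1_000)  # repeat for 100 batches to reach 100,000 documents
]
client.upsert(collection_name="bench", points=points)
```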

In version 1.8.4, changing the field schema to integer with lookup:false & range:true showed some improvement (see the index-creation sketch below):

  • 1.7784 msec

Although this setting improves the response time, it is still slower than in earlier versions.
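
For anyone trying to reproduce this setting: with a recent (1.8+) Python client, something like the following should create the parameterized integer index; the collection and field names are placeholders:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

# Integer payload index tuned for range filtering only:
# lookup=False drops the exact-match structure, range=True keeps range support.
client.create_payload_index(
    collection_name="bench",   # placeholder collection name
    field_name="timestamp",    # placeholder payload field
    field_schema=models.IntegerIndexParams(
        type=models.IntegerIndexType.INTEGER,
        lookup=False,
        range=True,
    ),
)
```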

Expected Behavior

We expect the response times using range filters in vector searches to be similar to those in previous versions.

Context (Environment)

The environment used for testing includes:

  • OS: Amazon Linux 2
  • CPU Model: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
  • Memory: 31628 MB
  • Docker: 20.10.25

Hey @marevol, in the context of ANN search, measuring latency without accuracy might be misleading.
Could you please confirm that the measurements you provided also yield the same level of accuracy?

Thank you for raising this point.
Our tests are designed to focus on latency, not accuracy, and use consistent conditions across different versions.
Given the significant impact on latency that we've observed, were there any major changes made to improve accuracy starting from version 1.6, which might explain this increase in latency?
I will check the accuracy aspect later.

Given the significant impact on latency that we've observed, were there any major changes made to improve accuracy starting from version 1.6, which might explain this increase in latency?

There were some.

I have checked precision@100 across the versions to compare accuracy.

  • Version 1.4.1: Precision@100 = 0.9310
  • Version 1.5.1: Precision@100 = 0.9293
  • Version 1.6.1: Precision@100 = 0.9271
  • Version 1.7.4: Precision@100 = 0.9396
  • Version 1.8.4: Precision@100 = 0.9354

It appears that there are no significant differences in accuracy across the versions.
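
For context, a minimal sketch of how precision@100 can be computed, assuming ground-truth neighbor IDs obtained from an exact (brute-force) search; the function and variable names here are hypothetical, not taken from the benchmark repository:

```python
def precision_at_k(ann_ids: list[int], exact_ids: list[int], k: int = 100) -> float:
    """Fraction of the top-k ANN hits that also appear in the exact top-k."""
    return len(set(ann_ids[:k]) & set(exact_ids[:k])) / k

# e.g. 93 of the 100 ANN hits match the exact top 100 -> precision@100 = 0.93
# precision = precision_at_k(ann_result_ids, ground_truth_ids, k=100)
```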