About ANN-test comparison #5701
The ann-benchmarks project uses Milvus Knowhere for testing. Note that Knowhere is a library, not a service like Qdrant.
Good question, and welcome to the wonderful world of vector benchmark chaos 😅 You're absolutely right to notice that Qdrant shows 292 RPS there. But here's what's often missing:

🔍 Milvus Knowhere ≠ the Qdrant service. That's like drag racing a sports engine vs. a truck with a trailer. 🚛

🔍 ann-benchmarks ≠ a realistic workload.

🔍 If you care about real performance, test with your actual payload shape, request pattern, and retrieval logic.

So, TL;DR: if you're building a search engine for humans, Qdrant's 292 RPS is very real.

Cheers
Hi everyone, great work on Qdrant.
I found that Qdrant achieves the highest RPS and lowest latencies in almost all scenarios, no matter which precision threshold and metric are chosen. It has also shown a 4x RPS gain on one of the datasets; see https://qdrant.tech/benchmarks/.
But in the well-known ann-benchmarks project the results are different: for example, on glove-100-angular at a recall of about 0.9, Milvus achieves 1751 RPS while Qdrant only achieves 292 RPS.
Please give me some hints about why that is, thanks.