Building Russian Benchmark For Evaluation Of Information Retrieval Models

Grigory Kovalev, Mikhail Tikhomirov, Evgeny Kozhevnikov, Max Kornilov, Natalia Loukachevitch · International Conference on Computational Linguistics and Intellectual Technologies · 2025

We introduce RusBEIR, a comprehensive benchmark designed for zero-shot evaluation of information retrieval (IR) models in the Russian language. Comprising 17 datasets from various domains, it integrates adapted, translated, and newly created datasets, enabling systematic comparison of lexical and neural models. Our study highlights the importance of preprocessing for lexical models in morphologically rich languages and confirms BM25 as a strong baseline for full-document retrieval. Neural models, such as mE5-large and BGE-M3, demonstrate superior performance on most datasets, but face challenges with long-document retrieval due to input size constraints. RusBEIR offers a unified, open-source framework that promotes research in Russian-language information retrieval.
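The abstract names BM25 as the strong lexical baseline and stresses that preprocessing matters for morphologically rich languages like Russian. As a rough illustration of what such a baseline involves, here is a minimal Okapi BM25 sketch in pure Python. It is not the paper's implementation: the tokenizer, the parameter values (k1=1.5, b=0.75), and the toy corpus are illustrative assumptions; for Russian, the naive tokenizer below would be replaced by lemmatization or stemming, which is the preprocessing step the study highlights.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive lowercase whitespace tokenizer. For a morphologically rich
    # language such as Russian, a lemmatizer would replace this step --
    # that substitution is the preprocessing effect the paper examines.
    return text.lower().split()

class BM25:
    """Okapi BM25 over a small in-memory corpus (illustrative sketch)."""

    def __init__(self, docs, k1=1.5, b=0.75):  # common default parameters
        self.k1, self.b = k1, b
        self.docs = [tokenize(d) for d in docs]
        self.N = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.N
        # Document frequency: number of docs containing each term.
        self.df = Counter()
        for d in self.docs:
            self.df.update(set(d))

    def idf(self, term):
        n = self.df.get(term, 0)
        return math.log((self.N - n + 0.5) / (n + 0.5) + 1)

    def score(self, query, idx):
        doc = self.docs[idx]
        tf = Counter(doc)
        s = 0.0
        for t in tokenize(query):
            f = tf.get(t, 0)
            # Standard BM25 term weight with length normalization.
            s += self.idf(t) * f * (self.k1 + 1) / (
                f + self.k1 * (1 - self.b + self.b * len(doc) / self.avgdl))
        return s

    def rank(self, query):
        # Return document indices ordered by descending BM25 score.
        return sorted(range(self.N),
                      key=lambda i: self.score(query, i), reverse=True)

corpus = [
    "the cat sat on the mat",
    "dogs chase cats in the yard",
    "information retrieval evaluation with bm25",
]
bm25 = BM25(corpus)
ranking = bm25.rank("bm25 retrieval")
```

In this toy example the query terms occur only in the third document, so it ranks first; in RusBEIR the same scoring is applied at benchmark scale, where BM25 remains competitive for full-document retrieval while neural models lead on most shorter-text datasets.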
