Discovering Biases In Information Retrieval Models Using Relevance Thesaurus As Global Explanation

Youngwoo Kim, Razieh Rahimi, James Allan · Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing · 2024

Most efforts in interpreting neural relevance models have focused on local explanations, which explain the relevance of a document to a query but are not useful in predicting the model’s behavior on unseen query-document pairs. We propose a novel method to globally explain neural relevance models by constructing a “relevance thesaurus” containing semantically relevant query and document term pairs. This thesaurus is used to augment lexical matching models such as BM25 to approximate the neural model’s predictions. Our method involves training a neural relevance model to score the relevance of partial query and document segments, which is then used to identify relevant terms across the vocabulary space. We evaluate the obtained thesaurus explanation based on ranking effectiveness and fidelity to the target neural ranking model. Notably, our thesaurus reveals the existence of brand name bias in ranking models, demonstrating one advantage of our explanation method.
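The core idea of using a thesaurus of query–document term pairs to augment a lexical matcher can be sketched as follows. This is a minimal illustration, not the paper's implementation: the thesaurus entries, weights, and function names here are invented for clarity, whereas the paper learns the term pairs from a trained neural relevance model.

```python
import math
from collections import Counter

# Hypothetical relevance thesaurus: maps a query term to document terms
# that the neural model treats as relevant, each with a match weight.
# (Illustrative entries only; the paper extracts such pairs from a
# neural model scored over partial query and document segments.)
THESAURUS = {
    "laptop": {"notebook": 0.8, "macbook": 0.6},
    "car": {"vehicle": 0.9, "sedan": 0.7},
}

def bm25_with_thesaurus(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document with BM25, where a thesaurus pair (q, d)
    counts as a partial match weighted by its thesaurus score."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for q in query_terms:
        expansions = THESAURUS.get(q, {})
        # Effective term frequency: exact matches plus weighted
        # occurrences of thesaurus-matched document terms.
        eff_tf = tf[q] + sum(w * tf[d] for d, w in expansions.items())
        if eff_tf == 0:
            continue
        # Document frequency over exact and thesaurus matches.
        df = sum(1 for d in corpus
                 if q in d or any(t in d for t in expansions))
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        score += idf * eff_tf * (k1 + 1) / (
            eff_tf + k1 * (1 - b + b * dl / avgdl))
    return score
```

With this sketch, a document containing only "notebook" still receives a nonzero score for the query "laptop", mimicking the semantic matching of the neural model while remaining inspectable: every score contribution traces back to an explicit term pair, which is what makes biases (such as the brand-name bias the paper reports) visible.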
