---
task_categories:
- text-retrieval
- text-ranking
language:
- en
tags:
- search
- reranking
license: cc-by-4.0
configs:
- config_name: annotated_queries
  data_files:
  - split: train
    path: annotated_queries.parquet
  default: true
- config_name: knowledge_base
  data_files:
  - split: corpus
    path: knowledge_base.parquet
- config_name: test_queries
  data_files:
  - split: test
    path: test_queries.parquet
---
# DevRev Search Dataset

## Summary
The DevRev Search Dataset contains transformed and anonymized user queries on DevRev's public articles, paired with relevant article chunks that answer them. It is designed to benchmark enterprise search and reranking systems. The dataset also includes test queries along with the underlying article chunks used for retrieval.
## Dataset Details
- Curated by: Research@DevRev
- Language: English
- Intended Use: Evaluation of search, retrieval, and reranking models in enterprise support contexts
## Dataset Structure

- `annotated_queries.parquet`: Queries paired with annotated (golden) article chunks
- `knowledge_base.parquet`: Article chunks created from DevRev's customer-facing support documentation
- `test_queries.parquet`: Held-out queries used for evaluation
## Quick Start

```python
from datasets import load_dataset

annotated_queries = load_dataset("devrev/search", "annotated_queries", split="train")
knowledge_base = load_dataset("devrev/search", "knowledge_base", split="corpus")
test_queries = load_dataset("devrev/search", "test_queries", split="test")
```
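The column names of each split are not listed on this card, so it is worth inspecting the schema before building on top of it. The snippet below uses only the standard `datasets` API on the variables defined in the Quick Start:

```python
# Inspect the schema and size of each split; column names are not documented
# above, so confirm them before relying on specific field names.
print(annotated_queries.features)
print(knowledge_base.features)
print(test_queries.features)
print(len(annotated_queries), len(knowledge_base), len(test_queries))

# Peek at one row from each split.
print(annotated_queries[0])
print(knowledge_base[0])
print(test_queries[0])
```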
## Dataset Creation

### Curation Rationale

There is a lack of publicly available datasets representing enterprise support search. This dataset addresses that gap, enabling fair and consistent evaluation of enterprise-focused search and reranking systems.
### Source Data
- Queries: Transformed and anonymized versions of real user queries on DevRev's public articles.
- Knowledge Base Articles: Segmented (chunked) excerpts from DevRev’s customer-facing support documentation.
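The exact segmentation procedure behind `knowledge_base.parquet` is not described on this card. Purely as an illustration of how support articles can be split into retrieval-sized chunks, a simple overlapping word-window approach might look like the sketch below; the chunk size, overlap, and function name are assumptions, not DevRev's actual pipeline.

```python
def chunk_article(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split an article into overlapping word-window chunks.

    Illustrative sketch only: the chunking used to build
    knowledge_base.parquet is not specified on this card.
    """
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks
```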
### Annotations

- Annotation Process: An automated, proprietary pipeline identifies the most relevant article chunks for each query (a generic illustration of this kind of annotation follows below).
- Annotators: Fully automated system; no human annotators were used.
- Validation: A representative sample of the automatically generated annotations was manually reviewed by human evaluators to confirm correctness and overall quality.
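DevRev's annotation pipeline is proprietary, so the following is only a generic illustration of automated relevance annotation, not a description of the actual system. It ranks article chunks by embedding similarity using the `sentence-transformers` package, reuses the `knowledge_base` variable from the Quick Start, and assumes a `text` column, which may not match the real schema:

```python
from sentence_transformers import SentenceTransformer, util

# Generic illustration only: DevRev's real annotation pipeline is proprietary
# and may work very differently.
model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [row["text"] for row in knowledge_base]  # "text" column is an assumption
chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

query = "How do I reset my password?"  # illustrative query
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank chunks by cosine similarity and keep the top candidates as annotation proposals.
hits = util.semantic_search(query_embedding, chunk_embeddings, top_k=5)[0]
for hit in hits:
    print(round(hit["score"], 3), chunks[hit["corpus_id"]][:80])
```

In practice, candidates produced this way would still go through the sampling-based manual review described above.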
## Personal and Sensitive Information
All user queries were anonymized and stripped of any identifying or sensitive data. No personal information is included.
## Uses

- Primary Use: Training and evaluation of enterprise search, retrieval, and reranking models.
- Example Tasks:
  - Query-to-document retrieval
  - Reranking
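As a sketch of how the two example tasks fit together, the snippet below runs a lexical first-stage retrieval and then reranks the candidates with an off-the-shelf cross-encoder. It requires the `rank_bm25` and `sentence-transformers` packages, reuses the variables from the Quick Start, and assumes `text` and `query` columns, which may not match the actual schema.

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

# First stage: BM25 retrieval over the article chunks ("text" column is assumed).
chunks = [row["text"] for row in knowledge_base]
bm25 = BM25Okapi([c.lower().split() for c in chunks])

query = test_queries[0]["query"]  # "query" column is assumed
scores = bm25.get_scores(query.lower().split())
candidate_ids = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:50]

# Second stage: rerank the BM25 candidates with a cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pair_scores = reranker.predict([(query, chunks[i]) for i in candidate_ids])
reranked = [i for _, i in sorted(zip(pair_scores, candidate_ids), reverse=True)]

print(chunks[reranked[0]][:200])  # top chunk after reranking
```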
## Product Impact
This dataset played a key role in honing Computer by DevRev. Learn more about Computer here: Computer by DevRev | Your New Conversational AI Teammate
## Citation

If you use this dataset, please cite:

BibTeX:

```bibtex
@dataset{devrev_search_2025,
  title={DevRev Search Dataset},
  author={Research@DevRev},
  year={2025},
  url={https://huggingface.co/datasets/devrev/search},
  license={CC-BY-4.0}
}
```