---
task_categories:
- text-retrieval
- text-ranking
language:
- en
tags:
- search
- reranking
license: cc-by-4.0
configs:
- config_name: annotated_queries
data_files:
- split: train
path: annotated_queries.parquet
default: true
- config_name: knowledge_base
data_files:
- split: corpus
path: knowledge_base.parquet
- config_name: test_queries
data_files:
- split: test
path: test_queries.parquet
---
# DevRev Search Dataset
## Summary
The **DevRev Search Dataset** contains transformed and anonymized user queries on DevRev's public articles, paired with relevant article chunks that answer them. It is designed to benchmark enterprise search and reranking systems. The dataset also includes test queries along with the underlying article chunks used for retrieval.
## Dataset Details
- **Curated by:** Research@DevRev
- **Language:** English
- **Intended Use:** Evaluation of search, retrieval, and reranking models in enterprise support contexts
## Dataset Structure
- **`annotated_queries.parquet`** — Queries paired with annotated (golden) article chunks
- **`knowledge_base.parquet`** — Article chunks created from DevRev's customer-facing support documentation
- **`test_queries.parquet`** — Held-out queries used for evaluation
## Quick Start
```python
from datasets import load_dataset

# Queries paired with golden article chunks (train split)
annotated_queries = load_dataset("devrev/search", "annotated_queries", split="train")
# Article chunks from DevRev's support documentation (corpus split)
knowledge_base = load_dataset("devrev/search", "knowledge_base", split="corpus")
# Held-out queries for evaluation (test split)
test_queries = load_dataset("devrev/search", "test_queries", split="test")
```
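Before building an evaluation, it helps to confirm what each configuration actually contains. The snippet below is a minimal sketch using the standard `datasets` API; it prints each split's schema and a sample row so you can read the real column names rather than assuming them.

```python
# Inspect the schema and a sample row of each configuration.
# Column names are defined by the dataset itself, so check them
# here before hard-coding any field names in evaluation code.
print(annotated_queries.features)
print(annotated_queries[0])
print(knowledge_base.features)
print(test_queries.features)
```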
## Dataset Creation
### Curation Rationale
There is a lack of publicly available datasets representing enterprise support search. This dataset addresses that gap, enabling fair and consistent evaluation of enterprise-focused search and reranking systems.
### Source Data
- **Queries:** Transformed and anonymized versions of real user queries on DevRev's public articles.
- **Knowledge Base Articles:** Segmented (chunked) excerpts from DevRev’s customer-facing support documentation.
## Annotations
- **Annotation Process:** An automated proprietary pipeline identifies the most relevant article chunks for each query.
- **Annotators:** Fully automated system; no human annotators were used.
- **Validation:** A representative sample of the automatically generated annotations was manually reviewed by human evaluators to verify correctness and confirm the annotations meet the desired quality bar.
## Personal and Sensitive Information
All user queries were anonymized and stripped of any identifying or sensitive data. No personal information is included.
## Uses
- **Primary Use:** Training and evaluation of enterprise search, retrieval, and reranking models.
- **Example Tasks:**
- [Query-to-document retrieval](https://huggingface.co/tasks/text-ranking)
- Reranking
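
As an illustration of the reranking task, the sketch below scores candidate chunks against a query with an off-the-shelf cross-encoder. This is not part of the dataset itself: the model checkpoint is just one common choice, and the column names (`query`, `text`) are assumptions; confirm the actual schema (e.g. via `dataset.features`) before running.

```python
# A minimal reranking sketch using an off-the-shelf cross-encoder.
# Assumptions: the checkpoint below is illustrative, and the column
# names ("query", "text") may differ from the dataset's real schema.
from datasets import load_dataset
from sentence_transformers import CrossEncoder

knowledge_base = load_dataset("devrev/search", "knowledge_base", split="corpus")
test_queries = load_dataset("devrev/search", "test_queries", split="test")

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = test_queries[0]["query"]            # hypothetical column name
candidates = knowledge_base[:100]["text"]   # hypothetical column name; first 100 chunks as candidates

# Score every (query, chunk) pair, then sort chunks by relevance.
scores = model.predict([(query, chunk) for chunk in candidates])
ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
for score, chunk in ranked[:5]:
    print(f"{score:.3f}  {chunk[:80]}")
```

In practice you would rerank the top results of a first-stage retriever rather than an arbitrary slice of the corpus; the slice above just keeps the sketch self-contained.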
## Product Impact
This dataset played a key role in building and refining Computer by DevRev. Learn more about Computer here: [Computer by DevRev | Your New Conversational AI Teammate](https://devrev.ai/meet-computer)
## Citation
If you use this dataset, please cite:
**BibTeX:**
```bibtex
@dataset{devrev_search_2025,
  title={DevRev Search Dataset},
  author={Research@DevRev},
  year={2025},
  url={https://huggingface.co/datasets/devrev/search},
  license={CC-BY-4.0}
}
```