---
task_categories:
- text-retrieval
- text-ranking
language:
- en
tags:
- search
- reranking
---

# DevRev Search Dataset

## Summary

The **DevRev Search Dataset** contains transformed and anonymized user queries on DevRev's public articles, paired with relevant article chunks that answer them. It is designed to benchmark enterprise search and reranking systems. The dataset also includes test queries along with the underlying article chunks used for retrieval.

---

## Dataset Details

- **Curated by:** Research@DevRev
- **Language:** English
- **Intended Use:** Evaluation of search, retrieval, and reranking models in enterprise support contexts

---

## Dataset Structure

- **`annotated_queries.parquet`** — Queries paired with annotated (golden) article chunks
- **`knowledge_base.parquet`** — Article chunks created from DevRev’s customer-facing support documentation
- **`test_queries.parquet`** — Held-out queries used for evaluation

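
The parquet files can be read with standard tooling. The snippet below is a minimal loading sketch, assuming the three files sit at the root of the `devrev/search` dataset repository (repository id taken from the citation URL below); adjust the paths if the layout differs.

```python
# Minimal loading sketch. Assumes the parquet files are stored at the root of
# the devrev/search dataset repo on the Hugging Face Hub; adjust if needed.
import pandas as pd
from huggingface_hub import hf_hub_download

REPO_ID = "devrev/search"  # repository id from the citation URL below

def load_split(filename: str) -> pd.DataFrame:
    """Download one parquet file from the dataset repo and load it as a DataFrame."""
    local_path = hf_hub_download(repo_id=REPO_ID, filename=filename, repo_type="dataset")
    return pd.read_parquet(local_path)

annotated_queries = load_split("annotated_queries.parquet")
knowledge_base = load_split("knowledge_base.parquet")
test_queries = load_split("test_queries.parquet")

print(annotated_queries.shape, knowledge_base.shape, test_queries.shape)
```
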
---

## Dataset Creation

### Curation Rationale

There is a lack of publicly available datasets representing enterprise support search. This dataset addresses that gap, enabling fair and consistent evaluation of enterprise-focused search and reranking systems.

### Source Data

- **Queries:** Transformed and anonymized versions of real user queries on DevRev's public articles.
- **Knowledge Base Articles:** Segmented (chunked) excerpts from DevRev’s customer-facing support documentation.

---

## Annotations

- **Annotation Process:** An automated proprietary pipeline identifies the most relevant article chunks for each query.
- **Annotators:** Fully automated system; no human annotators were used.
- **Validation:** A representative sample of the automatically generated annotations was manually reviewed by human evaluators to verify correctness and confirm it meets the desired quality standards.

---

## Personal and Sensitive Information

All user queries were anonymized and stripped of any identifying or sensitive data. No personal information is included.

---

## Uses

- **Primary Use:** Training and evaluation of enterprise search, retrieval, and reranking models.
- **Example Tasks:**
  - [Query-to-document retrieval](https://huggingface.co/tasks/text-ranking)
  - Reranking evaluation

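
As an illustration of the retrieval task, the sketch below ranks knowledge-base chunks for each annotated query and reports recall@k. The column names (`query`, `golden_chunk_ids`, `chunk_id`, `chunk_text`) and the token-overlap scorer are hypothetical placeholders rather than the dataset's actual schema or a recommended retriever; substitute the real column names and your own retrieval or reranking model.

```python
# Toy recall@k evaluation sketch with a naive token-overlap retriever.
# Column names (query, golden_chunk_ids, chunk_id, chunk_text) are hypothetical
# placeholders; check the parquet schemas and adjust before running.
import pandas as pd

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def recall_at_k(queries: pd.DataFrame, chunks: pd.DataFrame, k: int = 5) -> float:
    chunk_tokens = [(row.chunk_id, tokenize(row.chunk_text)) for row in chunks.itertuples()]
    hits = 0
    for row in queries.itertuples():
        q_tokens = tokenize(row.query)
        # Score every chunk by token overlap with the query (a stand-in for a real
        # retriever or reranker) and keep the top k chunk ids.
        ranked = sorted(chunk_tokens, key=lambda c: len(q_tokens & c[1]), reverse=True)
        top_ids = {chunk_id for chunk_id, _ in ranked[:k]}
        # A query counts as a hit if any golden chunk appears in the top k.
        if top_ids & set(row.golden_chunk_ids):
            hits += 1
    return hits / len(queries)

# Example usage with the DataFrames from the loading sketch above:
# print(f"recall@5 = {recall_at_k(annotated_queries, knowledge_base, k=5):.3f}")
```
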
---

## Citation

If you use this dataset, please cite:

**BibTeX:**

```bibtex
@dataset{devrev_search_2025,
  title={DevRev Search Dataset},
  author={Research@DevRev},
  year={2025},
  url={https://huggingface.co/datasets/devrev/search},
  license={Apache-2.0}
}
```