---
license: apache-2.0
task_categories:
- information-retrieval
- text-retrieval
tags:
- beir
- nq
- information-retrieval
- retrieval
- search
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: metadata
    dtype: 'null'
  splits:
  - name: train
    num_bytes: 1381417863
    num_examples: 2681468
  download_size: 767872667
  dataset_size: 1381417863
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: metadata
    dtype: 'null'
  splits:
  - name: train
    num_bytes: 220472
    num_examples: 3452
  download_size: 137910
  dataset_size: 220472
---
# BEIR NQ Dataset (Migrated)

This is a migrated version of [BeIR/nq](https://huggingface.co/datasets/BeIR/nq) that is compatible with the `datasets` library version 4.0.0 and later.
## Dataset Description

This dataset contains the NQ (Natural Questions) dataset from the BEIR benchmark, converted from the original script-based loading format to Parquet.
## Dataset Structure

### Queries

- Config: `queries`
- Split: `train`, 3,452 examples
- Features: `['_id', 'text', 'metadata']`
### Corpus

- Config: `corpus`
- Split: `train`, 2,681,468 examples
- Features: `['_id', 'title', 'text', 'metadata']` (see the schema sketch below)
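For reference, the column types above correspond to the following `datasets` feature schema. This is a minimal sketch of the schema implied by the metadata, not something defined by this repository; the `metadata` column has a null dtype, so every value is `None`.

```python
from datasets import Features, Value

# Feature schema implied by the dataset_info metadata above (sketch, not loaded from the Hub)
corpus_features = Features({
    "_id": Value("string"),
    "title": Value("string"),
    "text": Value("string"),
    "metadata": Value("null"),  # null-typed column: every value is None
})

queries_features = Features({
    "_id": Value("string"),
    "text": Value("string"),
    "metadata": Value("null"),
})
```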
## Usage

```python
from datasets import load_dataset

# Load the queries config (single split: "train")
queries = load_dataset("Hyukkyu/beir-nq", "queries", split="train")

# Load the corpus config (single split: "train")
corpus = load_dataset("Hyukkyu/beir-nq", "corpus", split="train")
```
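Each loaded row is a plain Python dict with the fields listed under Dataset Structure. A quick spot-check and an id-to-text lookup for the queries might look like this (a minimal sketch building on the snippet above):

```python
# Peek at one record from each config
print(queries[0])  # {'_id': ..., 'text': ..., 'metadata': None}
print(corpus[0])   # {'_id': ..., 'title': ..., 'text': ..., 'metadata': None}

# Build a query-id -> text lookup, e.g. as input to a retrieval pipeline
query_lookup = {row["_id"]: row["text"] for row in queries}
print(len(query_lookup))  # 3452
```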
## Available Configs and Splits

### Queries

- `queries` config, `train` split: 3,452 examples

### Corpus

- `corpus` config, `train` split: 2,681,468 examples (see the streaming sketch below)
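Since the corpus config is large (about 2.7M rows and roughly a 770 MB Parquet download), it can also be read lazily with the library's standard streaming mode, which avoids materializing the full dataset up front; a minimal sketch:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading and materializing all ~2.7M documents at once
corpus_stream = load_dataset("Hyukkyu/beir-nq", "corpus", split="train", streaming=True)

# Peek at the first few documents
for i, doc in enumerate(corpus_stream):
    print(doc["_id"], doc["title"][:60])
    if i >= 4:
        break
```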
## Original Dataset

This dataset was migrated from [BeIR/nq](https://huggingface.co/datasets/BeIR/nq).
## Citation
If you use this dataset, please cite the original BEIR paper:
```bibtex
@article{thakur2021beir,
  title={BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
  author={Thakur, Nandan and Reimers, Nils and R{\"u}ckl{\'e}, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
  journal={arXiv preprint arXiv:2104.08663},
  year={2021}
}
```