# IL-PCSR (Indian Legal — Precedent & Statute Retrieval)

- Tasks: Text Retrieval
- Modalities: Text
- Formats: parquet
- Languages: English
- Size: 10K - 100K examples
- License: CC-BY-NC-SA 4.0
IL-PCSR: A dataset of Indian legal case queries annotated with relevant statutes and precedents, plus candidate pools of statutory provisions and precedent judgments for retrieval research.
## Dataset Summary
IL-PCSR contains three configs:

- `queries` — case queries (judgments / case descriptions) annotated with relevant statutes and precedents.
- `statutes` — statutory provisions (candidate documents).
- `precedents` — prior judgments (candidate documents).
## Languages
- English (Indian legal terminology and occasional code-switching).
## License & Access
- License: CC-BY-NC-SA 4.0 (non-commercial, share-alike).
- This dataset is gated. To request access, complete the provided access form (the gate fields are defined in the front-matter). By requesting the dataset, you must explicitly agree to:
  - use the dataset only for non-commercial research purposes, and
  - not further redistribute or upload the dataset anywhere else.
- DISCLAIMER (must be accepted): the dataset is released for research purposes only; the authors disclaim responsibility for any loss or damage arising from its use.
## Dataset structure (detailed)
The YAML in the front-matter contains exact dataset_info and configs. Below is a readable description of the same schema.
### Config: `queries` (default)
**Splits**

- `train_queries`: 5,017 examples
- `dev_queries`: 627 examples
- `test_queries`: 627 examples
**Fields / schema**

- `id`: string → unique id for the query.
- `case_title`: string → short title of the case.
- `date`: string → judgment date in ISO `YYYY-MM-DD` format.
- `jurisdiction`: string → court or jurisdiction name.
- `text`: list[string] → tokenized/segmented text lines or paragraphs (stored as a list to preserve segments).
- `rhetorical_roles`: list[string] → role labels for each segment (aligned with `text` segments). Possible values: [`Facts`, `Issue`, `Argument by Petitioner`, `Argument by Respondent`, `Statute Analysis`, `Precedent Analysis`, `Court Disclosure`, `Court Reasoning`, `NONE`]
- `relevant_statutes`: list[string] → human-readable statute names referenced as relevant.
- `relevant_statute_ids`: list[string] → canonical IDs matching the `statutes` config `id` field.
- `relevant_precedents`: list[string] → human-readable precedent case titles.
- `relevant_precedent_ids`: list[string] → canonical IDs matching the `precedents` config `id` field.
**Notes**

- `relevant_statute_ids` and `relevant_precedent_ids` are the fields used to build ground-truth relevance sets (qrels) for retrieval evaluation.
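Because `text` and `rhetorical_roles` are aligned index-by-index, a query's segments can be filtered by role. A minimal sketch (the record below is invented, but follows the schema above):

```python
def segments_with_role(example, role):
    """Return the segments of a query whose rhetorical role equals `role`.

    Relies on the documented index alignment between `text` and
    `rhetorical_roles`.
    """
    return [seg for seg, r in zip(example["text"], example["rhetorical_roles"])
            if r == role]

# Illustrative record following the documented schema (contents invented):
example = {
    "text": [
        "The appellant was convicted by the trial court.",
        "Whether Section 302 IPC applies to the present facts.",
        "The court held that the conviction must be upheld.",
    ],
    "rhetorical_roles": ["Facts", "Issue", "Court Reasoning"],
}

facts_only = segments_with_role(example, "Facts")
```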
### Config: `statutes`

**Splits**

- `statute_candidates`: 936 examples

**Fields**

- `id`: string → unique provision id (matches `relevant_statute_ids` in `queries`).
- `provision_name`: string → statute/provision title.
- `text`: list[string] → provision text (segmented).
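Since `relevant_statute_ids` in `queries` match the `id` field here, a simple id-to-text lookup over the candidate pool is enough to resolve them. A sketch with an invented provision (only the field names come from the schema):

```python
def build_candidate_index(pool):
    """Map candidate id -> provision text, joining the segmented list[string]."""
    return {doc["id"]: " ".join(doc["text"]) for doc in pool}

# Invented candidate with the documented fields:
statute_pool = [
    {
        "id": "ipc_302",
        "provision_name": "IPC Section 302",
        "text": ["Punishment for murder.", "Whoever commits murder shall ..."],
    },
]

index = build_candidate_index(statute_pool)
# index["ipc_302"] now holds the full provision text, ready for retrieval scoring
```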
### Config: `precedents`

**Splits**

- `precedent_candidates`: 3,183 examples

**Fields / schema**

- Same as `queries`.
## Data provenance & preprocessing
- Sources: Publicly available Indian judgments and statutory texts (collected from public repositories and court websites).
- Curation steps (summary):
  - Document collection and deduplication.
  - Minimal normalization (Unicode normalization, whitespace clean-up).
  - Masking of legal named entities and citations.
  - Rhetorical segments (Facts, Issues, Reasoning, Judgment) provided in `rhetorical_roles`, following IndianKanoon.
  - Linking queries to statutes and precedents (producing the `relevant_*` fields).
- PII & legal caution: users are responsible for additional PII removal or compliance checks; the dataset maintainers provide the data "as curated" and do not guarantee the absence of sensitive information.
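As an illustration of the citation-masking step, here is a regex-based sketch for one common Indian reporter-citation pattern. The pattern and the placeholder token are assumptions for illustration; the authors' actual masking scheme may differ.

```python
import re

# Matches reporter citations of the form "AIR <year> SC <number>" only;
# a real masking pass would need patterns for other reporters and for
# party/judge names as well.
CITATION_RE = re.compile(r"\bAIR\s+\d{4}\s+SC\s+\d+\b")

def mask_citations(segment: str) -> str:
    """Replace matched citations with a placeholder token."""
    return CITATION_RE.sub("<CITATION>", segment)
```

For example, `mask_citations("See AIR 1984 SC 123.")` yields `"See <CITATION>."`.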
## How to use
All of the examples assume you have access (i.e., gate accepted).
### 1) Load from Hugging Face Hub

```python
from datasets import load_dataset

# Load the queries config (default)
ds_queries = load_dataset("Exploration-Lab/IL-PCSR", name="queries")

# Access splits using the explicit split names
train_q = ds_queries["train_queries"]
dev_q = ds_queries["dev_queries"]
test_q = ds_queries["test_queries"]

# Example: inspect the first query
print(train_q[0])
```
### 2) Load specific configs

```python
# Statutes (candidate pool)
ds_statutes = load_dataset("Exploration-Lab/IL-PCSR", name="statutes")
statute_pool = ds_statutes["statute_candidates"]
print(statute_pool[0])

# Precedents (candidate pool)
ds_precedents = load_dataset("Exploration-Lab/IL-PCSR", name="precedents")
precedent_pool = ds_precedents["precedent_candidates"]
print(precedent_pool[0])
```
### 3) Local development (loading from a local repo folder)

```python
# If you're testing locally (folder contains dataset script + README.md + data/)
ds_local_queries = load_dataset("</path/to/local/IL-PCSR>", name="queries")
print(ds_local_queries["train_queries"][0])
```
### 4) Building qrels (ground-truth) for retrieval evaluation

```python
# Example: build a qrels-like dict mapping query id -> list of relevant candidate ids
def build_qrels(queries_split):
    qrels = {}
    for ex in queries_split:
        qid = ex["id"]
        # Combine statute ids and precedent ids if you want a single candidate set
        relevant = set(ex.get("relevant_statute_ids", []) + ex.get("relevant_precedent_ids", []))
        qrels[qid] = list(relevant)
    return qrels

qrels_train = build_qrels(train_q)
print("sample qrels:", list(qrels_train.items())[:2])
```
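Once qrels are built, ranked runs from any retriever can be scored against them. A minimal recall@k sketch with toy ids (the metric is standard; the ids below are invented):

```python
def recall_at_k(qrels, run, k=10):
    """Mean per-query fraction of relevant ids found in the top-k ranking.

    qrels: dict query_id -> list of relevant candidate ids (as built above)
    run:   dict query_id -> ranked list of candidate ids from a retriever
    """
    scores = []
    for qid, relevant in qrels.items():
        relevant = set(relevant)
        if not relevant:
            continue  # skip queries with no annotated candidates
        top_k = set(run.get(qid, [])[:k])
        scores.append(len(relevant & top_k) / len(relevant))
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: one query, two relevant ids, only one retrieved in the top 2
toy_qrels = {"q1": ["ipc_302", "prec_0042"]}
toy_run = {"q1": ["ipc_302", "ipc_999", "prec_0042"]}
print(recall_at_k(toy_qrels, toy_run, k=2))  # 0.5
```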
## Citation

```bibtex
@inproceedings{il-pcsr2025,
    title = "IL-PCSR: Legal Corpus for Prior Case and Statute Retrieval",
    author = "Paul, Shounak and Ghumare, Dhananjay and Goyal, Pawan and Ghosh, Saptarshi and Modi, Ashutosh",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    note = "To Appear"
}
```