---
language:
- fr
license: etalab-2.0
size_categories:
- 100K<n<1M
---

# LEGI - French Law Dataset (2025)

367,075 French legislative articles currently in force (`ETAT="VIGUEUR"`), extracted from the official LEGI database published by DILA and distributed as a RAG-ready Parquet file.

## 📋 Dataset Schema

```python
{
    'article_id': str,           # Unique LEGI identifier (e.g., "LEGIARTI000006419282")
    'article_num': str,          # Article number (e.g., "1")
    'article_contenu_text': str, # Article text (HTML-cleaned)
    'texte_titre': str,          # Parent text title (e.g., "Code civil")
    'texte_nature': str,         # Text type (CODE, LOI, ORDONNANCE, DECRET, AUTRE)
    'texte_contexte': str,       # Hierarchical path (Code > Book > Title > Section > ...)
    'texte_date_publi': str,     # Publication date
    'article_etat': str          # Legal status (always "VIGUEUR" in this dataset)
}
```

## 🔍 Sample Article

```json
{
  "article_id": "LEGIARTI000006419282",
  "article_num": "1",
  "article_contenu_text": "Laws and regulations are enforceable...",
  "texte_titre": "Code civil",
  "texte_nature": "CODE",
  "texte_contexte": "Preliminary Title: On the publication..."
}
```

## 🚀 Usage

### Basic Loading

```python
import pandas as pd
from datasets import load_dataset

# Option 1: Using Hugging Face datasets
dataset = load_dataset("VinceGx33/legi-french-law-2025", split="train")

# Option 2: Using pandas
df = pd.read_parquet("hf://datasets/VinceGx33/legi-french-law-2025/legi_french_law.parquet")
```

### RAG System with LlamaIndex

```python
from llama_index.core import VectorStoreIndex, Document
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
import pandas as pd

# Load dataset
df = pd.read_parquet("hf://datasets/VinceGx33/legi-french-law-2025/legi_french_law.parquet")

# Create documents
documents = [
    Document(
        text=f"Article {row['article_num']}\n{row['article_contenu_text']}",
        metadata={
            "article_id": row['article_id'],
            "title": row['texte_titre'],
            "nature": row['texte_nature']
        }
    )
    for _, row in df.iterrows()
]

# French legal-specialized embeddings
embed_model = HuggingFaceEmbedding(
    model_name="OrdalieTech/Solon-embeddings-large-0.1"  # Recommended for French law
)

# Create vector index
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What are the conditions for divorce?")
print(response)
```
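### Filtering Before Indexing

The metadata columns also make it easy to index only a subset of the corpus. A minimal sketch using the documented column values (the choice of subset is just an example):

```python
import pandas as pd

df = pd.read_parquet("hf://datasets/VinceGx33/legi-french-law-2025/legi_french_law.parquet")

# Keep only articles that belong to codified texts
codes = df[df["texte_nature"] == "CODE"]

# Narrow further to a single code by its title
code_civil = codes[codes["texte_titre"] == "Code civil"]
print(f"{len(code_civil):,} Code civil articles in force")
```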
## 🔬 Data Processing Pipeline

This dataset was created through a custom ETL pipeline that transforms official LEGI XML dumps directly into a RAG-optimized Parquet format.

### Design Approach: Direct XML-to-Parquet Transformation

We chose a **direct XML → Parquet** approach, optimized for legal RAG applications.

**Key decisions:**
- **Parse XML directly** using the official DILA DTD 2023 structure
- **Filter at parse time**: only extract articles with `ETAT="VIGUEUR"` (currently in force)
- **Skip the intermediate database**: no SQLite layer, straight to Parquet
- **Focus on current law**: historical versions are excluded (not needed for legal consultation)

**Processing features:**
- ✅ Robust error handling (malformed XML files are skipped, no pipeline crash)
- ✅ Full hierarchy extraction: Code → Book (Livre) → Title (Titre) → Chapter (Chapitre) → Section
- ✅ Automatic HTML cleaning of article content
- ✅ Batch processing with progress tracking
- ✅ Memory-optimized (<2 GB RAM usage)

**Why this approach:**
- Existing tools such as `legi.py` are abandoned and incompatible with 2025 dumps
- RAG systems need current legislation, not the complexity of a historical database
- Direct transformation keeps processing time to 60-90 minutes
- Produces a clean, analysis-ready Parquet file

### Step 1: Download Official LEGI Dump

```python
# Script: 1_download_legi.py
import requests

DILA_BASE_URL = "https://echanges.dila.gouv.fr/OPENDATA/LEGI"
DUMP_FILE = "Freemium_legi_global_YYYYMMDD-HHMMSS.tar.gz"  # ~1.1 GB

# Stream the download and write it to disk chunk by chunk
response = requests.get(f"{DILA_BASE_URL}/{DUMP_FILE}", stream=True)
response.raise_for_status()
with open(DUMP_FILE, "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 20):  # 1 MB chunks
        f.write(chunk)

# Total: 1.1 GB compressed, ~15 GB extracted (1M+ XML files)
```

**Why not use existing tools?**
- `legi.py` (Legilibre) is abandoned and incompatible with 2025 dumps (`AssertionError` on line 305)
- We built a custom parser based on the [official DILA DTD documentation](https://echanges.dila.gouv.fr/OPENDATA/LEGI/documentation/)

### Step 2: Extract TAR Archive

```python
# Script: 2b_parse_legi_xml_custom.py
import tarfile

with tarfile.open(dump_path, 'r:gz') as tar:
    tar.extractall(path=extract_dir)

# Extracts 1M+ XML files (article/*.xml, texte/code/*.xml, etc.)
```

**Optimization**: using `extractall()` instead of extracting members one by one gives a **3-5x speedup** (8-12 min vs 30-60 min).
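Side note: `extractall()` on an archive you did not produce yourself can be risky (path traversal via crafted member names). On Python 3.12+ (the extraction-filter feature was also backported to recent patch releases of earlier versions), you can keep the single-pass speed while sanitizing member paths. A small sketch, reusing the `dump_path`/`extract_dir` names from the step above:

```python
import tarfile
from pathlib import Path

with tarfile.open(dump_path, "r:gz") as tar:
    # The "data" filter rejects absolute paths, ".." traversal
    # and special files while still extracting in one pass
    tar.extractall(path=extract_dir, filter="data")

# Sanity check: the dump should yield on the order of 1M+ XML files
n_xml = sum(1 for _ in Path(extract_dir).rglob("*.xml"))
print(f"{n_xml:,} XML files extracted")
```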
### Step 3: Parse XML According to LEGI Schema

Official LEGI XML structure (from the DILA DTD):

```xml
<!-- Simplified: only the elements used by the parser are shown -->
<ARTICLE>
  <META>
    <ID>LEGIARTI000006419282</ID>
    <NUM>1</NUM>
    <ETAT>VIGUEUR</ETAT>
  </META>
  <CONTEXTE>
    <TEXTE>
      <TITRE>Code civil</TITRE>
      <DATE_PUBLI>1804-03-15</DATE_PUBLI>
    </TEXTE>
    <TM>
      <TITRE_TM>Titre préliminaire : De la publication...</TITRE_TM>
      <TM>
        <TITRE_TM>Chapitre Ier : De la promulgation...</TITRE_TM>
      </TM>
    </TM>
  </CONTEXTE>
  <BLOC_TEXTUEL>
    <CONTENU>Les lois et les règlements...</CONTENU>
  </BLOC_TEXTUEL>
</ARTICLE>
```
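Before writing the full parser, it is worth confirming these element names against a real extracted file, since the structure above is simplified. A quick exploratory sketch (which file you pick is arbitrary):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Grab any extracted article file and list the tags it actually contains
sample = next(Path(extract_dir).rglob("*.xml"))
root = ET.parse(sample).getroot()
print(sorted({el.tag for el in root.iter()}))
```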
**Parsing logic** (simplified):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def parse_article_xml(xml_path):
    tree = ET.parse(xml_path)
    root = tree.getroot()

    # Extract metadata
    article_id = root.find('.//ID').text
    article_num = root.find('.//NUM').text
    article_etat = root.find('.//ETAT').text

    # CRITICAL FILTER: Only "VIGUEUR" (in force)
    if article_etat != "VIGUEUR":
        return None  # Skip abrogated/modified articles

    # Extract legislative text info
    texte = root.find('.//TEXTE')
    texte_titre = texte.find('TITRE').text          # e.g., "Code civil"
    texte_date_publi = texte.find('DATE_PUBLI').text

    # Build hierarchy from TM elements (Livre > Titre > Chapitre > Section...)
    hierarchy_parts = []
    for tm in root.findall('.//TM'):
        titre_tm = tm.find('TITRE_TM')
        if titre_tm is not None:
            hierarchy_parts.append(titre_tm.text.strip())
    hierarchy = " > ".join(hierarchy_parts)

    # Extract article content; itertext() flattens any nested HTML-style markup
    contenu = root.find('.//CONTENU')
    article_content = " ".join(contenu.itertext()).strip() if contenu is not None else ""

    return {
        'article_id': article_id,
        'article_num': article_num,
        'article_contenu_text': article_content,
        'texte_titre': texte_titre,
        'texte_contexte': hierarchy,  # Full hierarchical path
        'texte_date_publi': texte_date_publi,
        'article_etat': article_etat
    }

# Process all XML files recursively
articles = []
for xml_file in Path(extract_dir).rglob('*.xml'):
    try:
        article_data = parse_article_xml(xml_file)
    except ET.ParseError:
        continue  # Malformed XML: skip the file, don't crash the pipeline
    if article_data:  # Only EN VIGUEUR articles
        articles.append(article_data)
```

**Key filtering decisions:**
- ✅ Keep: `ETAT="VIGUEUR"` → currently in force
- ❌ Discard: `ETAT="ABROGE"` → repealed
- ❌ Discard: `ETAT="MODIFIE"` → modified (superseded version)
- ❌ Discard: `ETAT="PERIME"` → expired

**Result**: 367,075 articles EN VIGUEUR out of 1M+ XML files.

### Step 4: Enrich with Text Nature Classification

```python
def detect_text_nature(xml_path):
    # Classify the legislative text type from the file path
    path_str = str(xml_path)
    if '/code/' in path_str:
        return 'CODE'
    elif '/loi/' in path_str:
        return 'LOI'
    elif '/ordonnance/' in path_str:
        return 'ORDONNANCE'
    elif '/decret/' in path_str:
        return 'DECRET'
    else:
        return 'AUTRE'
```

### Step 5: Export to Parquet

```python
import pandas as pd

df = pd.DataFrame(articles)

# Optimize data types
df['article_id'] = df['article_id'].astype('string')
df['article_num'] = df['article_num'].astype('string')
df['texte_titre'] = df['texte_titre'].astype('string')
df['texte_nature'] = df['texte_nature'].astype('category')  # added per article in Step 4

# Save to Parquet with compression
df.to_parquet(
    'legi_french_law.parquet',
    engine='pyarrow',
    compression='snappy',
    index=False
)
```

**Final dataset characteristics**:
- **Rows**: 367,075 articles
- **Columns**: 9 fields
- **Size**: 252 MB (Parquet, compressed)
- **Processing time**: ~60-90 minutes on an Apple M4 Pro
- **Memory usage**: <2 GB RAM (batch processing)
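After the export, a quick sanity check confirms the file matches the characteristics above. A minimal sketch using the documented column names (the expected row count is the one reported for this release):

```python
import pandas as pd

df = pd.read_parquet("legi_french_law.parquet")

assert len(df) == 367_075, f"unexpected row count: {len(df)}"
assert df["article_etat"].eq("VIGUEUR").all(), "non-VIGUEUR articles slipped through"
print(df["texte_nature"].value_counts())  # CODE / LOI / ORDONNANCE / DECRET / AUTRE
```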
### Reproducibility

This dataset can be reproduced from the official LEGI dumps:

1. Download the latest LEGI dump from [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/legi-codes-lois-et-reglements-consolides/)
2. Extract the TAR archive (1M+ XML files)
3. Parse the XML files according to the [official DILA DTD](https://echanges.dila.gouv.fr/OPENDATA/LEGI/documentation/)
4. Filter articles with `ETAT="VIGUEUR"`
5. Export to Parquet format

**Processing scripts available upon request.**

## 📜 License

**Licence Ouverte 2.0 (Etalab / French Open License 2.0)** - https://www.etalab.gouv.fr/licence-ouverte-open-licence/

This dataset is derived from LEGI data provided by DILA, released under Licence Ouverte 2.0.

**You are free to:**
- ✅ Reproduce, copy, publish and transmit the data
- ✅ Distribute and redistribute the data
- ✅ Adapt, modify, extract and transform the data
- ✅ Exploit the data commercially

**Requirement**: acknowledge the source (DILA/Légifrance).

## 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{legi_french_law_2025,
  author = {DILA - Direction de l'information légale et administrative},
  title  = {LEGI - French Codes, Laws and Consolidated Regulations},
  year   = {2025},
  month  = {July},
  url    = {https://www.data.gouv.fr/fr/datasets/legi-codes-lois-et-reglements-consolides/},
  note   = {Prepared and published by @VinceGx33 on Hugging Face}
}
```

## 🔗 Additional Resources

- **Official LEGI source**: https://www.data.gouv.fr/fr/datasets/legi-codes-lois-et-reglements-consolides/
- **DILA documentation**: https://echanges.dila.gouv.fr/OPENDATA/LEGI/
- **Légifrance portal**: https://www.legifrance.gouv.fr/
- **Recommended embeddings**: [OrdalieTech/Solon-embeddings-large-0.1](https://huggingface.co/OrdalieTech/Solon-embeddings-large-0.1)
- **Complementary Q&A dataset**: [louisbrulenaudet/legalkit](https://huggingface.co/datasets/louisbrulenaudet/legalkit)

## 🙏 Acknowledgments

- **DILA** for providing open access to French legislative texts
- **Etalab** for the Licence Ouverte 2.0 framework
- **OrdalieTech** for the Solon French legal embeddings
- **Louis Brulé Naudet** ([@louisbrulenaudet](https://huggingface.co/louisbrulenaudet)) for the inspiring LegalKit dataset

## 📧 Contact

For questions or suggestions: [Open a discussion](https://huggingface.co/datasets/VinceGx33/legi-french-law-2025/discussions)