The ArGiMi Ardian datasets: Text only
The ArGiMi project is committed to open-source principles and data sharing. Thanks to our generous partners, we are releasing several valuable datasets to the public.
Dataset description
This dataset comprises 11,000 financial annual reports, written in English and meticulously extracted from their original PDF format, providing a valuable resource for researchers and developers in financial analysis and natural language processing (NLP). These reports were published from the late 1990s to 2023.
This dataset provides extracted text data only. A heavier, more complete dataset that also includes images of each document page is available at
artefactory/Argimi-Ardian-Finance-10k-text-image.
You can load the dataset with:
```python
from datasets import load_dataset

ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text", split="train")

# Or stream the dataset to save memory:
ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text", split="train", streaming=True)
```
Dataset composition:
Each PDF was divided into individual pages to facilitate granular analysis.
For each page, the following data points were extracted:
- Raw Text: The complete textual content of the page, capturing all textual information present.
- Cells: Each cell within tables was identified and represented as a `Cell` object within the `docling` framework. Each `Cell` object encapsulates:
  - `id`: A unique identifier assigned to each cell, ensuring unambiguous referencing.
  - `text`: The textual content contained within the cell.
  - `bbox`: The precise bounding box coordinates of the cell, defining its location and dimensions on the page.
  - When OCR is employed, cells are further represented as `OcrCell` objects, which include an additional `confidence` attribute. This attribute quantifies the confidence level of the OCR process in accurately recognizing the cell's textual content.
- Segments: Beyond individual cells, `docling` segments each page into distinct content units, each represented as a `Segment` object. These segments provide a structured representation of the document's layout and content, encompassing elements such as tables, headers, paragraphs, and other structural components. Each `Segment` object contains:
  - `text`: The textual content of the segment.
  - `bbox`: The bounding box coordinates, specifying the segment's position and size on the page.
  - `label`: A categorical label indicating the type of content the segment represents (e.g., "table", "header", "paragraph").
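As an illustration of how these structures might be consumed once loaded, the sketch below gathers the text of every table segment in a record. The record-level layout (a `segments` column holding dicts with `label` and `text` keys) is a hypothetical assumption for illustration, not guaranteed by the card; check `ds.features` for the actual schema.

```python
# Hypothetical sketch: assumes each record exposes a "segments" column of
# dicts mirroring docling's Segment fields ("label", "text", "bbox").
def table_texts(record: dict) -> list[str]:
    """Return the text of every segment labelled as a table."""
    return [
        seg["text"]
        for seg in record.get("segments", [])
        if seg.get("label") == "table"
    ]
```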
To guarantee unique identification, each document is assigned a distinct identifier derived from the hash of its content.
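The card does not specify the hash function used; as a hedged illustration, a content-derived identifier could be computed along these lines (SHA-256 is an assumption):

```python
import hashlib

def document_id(pdf_bytes: bytes) -> str:
    # Hashing the raw file content gives identical documents the same ID.
    return hashlib.sha256(pdf_bytes).hexdigest()
```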
Parsing description:
The dataset's creation involved a systematic process using the `docling` library (Documentation).
- PDFs were processed using the `DocumentConverter` class, employing the `PyPdfiumDocumentBackend` for handling the PDF format.
- To ensure high-quality extraction, the following `PdfPipelineOptions` were configured:

```python
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0  # Scale image resolution by a factor of 2
pipeline_options.generate_page_images = True  # Generate page images
pipeline_options.do_ocr = True  # Perform OCR
pipeline_options.do_table_structure = True  # Extract table structure
pipeline_options.table_structure_options.do_cell_matching = True  # Perform cell matching in tables
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE  # Use accurate mode for table structure extraction
```

- These options collectively enable:
  - GPU-accelerated Optical Character Recognition (OCR) via `EasyOcr`.
  - Upscaling of image resolution by a factor of 2, enhancing the clarity of visual elements.
  - Generation of page images, providing a visual representation of each page within the document.
  - Comprehensive table structure extraction, including cell matching, to accurately capture tabular data within the reports.
  - The "accurate" mode for table structure extraction, prioritizing precision in identifying and delineating tables.
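For context, here is a minimal, self-contained sketch of how these options can be wired into a converter. The wiring follows docling's documented `DocumentConverter` API, but the input path is a placeholder and module paths may differ across docling versions:

```python
from docling.backend.pypdfium2_backend import PyPdfiumDocumentBackend
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    EasyOcrOptions,
    PdfPipelineOptions,
    TableFormerMode,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

# Same options as described above.
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0
pipeline_options.generate_page_images = True
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True
pipeline_options.table_structure_options.do_cell_matching = True
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE

# Wire the options and the PyPdfium backend into a converter.
converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(
            pipeline_options=pipeline_options,
            backend=PyPdfiumDocumentBackend,
        )
    }
)

result = converter.convert("annual_report.pdf")  # placeholder path
```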
Disclaimer:
This dataset, made available for experimental purposes as part of the ArGiMi research project, is provided "as is" for informational purposes only. The original publicly available data was provided by Ardian. Artefact has processed this dataset and now publicly releases it through Ardian, with Ardian's agreement. None of ArGiMi, Artefact, or Ardian make any representations or warranties of any kind (express or implied) regarding the completeness, accuracy, reliability, suitability, or availability of the dataset or its contents. Any reliance you place on such information is strictly at your own risk. In no event shall ArGiMi, Artefact, or Ardian be liable for any loss or damage, including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this dataset. This disclaimer includes, but is not limited to, claims relating to intellectual property infringement, negligence, breach of contract, and defamation.
Acknowledgement:
The ArGiMi consortium gratefully acknowledges Ardian for their invaluable contribution in gathering the documents that comprise this dataset. Their effort and collaboration were essential in enabling the creation and release of this dataset for public use. The ArGiMi project is a collaborative project with Giskard, Mistral, INA and BnF, and is sponsored by the France 2030 program of the French Government.
Citation:
If you find our datasets useful for your research, consider citing us in your work:

```bibtex
@misc{argimi2024Datasets,
  title={The ArGiMi datasets},
  author={Hicham Randrianarivo and Charles Moslonka and Arthur Garnier and Emmanuel Malherbe},
  year={2024},
}
```
