Deciding on extraction path

#10
by Mdspike - opened

"To determine the extraction path, we first manually annotated 1,350 PDFs and trained an XGBoost model. The model relies on 7 document-level features alongside 120 page-level features sampled from 8 random pages. We applied this classifier to PDFs that were not truncated and routed them accordingly"

This comment is super interesting. Would you be open to sharing a bit more about the process you used here, or even releasing the manually annotated data, the training process, and the XGBoost model?

I think this could be really valuable for anyone trying to do similar projects with large volumes of PDF files.
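For anyone attempting something similar in the meantime, here is a rough sketch of how such a router could be wired up, based purely on the quoted description. The feature extractors, feature names, and label semantics are placeholders of my own, not the actual FinePDFs implementation:

```python
# Hypothetical sketch: XGBoost routing classifier over 7 document-level
# features plus 120 page-level features (8 random pages x 15 per page).
# Extractors below return random placeholders; the real features are not
# described in this thread.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
N_DOC, N_PAGES, PER_PAGE = 7, 8, 15  # 7 + 8*15 = 127 features total

def extract_doc_features(pdf):
    # Placeholder: could be page count, file size, font coverage, etc.
    return rng.random(N_DOC)

def extract_page_features(pdf, page):
    # Placeholder: could be text length, image area ratio, etc.
    return rng.random(PER_PAGE)

def featurize(pdf):
    # Sample 8 random pages and concatenate all features into one vector.
    pages = rng.choice(100, size=N_PAGES, replace=False)
    page_feats = np.concatenate([extract_page_features(pdf, p) for p in pages])
    return np.concatenate([extract_doc_features(pdf), page_feats])

# Train on a (synthetic stand-in for the) manually annotated set of 1,350
# PDFs; labels might be e.g. 0 = text-based extraction, 1 = OCR.
X = np.stack([featurize(i) for i in range(1350)])
y = rng.integers(0, 2, size=1350)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
clf.fit(X, y)

# Route a new (non-truncated) PDF down the predicted extraction path.
print(clf.predict(featurize("new.pdf").reshape(1, -1)))
```

The annotated data and real feature definitions would of course be the valuable part, which is why releasing them would help so much.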

Thank you for providing such a great resource.

FineData org

Hi, we will soon release the full code, with the classifier and feature extractor :)

Hi @hynky, any news on a possible date for releasing the extraction code? Thanks for all the work.

Hi @hynky , hope all is well! Just wanted to gently check in on the extraction code and classifier you mentioned. No rush at all—completely understand these things take time. Really appreciate all the incredible work you've done with this resource, and looking forward to seeing the code when you're ready to share it. Thanks again!

FineData org

Hi, we have released the codebase; you can find the classifier here: https://github.com/huggingface/finepdfs

hynky changed discussion status to closed
