Deciding on extraction path
"To determine the extraction path, we first manually annotated 1,350 PDFs and trained an XGBoost model. The model relies on 7 document-level features alongside 120 page-level features sampled from 8 random pages. We applied this classifier to PDFs that were not truncated and routed them accordingly"
This comment is super interesting. Would you be open to sharing a bit more about the process you used here, or even releasing the manually annotated data and the training process / XGBoost model?
I think this could be really valuable for anyone trying to do similar projects with large volumes of PDF files.
Thank you for providing such a great resource.
Hi, we will soon release the full code, with the classifier and feature extractor :)
Hi @hynky , hope all is well! Just wanted to gently check in on the extraction code and classifier you mentioned. No rush at all—completely understand these things take time. Really appreciate all the incredible work you've done with this resource, and looking forward to seeing the code when you're ready to share it. Thanks again!
Hi, we have released the codebase; you can find the classifier there: https://github.com/huggingface/finepdfs
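For anyone who lands on this thread before digging into the repo, here is a minimal sketch of what a routing classifier along the lines of the quoted description could look like. This is not the FinePDFs implementation: the feature set, labels, and hyperparameters below are illustrative assumptions, using pypdf for feature extraction and xgboost for the model.

```python
# Illustrative sketch of a PDF extraction-path router (not the FinePDFs code).
# Assumptions: pypdf for parsing, two routes ("text_extraction" vs "ocr"),
# and a small toy feature set; the real pipeline uses many more features.
import random

import numpy as np
from pypdf import PdfReader
from xgboost import XGBClassifier

N_SAMPLED_PAGES = 8  # pages sampled per document, as in the quoted description
FEATS_PER_PAGE = 3   # toy page-level features below


def page_features(page):
    """A few toy page-level features (text length, word count, image count)."""
    text = page.extract_text() or ""
    return [len(text), len(text.split()), len(page.images)]


def document_features(pdf_path):
    """Document-level features plus features from up to 8 randomly sampled pages."""
    reader = PdfReader(pdf_path)
    pages = reader.pages
    doc_feats = [len(pages), int(reader.is_encrypted)]

    sampled = random.sample(range(len(pages)), min(N_SAMPLED_PAGES, len(pages)))
    page_feats = []
    for idx in sorted(sampled):
        page_feats.extend(page_features(pages[idx]))
    # Pad short documents so every PDF yields a fixed-length feature vector.
    page_feats += [0.0] * (N_SAMPLED_PAGES * FEATS_PER_PAGE - len(page_feats))
    return doc_feats + page_feats


def train_router(pdf_paths, labels):
    """Train on manually annotated PDFs; label 0 = text route, 1 = OCR route."""
    X = np.array([document_features(p) for p in pdf_paths])
    y = np.array(labels)
    clf = XGBClassifier(n_estimators=200, max_depth=6)
    clf.fit(X, y)
    return clf


def route(clf, pdf_path):
    """Predict which extraction path a new (non-truncated) PDF should take."""
    pred = clf.predict(np.array([document_features(pdf_path)]))[0]
    return "ocr" if pred == 1 else "text_extraction"
```

For the actual feature extractor and trained classifier, see the repository linked above.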