Benefits of More Organization
I'm thankful that you're building a collection of up-to-date GitHub repositories here (other datasets like The Stack are already 2 years old, which limits their usefulness to me since their .NET code bases target versions that are already past EOL).
I'd like to suggest adding metadata columns to annotate these curated files: a last-updated timestamp, programming language and version (C++11 versus C++23, etc.), target platforms, library dependencies (PyTorch vs. TensorFlow), and license tags. Tracking the last-updated timestamp in particular would make it easier to detect when a record in the dataset has drifted out of sync with its content on GitHub, which would help with future maintenance.
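To make this concrete, here's a minimal sketch of what those extra columns could look like, expressed as Hugging Face `datasets` feature types; the field names and example values are just my suggestions, not anything already in the dataset:

```python
from datasets import Features, Value, Sequence

# Hypothetical metadata columns for each curated file record (illustrative only).
extra_metadata = Features({
    "last_updated": Value("timestamp[s]"),          # last commit touching the file on GitHub
    "language": Value("string"),                    # e.g. "C#", "C++", "Python"
    "language_version": Value("string"),            # e.g. "C++23", "C# 12", "3.12"
    "target_platforms": Sequence(Value("string")),  # e.g. ["linux-x64", "net8.0"]
    "dependencies": Sequence(Value("string")),      # e.g. ["pytorch"] or ["tensorflow"]
    "license": Value("string"),                     # SPDX identifier, e.g. "MIT"
})
```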
I believe training on multiple programming languages, especially for smaller models, produces more noise than signal. For example, patterns and syntax gleaned from a training set that mixes OCaml and C# files may confound more than they teach. Similar benefits could come from training-data slices focused on particular platforms or frameworks.
The-Stack-v2-dedup's folder structure by language could also make it easier for HF users to download just what they need from your dataset.
NLP analysis of repo README files and classifiers run over the source files themselves could be two ways to automate back-filling this information onto records already in the dataset.
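For the README route, even a crude heuristic pass could seed the back-fill before a proper NLP model is in place. A minimal sketch, with tag names and regexes that are purely illustrative:

```python
import re

# Scan a repo's README text for framework mentions and return candidate dependency tags.
# The pattern set is a placeholder; a real pass would cover far more frameworks.
FRAMEWORK_PATTERNS = {
    "pytorch": re.compile(r"\bpytorch\b|\btorch\b", re.IGNORECASE),
    "tensorflow": re.compile(r"\btensorflow\b", re.IGNORECASE),
    "dotnet": re.compile(r"\.net\s*\d|\bdotnet\b", re.IGNORECASE),
}

def candidate_tags(readme_text: str) -> list[str]:
    """Return the framework tags whose patterns appear in the README."""
    return [tag for tag, pattern in FRAMEWORK_PATTERNS.items() if pattern.search(readme_text)]

# Example: candidate_tags(open("README.md").read()) might return ["dotnet"] for a C# repo.
```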
Thank you again for considering these improvements, and for taking the time to assemble this dataset.
Thanks for this incredibly valuable feedback! You've absolutely nailed the key limitations and proposed excellent solutions.
You're right that comprehensive metadata would transform this dataset. The challenge is scale: adding the metadata columns you suggested would require 1.5+ million GitHub API calls. With GitHub's rate limits, that's roughly a month or more of continuous processing for a single contributor.
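For a rough sense of the arithmetic, here's a back-of-envelope sketch. The ~3 calls per record is an assumption on my part (repo info, languages, license); the 5,000 requests/hour figure is GitHub's documented limit for authenticated REST calls:

```python
# Back-of-envelope estimate of metadata back-fill time (all figures approximate).
RECORDS = 1_500_000
CALLS_PER_RECORD = 3          # assumed: repo info + languages + license per record
RATE_LIMIT_PER_HOUR = 5_000   # GitHub authenticated REST API limit

total_calls = RECORDS * CALLS_PER_RECORD
hours = total_calls / RATE_LIMIT_PER_HOUR
print(f"{total_calls:,} calls ≈ {hours:,.0f} hours ≈ {hours / 24:.0f} days of continuous processing")
# 4,500,000 calls ≈ 900 hours ≈ 38 days, before accounting for secondary rate limits and retries.
```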
The paths forward:
- Time: I'll work on implementing this gradually
- Community: if others want to help with API calls or metadata generation
- Research: exploring bulk analysis approaches
Quick wins I can implement sooner:
- Basic language detection via file extensions (see the sketch after this list)
- Folder structure by broad language categories (.py, .js, .cpp files, etc.)
- File creation/modification timestamps from existing data
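For the extension-based bucketing mentioned above, something as simple as the following sketch would cover the quick win; the mapping is illustrative, not exhaustive:

```python
from pathlib import Path

# Map file extensions to broad language folders for dataset organization.
EXT_TO_LANG = {
    ".py": "python", ".js": "javascript", ".ts": "typescript",
    ".cpp": "cpp", ".cc": "cpp", ".h": "cpp",
    ".cs": "csharp", ".rs": "rust", ".go": "go", ".java": "java",
}

def language_bucket(path: str) -> str:
    """Return the broad language folder for a file path, defaulting to 'other'."""
    return EXT_TO_LANG.get(Path(path).suffix.lower(), "other")

# Example: language_bucket("src/Program.cs") -> "csharp"
```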
The vision you described - version-aware, framework-specific slices - is exactly where this should go. It just needs either time or many hands.
If anyone wants to help accelerate this, please reach out! This could move from months to weeks with distributed effort.
I'm already working on a classifier to sniff the language version from C# source files (for cases where it can't be obtained more directly from a repo's README or IDE project file), so I'll check back with whatever progress I make there.
My methodology is supervised learning trained on long-lived .NET project repositories: using the Git history of the source files as they've migrated from version to version over the years, the model learns which changes arrived with each version in terms of new language syntax, new Base Class Library types and methods, and so on. Others could extend this approach to C, C++, Python, or Rust, based on the histories of the Linux, PyTorch, or Mozilla code bases, to name a few examples.
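As a very rough sketch of that idea (not my actual pipeline), the supervised setup could look something like the following; the marker list is illustrative and far from complete:

```python
import re
from sklearn.linear_model import LogisticRegression

# Represent each historical snapshot of a C# file as counts of version-indicative
# constructs, labeled with the language version the repo targeted at that commit.
MARKERS = {
    "record_struct": r"\brecord\s+struct\b",  # introduced in C# 10
    "global_using":  r"\bglobal\s+using\b",   # C# 10
    "init_accessor": r"\binit\s*;",           # C# 9
    "null_coalescing_assignment": r"\?\?=",   # C# 8
    "nameof": r"\bnameof\s*\(",               # C# 6
}

def featurize(source: str) -> list[int]:
    """Count occurrences of each marker construct in a source snapshot."""
    return [len(re.findall(pattern, source)) for pattern in MARKERS.values()]

# snapshots: C# source strings pulled from Git history
# versions:  labels such as "6", "8", "10" for the version each snapshot targeted
# clf = LogisticRegression(max_iter=1000).fit([featurize(s) for s in snapshots], versions)
# clf.predict([featurize(new_source)])
```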