OPT-1.3B Fine-tuned with PyTorch FSDP
This model is a version of facebook/opt-1.3b fine-tuned with PyTorch's Fully Sharded Data Parallel (FSDP) for efficient multi-GPU training on consumer hardware.
This model was fine-tuned using the arxiv-abstract-dataset on 2 × T4 16GB GPUs.
For detailed implementation, training procedures, and reproducibility instructions, please check out the project repository.