---
license: apache-2.0
---

# OPT-1.3B Fine-tuned with PyTorch FSDP

This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b), trained with PyTorch's Fully Sharded Data Parallel (FSDP) for efficient multi-GPU training on consumer hardware.

It was fine-tuned on the [arxiv-abstract-dataset](https://huggingface.co/datasets/ash001/arxiv-abstract) using 2 × T4 16 GB GPUs.

For detailed implementation, training procedures, and reproducibility instructions, please check out the [project repository](https://github.com/sparklerz/multigpu-llm-finetuning/tree/main/pytorch-fsdp).