metascroy committed (verified)
Commit 1d0be12 · 1 Parent(s): b9b456c

Update README.md

Files changed (1): README.md (+5 -0)
README.md CHANGED
@@ -23,6 +23,11 @@ This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unsl
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
 
+# Finetune with unsloth and torchao
+
+Below we show how to finetune Ministral-3-3B using unsloth in a way that can be deployed with [ExecuTorch](https://github.com/pytorch/executorch).
+The example is based on the notebook [here](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Ministral_3_VL_(3B)_Vision.ipynb#scrollTo=PglJeZZoOWGG).
+
 ```python
 ################################################################################
 # We first load the model for QAT using the mobile CPU friendly int8-int4 scheme
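
The hunk above cuts off right after the first comment of the added code block. For context, here is a minimal sketch of what loading a model for int8-int4 QAT with unsloth and torchao typically looks like; the checkpoint name, the `FastVisionModel` loader, and the `Int8DynActInt4WeightQATQuantizer` import path are assumptions based on the public unsloth and torchao APIs, not content taken from this commit.

```python
# Sketch only (not part of this commit): prepare a model for QAT with
# torchao's int8 dynamic-activation / int4 weight scheme before finetuning.
from unsloth import FastVisionModel
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

# Load the base model in full precision; QAT needs trainable float weights,
# so we do not load in 4-bit here. The checkpoint name is a placeholder.
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="unsloth/Ministral-3-3B",  # hypothetical checkpoint name
    load_in_4bit=False,
)

# Swap nn.Linear layers for fake-quantized variants that simulate int8
# activations and group-wise int4 weights during training (the mobile CPU
# friendly scheme referenced in the README comment above).
qat_quantizer = Int8DynActInt4WeightQATQuantizer(groupsize=32)
model = qat_quantizer.prepare(model)

# ... run the usual unsloth finetuning loop on `model` here ...

# After training, convert the fake-quantized modules into real quantized
# modules before lowering the model for ExecuTorch.
model = qat_quantizer.convert(model)
```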