Mistral 12B: CPT + SFT (Two-Stage LoRA Fine-Tuning)
Model type: Causal Language Model
Base model: mistral-12b-cpt
License: Apache 2.0
Framework: Axolotl
Overview
mistral-12b-cpt-sft combines continual pretraining (CPT) and supervised fine-tuning (SFT).
The first stage extends the model's knowledge through unsupervised training on domain text, and the second stage improves instruction following on synthetic QA pairs.
This two-stage recipe improves coherence, factual recall, and reasoning while keeping training lightweight through compact LoRA adapters.
Training was carried out on Leonardo EuroHPC.
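For inference, the adapter can be loaded directly with PEFT. The snippet below is a minimal sketch, assuming the repository ships a standard LoRA adapter whose config points at the underlying checkpoint; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumes a standard PEFT LoRA adapter layout).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "ubitech-edg/mistral-12b-cpt-sft",
    torch_dtype=torch.bfloat16,   # matches the bfloat16 training precision
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/mistral-12b-cpt-sft")

prompt = "Explain the difference between continual pretraining and supervised fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```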
Training Setup
Stage 1 (CPT): Unsupervised domain pretraining
Stage 2 (SFT): Supervised QA fine-tuning
Adapter type: LoRA (8-bit)
Precision: bfloat16
Hardware: 8 × 2 × A100 64 GB
Framework: Axolotl + DeepSpeed + PyTorch 2.5.1 + CUDA 12.1
Datasets
| Stage | Dataset | Description |
|---|---|---|
| CPT | arxiv.jsonl, gov.jsonl, news.jsonl, wiki.jsonl | Unsupervised domain text |
| SFT | axolotl_deduplicated_synthetic_qa.jsonl | Synthetic instruction-response QA pairs |
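The exact schema of these JSONL files is not documented here, so the sketch below only peeks at the first few records of the SFT file to confirm its field names before training; the file name is taken from the table above.

```python
import json

# Inspect the first few records of the SFT dataset to confirm its schema.
with open("axolotl_deduplicated_synthetic_qa.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(sorted(record.keys()))
        if i >= 2:
            break
```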
Hyperparameters
| Parameter | Value |
|---|---|
| Sequence length | 2048 |
| Micro batch size | 2 |
| Gradient accumulation | 2 |
| Epochs | 1 |
| Learning rate | 0.0002 |
| LR scheduler | cosine |
| Optimizer | AdamW (8-bit) |
| Warmup steps | 10 |
| Weight decay | 0.0 |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| LoRA targets | q_proj, k_proj, v_proj, o_proj |
| Gradient checkpointing | Enabled |
| Flash attention | Enabled |
| Auto-resume | Enabled |
| Loss watchdog | threshold 5.0, patience 3 |
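Training was driven through Axolotl, but the LoRA values in the table translate directly to a PEFT configuration. The sketch below mirrors those values; the `bias` setting is an assumption, as it is not listed above.

```python
from peft import LoraConfig, TaskType

# LoRA configuration mirroring the hyperparameter table above.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                      # LoRA rank
    lora_alpha=32,             # LoRA alpha
    lora_dropout=0.05,         # LoRA dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",               # assumption: not specified in the table
)
```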
Tokenizer
Tokenizer type: AutoTokenizer
Pad token: <|end_of_text|>
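A short sketch of loading the tokenizer and applying the pad token listed above; the repository ID follows the model tree below.

```python
from transformers import AutoTokenizer

# Load the tokenizer and set the pad token used during training.
tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/mistral-12b-cpt-sft")
tokenizer.pad_token = "<|end_of_text|>"
```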
Model tree for ubitech-edg/mistral-12b-cpt-sft
- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned: mistralai/Mistral-Nemo-Instruct-2407
- Adapter: ubitech-edg/mistral-12b-cpt