Mistral 12B: CPT + SFT (Two-Stage LoRA Fine-Tuning)

Model type: Causal Language Model
Base model: mistral-12b-cpt
License: Apache 2.0
Framework: Axolotl


Overview

mistral-12b-cpt-sft combines continual pretraining (CPT) with supervised fine-tuning (SFT).
The first stage continues unsupervised pretraining on domain text to extend the base model's knowledge; the second stage fine-tunes on synthetic QA pairs to improve instruction following.
This two-stage recipe improves coherence, factual recall, and reasoning while retaining the efficiency of compact LoRA adapters.

Training was carried out on Leonardo EuroHPC.


Training Setup

Stage 1 (CPT): Unsupervised domain pretraining
Stage 2 (SFT): Supervised QA fine-tuning
Adapter type: LoRA (8-bit)
Precision: bfloat16
Hardware: 8 × 2 × A100 64 GB
Framework: Axolotl + DeepSpeed + PyTorch 2.5.1 + CUDA 12.1
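
For orientation, here is a minimal sketch of what the "LoRA (8-bit) + bfloat16" setup above corresponds to in plain Transformers/PEFT code, assuming "8-bit" means the frozen base weights are loaded in 8-bit. The base checkpoint id is taken from this card; the actual training was driven by Axolotl with DeepSpeed, so treat this as an approximation rather than the training script itself.

```python
# Approximate stand-alone version of the "LoRA (8-bit), bfloat16" setup.
# Assumption: "mistral-12b-cpt" is resolvable as a local path or hub id.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistral-12b-cpt"  # CPT base checkpoint named above

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # frozen 8-bit base
    torch_dtype=torch.bfloat16,                                 # bf16 compute
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # k-bit training prep (casts, hooks)

# LoRA settings mirror the Hyperparameters section further down.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```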


Datasets

Stage 1 (CPT): arxiv.jsonl, gov.jsonl, news.jsonl, wiki.jsonl (unsupervised domain text)
Stage 2 (SFT): axolotl_deduplicated_synthetic_qa.jsonl (synthetic instruction-response QA pairs)
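
The record schemas of these JSONL files are not documented on this card, so rather than assuming field names, the short Python snippet below simply prints the keys of the first record of each file (file paths are hypothetical and relative to the dataset directory).

```python
# Inspect the first record of each training file listed above.
import json

files = [
    "arxiv.jsonl", "gov.jsonl", "news.jsonl", "wiki.jsonl",  # Stage 1 (CPT)
    "axolotl_deduplicated_synthetic_qa.jsonl",               # Stage 2 (SFT)
]

for path in files:
    with open(path, encoding="utf-8") as f:
        record = json.loads(f.readline())
    print(f"{path}: keys = {sorted(record)}")
```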

Hyperparameters

Sequence length: 2048
Micro batch size: 2
Gradient accumulation steps: 2
Epochs: 1
Learning rate: 0.0002
LR scheduler: cosine
Optimizer: AdamW (8-bit)
Warmup steps: 10
Weight decay: 0.0
LoRA rank (r): 16
LoRA alpha: 32
LoRA dropout: 0.05
LoRA target modules: q_proj, k_proj, v_proj, o_proj
Gradient checkpointing: enabled
Flash attention: enabled
Auto-resume: enabled
Loss watchdog: threshold 5.0, patience 3
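
As a rough cross-reference, the list above maps onto Transformers TrainingArguments as sketched below. Axolotl builds its own trainer from a YAML config, so items like sequence length (handled at tokenization/packing time), flash attention, and the loss watchdog have no direct equivalent here; the output directory is hypothetical.

```python
# Rough TrainingArguments equivalent of the hyperparameter list above
# (for orientation only; the actual run was configured through Axolotl).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-12b-cpt-sft-lora",  # hypothetical output path
    per_device_train_batch_size=2,          # micro batch size
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    weight_decay=0.0,
    optim="adamw_bnb_8bit",                 # AdamW (8-bit)
    bf16=True,
    gradient_checkpointing=True,
)
```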

Tokenizer

Tokenizer type: AutoTokenizer
Pad token: <|end_of_text|>
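
A minimal generation sketch follows, assuming the fine-tuned weights are published under the repository id ubitech-edg/mistral-12b-cpt-sft referenced on this page. If the repository only contains the LoRA adapter, load the base model first and attach the adapter with peft.PeftModel instead. The QA-style prompt is only an illustration; the SFT prompt template is not documented here.

```python
# Minimal inference sketch (assumes the repo hosts the full fine-tuned weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ubitech-edg/mistral-12b-cpt-sft"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|end_of_text|>"  # pad token listed above

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question: What does continual pretraining add to a base model?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```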

