Introducing Dhara-70M: A diffusion language model that achieves 3.8x higher throughput than autoregressive models!
Key findings from our research on optimal architectures for small language models:
• Depth beats width: a 32-layer model outperforms a 12-layer model at the same parameter count
• Best-in-class factuality: 47.5% on TruthfulQA
• 10x training efficiency using WSD (Warmup-Stable-Decay) conversion (see the schedule sketch below)
• Canon layers add only 0.13% parameters but improve reasoning
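For anyone curious what a Warmup-Stable-Decay schedule looks like in practice, here is a minimal sketch. The step counts and learning rates are illustrative placeholders, not Dhara-70M's actual hyperparameters:

```python
# Minimal sketch of a Warmup-Stable-Decay (WSD) learning-rate schedule.
# Step counts and rates are illustrative only, not the settings used for Dhara-70M.
def wsd_lr(step, peak_lr=3e-4, min_lr=3e-5,
           warmup_steps=1_000, stable_steps=8_000, decay_steps=1_000):
    if step < warmup_steps:
        # Linear warmup from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps
    if step < warmup_steps + stable_steps:
        # Hold the peak rate for the bulk of training.
        return peak_lr
    # Linear decay down to the minimum rate at the end of training.
    progress = min(1.0, (step - warmup_steps - stable_steps) / decay_steps)
    return peak_lr - (peak_lr - min_lr) * progress
```

The stable phase is what makes the schedule convenient for conversion: you can branch off a checkpoint from the flat section and run a short decay on new data without restarting training.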
We trained on 1B tokens using the optimal 50-30-20 dataset mix (PDFs + filtered web + educational content), then converted to diffusion with just 100M additional tokens.
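The 50-30-20 mix is simply a set of sampling weights over the three sources. A toy sketch of drawing from such a mixture (the source names are placeholders, not our actual data pipeline):

```python
import random

# Toy sketch of weighted sampling for a 50-30-20 data mix.
# Source names are placeholders, not the actual datasets used for Dhara-70M.
MIX = {"pdfs": 0.5, "filtered_web": 0.3, "educational": 0.2}

def sample_source(rng=random):
    # Draw one source according to the mixture weights.
    return rng.choices(list(MIX), weights=list(MIX.values()), k=1)[0]

counts = {name: 0 for name in MIX}
for _ in range(10_000):
    counts[sample_source()] += 1
print(counts)  # Roughly 5000 / 3000 / 2000 draws per source
```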
Blog: https://huggingface.co/blog/codelion/optimal-model-architecture
Model: codelion/dhara-70m