Dolphy-1.0-GGUF

Dolphy AI's first step into the world of machine learning.

This is a fine-tune of Qwen3 4B 2507 Instruct, a lightweight but capable model that can outperform many larger models. We used Unsloth LoRA fine-tuning on an extensive range of high-quality, diverse datasets: 1.5M examples across the full fine-tuning pipeline.

Those 1.5M examples come from 20 different datasets, each carefully curated to extend Qwen3's behaviour and make Dolphy 1.0 a small model that leads the 4B category.

Compatibility

Because Dolphy 1.0 and Qwen3 2507 Instruct share the same base, Dolphy 1.0 is compatible with Qwen3's extensive tool use, function calling and multilingual capabilities. The tokenizer is unchanged and the model architecture is intact.
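
Since the Qwen3 chat template is preserved, function calling should work through llama.cpp's OpenAI-compatible server. The sketch below is a minimal example, assuming you have started llama-server with this repo (for instance `llama-server -hf Dolphy-AI/Dolphy-1.0-GGUF --jinja`, which listens on port 8080 by default) and installed the `openai` Python package; the `get_weather` tool is a made-up illustration, not something shipped with the model.

```python
# Minimal sketch: Qwen3-style function calling via llama-server's
# OpenAI-compatible endpoint. "get_weather" is a hypothetical tool.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Dolphy-AI/Dolphy-1.0-GGUF",  # llama-server serves a single model; the name is informational
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```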

Example usage:

  • For llama.cpp: llama-cli -hf Dolphy-AI/Dolphy-1.0-GGUF -p "What is a Dolphin?"
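
If you prefer to call the model from Python rather than the command line, here is a minimal sketch using llama-cpp-python (our suggestion, not an officially tested path); `Llama.from_pretrained` downloads the GGUF from this repo on first use.

```python
# Minimal sketch with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Dolphy-AI/Dolphy-1.0-GGUF",
    filename="*Q5_K_M.gguf",  # matches DolphyAI-1.0-Q5_K_M.gguf
    n_ctx=4096,               # context window; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a Dolphin?"}]
)
print(out["choices"][0]["message"]["content"])
```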

You can also find this model in upcoming Dolphy AI releases.

Available Model files:

  • DolphyAI-1.0-Q5_K_M.gguf
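
If you only need the quantized file on disk (for LM Studio, Ollama, or your own llama.cpp build), a short sketch with `huggingface_hub` follows; the cache path in the comment is the library's default and may differ on your machine.

```python
# Minimal sketch: fetch the Q5_K_M file with huggingface_hub
# (pip install huggingface_hub). Returns the local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Dolphy-AI/Dolphy-1.0-GGUF",
    filename="DolphyAI-1.0-Q5_K_M.gguf",
)
print(path)  # e.g. ~/.cache/huggingface/hub/...
```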