🧠 ZeroXClem-Qwen3-4B-CrystalSonic
Overview
ZeroXClem-Qwen3-4B-CrystalSonic is an elite 4B-parameter merged model designed for deep reasoning, long-context tool use, structured code generation, and agentic autonomy. Built with MergeKit's model_stock method, this crystal-clear fusion draws from powerful contributors like MiroThinker, Muscae-UI, Fathom-Search, and Claude-distilled reasoning variants. At its heart lies Qwen3-4B-Pro, making this model both versatile and production-ready.
🔧 Merge Configuration
```yaml
name: ZeroXClem-Qwen3-4B-CrystalSonic
base_model: bunnycore/Qwen3-4B-Pro
dtype: bfloat16
merge_method: model_stock
models:
  - model: miromind-ai/MiroThinker-4B-DPO-v0.2
  - model: prithivMLmods/Muscae-Qwen3-UI-Code-4B
  - model: FractalAIResearch/Fathom-Search-4B
  - model: Liontix/Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-Safetensor
  - model: Qwen/Qwen3-4B-Thinking-2507
tokenizer_source: Qwen/Qwen3-4B-Thinking-2507
```
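To reproduce the merge locally, the config above can be saved as `crystalsonic.yaml` and passed to MergeKit's CLI. A minimal sketch, assuming MergeKit is installed and the contributing models are downloadable from the Hub (the merge pulls five 4B checkpoints, so expect significant disk and RAM usage):

```shell
pip install mergekit

# Run the model_stock merge described in the config above.
# --copy-tokenizer copies the tokenizer_source files into the output dir.
mergekit-yaml crystalsonic.yaml ./ZeroXClem-Qwen3-4B-CrystalSonic \
    --copy-tokenizer --lazy-unpickle
```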
🧬 Models Merged
🧠 miromind-ai/MiroThinker-4B-DPO-v0.2
A cutting-edge agentic model with 64k context, designed for task decomposition, web search, retrieval-augmented reasoning, and long-horizon problem solving. Built on DPO with multilingual capabilities.
💻 prithivMLmods/Muscae-Qwen3-UI-Code-4B
Fine-tuned for structured code generation in HTML, React, Tailwind, Markdown, and YAML. Supports layout-aware reasoning, component hierarchy, and UI prototyping with structured output.
🌍 FractalAIResearch/Fathom-Search-4B
Trained for open-ended, deep information retrieval and autonomous search workflows. Sets new benchmarks in DeepSearch, surpassing GPT-4o + Search on reasoning-heavy QA.
🎭 Liontix/Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-Safetensor
Distilled from Claude Sonnet 4/3.7, this model contributes high-fidelity reasoning and conversational engagement to the CrystalSonic blend.
🚀 Qwen/Qwen3-4B-Thinking-2507
Base for long-context thought generation (262k context length). Improved reasoning across logic, math, alignment, tool use, and creativity.
✨ Features & Highlights
🔹 Advanced Reasoning & DeepSearch — From Fathom and MiroThinker: search-aware, long-horizon, tool-augmented thinking.
🔹 UI & Structured Code Generation — Muscae-UI brings layout-aware reasoning and polished frontend component synthesis.
🔹 Safe & Aligned Dialogues — Claude-style instruction distillation adds emotional nuance and safe defaults.
🔹 Agentic Capabilities — Native support for thinking modes, planning, web search, file parsing, and external tool use.
🔹 Multilingual & Scientific — Handles technical, scientific, and cross-lingual queries with elegance and depth.
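Agentic tool use with Qwen3-style chat templates is typically driven by passing JSON-schema function definitions alongside the messages (via `tokenizer.apply_chat_template(..., tools=tools)`). A minimal sketch of that schema shape; the `get_weather` tool and its fields are hypothetical illustrations, not part of this model:

```python
import json

# Hypothetical tool definition in the JSON-schema style that Qwen3
# chat templates accept; the template serializes it into the prompt.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Inspect what the template would embed in the system prompt.
print(json.dumps(tools[0], indent=2))
```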
🎯 Ideal Use Cases
- 🧑‍💻 Frontend & UI Prototyping
- 🧠 Search-Augmented Autonomous Agents
- 🧬 Scientific Reasoning & Math
- 💬 Conversational AI with Deep Context
- 📑 Tool-Augmented Research Assistants
- 🔍 Structured Information Synthesis
🚀 Quickstart (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen3-4B-CrystalSonic"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Explain how quantum computing could impact AI research."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
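Because the tokenizer comes from Qwen3-4B-Thinking-2507, generations open with a reasoning segment closed by a `</think>` token (id 151668 in the Qwen3 vocabulary, per the upstream model card). A small helper to split a generated id sequence into thinking and final content; the function name and the dummy ids below are illustrative:

```python
def split_thinking(output_ids, think_end_id=151668):
    """Split generated token ids into (thinking_ids, content_ids).

    If the end-of-thinking token never appears, everything is content.
    """
    try:
        # Index just past the LAST occurrence of the end-of-thinking token.
        idx = len(output_ids) - output_ids[::-1].index(think_end_id)
    except ValueError:
        return [], list(output_ids)
    return list(output_ids[:idx]), list(output_ids[idx:])


# Illustrative dummy ids: 1, 2, 3 are "thinking"; 4, 5 are the answer.
thinking, content = split_thinking([1, 2, 3, 151668, 4, 5])
print(thinking, content)  # → [1, 2, 3, 151668] [4, 5]
```

Each half can then be decoded separately with `tokenizer.decode(...)` so the reasoning trace can be logged or hidden independently of the final answer.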
📜 Licenses:
- Apache 2.0: credit to the MiroThinker, Fathom-Search, Muscae-UI, and Qwen3 teams for their excellent models!
💌 Feedback & Contributions
We welcome your prompts, benchmarks, and merge proposals!
🌐 Hugging Face: @ZeroXClem
📬 GitHub Issues & PRs: Let’s build smarter agents together.
ZeroXClem Team | 2025