(Dataset based on Pinkstack/syngen-reasoning-0.6b-dataset)

This is a 4B-parameter LLM that generates synthetic, grounded reasoning traces for existing final model outputs. It is designed primarily for dataset modification, but it can serve other use cases that require reasoning.

For example, this model lets you turn any chat dataset into a reasoning dataset, as if it were generated by DeepSeek R1 or OpenAI's GPT-OSS!


Prompt Format

System Message

<reasoning_style>deepseek_r1</reasoning_style> # Can replace deepseek_r1 with gpt_oss
<system_prompt>Original System Prompt</system_prompt>

Prompt Message

<user>User Message Here</user>
<assistant>Assistant Final Response Here (without reasoning)</assistant>

Output Format

<think>Generated Reasoning</think>
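The format above can be assembled and parsed with a few small helpers. This is a minimal sketch, not part of the model's official tooling: the function names are hypothetical, and only the tag layout comes from this card. Actually running `qingy2024/SynGen-4B-Instruct` (e.g. via `transformers`) is left out; the sketch only handles the string formatting on either side of generation.

```python
def build_system_message(original_system_prompt: str,
                         reasoning_style: str = "deepseek_r1") -> str:
    """Wrap the reasoning style and the original system prompt in SynGen tags.

    reasoning_style may be "deepseek_r1" or "gpt_oss", per the card.
    """
    return (f"<reasoning_style>{reasoning_style}</reasoning_style>\n"
            f"<system_prompt>{original_system_prompt}</system_prompt>")


def build_prompt_message(user_msg: str, assistant_final: str) -> str:
    """Pair the user turn with the assistant's final, reasoning-free answer."""
    return (f"<user>{user_msg}</user>\n"
            f"<assistant>{assistant_final}</assistant>")


def extract_reasoning(model_output: str) -> str:
    """Pull the generated trace out of the <think>...</think> wrapper."""
    start = model_output.index("<think>") + len("<think>")
    end = model_output.index("</think>", start)
    return model_output[start:end]
```

To convert a chat dataset, you would call `build_system_message` and `build_prompt_message` for each (user, assistant) pair, generate with the model, then attach the string returned by `extract_reasoning` to the original turn.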