purbeshmitra committed on
Commit 4195603 · verified · 1 Parent(s): f84c23f

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -24,6 +24,16 @@ metrics:
 
 🔗 Github link: [Training code](https://github.com/purbeshmitra/semantic-soft-bootstrapping)
 
+ Semantic Soft Bootstrapping (SSB) is an RL-free self-distillation framework that improves long-context reasoning in LLMs by training the model on its own hinted reasoning as a teacher. Rather than relying on a separate, larger teacher model or on on-policy gradient updates with sparse rewards, SSB uses the same base model in two semantic roles: a hinted teacher that sees both correct and incorrect solutions and synthesizes a robust explanation, and a hint-free student that learns to reproduce this behavior from the bare question alone. Starting from a raw problem–answer dataset, we construct paired teacher–student conversations and then precompute teacher logits over the answer tokens, enabling efficient offline distillation without any human annotation or online RL loop. The pipeline is depicted below:
+ <p align="center">
+ <img src="assets/ssb.png" alt="SSB pipeline" width="750">
+ </p>
+
+ Our experiments on unsloth/Qwen2.5-3B-Instruct show accuracy gains of 10.6% and 10% on the MATH500 and AIME2024 benchmarks, respectively, compared to GRPO-based RLVR alone. The results are shown below:
+ <p align="center">
+ <img src="assets/ssb_results.png" alt="SSB results" width="750">
+ </p>
+
 ## Citation
 If you find our work useful, consider citing it as:
 ```bibtex
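
The added paragraph describes the distillation step only at a high level, so here is a minimal sketch of what "precompute teacher logits over the answer tokens, then distill offline" could look like in PyTorch. Everything below is illustrative, not code from the linked repository: the function names, the `(batch, n_answer, vocab)` tensor layout, and the HF-style `.logits` access are all assumptions.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def precompute_teacher_logits(model, hinted_ids, answer_start):
    """Teacher role: the same base model, conditioned on the hinted prompt.

    hinted_ids: (1, seq) token ids of the hinted conversation ending in the
    answer tokens; answer_start is the index of the first answer token.
    Assumes an HF-style model whose forward pass returns `.logits`.
    """
    logits = model(hinted_ids).logits  # (1, seq, vocab)
    # Logits at position i predict token i + 1, hence the one-step shift.
    return logits[:, answer_start - 1 : -1, :]  # (1, n_answer, vocab)


def ssb_distill_loss(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student), averaged over the answer tokens.

    Both tensors are (batch, n_answer, vocab); this assumes the answer span
    is tokenized identically under the hinted and hint-free prompts.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # F.kl_div takes log-probabilities as input and probabilities as target.
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    return kl.mean() * temperature**2  # usual soft-distillation scaling
```

In this reading, the teacher pass runs once per example and its logit slices can be cached to disk, so the student's training loop only ever forwards the hint-free prompt and slices the same answer positions before calling the loss; no second model needs to stay in memory and no online RL loop is involved. The temperature-squared factor is the standard soft-distillation scaling, assumed here rather than stated in the README.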