---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- whynlp/gsm8k-aug
library_name: transformers
license: llama3.2
tags: []
pipeline_tag: text-generation
---
# Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning

This repository contains the model presented in the paper [Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.21581).
Latent reasoning is an emerging approach in Transformer language models in which the model reasons in continuous hidden states rather than explicit chain-of-thought text, compressing the length of the reasoning trace. This model applies a post-SFT reinforcement-learning stage that optimizes the latent reasoning length, minimizing the number of latent steps while maintaining accuracy.
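To make the training objective concrete, here is a minimal sketch of a length-penalized reward of the kind such an RL stage might use. The function name, the `alpha` weight, and the normalization by `max_latent_steps` are illustrative assumptions, not the paper's exact reward formulation:

```python
def length_penalized_reward(correct: bool, latent_steps: int,
                            max_latent_steps: int = 6,
                            alpha: float = 0.1) -> float:
    """Illustrative sketch only: reward correct answers while penalizing
    long latent traces. This is NOT the authors' exact reward function."""
    accuracy_term = 1.0 if correct else 0.0
    # Normalized length penalty: more latent steps -> lower reward.
    length_penalty = alpha * (latent_steps / max_latent_steps)
    return accuracy_term - length_penalty
```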
Code: [https://github.com/apning/adaptive-latent-reasoning](https://github.com/apning/adaptive-latent-reasoning)

## Sample Usage

You can load these models with the `automodelforcausallm_from_pretrained_latent` helper from `src.model_creation` in the code repository linked above.
```python
from transformers import AutoTokenizer

from src.model_creation import automodelforcausallm_from_pretrained_latent

repo_id = "Lapisbird/Llama-adaLR-model-latent-6"

# Load the latent-reasoning model via the repo's custom loader,
# and the tokenizer through the standard Hugging Face API.
model = automodelforcausallm_from_pretrained_latent(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```
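A minimal generation sketch follows, assuming the loaded model exposes the standard Hugging Face `generate` interface (the repo's latent class may extend or override it) and that the base model's Llama-3.2 chat template applies. The example question and generation settings are illustrative only:

```python
import torch

# Hedged usage sketch: assumes model.generate() behaves like the standard
# Hugging Face API; consult the repo for latent-specific generation options.
messages = [
    {"role": "user",
     "content": "A train travels 60 miles per hour for 2 hours. How far does it go?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```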