Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning

This repository contains the model presented in the paper Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning.

Latent reasoning is a new approach in Transformer language models that aims to compress reasoning length. This model is trained with a post-SFT reinforcement-learning stage that optimizes the latent reasoning length, minimizing it while maintaining accuracy.
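
As a rough, non-authoritative illustration of this idea (not the paper's exact objective), a length-penalized reward of the following shape trades answer correctness against the number of latent reasoning steps. The function name and the `length_penalty` weight are hypothetical.

```python
# Illustrative sketch only: a length-penalized reward of the kind such
# post-SFT RL might optimize. The helper name and `length_penalty` value
# are hypothetical, not taken from the paper or repository.
def latent_reasoning_reward(is_correct: bool, num_latent_steps: int,
                            length_penalty: float = 0.01) -> float:
    """Reward correct answers, discounted by how many latent steps were used."""
    accuracy_term = 1.0 if is_correct else 0.0
    return accuracy_term - length_penalty * num_latent_steps
```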

Code: https://github.com/apning/adaptive-latent-reasoning

Sample Usage

You can load these models using the automodelforcausallm_from_pretrained_latent function from src.model_creation.

```python
from transformers import AutoTokenizer
from src.model_creation import automodelforcausallm_from_pretrained_latent

repo_id = "Lapisbird/Llama-adaLR-model-latent-6"

model = automodelforcausallm_from_pretrained_latent(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```
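
After loading, the snippet below shows generic Hugging Face generation with the returned model and tokenizer. This is a sketch: the linked repository may provide its own inference helpers for latent reasoning, so consult the code there for the intended inference procedure.

```python
# Generic Hugging Face generation example; treat this as a sketch, since the
# repository may expose dedicated latent-reasoning inference utilities.
prompt = "What is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```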

Model size: 1B parameters · Tensor type: F32 (Safetensors)