Define "do_sample" explicitly in generation_config.json
#6 opened 2 days ago
by
Corellios
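A minimal sketch of what setting `do_sample` explicitly could look like with the `transformers` `GenerationConfig` API; the repo id "your-org/your-model" and output directory are placeholders, not the actual model in this discussion:

```python
from transformers import GenerationConfig

# Load the existing generation config ("your-org/your-model" is a placeholder id).
gen_config = GenerationConfig.from_pretrained("your-org/your-model")

# Declare do_sample explicitly so behavior does not depend on library defaults;
# sampling knobs like temperature/top_p only take effect when do_sample=True.
gen_config.do_sample = True

# Write the updated generation_config.json to a local directory.
gen_config.save_pretrained("./updated-model")
```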
Update config.json
#5 opened 2 days ago by Corellios
Update inference examples to use the correct chat template
#4 opened 3 days ago by mario-sanz
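For context, a minimal sketch of rendering a prompt through the model's own chat template rather than hand-building it, using the standard `transformers` `apply_chat_template` call; the model id and message content are placeholders:

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual model.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

messages = [
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the chat template bundled with the tokenizer;
# add_generation_prompt=True appends the assistant-turn marker.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```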
Endless reasoning loop when serving the model with vLLM
3 comments · #2 opened 6 days ago by sliuau
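One hedged mitigation sketch, assuming the vLLM offline Python API: capping `max_tokens` bounds generation so a runaway reasoning loop cannot continue indefinitely. The model id and sampling values are placeholders, and this is a workaround sketch, not a confirmed fix for the issue above:

```python
from vllm import LLM, SamplingParams

# Placeholder repo id; substitute the actual model.
llm = LLM(model="your-org/your-model")

# max_tokens hard-limits the completion length; if the model's end-of-turn
# stop strings are known, passing them via `stop=` would also cut loops short.
params = SamplingParams(temperature=0.6, max_tokens=2048)

outputs = llm.generate(["Why is the sky blue?"], params)
print(outputs[0].outputs[0].text)
```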