nielsr (HF Staff) committed
Commit 2a53da4 · verified · 1 parent: e96b530

Improve model card: Update paper/code links and BibTeX citation


This PR enhances the model card for `ByteDance/Ouro-1.4B-Thinking` by:

* Updating the "Paper" link under "Project Links" to point directly to the Hugging Face paper page ([Scaling Latent Reasoning via Looped Language Models](https://huggingface.co/papers/2510.25741)) for improved discoverability on the Hub.
* Adding an explicit "Code" link under "Project Links" to the official GitHub repository (`https://github.com/Ouro-LLM/Ouro`), which is linked from the project page.
* Updating the BibTeX citation to include the full list of authors and a direct link to the arXiv paper for better attribution and accuracy.

These changes make it easier for users to reach the paper and code, and to cite the work correctly.

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
+library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
 tags:
 - looped-language-model
 - reasoning
@@ -130,11 +130,12 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ## Citation
 
 ```bibtex
-@article{ouro2025,
+@article{zhu2025scaling,
 title={Scaling Latent Reasoning via Looped Language Models},
-author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Wei, Boyi and Yin, Fan and Wen, Zixin and Xing, He and others},
-journal={arXiv preprint},
-year={2025}
+author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Boyi Wei and Zixin Wen and Fan Yin and He Xing and Lu Li and Jiajun Shi and Kaijing Ma and Shanda Li and Taylor Kergan and Andrew Smith and Xingwei Qu and Mude Hui and Bohong Wu and Qiyang Min and Hongzhi Huang and Xun Zhou and Wei Ye and Jiaheng Liu and Jian Yang and Yunfeng Shi and Chenghua Lin and Enduo Zhao and Tianle Cai and Ge Zhang and Wenhao Huang and Yoshua Bengio and Jason Eshraghian},
+journal={arXiv preprint arXiv:2510.25741},
+year={2025},
+url={https://arxiv.org/abs/2510.25741},
 }
 ```
 
@@ -144,9 +145,8 @@ This model is licensed under Apache-2.0. See the LICENSE file for details.
 
 ## Project Links
 
-- **Paper**: [Scaling Latent Reasoning via Looped Language Models](https://ouro-llm.github.io)
+- **Paper**: [Scaling Latent Reasoning via Looped Language Models](https://huggingface.co/papers/2510.25741)
+- **Code**: [https://github.com/Ouro-LLM/Ouro](https://github.com/Ouro-LLM/Ouro)
 - **Project Page**: [https://ouro-llm.github.io](https://ouro-llm.github.io)
 
----
-
-
+---
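
For reference, after the metadata hunk above, the top of the README front matter reads as follows. This shows only the lines visible in the diff; the actual front matter may contain additional tags beyond the hunk's context window:

```yaml
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- looped-language-model
- reasoning
```

Placing `library_name: transformers` in the front matter is what enables the Hub's "Use this model" code snippet and the `transformers` library badge on the model page.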