Update metadata and paper link for LongCat-Flash-Omni
This PR improves the model card for the LongCat-Flash-Omni model by:
- Updating the `library_name` to `transformers` to reflect its compatibility with the 🤗 Transformers library, enabling automated code snippets.
- Changing the `pipeline_tag` from `text-generation` to `any-to-any` to accurately represent its comprehensive omni-modal capabilities (text, image, video, and audio understanding and generation).
- Updating the paper link to point to the official Hugging Face paper page: https://huggingface.co/papers/2511.00279.
These changes enhance the model's discoverability and provide more accurate metadata for users on the Hugging Face Hub.
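Because the Hub reads these fields from the YAML front matter at the top of README.md, the PR's metadata changes can be sanity-checked programmatically. The sketch below is illustrative only: `parse_front_matter` is a hypothetical minimal helper (real Hub tooling parses the block with a full YAML library), and the embedded `README` string reproduces just the front matter from this diff.

```python
# Sketch: validate the metadata fields this PR sets in the README
# front matter. parse_front_matter is a minimal, hypothetical helper;
# production code should use a real YAML parser instead.

def parse_front_matter(readme_text: str) -> dict:
    """Extract simple `key: value` pairs from the leading --- block."""
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of front matter
        # skip list items / indented lines; keep top-level key: value pairs
        if ":" in line and not line.startswith(("-", " ")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Front matter as it reads after this PR (copied from the diff below).
README = """---
library_name: transformers
license: mit
pipeline_tag: any-to-any
tags:
- transformers
---
"""

meta = parse_front_matter(README)
assert meta["library_name"] == "transformers"
assert meta["pipeline_tag"] == "any-to-any"
assert meta["license"] == "mit"
print(meta)
```

A check like this would catch a regression such as `pipeline_tag` reverting to `text-generation` before the card is pushed.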
README.md CHANGED

```diff
@@ -1,7 +1,7 @@
 ---
+library_name: transformers
 license: mit
-
-pipeline_tag: text-generation
+pipeline_tag: any-to-any
 tags:
 - transformers
 ---
@@ -39,7 +39,7 @@ tags:
 </div>
 
 <p align="center">
-  <a href="https://
+  <a href="https://huggingface.co/papers/2511.00279"><b>Tech Report</b> 📄</a>
 </p>
 
 ## Model Introduction
@@ -77,7 +77,7 @@ Inspired by the concept of modality decoupling, we propose a Modality-Decoupled
 We provide a comprehensive overview of the training methodology and data strategies behind LongCat-Flash-Omni, and release the model to accelerate future research and innovation in omni-modal intelligence.
 
 
-For more detail, please refer to the comprehensive [***LongCat-Flash-Omni Technical Report***](https://
+For more detail, please refer to the comprehensive [***LongCat-Flash-Omni Technical Report***](https://huggingface.co/papers/2511.00279).
 
 ## Evaluation Results
 
@@ -362,4 +362,4 @@ We kindly encourage citation of our work if you find it useful.
 Please contact us at <a href="mailto:[email protected]">[email protected]</a> or join our WeChat Group if you have any questions.
 
 #### WeChat Group
-<img src=https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Omni/main/figures/wechat_qrcode.jpeg width="200px">
+<img src=https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Omni/main/figures/wechat_qrcode.jpeg width="200px">
```