---
license: apache-2.0
datasets:
- TIGER-Lab/VisCode-Multi-679K
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
tags:
- code
---

# VisCoder2-32B

[Project Page](https://tiger-ai-lab.github.io/VisCoder2) | [Paper](https://arxiv.org/abs/2510.23642) | [GitHub](https://github.com/TIGER-AI-Lab/VisCoder2) | [VisCode2](https://hf.co/collections/TIGER-Lab/viscoder2)

**VisCoder2-32B** is a multi-language visualization coding model trained for **executable code generation, rendering, and iterative self-debugging**.

---

## Model Description

**VisCoder2-32B** is trained on the **VisCode-Multi-679K** dataset, a large-scale instruction-tuning dataset for executable visualization tasks across **12 programming languages**. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces visual outputs that stay semantically aligned with the natural-language instruction.
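
Below is a minimal inference sketch using Hugging Face Transformers. The repo id, chat template, and generation settings are assumptions based on the Qwen2.5-Coder base model rather than a documented reference pipeline; see the GitHub repository for the official evaluation code.

```python
# Minimal sketch: query VisCoder2-32B for visualization code.
# Assumes the standard Qwen2.5-style chat template inherited from the base model;
# the repo id and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-32B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Write matplotlib code that draws a bar chart of [3, 7, 5] with labels A, B, C."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```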

---

## Main Results on VisPlotBench

We evaluate VisCoder2-32B on [**VisPlotBench**](https://huggingface.co/datasets/TIGER-Lab/VisPlotBench), which includes 888 executable visualization tasks spanning 8 languages and supports both standard generation and multi-turn self-debugging.

![main_results](main_results.png)

> **VisCoder2-32B** shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
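
To make the self-debug setting concrete, the sketch below shows a simplified execute-and-retry loop: run the generated script, and if it fails, feed the traceback back to the model for a revised attempt. The `generate_code` callable is a hypothetical wrapper around an inference call like the one above; the round count and feedback prompt are illustrative, not the exact VisPlotBench protocol.

```python
# Simplified self-debug loop (illustrative; not the exact benchmark protocol).
import subprocess
import tempfile

def self_debug(task_prompt, generate_code, max_rounds=3):
    """generate_code(messages) -> str is a hypothetical model-inference wrapper."""
    messages = [{"role": "user", "content": task_prompt}]
    code = generate_code(messages)
    for _ in range(max_rounds):
        # Write the candidate script to a temp file and try to execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            script = f.name
        result = subprocess.run(["python", script], capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return code  # executed successfully; rendering checks would go here
        # Feed the error back and ask for a corrected version.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": f"The code failed with:\n{result.stderr}\nPlease return a fixed version."},
        ]
        code = generate_code(messages)
    return code  # best effort after max_rounds
```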

---

## Training Details

- **Base model**: Qwen2.5-Coder-32B-Instruct
- **Framework**: [ms-swift](https://github.com/modelscope/swift)
- **Tuning method**: Full-parameter supervised fine-tuning (SFT)
- **Dataset**: [VisCode-Multi-679K](https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K)
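
For reference, a rough sketch of how such a run could be launched through ms-swift's Python entry point is shown below. The argument names follow recent ms-swift releases, and the hyperparameters are placeholders rather than the settings used for this model; consult the ms-swift documentation for the exact interface of your installed version.

```python
# Hypothetical ms-swift SFT launch (placeholder hyperparameters, not the actual recipe).
from swift.llm import TrainArguments, sft_main

sft_main(TrainArguments(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    train_type="full",                       # full-parameter SFT
    dataset=["TIGER-Lab/VisCode-Multi-679K"],
    num_train_epochs=1,                      # placeholder
    learning_rate=1e-5,                      # placeholder
    output_dir="viscoder2-32b-sft",
))
```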

---

## Citation

If you use VisCoder2-32B or related datasets in your research, please cite:

```bibtex
@misc{ni2025viscoder2buildingmultilanguagevisualization,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Yuansheng Ni and Songcheng Cai and Xiangchao Chen and Jiarong Liang and Zhiheng Lyu and Jiaqi Deng and Kai Zou and Ping Nie and Fei Yuan and Xiang Yue and Wenhu Chen},
  year={2025},
  eprint={2510.23642},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2510.23642}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}
```

For evaluation scripts and more information, see our [GitHub repository](https://github.com/TIGER-AI-Lab/VisCoder2).