shizhediao2 committed on
Commit
e3d777f
·
1 Parent(s): fb56db4

update README

Files changed (1)
  1. README.md +55 -20
README.md CHANGED
@@ -1,29 +1,59 @@
- # Model Overview
-
- ### Description:
- ToolOrchestrator-8B is an 8B open-weight model for complex agentic tasks such as Humanity's Last Exam, Tau²-Bench, and FRAMES.
- Given a question-answering task, the model first interprets the question, reasons through it, invokes tools when necessary, and finally generates the answer.
- It is trained using the Group Relative Policy Optimization (GRPO) algorithm on a diverse and comprehensive set of datasets.
- Our model has achieved impressive results, outperforming DeepSeek's model by a large margin on a broad range of tasks including Humanity's Last Exam, Tau²-Bench, and FRAMES.

  This model is for research and development only.
- ### License/Terms of Use
- [NVIDIA License](LICENSE)

- ### Deployment Geography:
- Global <br>

- ## Model Architecture:
- **Architecture Type:** Dense decoder-only Transformer model <br>
- **Network Architecture:** [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) <br>
- This model was developed based on Qwen3-8B. <br>
- **Number of model parameters:** 8B <br>

- ## Model Version(s):
  1.0 <br>

  ### Training Dataset:
@@ -35,16 +65,21 @@ Global <br>
- ## Ethical Considerations:
  NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. <br>

  Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://app.intigriti.com/programs/nvidia/nvidiavdp/detail).

- ## Citation
- If you find this model useful, please cite:
  ```
  @article{toolorchestra,
- title={ToolOrchestrator-8B: An 8B Open-Weight Model for Complex Agentic Tasks},
  author={Su, Hongjin and Diao, Shizhe and Lu, Ximing and Liu, Mingjie and Xu, Jiacheng and Dong, Xin and Fu, Yonggan and Belcak, Peter and Ye, Hanrong and Yin, Hongxu and Dong, Yi and Bakhturina, Evelina and Yu, Tao and Choi, Yejin and Kautz, Jan and Molchanov, Pavlo}
  journal={arXiv preprint arXiv:XXXX},
  year={2025}
 
+ # ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration
+
+ [![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)](https://arxiv.org/abs/xxx)
+ [![Code](https://img.shields.io/badge/GitHub-Link-orange)](https://github.com/NVlabs/ToolOrchestra/)
+ [![Model](https://img.shields.io/badge/HuggingFace-Model-green)](https://huggingface.co/nvidia/Orchestrator-8B)
+ [![Data](https://img.shields.io/badge/HuggingFace-Data-blue)](https://huggingface.co/datasets/nvidia/ToolScale)
+ [![Website](https://img.shields.io/badge/Web-Page-purple)](https://research.nvidia.com/labs/lpr/ToolOrchestra/)
+
+ ### Description
+
+ Orchestrator-8B is a state-of-the-art 8B-parameter orchestration model designed to solve complex, multi-turn agentic tasks by coordinating a diverse set of expert models and tools.
+
+ <p align="center">
+ <img src="./assets/method.png" width="100%"/>
+ </p>
+
+ On the Humanity's Last Exam (HLE) benchmark, Orchestrator-8B achieves a score of 37.1%, outperforming GPT-5 (35.1%) while being approximately 2.5x more efficient.
+
+ <p align="center">
+ <img src="./assets/HLE_benchmark.png" width="80%"/>
+ </p>

  This model is for research and development only.
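
A minimal sketch of the orchestration loop described above: the orchestrator repeatedly emits either a tool call or a final answer. All names here (`orchestrate`, `toy_policy`, `calc`) are hypothetical stand-ins for illustration, not the actual ToolOrchestra API.

```python
def orchestrate(question, policy, tools, max_turns=8):
    """Run the policy until it returns a final answer or the turn budget ends."""
    history = [("user", question)]
    for _ in range(max_turns):
        # The policy emits either {"tool": ..., "args": ...} or {"final": ...}.
        action = policy(history)
        if "final" in action:
            return action["final"]
        result = tools[action["tool"]](**action["args"])
        history.append(("tool", result))
    return None  # turn budget exhausted without an answer

def calc(expr):
    """Stand-in for a code-execution tool: evaluate an arithmetic string."""
    return str(eval(expr, {"__builtins__": {}}))

def toy_policy(history):
    """Scripted stand-in for the policy model: call the tool once, then answer."""
    if history[-1][0] == "user":
        return {"tool": "calc", "args": {"expr": "6 * 7"}}
    return {"final": history[-1][1]}

answer = orchestrate("What is 6 * 7?", toy_policy, {"calc": calc})
# → "42"
```

In the real system the policy is the 8B model and the tool set includes search, code execution, and other LLMs; the loop structure is the same.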

+ ### Key Features
+
+ - Intelligent Orchestration: manages heterogeneous toolsets, including basic tools (search, code execution) and other LLMs (both specialized and generalist).
+ - Multi-Objective RL Training: trained via Group Relative Policy Optimization (GRPO) with a novel reward function that jointly optimizes for accuracy, latency/cost, and adherence to user preferences.
+ - Efficiency: delivers higher accuracy at significantly lower computational cost than monolithic frontier models.
+ - Robust Generalization: generalizes to tools and pricing configurations unseen during training.

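The multi-objective reward and GRPO's group-relative normalization can be sketched as follows. The scalarization weights and the exact functional form are assumptions for illustration only, not the reward actually used to train Orchestrator-8B.

```python
import statistics

def reward(correct, cost_usd, pref_score, w_cost=0.2, w_pref=0.3):
    """Illustrative scalarized reward: accuracy minus a cost penalty,
    plus a preference-adherence bonus. Weights are made up."""
    return float(correct) - w_cost * cost_usd + w_pref * pref_score

def grpo_advantages(rewards):
    """Group-relative advantages, the core of GRPO: normalize each rollout's
    reward by the mean and standard deviation of its sampling group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mu) / sigma for r in rewards]

# Four hypothetical rollouts for the same prompt:
# (correct?, dollar cost, preference-adherence score)
rs = [reward(True, 0.05, 0.9),   # right, cheap, on-preference
      reward(True, 0.60, 0.9),   # right but expensive
      reward(False, 0.05, 0.9),  # wrong but cheap
      reward(False, 0.60, 0.2)]  # wrong, expensive, off-preference
advs = grpo_advantages(rs)  # cheap correct rollout gets the largest advantage
```

Under such a reward, two correct rollouts are no longer interchangeable: the one that spent less money and latency is pushed up relative to its group, which is what drives the efficiency results below.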
+ ### Benchmark
+
+ On Humanity's Last Exam, Orchestrator-8B achieves 37.1%, surpassing GPT-5 (35.1%) at only 30% of the monetary cost and 2.5x the speed. On FRAMES and τ²-Bench, Orchestrator-8B consistently outperforms strong monolithic systems, demonstrating versatile reasoning and robust tool orchestration.
+
+ <p align="center">
+ <img src="./assets/results.png" width="100%"/>
+ </p>
+
+ Orchestrator-8B consistently outperforms GPT-5, Claude Opus 4.1, and Qwen3-235B-A22B on HLE at substantially lower cost.
+
+ <p align="center">
+ <img src="./assets/cost-performance.png" width="100%"/>
+ </p>

+ ### Model Details
+
+ - Developed by: NVIDIA & University of Hong Kong
+ - Model Type: Decoder-only Transformer
+ - Base Model: [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
+ - Parameters: 8B
+ - Language(s): English
+ - License: NVIDIA License
+
+ ### Model Version(s):
  1.0 <br>

  ### Training Dataset:

+ ### Ethical Considerations:
  NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. <br>

  Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://app.intigriti.com/programs/nvidia/nvidiavdp/detail).

+ ### License/Terms of Use
+ [NVIDIA License](LICENSE)
+
+ ### Citation
+ If you find this model useful, please cite our [paper](https://arxiv.org/abs/xxx):
  ```
  @article{toolorchestra,
+ title={ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration},
  author={Su, Hongjin and Diao, Shizhe and Lu, Ximing and Liu, Mingjie and Xu, Jiacheng and Dong, Xin and Fu, Yonggan and Belcak, Peter and Ye, Hanrong and Yin, Hongxu and Dong, Yi and Bakhturina, Evelina and Yu, Tao and Choi, Yejin and Kautz, Jan and Molchanov, Pavlo},
  journal={arXiv preprint arXiv:XXXX},
  year={2025}