FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling
Model Overview
The FunReason-MT-4B model is a high-performance Large Language Model (LLM) fine-tuned for complex, multi-turn Function Calling (FC) and agentic tool-use tasks. Built upon the Qwen3-4B-Instruct-2507 base model, it was trained using the novel FunReason-MT data synthesis framework.
FunReason-MT-4B achieves superior results on the Berkeley Function-Calling Leaderboard (BFCLv3) Multi-Turn and Agentic Evaluation benchmarks. This performance demonstrates that high-quality synthesized data can effectively overcome the complexity barrier in multi-turn FC data generation.
- Base Model: Qwen3-4B-Instruct-2507
- Size: 4 Billion parameters
- Key Capability: Advanced Multi-Turn Function Calling and Agentic Tool-Use
Full usage instructions for the model are provided in our BFCL PR.
Evaluation Results
The model was rigorously evaluated on the Berkeley Function-Calling Leaderboard (BFCL).
BFCLv3 Multi-Turn and Single-Turn Performance
| Model (4B–235B) | Multi-Turn (Overall) | Single-Turn (Overall) |
|---|---|---|
| Qwen3-4B-Instruct (Base) | 15.75 | 78.19 |
| Qwen3-4B + FunReason-MT (RL) | 56.50 | 85.02 |
| Claude-Sonnet-4-20250514 | 54.75 | 84.72 |
| DeepSeek-R1-0528 | 44.50 | 78.22 |
| GPT-4o-2024-11-20 | 42.50 | 77.21 |
BFCL Agentic Evaluation (BFCLv4 OOD)
The FunReason-MT trained model leads in out-of-distribution agentic tasks (Web Search and Memory).
| Model | BFCLv4 Overall Score |
|---|---|
| FunReason-MT-4B (RL) | 15.10 |
| ToolACE-2-8B | 14.83 |
| BitAgent-8B | 8.24 |
| XLAM-2-3b-fc-r | 7.42 |
| watt-tool-8B | 6.30 |
Training Data and Framework
FunReason-MT Dataset
The training set comprises 16,000 high-quality multi-turn samples. This dataset was generated using the three-phase FunReason-MT data synthesis framework, which focuses on generating complex trajectories that require (a minimal sketch of the pipeline follows the list):
- Environment-API Graph Interactions for collecting goal-directed, correct execution traces.
- Advanced Tool-Query Synthesis for creating logical-jump queries that abstract multi-step actions.
- Guided Iterative Chain for enforcing reliable, consistent Chain-of-Thought (CoT) generation using self-correction.
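For intuition, here is a minimal, hypothetical sketch of how the three phases could fit together. All names (`collect_execution_trace`, `synthesize_logical_jump_query`, `guided_iterative_chain`, `Trajectory`) are illustrative assumptions; the card does not publish the framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """One synthesized multi-turn training sample (illustrative)."""
    query: str
    trace: list[str] = field(default_factory=list)  # verified API-call order
    cot: str = ""                                   # final chain-of-thought

def collect_execution_trace(api_graph: dict[str, list[str]], goal: str) -> list[str]:
    # Phase 1: walk the Environment-API Graph from the goal API back through
    # its dependencies to obtain a correct, goal-directed execution order.
    order, stack = [], [goal]
    while stack:
        api = stack.pop()
        order.append(api)
        stack.extend(api_graph.get(api, []))
    return list(reversed(order))

def synthesize_logical_jump_query(trace: list[str]) -> str:
    # Phase 2: abstract the multi-step trace into one high-level query that
    # omits the intermediate steps (the "logical jump").
    return f"Please get me the result of {trace[-1]}."

def guided_iterative_chain(query: str, trace: list[str], max_rounds: int = 3) -> str:
    # Phase 3: draft a CoT and self-correct until it is consistent with the
    # verified trace (the consistency check is stubbed here).
    cot = f"To answer '{query}', call {' -> '.join(trace)} in order."
    for _ in range(max_rounds):
        if all(api in cot for api in trace):
            break
        cot += " (revised)"
    return cot

# Toy dependency graph: get_weather needs geocode, which needs search_city.
graph = {"get_weather": ["geocode"], "geocode": ["search_city"]}
trace = collect_execution_trace(graph, "get_weather")
query = synthesize_logical_jump_query(trace)
sample = Trajectory(query=query, trace=trace, cot=guided_iterative_chain(query, trace))
print(sample)
```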
Training Details
The model was fine-tuned with function calling data from APIGen and the FunReason-MT dataset.
- Training Libraries: LLaMA-Factory and verl.
- Methodology: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL).
- Hardware: Conducted on 32 NVIDIA H20 GPUs.
Usage
Below is a code snippet of the BFCL handler for FunReason-MT.
```python
# `time` and `Any` come from the standard library; `override` is available
# from `typing` on Python 3.12+ (or `typing_extensions` on older versions).
# OSSHandler is provided by the BFCL evaluation codebase; its exact import
# path may differ across BFCL versions.
import time
from typing import Any

from typing_extensions import override


class FunReasonMTHandler(OSSHandler):
    def __init__(self, model_name, temperature) -> None:
        super().__init__(model_name, temperature)
        self.is_fc_model = False  # prompting-style model, not a native FC model
        self.top_p = 0.7
        self.max_output_len = 20000
        self.max_context_length = 247000
    @override
    def _query_prompting(self, inference_data: dict):
        # We use the OpenAI Completions API with a fully formatted prompt.
        function: list[dict] = inference_data["function"]
        message: list[dict] = inference_data["message"]

        formatted_prompt: str = self._format_prompt(message, function)
        inference_data["inference_input_log"] = {"formatted_prompt": formatted_prompt}

        # Tokenize the formatted prompt to get the input token count.
        input_token_count = len(self.tokenizer.tokenize(formatted_prompt))

        # Determine how many output tokens to request, capped at max_output_len.
        if self.max_context_length < input_token_count + 2:
            # The prompt already fills the context window; request 1000 tokens
            # anyway, since the API call will surface an error regardless.
            leftover_tokens_count = 1000
        else:
            leftover_tokens_count = min(
                self.max_output_len,
                self.max_context_length - input_token_count - 2,
            )
        extra_body = {}
        if hasattr(self, "stop_token_ids"):
            extra_body["stop_token_ids"] = self.stop_token_ids
        if hasattr(self, "skip_special_tokens"):
            extra_body["skip_special_tokens"] = self.skip_special_tokens

        start_time = time.time()
        api_response = self.client.completions.create(
            model=self.model_path_or_id,
            temperature=self.temperature,
            top_p=self.top_p,
            prompt=formatted_prompt,
            max_tokens=leftover_tokens_count,
            timeout=72000,  # generous timeout to avoid spurious timeout errors
            # Only pass extra_body when it is non-empty.
            **({"extra_body": extra_body} if extra_body else {}),
        )
        end_time = time.time()
        return api_response, end_time - start_time
    def _process_tool_response(self, tool_response_lst):
        # Tool responses are passed through unchanged; merging them into a
        # single "tool" message happens later, in _format_prompt.
        return list(tool_response_lst)
    @override
    def _format_prompt(self, messages, function):
        # Merge runs of consecutive "tool" messages into a single tool turn so
        # the chat template sees one tool message per batch of tool results.
        new_messages = []
        tool_content = []
        for message in messages:
            if message["role"] != "tool":
                if tool_content:
                    new_messages.append({"role": "tool", "content": str(tool_content)})
                    tool_content = []
                new_messages.append(message)
            else:
                tool_content.append(message["content"])
        if tool_content:
            new_messages.append({"role": "tool", "content": str(tool_content)})

        formatted_prompt = self.tokenizer.apply_chat_template(
            new_messages, tokenize=False, add_generation_prompt=True
        )
        # Force the model to open a reasoning block immediately.
        formatted_prompt += "<think>"
        return formatted_prompt
    @override
    def _parse_query_response_prompting(self, api_response: Any) -> dict:
        model_response = api_response.choices[0].text
        reasoning_content = ""
        cleaned_response = model_response

        if "</think>" in model_response:
            # Split the reasoning block from the final answer.
            parts = model_response.split("</think>")
            reasoning_content = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
            cleaned_response = parts[-1].lstrip("\n")
        else:
            # Generation was likely truncated before the closing </think> tag.
            cleaned_response = "response outputs too long or no </think> in response."

        response_data = {
            "model_responses": cleaned_response,
            "model_responses_message_for_chat_history": {
                "role": "assistant",
                "content": cleaned_response,
            },
            "input_token": api_response.usage.prompt_tokens,
            "output_token": api_response.usage.completion_tokens,
        }
        # Attach the reasoning content to the assistant message for the next
        # turn only when it is present.
        if reasoning_content:
            response_data["reasoning_content"] = reasoning_content
            response_data["model_responses_message_for_chat_history"][
                "reasoning_content"
            ] = reasoning_content
        return response_data
```
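As a standalone illustration (not part of the BFCL API), the snippet below reproduces the `<think>` parsing that `_parse_query_response_prompting` applies to a raw completion; the example strings are hypothetical.

```python
# Hypothetical raw completion: a reasoning block followed by a tool call.
raw = (
    "<think>\nThe user wants the weather, so call get_weather.\n</think>\n"
    '[get_weather(city="Paris")]'
)

reasoning, answer = "", raw
if "</think>" in raw:
    parts = raw.split("</think>")
    # Keep only the text between <think> and </think> as reasoning.
    reasoning = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
    # Everything after the last </think> is the final answer.
    answer = parts[-1].lstrip("\n")

print(reasoning)  # The user wants the weather, so call get_weather.
print(answer)     # [get_weather(city="Paris")]
```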
Related Projects and Citation
This work is part of AWorld, an open-source project from InclusionAI.
If you use FunReason-MT in your research, please cite the technical report:
```bibtex
@article{xu2025funreason,
  title={FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling},
  author={Zengzhuang Xu and Bingguang Hao and Zechuan Wang and Yuntao Wen and Maolin Wang and Yang Liu and Long Chen and Dong Wang and Yicheng Chen and Cunyin Peng and Chenyi Zhuang and Jinjie Gu and Xiangyu Zhao and Shi Gu},
  journal={arXiv preprint arXiv:2510.24645},
  year={2025}
}
```
Contact
For inquiries, please contact: