shunxing1234 committed
Commit eb8e19a · verified · 1 Parent(s): 8bc127e

Update README.md

Files changed (1): README.md +83 -3

README.md CHANGED
---
license: apache-2.0
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/KIYEa1c_WJEWPpeS0L_k1.png" width="100%" alt="Kwaipilot" />
</div>
<hr>

> This repository contains an FP8 quantized version of the [Kwaipilot/KAT-Dev-72B-Exp](https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp) model.
# News

🔥 We’re thrilled to announce the release of **KAT-Dev-72B-Exp**, our latest and most powerful model yet!

🔥 You can now try our **strongest** proprietary coder model, **KAT-Coder**, directly on the [**StreamLake**](https://www.streamlake.ai/product/kat-coder) platform **for free**.
# Highlights

**KAT-Dev-72B-Exp** is an open-source 72B-parameter model for software engineering tasks.

On SWE-Bench Verified, **KAT-Dev-72B-Exp** achieves **74.6%** accuracy ⚡ — **when evaluated strictly with the SWE-agent scaffold**.

**KAT-Dev-72B-Exp** is the experimental reinforcement-learning version of the KAT-Coder model. Through this open-source release, we aim to reveal the technical innovations behind KAT-Coder’s large-scale RL to developers and researchers.

![Benchmark results](https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/-1nx5HYc-wTjUFNbf-GfO.png)
# Introduction

We rewrote the attention kernel and redesigned the training engine around shared-prefix trajectories to achieve highly efficient RL training, especially for scaffolds that leverage context management.
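The shared-prefix idea can be sketched in a few lines. This is an illustration only, not the actual kernel or training engine; the token-id rollouts and helper names are hypothetical. The point is that agentic rollouts branching from one prompt share a long prefix, so a shared KV cache needs to encode that prefix only once:

```python
# Illustrative sketch: find the longest common token prefix of a rollout
# group and count the redundant prefix tokens a shared KV cache avoids.
from typing import List


def longest_common_prefix(seqs: List[List[int]]) -> List[int]:
    """Longest token prefix shared by every sequence in the group."""
    prefix = []
    for tokens in zip(*seqs):  # zip stops at the shortest sequence
        if all(t == tokens[0] for t in tokens):
            prefix.append(tokens[0])
        else:
            break
    return prefix


def saved_prefix_tokens(seqs: List[List[int]]) -> int:
    """Tokens saved by encoding the shared prefix once instead of per rollout."""
    if len(seqs) < 2:
        return 0
    return len(longest_common_prefix(seqs)) * (len(seqs) - 1)


# Example: 3 rollouts branching off a shared 4-token prompt prefix.
rollouts = [
    [1, 2, 3, 4, 10, 11],
    [1, 2, 3, 4, 20],
    [1, 2, 3, 4, 30, 31, 32],
]
print(saved_prefix_tokens(rollouts))  # 8 prefix tokens of compute avoided
```

With many rollouts per prompt and long contexts, this duplicated-prefix compute dominates, which is why a prefix-aware attention kernel pays off.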
Furthermore, to prevent the exploration collapse observed in RL training, we reshaped the advantage distribution based on pass rates: amplifying the advantage scale of highly exploratory groups while reducing that of low-exploration ones.
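The pass-rate reshaping can be sketched as follows. This is a minimal illustration, not the released training code: the binary pass rewards and the `4 * p * (1 - p)` scaling (which peaks for mixed-outcome, exploratory groups and vanishes for all-pass or all-fail groups) are assumptions standing in for the undisclosed reshaping function:

```python
# Hedged sketch: group-normalized advantages, rescaled by how exploratory
# the group is (measured via its pass rate p).
from statistics import mean, pstdev


def group_advantages(rewards, eps=1e-6):
    """Group-relative advantages: normalize rewards within one rollout group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]


def reshaped_advantages(rewards):
    """Scale a group's advantages by an exploration weight based on pass rate."""
    pass_rate = mean(1.0 if r > 0 else 0.0 for r in rewards)
    scale = 4.0 * pass_rate * (1.0 - pass_rate)  # 1.0 at p=0.5, 0.0 at p in {0, 1}
    return [scale * a for a in group_advantages(rewards)]


mixed = [1.0, 0.0, 1.0, 0.0]  # exploratory group: full advantage scale
easy = [1.0, 1.0, 1.0, 1.0]   # saturated group: scale 0.0
print(reshaped_advantages(easy))  # all zeros: no gradient from saturated groups
```

Down-weighting saturated groups this way keeps the policy gradient dominated by groups that still have something to explore.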
# Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Kwaipilot/KAT-Dev-72B-Exp"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
# SWE-agent Evaluation Parameters

```yaml
temperature: 0.6
max_turns: 150
history_processors.n: 100
```

For the full settings, please refer to `inference.yaml`.