---
license: apache-2.0
language:
- en
base_model:
- nvidia/NVIDIA-Nemotron-Nano-12B-v2
---

# Paper-Summarizer-Nemotron-12B

A fine-tuned Nemotron-12B model specialized for generating structured summaries of scientific research papers in a standardized JSON format, with 2.25× the throughput of the Qwen3-14B variant.

## Model Description

This model is part of [Project AELLA](https://github.com/context-labs/laion-data-explorer), developed in collaboration with LAION and Wynd Labs to democratize access to scientific knowledge by creating structured summaries of research papers at scale.

- **Base Model**: NVIDIA Nemotron 12B (Hybrid Mamba-Transformer)
- **Training Data**: 110,000 curated research papers
- **Performance**: 71.3% accuracy on the QA evaluation
- **Throughput**: 2.25× faster than the Qwen3-14B variant

The model generates comprehensive structured summaries in JSON format: each paper is classified as SCIENTIFIC_TEXT, PARTIAL_SCIENTIFIC_TEXT, or NON_SCIENTIFIC_TEXT, and key research elements such as methodology, results, claims, and limitations are extracted into dedicated fields.

The model supports papers up to 131K tokens and is optimized for large-scale batch processing with high throughput (0.97 requests/sec on an 8×H200 node).
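Inputs beyond the context window will be truncated by the server, so a rough client-side length guard can help when feeding very long papers. The helper below is a hypothetical sketch using an assumed ~4 characters per token heuristic for English text, not an exact tokenizer count:

```python
def truncate_paper(text: str, max_tokens: int = 131072,
                   reserve_tokens: int = 8192,
                   chars_per_token: float = 4.0) -> str:
    """Crude context-window guard: trims the paper so the prompt plus
    the generated summary should fit within the 131K-token limit.

    chars_per_token ~= 4 is a rough English-text heuristic; use the
    model's tokenizer for an exact count.
    """
    budget = int((max_tokens - reserve_tokens) * chars_per_token)
    return text if len(text) <= budget else text[:budget]
```

For production pipelines, counting tokens with the model's actual tokenizer is more reliable than a character heuristic.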

## Usage

### Serving the Model

**Note**: This model requires a custom chat template for proper handling of reasoning tokens.

```bash
vllm serve inference-net/Paper-Summarizer-Nemotron-12B \
  --port 8000 \
  --host 0.0.0.0 \
  --trust-remote-code \
  --data-parallel-size 1 \
  --tensor-parallel-size 1 \
  --max-num-seqs 32 \
  --max-model-len 131072 \
  --max-num-batched-tokens 8192 \
  --gpu-memory-utilization 0.90 \
  --enable-prefix-caching \
  --enable-chunked-prefill \
  --chat-template "{%- set ns = namespace(enable_thinking=true) %}{%- for message in messages -%}{%- set content = message['content'] -%}{%- if message['role'] == 'user' or message['role'] == 'system' -%}{%- if '/think' in content -%}{%- set ns.enable_thinking = true -%}{%- elif '/no_think' in content -%}{%- set ns.enable_thinking = false -%}{%- endif -%}{%- endif -%}{%- endfor -%}{%- if messages[0]['role'] != 'system' -%}{%- set ns.non_tool_system_content = '' -%}{{- '<SPECIAL_10>System\n' -}}{%- else -%}{%- set ns.non_tool_system_content = messages[0]['content'].replace('/think', '').replace('/no_think', '').strip() -%}{{- '<SPECIAL_10>System\n' + ns.non_tool_system_content }}{%- endif -%}{%- if tools -%}{%- if ns.non_tool_system_content is defined and ns.non_tool_system_content != '' -%}{{- '\n\n' -}}{%- endif -%}{{- 'You can use the following tools to assist the user if required:' -}}{{- '\n<AVAILABLE_TOOLS>[' -}}{%- for tool in tools -%}{{- (tool.function if tool.function is defined else tool) | tojson -}}{{- ', ' if not loop.last else '' -}}{%- endfor -%}{{- ']</AVAILABLE_TOOLS>\n\n' -}}{{- 'If you decide to call any tool(s), use the following format:\n' -}}{{- '<TOOLCALL>[{{\"name\": \"tool_name1\", \"arguments\": \"tool_args1\"}}, ' -}}{{- '{{\"name\": \"tool_name2\", \"arguments\": \"tool_args2\"}}]</TOOLCALL>\n\n' -}}{{- 'The user will execute tool-calls and return responses from tool(s) in this format:\n' -}}{{- '<TOOL_RESPONSE>[{{\"tool_response1\"}}, {{\"tool_response2\"}}]</TOOL_RESPONSE>\n\n' -}}{{- 'Based on the tool responses, you can call additional tools if needed, correct tool calls if any errors are found, or just respond to the user.' -}}{%- endif -%}{{- '\n' -}}{%- set messages = messages[1:] if messages[0]['role'] == 'system' else messages -%}{%- if messages[-1]['role'] == 'assistant' -%}{%- set ns.last_turn_assistant_content = messages[-1]['content'].strip() -%}{%- set messages = messages[:-1] -%}{%- endif -%}{%- for message in messages %}{%- set content = message['content'] %}{%- if message['role'] == 'user' -%}{{- '<SPECIAL_11>User\n' + content.replace('/think', '').replace('/no_think', '').strip() + '\n' }}{%- elif message['role'] == 'tool' -%}{%- if loop.first or (messages[loop.index0 - 1].role != 'tool') -%}{{- '<SPECIAL_11>User\n' + '<TOOL_RESPONSE>[' }}{%- endif -%}{{- message['content'] -}}{{- ', ' if not loop.last and (messages[loop.index0 + 1].role == 'tool') else '' -}}{%- if loop.last or (messages[loop.index0 + 1].role != 'tool') -%}{{- ']</TOOL_RESPONSE>\n' -}}{%- endif -%}{%- elif message['role'] == 'assistant' -%}{%- if '</think>' in content -%}{%- set content = content.split('</think>')[1].strip() %}{%- endif -%}{{- '<SPECIAL_11>Assistant\n' + content.strip() }}{%- if message.tool_calls -%}{%- if content.strip() != '' -%}{{- '\n\n' -}}{%- endif -%}{{- '<TOOLCALL>[' -}}{%- for call in message.tool_calls -%}{%- set fn = call.function if call.function is defined else call -%}{{- '{\"name\": \"' + fn.name + '\", \"arguments\": ' -}}{%- if fn.arguments is string -%}{{- fn.arguments -}}{%- else -%}{{- fn.arguments | tojson -}}{%- endif -%}{{- '}' + (', ' if not loop.last else '') -}}{%- endfor -%}{{- ']</TOOLCALL>' -}}{%- endif -%}{{- '\n<SPECIAL_12>\n' -}}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{- '<SPECIAL_11>Assistant\n' -}}{%- if ns.enable_thinking is defined and ns.enable_thinking is false -%}{{- '<think></think>' -}}{%- else -%}{{- '<think>\n' -}}{%- endif -%}{%- if ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%}{{- ns.last_turn_assistant_content -}}{%- endif -%}{%- else -%}{%- if ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%}{{- '<SPECIAL_11>Assistant\n' -}}{%- if ns.enable_thinking is defined and ns.enable_thinking is false -%}{{- '<think></think>' -}}{%- else -%}{{- '<think>\n' -}}{%- endif -%}{{- ns.last_turn_assistant_content -}}{%- if continue_final_message is defined -%}{%- if continue_final_message is false -%}{{- '\n<SPECIAL_12>\n' -}}{%- endif -%}{%- else -%}{{- '\n<SPECIAL_12>\n' -}}{%- endif -%}{%- endif -%}{%- endif -%}"
```
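The chat template above scans user and system messages for the literal markers `/think` and `/no_think` to toggle the reasoning block (the markers themselves are stripped before rendering). A minimal, hypothetical helper for opting out of reasoning might look like:

```python
def with_reasoning_disabled(prompt: str) -> str:
    """Append /no_think, which the chat template detects to emit an
    empty <think></think> block instead of free-form reasoning."""
    return prompt + " /no_think"

# Example: disable reasoning via the system message
messages = [
    {"role": "system", "content": with_reasoning_disabled("You are a paper summarizer.")},
    {"role": "user", "content": "Title: ..."},
]
```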

### Making Requests

```python
import requests

# System prompt (required)
system_prompt = """[Insert the full system prompt from the prompt.txt file -
see the full prompt in the model repository]"""

# User prompt: the paper text to summarize
paper_text = """
Title: Your Paper Title
Authors: Author 1, Author 2
Abstract: ...
[Full paper content]
"""

# API request
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "inference-net/Paper-Summarizer-Nemotron-12B",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": paper_text}
        ],
        "temperature": 0.2
    },
    timeout=600
)

result = response.json()
# Note: reasoning tokens wrapped in <think></think> are
# automatically stripped by the chat template
summary = result["choices"][0]["message"]["content"]
print(summary)
```
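If the template's stripping is ever bypassed (for example, when reasoning is enabled and the raw completion is inspected directly), a `<think>…</think>` block may still precede the JSON. `parse_summary` below is a hypothetical helper that removes any such block before parsing:

```python
import json
import re

def parse_summary(raw: str) -> dict:
    """Drop any <think>...</think> reasoning block, then parse the
    remaining text as the structured JSON summary."""
    cleaned = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return json.loads(cleaned)

raw = '<think>checking fields...</think>{"article_classification": "SCIENTIFIC_TEXT", "reason": null}'
print(parse_summary(raw)["article_classification"])  # SCIENTIFIC_TEXT
```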

### System Prompt

The model requires the same system prompt as the Qwen3-14B variant. The prompt instructs the model to:

1. **Classify** the text as SCIENTIFIC_TEXT, PARTIAL_SCIENTIFIC_TEXT, or NON_SCIENTIFIC_TEXT
2. **Extract** structured information including:
   - Title, authors, publication year
   - Research context and hypotheses
   - Methodological details
   - Key results with quantitative data
   - Claims with supporting evidence
   - Limitations and ethical considerations

The full system prompt is available in the model repository's `prompt.txt` file.

### Output Format

The model outputs a single valid JSON object with this structure:

```json
{
  "article_classification": "SCIENTIFIC_TEXT",
  "reason": null,
  "summary": {
    "title": "",
    "authors": "",
    "publication_year": null,
    "field_subfield": "",
    "executive_summary": "",
    "research_context": "",
    "methodological_details": "",
    "key_results": "",
    "claims": [...],
    "contradictions_and_limitations": "",
    ...
  }
}
```
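Because generation can occasionally produce malformed output, a lightweight validation pass is useful before ingesting summaries at scale. The checks below are illustrative only, covering the classification enum and the presence of a `summary` object:

```python
VALID_CLASSIFICATIONS = {
    "SCIENTIFIC_TEXT", "PARTIAL_SCIENTIFIC_TEXT", "NON_SCIENTIFIC_TEXT",
}

def check_summary(obj: dict) -> list:
    """Return a list of schema problems; an empty list means the object
    passes these minimal, illustrative checks."""
    problems = []
    if obj.get("article_classification") not in VALID_CLASSIFICATIONS:
        problems.append("unknown article_classification")
    if obj.get("article_classification") == "SCIENTIFIC_TEXT" \
            and not isinstance(obj.get("summary"), dict):
        problems.append("summary object missing for SCIENTIFIC_TEXT")
    return problems
```

A real pipeline would also check the per-field types against the full schema in `prompt.txt`.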

## Performance

### LLM-as-a-Judge Evaluation
- **Score**: 4.095/5.0
- **Comparison**: Slightly behind Qwen3-14B (4.207) but still high quality

### QA Dataset Evaluation
- **Accuracy**: 71.3%
- **Comparison**: Strong performance, well suited to batch processing

### Throughput (8×H200 node)
- **Requests/sec**: 0.97 (2.25× faster than Qwen3-14B)
- **Input Tokens/sec**: 16,943.69
- **Output Tokens/sec**: 4,880.76
- **Single Request Tokens/sec**: 76.17

### Cost Efficiency
- **Processing 100M papers**: ~$45,000 (vs. ~$100,000 for Qwen3-14B and $5M+ for GPT-5)
- **Ideal for**: large-scale batch processing where throughput matters
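As a back-of-envelope check on these figures (assuming the 0.97 requests/sec rate is sustained, each request covers one paper, and throughput scales linearly across identical nodes):

```python
def node_days(papers: int, requests_per_sec: float = 0.97, nodes: int = 1) -> float:
    """Wall-clock days to process `papers` at a sustained request rate,
    assuming linear scaling across identical 8xH200 nodes."""
    seconds = papers / (requests_per_sec * nodes)
    return seconds / 86_400
```

A single node would need roughly 1,200 days for 100M papers, which is why large-scale runs fan out across many nodes in parallel.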

## Training Details

- **Training Set**: 100,000 papers (same as the Qwen3-14B variant)
- **Validation Set**: 10,000 papers
- **Average Paper Length**: 81,334 characters
- **Architecture**: Hybrid Mamba-Transformer for high throughput
- **Training Approach**: Post-training on summaries generated by frontier models

## When to Use This Model

### Choose Nemotron-12B if:
- You are processing large batches (100K+ papers)
- Throughput and cost are the primary concerns
- Accuracy in the 70-75% range is acceptable
- You are running on GPU infrastructure with parallel processing

### Choose Qwen3-14B if:
- You need the highest possible accuracy (73.9% vs. 71.3%)
- You are processing smaller batches or single papers
- Quality matters more than speed

## Limitations

- May generate subtle factual errors (hallucinations) for fine-grained details
- The 131K-token context limit may truncate extremely long documents
- The unified schema may not capture all domain-specific nuances
- Summaries are research aids, not replacements for primary sources in high-stakes scenarios
- Slightly lower accuracy than the Qwen3-14B variant

## Related Resources

- **Paper Visualization Website**: https://laion.inference.net
- **Visualization Repository**: https://github.com/context-labs/laion-data-explorer
- **Alexandria Paper**: https://arxiv.org/abs/2502.19413
- **Qwen3-14B Variant**: inference-net/Paper-Summarizer-Qwen3-14B

## License

Apache 2.0 (see the `license` field in the model card metadata above).

## Acknowledgments

This work was made possible through collaboration with:
- LAION
- Wynd Labs
- Inference.net
- NVIDIA (base Nemotron architecture)
- Contributors to bethgelab, PeS2o, Common Pile, and OpenAlex