Update README.md
README.md CHANGED
```diff
@@ -65,7 +65,7 @@ The pipeline we used to produce the data and models is fully open-sourced!
 We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/)
 to fully reproduce our results, including data generation.

-
+## How to use the models?

 Our models can be used in 3 inference modes: chain-of-thought (CoT), tool-integrated reasoning (TIR) and generative solution selection (GenSelect).

@@ -140,7 +140,7 @@ This model is intended to facilitate research in the area of mathematical reasoning

 Huggingface 04/23/2025 <br>

-
+### Model Architecture: <br>

 **Architecture Type:** Transformer decoder-only language model <br>

@@ -151,7 +151,7 @@ Huggingface 04/23/2025 <br>

 ** This model has 1.5B of model parameters. <br>

-
+### Input: <br>

 **Input Type(s):** Text <br>

@@ -163,7 +163,7 @@ Huggingface 04/23/2025 <br>



-
+### Output: <br>

 **Output Type(s):** Text <br>

@@ -179,7 +179,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems



-
+### Software Integration : <br>

 **Runtime Engine(s):** <br>

@@ -201,7 +201,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems



-
+### Model Version(s):

 [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B)

```
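For context, the paragraph gaining the "How to use the models?" heading describes three inference modes (CoT, TIR and GenSelect), and the new "Model Version(s)" heading sits above the link to the [OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) checkpoint. Below is a minimal sketch of plain CoT inference with that checkpoint through Hugging Face `transformers`; the prompt wording and generation settings are illustrative assumptions rather than the model card's exact recipe, and TIR/GenSelect additionally rely on the tool-execution and solution-selection harness from the linked NeMo-Skills instructions.

```python
# Minimal CoT inference sketch for nvidia/OpenMath-Nemotron-1.5B.
# The prompt text and generation settings are assumptions for illustration;
# see the model card and NeMo-Skills docs for the canonical CoT/TIR/GenSelect prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenMath-Nemotron-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed CoT-style instruction asking for the final answer in \boxed{}.
prompt = (
    "Solve the following math problem. Make sure to put the answer "
    "(and only the answer) inside \\boxed{}.\n\n"
    "What is the minimum value of $a^2 + 6a - 7$?"
)
messages = [{"role": "user", "content": prompt}]

# Build the chat-formatted input and generate a solution.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```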