Update README.md
README.md CHANGED

@@ -20,6 +20,13 @@ tags:
 - arxiv:2408.11857
 pipeline_tag: text-generation
 library_name: transformers
+datasets:
+- open-thoughts/OpenThoughts-114k
+- KingNish/reasoning-base-20k
+- nvidia/OpenMathReasoning
+- amphora/QwQ-LongCoT-130K
+- gsm8k
+
 ---
 
 # Hypnos i1-8B (Quantum-Informed Reasoning Model)

@@ -47,7 +54,7 @@ It represents a unique experiment in **Hybrid Quantum-Classical Machine Learning
 
 <div align="center">
 <h3>📊 Performance Benchmarks</h3>
-<img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/RJGLLcIf-HFTUdsUYnVys.jpeg" width="
+<img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/RJGLLcIf-HFTUdsUYnVys.jpeg" width="80%" alt="Hypnos Benchmarks vs Llama 3.1 Base"/>
 </div>
 
 <br>

@@ -74,5 +81,5 @@ During the Supervised Fine-Tuning (SFT) stage, the model was exposed to raw bits
 <br>
 
 <div align="center">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/f7K5oDyo9dX7t72IlcKqb.jpeg" width="
+<img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/f7K5oDyo9dX7t72IlcKqb.jpeg" width="40%" alt="Hypnos Footer Image"/>
 </div>
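The metadata added in the first hunk (`datasets:`) together with the existing `pipeline_tag: text-generation` and `library_name: transformers` entries describe how the card is meant to be consumed. A minimal sketch of that usage follows; the model repo id is a placeholder (not stated in this commit), while `gsm8k` is one of the datasets the updated metadata now lists.

```python
from datasets import load_dataset
from transformers import pipeline

# library_name: transformers, pipeline_tag: text-generation
generator = pipeline(
    "text-generation",
    model="<org>/Hypnos-i1-8B",  # placeholder: substitute the actual Hypnos i1-8B repo id
)

# gsm8k is one of the datasets declared in the updated card metadata
gsm8k = load_dataset("gsm8k", "main", split="test")

prompt = gsm8k[0]["question"]
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```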