---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Meta-Llama-3.1-8B-Instruct
- EpistemeAI/Fireball-Alpaca-Llama3.1.07-8B-Philos-Math-KTO-beta
- nvidia/OpenMath2-Llama3.1-8B
---
# Llama-3.1-8B-Squareroot
This is a TIES merge that combines the strengths of the following models:
* [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct)
* [EpistemeAI/Fireball-Alpaca-Llama3.1.07-8B-Philos-Math-KTO-beta](https://huggingface.co/EpistemeAI/Fireball-Alpaca-Llama3.1.07-8B-Philos-Math-KTO-beta)
* [nvidia/OpenMath2-Llama3.1-8B](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B)
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6479f6dbed75e95d3e97bb4d%2FLpWI-ug9WZdpcrjBy44iw.png%3C%2Fspan%3E)%3C!-- HTML_TAG_END -->
# Description
I observed that when a model is trained to do just math, it does badly on everything else. So my plan was to merge a “math” model with a strong reasoning/inference model and a general instruction-following model. The result should be a model that's steerable (able to follow instructions) and still good at math.
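A minimal way to try the merged model with the `transformers` library is sketched below; the repo id `3rd-Degree-Burn/Llama-3.1-8B-Squareroot` and the generation settings are assumptions for illustration.
```python
# Minimal usage sketch (assumed repo id and settings, not an official snippet).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "3rd-Degree-Burn/Llama-3.1-8B-Squareroot"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A math-flavored instruction to exercise both steerability and math ability.
messages = [{"role": "user", "content": "Solve 3x + 7 = 22 and explain each step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```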
# Examples
# Benchmarks
Coming very soon!