Created README.md from the official repository
#1
by Andron00e · opened

README.md ADDED
@@ -0,0 +1,51 @@
---
license: apache-2.0
---
# MatMul-Free LM

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

[[Paper](https://arxiv.org/abs/2406.02528)] [[Code](https://github.com/ridgerchu/matmulfreellm/tree/master)]

MatMul-Free LM is a language model architecture that eliminates the need for matrix multiplication (MatMul) operations.
This repository provides an implementation of MatMul-Free LM that is compatible with the 🤗 Transformers library.

![Transformers snapshot](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers_snapshot.png)
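
To build intuition for the "MatMul-free" claim, here is an illustrative sketch (not the library's actual kernel): once the weights of a dense layer are constrained to the ternary set {-1, 0, +1}, the usual multiply-accumulate collapses into additions and subtractions of input elements.

```python
import numpy as np

def ternary_dense(x: np.ndarray, w_ternary: np.ndarray) -> np.ndarray:
    """Dense layer whose weights are restricted to {-1, 0, +1}.

    Each output element only accumulates +x_i, -x_i, or nothing, so no
    multiplications are required. Illustrative sketch only; the actual
    library relies on fused ternary kernels.
    """
    out = np.zeros(w_ternary.shape[1], dtype=x.dtype)
    for j in range(w_ternary.shape[1]):
        col = w_ternary[:, j]
        out[j] = x[col == 1].sum() - x[col == -1].sum()  # additions/subtractions only
    return out

# Sanity check against an ordinary matrix multiplication.
rng = np.random.default_rng(0)
x = rng.standard_normal(8).astype(np.float32)
w = rng.integers(-1, 2, size=(8, 4)).astype(np.float32)  # ternary weights
assert np.allclose(ternary_dense(x, w), x @ w, atol=1e-5)
```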

## Scaling Law

We evaluate how the scaling law fits the 370M, 1.3B, and 2.7B parameter models for both Transformer++ and our model.
For a fair comparison, each operation is treated identically, though our model uses more efficient ternary weights in some layers.
Interestingly, the scaling projection for our model exhibits a steeper descent than Transformer++,
suggesting that our architecture is more efficient at leveraging additional compute to improve performance.

![Scaling law comparison](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers_snapshot.png)
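
As a reading aid for the plot above, scaling-law fits of this kind are usually modeled by a power law in training compute (the generic form below is the standard ansatz, not a formula taken from this repository):

$$
\mathcal{L}(C) \approx a\,C^{-\alpha},
$$

so a steeper descent in the log-log plot corresponds to a larger exponent $\alpha$, i.e. a faster loss reduction per additional unit of compute.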

## Usage

We provide model implementations that are compatible with the 🤗 Transformers library.
The package is Hugging Face-compatible: after installing ```matmulfreelm```, you can initialize a model from the default configs or load a pretrained checkpoint through the Hugging Face ```AutoModel``` classes:

```shell
pip install transformers
pip install -U git+https://github.com/ridgerchu/matmulfreellm
```

```python
# Import from mmfreelm before the Auto* loaders so the MatMul-Free
# (HGRN-Bit) model classes are available to 🤗 Transformers.
from mmfreelm.models import HGRNBitConfig
from mmfreelm.layers import hgrn_bit

from transformers import AutoModelForCausalLM

# Load a pretrained 2.7B MatMul-Free LM checkpoint from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("ridger/MMfreeLM-2.7B")
```
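
A minimal text-generation sketch built on the snippet above (hypothetical usage: it assumes the Hub checkpoint ships a compatible tokenizer, and the prompt and generation settings are placeholders):

```python
from mmfreelm.models import HGRNBitConfig  # import mmfreelm first, as above
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ridger/MMfreeLM-2.7B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("MatMul-free language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```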

## Pre-trained Model Zoo

| Model Size | Layers | Hidden dimension | Trained tokens |
| :---: | :---: | :---: | :---: |
| [370M](https://huggingface.co/ridger/MMfreeLM-370M) | 24 | 1024 | 15B |
| [1.3B](https://huggingface.co/ridger/MMfreeLM-1.3B) | 24 | 2048 | 100B |
| [2.7B](https://huggingface.co/ridger/MMfreeLM-2.7B) | 32 | 2560 | 100B |
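
Each checkpoint in the table can be loaded the same way by substituting its repository name, for example (same pattern as in the Usage section):

```python
from mmfreelm.models import HGRNBitConfig  # import mmfreelm first, as above
from transformers import AutoModelForCausalLM

# Smallest listed checkpoint; swap in MMfreeLM-1.3B or MMfreeLM-2.7B as needed.
model = AutoModelForCausalLM.from_pretrained("ridger/MMfreeLM-370M")
```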