slim.frikha committed · Commit c78228d · 0 parent(s)

falcon3 release

Files changed:
- .gitattributes +44 -0
- Falcon3-10B-Instruct-f16.gguf +3 -0
- Falcon3-10B-Instruct-q2_k.gguf +3 -0
- Falcon3-10B-Instruct-q3_k_m.gguf +3 -0
- Falcon3-10B-Instruct-q4_0.gguf +3 -0
- Falcon3-10B-Instruct-q4_k_m.gguf +3 -0
- Falcon3-10B-Instruct-q5_0.gguf +3 -0
- Falcon3-10B-Instruct-q5_k_m.gguf +3 -0
- Falcon3-10B-Instruct-q6_k.gguf +3 -0
- Falcon3-10B-Instruct-q8_0.gguf +3 -0
- README.md +122 -0
.gitattributes
ADDED
@@ -0,0 +1,44 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-f16.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-10B-Instruct-f16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:078115fe2ba92a7e99404447483a50bac5f90054d0dc6ccd1e1d12d4f541f70b
size 20616556128
Falcon3-10B-Instruct-q2_k.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61fa2ab63b5928e7471e62f6a013dc000e61f359637e9905a4238b67d13e0bd1
size 3924045408
Falcon3-10B-Instruct-q3_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac8e4e4aad489353d47cca3f895faa76a8b522dc26baded82990bf4c7a502e7a
size 5052477024
Falcon3-10B-Instruct-q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b876e7cd16d0e97ceabae386b0a90c7cd8a2dcc485e97a7968d80233ab01349
size 5906345568
Falcon3-10B-Instruct-q4_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0c0edbd35019ff26d972a0373b25b4c8d72315395a3b6036aca5e6bafa3d819
size 6287519328
Falcon3-10B-Instruct-q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd5238985ca7f18e8ac07b7c615afdcc9c60f3dc5d2eba92aaca5c58525df92c
size 7144189536
Falcon3-10B-Instruct-q5_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a85e9254e5876532dc68090a65e3d6d4cf963503e855a2e2947e8b0cde0f1029
size 7340551776
Falcon3-10B-Instruct-q6_k.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:286d4d2ef5a681842392fc131b471a3dd81bb9c545fa8a043e44bf900997efe1
size 8459398752
Falcon3-10B-Instruct-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a234365b42e50904bdca36edf72f489a234eec6accfef6e31af58a90c09b29d
size 10955239008
README.md
ADDED
@@ -0,0 +1,122 @@
---
language:
- en
- fr
- es
- pt
base_model:
- tiiuae/Falcon3-10B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- falcon3
---

<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>

# Falcon3-10B-Instruct-GGUF

The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

**Falcon3-10B-Instruct** achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code and mathematics tasks.
Falcon3-10B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K.

This repository contains the GGUF quantizations of the instruction-tuned 10B Falcon3 model.

## Model Details
- Architecture
  - Transformer-based causal decoder-only architecture
  - 40 decoder blocks
  - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Depth up-scaled from **Falcon3-7B-Base**, then trained on 2 teratokens of web, code, STEM, high-quality and multilingual data using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantizations: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0

## Getting started

### 1. Download GGUF models from Hugging Face

First, download the model from Hugging Face. You can use the `huggingface_hub` library or download it manually:

```bash
pip install huggingface_hub
huggingface-cli download {model_name}
```

This will download the model to your current directory. Make sure to replace {model_name} with the actual username and model name of the Hugging Face repository.
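If you only want a single quantization rather than the whole repository, a command along these lines should work (the repo id `tiiuae/Falcon3-10B-Instruct-GGUF` and the `q4_k_m` file are used here purely for illustration; pick the file you actually need):

```bash
# Download one quantized GGUF file into the current directory (repo id assumed for this example).
huggingface-cli download tiiuae/Falcon3-10B-Instruct-GGUF \
  Falcon3-10B-Instruct-q4_k_m.gguf --local-dir .

# Optionally verify the download against the sha256 recorded in this commit's LFS pointer.
sha256sum Falcon3-10B-Instruct-q4_k_m.gguf
# expected: a0c0edbd35019ff26d972a0373b25b4c8d72315395a3b6036aca5e6bafa3d819
```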

### 2. Install llama.cpp

You have several options for installing llama.cpp:

**1. Build from source:**

This gives you the most flexibility and control. Clone the llama.cpp repository and build it:

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```

For more details, please refer to the llama.cpp documentation: **[llama.cpp build from source](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)**.

**2. Download pre-built binaries:**

If you prefer a quicker setup, you can download pre-built binaries for your operating system. Check the llama.cpp repository for available binaries.

**3. Use Docker:**

For a more contained environment, you can use the official llama.cpp Docker image; a sketch is shown below. Refer to the llama.cpp documentation for instructions on how to use the Docker image.

For detailed instructions and more information, please check the llama.cpp documentation on Docker: **[llama.cpp docker](https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.md)**.
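As a rough sketch of the Docker route (the image tag, mount path, and model file name below are assumptions; check the linked Docker docs for the current images and options):

```bash
# Run a text completion with the lightweight llama.cpp image, mounting a local model directory.
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:light \
  -m /models/Falcon3-10B-Instruct-q4_k_m.gguf \
  -p "I believe the meaning of life is" -n 128
```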

### 3. Start playing with your model

Run a simple text completion:
```bash
llama-cli -m {path-to-gguf-model} -p "I believe the meaning of life is" -n 128
```

Run in conversation mode:
```bash
llama-cli -m {path-to-gguf-model} -p "You are a helpful assistant" -cnv -co
```
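If you built llama.cpp as above, you can also serve the model over an OpenAI-compatible HTTP API with `llama-server`; a minimal sketch (the port and request payload are arbitrary examples):

```bash
# Start the server on a local port.
llama-server -m {path-to-gguf-model} --port 8080

# In another terminal, query the OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, Falcon!"}]}'
```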

## Useful links
- View our [release blog post](https://huggingface.co/blog/falcon3).
- Feel free to join [our Discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.

## Technical Report

Coming soon.

## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.

```
@misc{Falcon3,
  title = {The Falcon 3 Family of Open Models},
  url = {https://huggingface.co/blog/falcon3},
  author = {Falcon-LLM Team},
  month = {December},
  year = {2024}
}
```