morriszms committed on
Commit 32501e1 · verified · 1 Parent(s): 1ef9ac0

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Finance-Llama-8B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Finance-Llama-8B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da5b5789eed8274cda4d1d9f99d1622d21ce4568691b9e59020228c35433595b
+ size 7819616
Finance-Llama-8B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fe699426b4b5499f74d81fec500bd99b43ea855d9510531981f266bbec88483
+ size 7819616
Finance-Llama-8B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00c0c82713108a2716585a25a6a809f49f6473731ed546e0aff168c04fc4cd4b
+ size 7819616
Finance-Llama-8B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40804b22d3c9df82cbb7a8718f21f663f59a6563fad33a30c34a66167dea3b61
+ size 7819616
Finance-Llama-8B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd26cbe6fc023a5b8cb3216b6a01526e539464f3e7269485f5927e98c15cc803
+ size 7819616
Finance-Llama-8B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d181f5c178e8c3d630f464a8f8bf77bae6f9de13a44109362f7fc10fd6fd467
+ size 7819616
Finance-Llama-8B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11079e4b9dff8ae23acff5291bb778be985ef45852a8dbd715035058e4d70ec5
+ size 7819616
Finance-Llama-8B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c142bfebccbadc91e5f2410370e68025d248f0f3f1fa8269d412a0230a9ae73
+ size 7819616
Finance-Llama-8B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:602fe4aeb04b62ace0c557c4db0b5034f83ef865df21a7b9fcab6e0f57d4bc2d
+ size 7819616
Finance-Llama-8B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ab28c20005b857ca5aa977670adde0d337ff461aff5dc0651923df4904b7553
+ size 7819616
Finance-Llama-8B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c3a78c3f0dc88de1795229b34be6361f892ce2e8ead979b8c3eca019bdf3684
+ size 7819616
Finance-Llama-8B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c75017025dc729059a9630cacc168c726a82525c188ceba28aee2f27399591b
+ size 7819616
README.md ADDED
@@ -0,0 +1,153 @@
+ ---
+ license: apache-2.0
+ tags:
+ - text-generation-inference
+ - finance
+ - economics
+ - TensorBlock
+ - GGUF
+ datasets:
+ - Josephgflowers/Finance-Instruct-500k
+ language:
+ - en
+ base_model: tarun7r/Finance-Llama-8B
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+
+ ## tarun7r/Finance-Llama-8B - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [tarun7r/Finance-Llama-8B](https://huggingface.co/tarun7r/Finance-Llama-8B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
+
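+ To sanity-check a downloaded quant, it can be loaded with the `llama-cli` binary built from llama.cpp. The snippet below is a minimal sketch, not part of the upstream README: the binary path, model filename, prompt, and token count are placeholder assumptions to adapt to your setup.
+
+ ```shell
+ # Run a downloaded GGUF quant with llama.cpp's CLI (built at or after commit b5753).
+ # Paths and generation settings here are illustrative placeholders.
+ ./build/bin/llama-cli \
+   -m ./Finance-Llama-8B-Q4_K_M.gguf \
+   -p "Explain the difference between stocks and bonds." \
+   -n 256
+ ```
+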
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Finance-Llama-8B-Q2_K.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q2_K.gguf) | Q2_K | 0.008 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Finance-Llama-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q3_K_S.gguf) | Q3_K_S | 0.008 GB | very small, high quality loss |
+ | [Finance-Llama-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q3_K_M.gguf) | Q3_K_M | 0.008 GB | very small, high quality loss |
+ | [Finance-Llama-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q3_K_L.gguf) | Q3_K_L | 0.008 GB | small, substantial quality loss |
+ | [Finance-Llama-8B-Q4_0.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q4_0.gguf) | Q4_0 | 0.008 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Finance-Llama-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q4_K_S.gguf) | Q4_K_S | 0.008 GB | small, greater quality loss |
+ | [Finance-Llama-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q4_K_M.gguf) | Q4_K_M | 0.008 GB | medium, balanced quality - recommended |
+ | [Finance-Llama-8B-Q5_0.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q5_0.gguf) | Q5_0 | 0.008 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Finance-Llama-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q5_K_S.gguf) | Q5_K_S | 0.008 GB | large, low quality loss - recommended |
+ | [Finance-Llama-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q5_K_M.gguf) | Q5_K_M | 0.008 GB | large, very low quality loss - recommended |
+ | [Finance-Llama-8B-Q6_K.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q6_K.gguf) | Q6_K | 0.008 GB | very large, extremely low quality loss |
+ | [Finance-Llama-8B-Q8_0.gguf](https://huggingface.co/tensorblock/tarun7r_Finance-Llama-8B-GGUF/blob/main/Finance-Llama-8B-Q8_0.gguf) | Q8_0 | 0.008 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/tarun7r_Finance-Llama-8B-GGUF --include "Finance-Llama-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/tarun7r_Finance-Llama-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
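+
+ After downloading, a file's integrity can be checked against the SHA-256 digests recorded in this repository's LFS pointers (shown earlier in this commit). The example below is a minimal sketch, assuming a shell with coreutils' `sha256sum` available and using the Q2_K file; substitute the quant you actually downloaded and its listed oid.
+
+ ```shell
+ # Hash the downloaded file and compare it with the oid from the LFS pointer.
+ sha256sum MY_LOCAL_DIR/Finance-Llama-8B-Q2_K.gguf
+ # Expected digest for Finance-Llama-8B-Q2_K.gguf (from this commit's LFS pointer):
+ # da5b5789eed8274cda4d1d9f99d1622d21ce4568691b9e59020228c35433595b
+ ```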