mradermacher committed on
Commit 3de5cc4 · verified · 1 Parent(s): ba4f7cb

auto-patch README.md

Files changed (1)
  1. README.md +6 -1
README.md CHANGED
```diff
--- a/README.md
+++ b/README.md
@@ -5,6 +5,8 @@ language:
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - chat
@@ -19,6 +21,9 @@ tags:
 static quants of https://huggingface.co/Qwen/QwQ-32B
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QwQ-32B-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/QwQ-32B-i1-GGUF
 ## Usage
 
@@ -61,6 +66,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
```