mradermacher committed
Commit 3aa8020 · verified · 1 Parent(s): 7677716

auto-patch README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED

```diff
@@ -6,6 +6,8 @@ library_name: transformers
 license: other
 license_link: https://huggingface.co/Qwen/Qwen2.5-72B/blob/main/LICENSE
 license_name: qwen
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 ---
 ## About
@@ -18,6 +20,9 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/Qwen/Qwen2.5-72B
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-72B-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-72B-i1-GGUF
 ## Usage
 
@@ -72,6 +77,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->
```
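The commit message says the README was auto-patched, but the tooling behind it is not part of this page. As a rough illustration only, the sketch below shows one way such a step could be scripted with `huggingface_hub` and PyYAML: download README.md, set the `mradermacher.readme_rev` front-matter field that this diff introduces, and push the result back with the same commit message. The repo name is taken from the diff; everything else (the regex split, the patching logic, the workflow itself) is an assumption, not mradermacher's actual script.

```python
# Assumed sketch of an automated README patch like this commit -- NOT the
# actual tooling behind it. Requires huggingface_hub, PyYAML, and a write
# token for the repo (e.g. via `huggingface-cli login`).
import re
import yaml
from huggingface_hub import HfApi, hf_hub_download

REPO = "mradermacher/Qwen2.5-72B-GGUF"

# Fetch the current README.md from the model repo.
readme_path = hf_hub_download(repo_id=REPO, filename="README.md", repo_type="model")
with open(readme_path, encoding="utf-8") as f:
    text = f.read()

# Split the YAML front matter ("--- ... ---") from the markdown body.
match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, flags=re.S)
meta, body = yaml.safe_load(match.group(1)), match.group(2)

# Set the revision marker that the diff above adds to the front matter.
meta.setdefault("mradermacher", {})["readme_rev"] = 1

patched = "---\n" + yaml.safe_dump(meta, sort_keys=False) + "---\n" + body

# Upload the patched file; this creates a single commit on the repo,
# analogous to the one shown on this page.
HfApi().upload_file(
    path_or_fileobj=patched.encode("utf-8"),
    path_in_repo="README.md",
    repo_id=REPO,
    repo_type="model",
    commit_message="auto-patch README.md",
)
```

Note that `yaml.safe_dump` re-serializes the whole front-matter block, so key formatting may differ slightly from a targeted text edit; a real auto-patcher might instead splice in only the changed lines.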