roleplaiapp committed (verified)
Commit 9e28a09 · 1 Parent(s): 2e6b1c3

Upload README.md with huggingface_hub
Files changed (1): README.md added (+34 -0)
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 24b
- 4x7b
- IQ4_XS
- dark
- enhanced32
- gguf
- iq4
- llama-cpp
- mistral
- moe
- multiverse
- text-generation
- uncensored
---

# roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-IQ4_XS-GGUF

**Repo:** `roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-IQ4_XS-GGUF`
**Original Model:** `Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf`
**Quantized File:** `M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-IQ4_XS.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ4_XS`
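As a rough sketch, the quantized file listed above can be pulled locally with `huggingface_hub` (assuming it is installed); `hf_hub_download` returns the cached path, which can then be handed to llama.cpp or any other GGUF runtime.

```python
# Sketch: download the IQ4_XS GGUF file from this repo into the local Hugging Face cache.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-IQ4_XS-GGUF",
    filename="M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-IQ4_XS.gguf",
)
print(gguf_path)  # local path to the IQ4_XS quantized weights
```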
## Overview
This is a GGUF IQ4_XS quantized version of `Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf`.
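Since the repo is tagged `llama-cpp`, one minimal way to run this quantization is through `llama-cpp-python` (assuming it is installed, with `huggingface_hub` available so `from_pretrained` can fetch the file); the context size and GPU-offload values below are illustrative only.

```python
# Sketch: load the IQ4_XS file and run one chat completion with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-IQ4_XS-GGUF",
    filename="M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-IQ4_XS.gguf",
    n_ctx=4096,       # illustrative context window; adjust for your use case
    n_gpu_layers=-1,  # offload all layers if a GPU is available, otherwise runs on CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set in a rain-soaked city."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```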
## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).