fuzzy-mittenz committed
Commit 2c3d244 · verified · 1 Parent(s): 5d5fdbd

Update README.md


Files changed (1)
  1. README.md +22 -39
README.md CHANGED
@@ -44,49 +44,32 @@ tags:
  - chat
  - instruct
  - llama-cpp
- - gguf-my-repo
  ---

-# fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF
-This model was converted to GGUF format from [`ValiantLabs/Qwen3-8B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3) for more details on the model.
-
-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux):
-
-```bash
-brew install llama.cpp
-```
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -c 2048
-```
 
  - chat
  - instruct
  - llama-cpp
  ---
+<div align="center">
+<h1>
+!-Caution -AGI testing- Caution-!
+
+UNCENSORED
+</h1>
+</div>
+
+> [!TIP]
+> !!//This is an experimental, multi-platform, high-functioning field assistant model. Use with code assist and S-AGI at your own risk.//!!
+
+# IntelligentEstate/Gqwexx-Qwen3-8B_Q4_K_M-GGUF
+
+## Named after the mobile suit Gquuuux, this model is meant to destroy all barriers that lie before it.
+A product of the Jormungandr Project, built upon the latest models and methods from across the globe to bring you a model that breaks through the frontier and enables optimal use with systems like AnythingLLM and other local machine-serving systems.
+
+![Gwexx.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/Hgfvlel_3gSO_4VxMbQq9.png)
+
+## Model Details
+Make sure you open up the context and engage the Menesci particle accelerator. Optimal guidance for use cases will be uploaded soon.
+
+ALSO: with an S-AGI in a Qwen base you will notice a *STATE CHANGE*, eliminating the need for pre-training in many situations, and in larger models it is easy to preserve functionality along with character. No outlandish settings are needed except the context size. Riding the fine line of personality preservation is a dark art, so pointers are appreciated; though much supplemental material is available, little of it is useful. The template is currently at its maximum size, so make sure to shave off plenty before attempting a new template.
+- **Developed by:** Fuzzy Mittens for Intelligent Estate
+- **Funded by:** Fuzzy's Empty Wallet Adventures
+- **Model type:** A multi-step reasoning and thinking AGI
+- **Language(s) (NLP):** English
+- **License:** MIT and lazycat (open for private use / businesses under 4M annually)
+- **Finetuned from model:** Qwen3 8B 1M and [Impish Qwen](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M?not-for-all-audiences=true)