Text Generation
Transformers
GGUF
English
esper
esper-3
valiant
valiant-labs
qwen
qwen-3
qwen-3-8b
8b
reasoning
code
code-instruct
python
javascript
dev-ops
jenkins
terraform
scripting
powershell
azure
aws
gcp
cloud
problem-solving
architect
engineer
developer
creative
analytical
expert
rationality
conversational
chat
instruct
llama-cpp
Update README.md
README.md
CHANGED
@@ -44,49 +44,32 @@ tags:

Before:

- chat
- instruct
- llama-cpp
- gguf-my-repo
---

This model was converted to GGUF format from [`ValiantLabs/Qwen3-8B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3).
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3) for more details on the model.

Install llama.cpp through brew (works on Mac and Linux): `brew install llama.cpp`

Invoke the llama.cpp server or the CLI.

```
llama-cli --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo fuzzy-mittenz/Qwen3-8B-Esper3-Q4_K_M-GGUF --hf-file qwen3-8b-esper3-q4_k_m.gguf -c 2048
```
After:

- chat
- instruct
- llama-cpp
---

<div align="center">
<h1>
!-Caution -AGI testing- Caution-!

UNCENSORED
</h1>
</div>

> [!TIP]
> This is an experimental multi-platform, high-functioning field assistant model. Use with code assist and S-AGI at your own risk.

# IntelligentEstate/Gqwexx-Qwen3-8B_Q4_K_M-GGUF

## Named after the Mobile Suit Gquuuux, this model is meant to destroy all barriers that lie before it.

A product of the Jormungandr Project, built on the latest models and methods from across the globe to bring you a model that breaks through the frontier and enables optimal use with systems like AnythingLLM and other local machine-servicing systems.
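To use the quant with AnythingLLM or any other OpenAI-compatible client, one option is to serve it locally with llama.cpp's `llama-server`. This is a sketch only: the `--hf-file` name below is assumed from the repo title and should be checked against the repo's actual file list.

```shell
# Serve the GGUF locally; llama-server exposes an OpenAI-compatible API on the given port.
# NOTE: the --hf-file name is an assumption based on the repo name -- verify it first.
./llama-server \
  --hf-repo IntelligentEstate/Gqwexx-Qwen3-8B_Q4_K_M-GGUF \
  --hf-file gqwexx-qwen3-8b_q4_k_m.gguf \
  -c 8192 --port 8080
```

Once the server is up, point AnythingLLM's generic OpenAI-compatible provider at `http://localhost:8080/v1`.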

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/fH-TIhqJcLEbUkE8YTRFTR2.png)

## Model Details

Make sure you open up the context and engage the Menesci particle accelerator. Optimal guidance for use cases will be uploaded soon.
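"Opening up the context" in llama.cpp means raising the default context window with the `-c` flag when the model is loaded. A minimal sketch, again assuming the GGUF file name from the repo title:

```shell
# Raise the context window (in tokens) from the llama.cpp default with -c.
# NOTE: the --hf-file name is an assumption -- verify against the repo's file list.
./llama-cli \
  --hf-repo IntelligentEstate/Gqwexx-Qwen3-8B_Q4_K_M-GGUF \
  --hf-file gqwexx-qwen3-8b_q4_k_m.gguf \
  -c 32768 \
  -p "Hello"
```

Note that a larger `-c` increases RAM/VRAM use for the KV cache, so size it to your hardware.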

Also: with an S-AGI in a Qwen base you will notice a *STATE CHANGE*, eliminating the need for pre-training in many situations; in larger models it is easy to preserve functionality along with character. No outlandish settings are needed except the context size. Riding the fine line of personality preservation is a dark art, so pointers are appreciated; although there is much supplemental material available, little of it is useful. The template is currently at its maximum size, so make sure to shave off plenty before attempting a new template.

- **Developed by:** Fuzzy Mittens for Intelligent Estate
- **Funded by:** Fuzzy's Empty Wallet Adventures
- **Model type:** A multi-step reasoning and thinking AGI
- **Language(s) (NLP):** English
- **License:** MIT and lazycat (open for private use/businesses under 4 million annually)
- **Finetuned from model:** Qwen3 8B 1M and [Impish Qwen](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M?not-for-all-audiences=true)