
These models were made by merging https://huggingface.co/huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF with https://huggingface.co/unsloth/GLM-4.5-Air-GGUF in various ratios.

The goal is to preserve as much of the model's capabilities as possible while remaining uncensored (abliteration damages model intelligence).
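The ratios below amount to a per-tensor linear interpolation between the two sets of weights. A minimal sketch of the idea, using NumPy arrays as stand-ins for real model tensors (the function name and toy data are illustrative, not the actual merge tooling):

```python
import numpy as np

def linear_merge(base, abliterated, ratio):
    """Linearly interpolate two weight tensors.

    ratio=0.0 returns the base (censored) weights;
    ratio=1.0 returns the fully abliterated weights.
    """
    return (1.0 - ratio) * base + ratio * abliterated

# Toy stand-in "weights" to show the blend:
base_w = np.ones((4, 4))
abl_w = np.zeros((4, 4))

# CrabSoup-55 ratio: 55% abliterated, 45% normal
merged = linear_merge(base_w, abl_w, 0.55)
print(merged[0, 0])  # ~0.45: the base weight's remaining contribution
```

In a real merge the same interpolation would be applied tensor-by-tensor across every layer of both checkpoints, which is why intermediate ratios trade off refusal behavior against intelligence gradually rather than all at once.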

GLM-4.5-Air: 0% Abliterated

This is the basic censored model. It has the highest intelligence and can recall obscure facts, but it is extremely censored. Jailbreaking via system prompts is extremely difficult and often unsuccessful; only a strong postfill can jailbreak the model.

CrabSoup-30: 30% Abliterated, 70% Normal

This model is still heavily censored, but jailbreaks work slightly more easily. Its general intelligence is slightly reduced compared to the unmodified model.

CrabSoup-55: 55% Abliterated, 45% Normal (RECOMMENDED)

This model is mostly uncensored by default. It still respects alignment requests added to the system prompt, making it steerable. Model intelligence is moderately affected: it retains obscure knowledge but often makes mistakes.

CrabSoup-76: 76% Abliterated, 24% Normal

This model is almost always uncensored, and will sometimes respond in an uncensored way even when asked not to. Model intelligence is substantially degraded but still usable.

huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF: 100% Abliterated

This is the abliterated model used in the merges above. Its intelligence is also strongly degraded, roughly on par with CrabSoup-76. However, this model is incapable of refusal and will fulfill "harmful" requests even when explicitly instructed not to in a system prompt.

Model size: 110B params (glm4moe architecture), GGUF, 4-bit.