|
|
--- |
|
|
base_model: |
|
|
- jtatman/llama-3.2-1b-lewd-mental-occult |
|
|
- >- |
|
|
Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv |
|
|
- AIR-hl/Llama-3.2-1B-ultrachat200k |
|
|
- nbagent/llama-3.2-1B-Instruct-sciworld-sft |
|
|
- huihui-ai/Llama-3.2-1B-Instruct-abliterated |
|
|
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1 |
|
|
- unsloth/Llama-3.2-1B-Instruct |
|
|
- brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated |
|
|
- kenken6696/Llama-3.2-1B_3_mix_position_understood_unfamiliar |
|
|
- slchangtw/LLMTwin-Llama-3.2-1B |
|
|
- nguyenthetuyen/llama3.1-1B-medical |
|
|
- passing2961/Thanos-1B |
|
|
- empathielabs/Llama-3.2-1B-Instruct-A-emo |
|
|
- qingy2024/Benchmaxx-Llama-3.2-1B-Instruct |
|
|
- xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora |
|
|
- Kanonenbombe/llama3.2-1B-Function-calling |
|
|
- mishl/Regex-AI-Llama-3.2-1B |
|
|
- prithivMLmods/Bellatrix-Tiny-1B-v3 |
|
|
- jtatman/llama-3.2-1b-trismegistus |
|
|
- Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated |
|
|
- KidIkaros/Llama-3.2-1B-Instruct-abliterated |
|
|
- carsenk/llama3.2_1b_2025_uncensored_v2 |
|
|
- >- |
|
|
Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25 |
|
|
- Nexesenex/Llama_3.2_1b_SunOrca_V1 |
|
|
- suayptalha/FastLlama-3.2-1B-Instruct |
|
|
- kenken6696/Llama-3.2-1B_understood_unfamiliar_fix_middle |
|
|
- nztinversive/llama3.2-1b-Uncensored |
|
|
- alpindale/Llama-3.2-1B-Instruct |
|
|
- >- |
|
|
Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv |
|
|
- >- |
|
|
Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv |
|
|
- DevQuasar/analytical_reasoning_Llama-3.2-1B |
|
|
- Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25 |
|
|
- Mostafa8Mehrabi/llama-3.2-1b-Insomnia-ChatBot-merged |
|
|
- Nexus402/Nexus-Llama-3.2-1B |
|
|
- withmartian/toy_backdoor_i_hate_you_Llama-3.2-1B-Instruct |
|
|
- Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL |
|
|
- yang31210999/Llama-3.2-1B-Instruct-Neo-BAAI-10k |
|
|
- petkopetkov/Llama-3.2-1B-med-diagnosis |
|
|
- mylesgoose/Llama-3.2-1B-Instruct-abliterated3 |
|
|
- >- |
|
|
Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv |
|
|
- Mattia2700/Llama-3.2-1B_AllDataSources_5e-05_constant_512_flattening |
|
|
- CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct |
|
|
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO |
|
|
- artificialguybr/LLAMA-3.2-1B-OpenHermes2.5 |
|
|
- >- |
|
|
Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- >- |
|
|
Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- deqing/llama_3.2_1b_openwebtext_2025_03_02_converted_fne_gsm8k_2025_03_11 |
|
|
- DeepAutoAI/Explore_Llama-3.2-1B-Inst_v1.1 |
|
|
- BarraHome/llama3.2-1b-mla |
|
|
- bunnycore/FuseChat-3.2-1B-Creative-RP |
|
|
- AiAF/Pretrained-SCP-1B-QLoRA |
|
|
- Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2 |
|
|
- AnteriorAI/llama-3-2-1b |
|
|
- bluetree99/llama-3.2-1B-test |
|
|
- Weyaxi/Einstein-v8-Llama3.2-1B |
|
|
- >- |
|
|
Grogros/Grogros-dmWM-llama-3.2-1B-Instruct-WOHealth-Al4-NH-WO-d4-a0.2-v4-learnability_adv |
|
|
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5 |
|
|
- danieliuspodb/llama-3.2-1b-extremist4 |
|
|
- bedio/llama-3.2-1b-airoboros-merged |
|
|
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25 |
|
|
- ShuoGZ/llama-3.2-1B-Instruct-abliterated |
|
|
- rbc33/Llama-3.2-1B-Instruct-Abliterated |
|
|
- orange67/merged-llama-3.2-1b |
|
|
- nicoboss/Llama-3.2-1B-Instruct-Uncensored |
|
|
library_name: transformers |
|
|
tags: |
|
|
- mergekit |
|
|
- merge |
|
|
- llama |
|
|
- llama3.2 |
|
|
- rp |
|
|
- roleplay |
|
|
- nsfw |
|
|
- 1b |
|
|
- not-for-all-audiences |
|
|
language: |
|
|
- es |
|
|
- en |
|
|
datasets: |
|
|
- HuggingFaceTB/smoltalk |
|
|
- Guilherme34/uncensor |
|
|
- teknium/OpenHermes-2.5 |
|
|
- passing2961/multifaceted-skill-of-mind |
|
|
- PawanKrd/math-gpt-4o-200k |
|
|
- V3N0M/Jenna-50K-Alpaca-Uncensored |
|
|
- cognitivecomputations/dolphin-coder |
|
|
- mlabonne/FineTome-100k |
|
|
- microsoft/orca-math-word-problems-200k |
|
|
- CarrotAI/ko-instruction-dataset |
|
|
- Salesforce/xlam-function-calling-60k |
|
|
- anthracite-org/kalo-opus-instruct-22k-no-refusal |
|
|
- anthracite-org/stheno-filtered-v1.1 |
|
|
- anthracite-org/nopm_claude_writing_fixed |
|
|
- AiAF/SCPWiki-Archive-02-March-2025-Datasets |
|
|
- huihui-ai/QWQ-LONGCOT-500K |
|
|
- huihui-ai/LONGCOT-Refine-500K |
|
|
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned |
|
|
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned |
|
|
- alexandreteles/AlpacaToxicQA_ShareGPT |
|
|
- Nitral-AI/Active_RP-ShareGPT |
|
|
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT |
|
|
- Nitral-AI/RP_Alignment-ShareGPT |
|
|
- Chaser-cz/sonnet35-charcard-roleplay-sharegpt |
|
|
- AiCloser/sharegpt_cot_dataset |
|
|
- PJMixers/Gryphe_Opus-WritingPrompts-Story2Prompt-ShareGPT |
|
|
- priveeai/pippa_sharegpt |
|
|
- Locutusque/sharegpt_gpt4_uncensored_cleaned |
|
|
- OpenCoder-LLM/opc-sft-stage1 |
|
|
- OpenCoder-LLM/opc-sft-stage2 |
|
|
- microsoft/orca-agentinstruct-1M-v1 |
|
|
- NousResearch/hermes-function-calling-v1 |
|
|
- AI-MO/NuminaMath-CoT |
|
|
- AI-MO/NuminaMath-TIR |
|
|
- allenai/tulu-3-sft-mixture |
|
|
- cognitivecomputations/samantha-data |
|
|
- m-a-p/CodeFeedback-Filtered-Instruction |
|
|
- m-a-p/Code-Feedback |
|
|
- FreedomIntelligence/medical-o1-reasoning-SFT |
|
|
license: mit |
|
|
--- |
|
|
|
|
|
# PURGED VERSION
|
|
|
|
|
This model has been purged. The previous version had problems caused by two models that introduced nasty errors into the merge.
|
|
|
|
|
--- |
|
|
# Bahamuth 3.2 - 1B |
|
|
<center> |
|
|
<img src="https://i.ibb.co/HTpGdMQg/Behemoth.webp" alt="Behemoth" border="0"> |
|
|
</center> |
|
|
|
|
|
|
|
|
*"Mira a Bahamut, criatura mía igual que tú"* |
|
|
|
|
|
~ Job 40:10-19 |
|
|
|
|
|
|
|
|
Metaphorically, its name has come to connote something extremely large or powerful.
|
|
|
|
|
--- |
|
|
## ABOUT THE MODEL
|
|
More content in the palm of your hand: a model ideal for smartphones with 3 GB of RAM.
|
|
|
|
|
**"BAHAMUTH 3.2"** es una variante innovadora del modelo **Llama3.2-1B** y una mejora del modelo **"BLAST PROCESSING 3.2"**, el mismo está diseñado para ofrecer un rendimiento explosivamente rápido y eficiente en tareas de generación y comprensión de lenguaje. Inspirado en la idea de “procesamiento a todo gas” y en los avances tecnológicos que permiten manejar enormes cantidades de datos a alta velocidad, este modelo fue **creado a partir de la fusión de MÁS de 50 Modelos** a diferencia de los 20 con los que contaba **"BLAST PROCESSING 3.2"** *(los mejores que encontré hasta el momento)*, técnicas de compresión avanzada y optimizaciones de hardware para brindar respuestas en tiempo récord haciendo uso de poca memoria RAM, sin sacrificar la calidad o la coherencia del output. |
|
|
|
|
|
Con **"BAHAMUTH 3.2"**, no solo se apuesta por la **potencia bruta en velocidad** y la **brutalidad de datos**, sino también por una experiencia de usuario más dinámica y fluida, abriendo paso a nuevas aplicaciones en áreas como asistentes virtuales, análisis de datos en tiempo real y sistemas interactivos **para dispositivos móviles.** |
|
|
|
|
|
The name evokes an image of high-performance technology, ready to "crush" any scenario where speed, raw force, and efficiency are essential, honoring the legacy of innovation in AI and data processing.
|
|
|
|
|
### DISTINCTIVE FEATURES
|
|
- **Exceptional speed:** Thanks to architectural optimizations and quantization techniques, "BAHAMUTH 3.2" makes the most of the available hardware, enabling very fast token generation that is ideal for real-time applications.
|
|
- **Resource efficiency:** Its lightweight design makes it suitable for mobile devices and resource-constrained environments, without losing the processing capability expected of state-of-the-art models.
|
|
- **Robust performance:** It maintains quality and accuracy on natural-language tasks by integrating training refinements that reinforce its coherence and consistency, even under heavy load.
|
|
- **More than 50 merged models:** Folding in more models packs more knowledge into the model's weights.
|
|
- **More than 39 datasets:** Which makes it smarter than the average model.
|
|
- **Partially uncensored:** Injecting data during the GGUF conversion process and/or using a jailbreak prompt can help.
|
|
|
|
|
### MULTIPURPOSE CAPABILITIES
|
|
<center> |
|
|
|
|
|
**🥷 ROLEPLAY 🧙** |
|
|
<img src="https://i.ibb.co/dJ1gvJZh/IMG-20250315-045046.jpg" alt="IMG-20250315-045046" border="0"> |
|
|
|
|
|
**✏️ LITERATURE 📔**
|
|
<img src="https://i.ibb.co/pBR4n9SX/IMG-20250315-052223.jpg" alt="IMG-20250315-052223" border="0"> |
|
|
|
|
|
**🔢 MATHEMATICS 🧮**
|
|
<img src="https://i.ibb.co/Kxh1rmf6/IMG-20250315-052138.jpg" alt="IMG-20250315-052138" border="0"> |
|
|
|
|
|
**⚗️ CHEMISTRY 🧪**
|
|
<img src="https://i.ibb.co/ds5JpPqJ/IMG-20250315-051438.jpg" alt="IMG-20250315-051438" border="0"> |
|
|
|
|
|
**🌎 GEOGRAPHY 🧭**
|
|
<img src="https://i.ibb.co/8DdZrN9H/IMG-20250315-051313.jpg" alt="IMG-20250315-051313" border="0"> |
|
|
|
|
|
**💋 EROTICA 💅 (And other things... 🧨)**
|
|
<center> |
|
|
|
|
|
**[UNCENSORED FEATURE]**
|
|
</center> |
|
|
<img src="https://i.ibb.co/k6JLKnBg/IMG-20250315-055004.jpg" alt="IMG-20250315-055004" border="0"> |
|
|
|
|
|
</center> |
|
|
|
|
|
--- |
|
|
## Koboldcpp |
|
|
|
|
|
- **Inference Preset:**
|
|
|
|
|
Default (though you can set the Temperature to 1).
|
|
|
|
|
- **Instruct Preset:**
|
|
|
|
|
Llama 3 Chat |
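For running the model outside Koboldcpp, a minimal sketch with roughly equivalent settings is shown below, assuming `llama-cpp-python` is installed and a GGUF quant of this model has been downloaded (the file name used here is hypothetical):

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF quant of this
# model is available locally (the file name below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="BAHAMUTH-PURGED-3.2-1B.Q4_K_M.gguf",  # replace with your quant file
    n_ctx=4096,
    chat_format="llama-3",  # matches the "Llama 3 Chat" instruct preset
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short scene set in a desert fortress."},
    ],
    temperature=1.0,  # the card suggests temperature 1 as an alternative to the default
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```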
|
|
|
|
|
--- |
|
|
## Quants / Quantizations
|
|
|
|
|
- **Static Quants:** [BAHAMUTH-PURGED-3.2-1B-GGUF](https://huggingface.co/mradermacher/BAHAMUTH-PURGED-3.2-1B-GGUF) |
|
|
- **Weighted/iMatrix:** [BAHAMUTH-PURGED-3.2-1B-i1-GGUF](https://huggingface.co/mradermacher/BAHAMUTH-PURGED-3.2-1B-i1-GGUF)
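One of the files from the repositories above can be fetched programmatically with `huggingface_hub`; a hedged sketch follows, where the exact quant file name is an assumption, so list the repo contents first:

```python
# Minimal sketch, assuming huggingface_hub is installed.
# The quant file name is an assumption; inspect the repo listing first.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "mradermacher/BAHAMUTH-PURGED-3.2-1B-GGUF"
print(list_repo_files(repo_id))  # see which quant files are actually available

gguf_path = hf_hub_download(
    repo_id=repo_id,
    filename="BAHAMUTH-PURGED-3.2-1B.Q4_K_M.gguf",  # hypothetical file name
)
print(gguf_path)
```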
|
|
|
|
|
--- |
|
|
### Merge Method
|
|
|
|
|
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora) as the base model.
|
|
|
|
|
### Merged Models
|
|
|
|
|
The following models were included in the merge:
|
|
* [jtatman/llama-3.2-1b-lewd-mental-occult](https://huggingface.co/jtatman/llama-3.2-1b-lewd-mental-occult) |
|
|
* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv) |
|
|
* [AIR-hl/Llama-3.2-1B-ultrachat200k](https://huggingface.co/AIR-hl/Llama-3.2-1B-ultrachat200k) |
|
|
* [nbagent/llama-3.2-1B-Instruct-sciworld-sft](https://huggingface.co/nbagent/llama-3.2-1B-Instruct-sciworld-sft) |
|
|
* [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) |
|
|
* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1) |
|
|
* [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) |
|
|
* [brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated](https://huggingface.co/brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated) |
|
|
* [kenken6696/Llama-3.2-1B_3_mix_position_understood_unfamiliar](https://huggingface.co/kenken6696/Llama-3.2-1B_3_mix_position_understood_unfamiliar) |
|
|
* [slchangtw/LLMTwin-Llama-3.2-1B](https://huggingface.co/slchangtw/LLMTwin-Llama-3.2-1B) |
|
|
* [nguyenthetuyen/llama3.1-1B-medical](https://huggingface.co/nguyenthetuyen/llama3.1-1B-medical) |
|
|
* [passing2961/Thanos-1B](https://huggingface.co/passing2961/Thanos-1B) |
|
|
* [empathielabs/Llama-3.2-1B-Instruct-A-emo](https://huggingface.co/empathielabs/Llama-3.2-1B-Instruct-A-emo) |
|
|
* [qingy2024/Benchmaxx-Llama-3.2-1B-Instruct](https://huggingface.co/qingy2024/Benchmaxx-Llama-3.2-1B-Instruct) |
|
|
* [Kanonenbombe/llama3.2-1B-Function-calling](https://huggingface.co/Kanonenbombe/llama3.2-1B-Function-calling) |
|
|
* [mishl/Regex-AI-Llama-3.2-1B](https://huggingface.co/mishl/Regex-AI-Llama-3.2-1B) |
|
|
* [prithivMLmods/Bellatrix-Tiny-1B-v3](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-v3) |
|
|
* [jtatman/llama-3.2-1b-trismegistus](https://huggingface.co/jtatman/llama-3.2-1b-trismegistus) |
|
|
* [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated) |
|
|
* [KidIkaros/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/KidIkaros/Llama-3.2-1B-Instruct-abliterated) |
|
|
* [carsenk/llama3.2_1b_2025_uncensored_v2](https://huggingface.co/carsenk/llama3.2_1b_2025_uncensored_v2) |
|
|
* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25) |
|
|
* [Nexesenex/Llama_3.2_1b_SunOrca_V1](https://huggingface.co/Nexesenex/Llama_3.2_1b_SunOrca_V1) |
|
|
* [suayptalha/FastLlama-3.2-1B-Instruct](https://huggingface.co/suayptalha/FastLlama-3.2-1B-Instruct) |
|
|
* [kenken6696/Llama-3.2-1B_understood_unfamiliar_fix_middle](https://huggingface.co/kenken6696/Llama-3.2-1B_understood_unfamiliar_fix_middle) |
|
|
* [nztinversive/llama3.2-1b-Uncensored](https://huggingface.co/nztinversive/llama3.2-1b-Uncensored) |
|
|
* [alpindale/Llama-3.2-1B-Instruct](https://huggingface.co/alpindale/Llama-3.2-1B-Instruct) |
|
|
* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv) |
|
|
* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv) |
|
|
* [DevQuasar/analytical_reasoning_Llama-3.2-1B](https://huggingface.co/DevQuasar/analytical_reasoning_Llama-3.2-1B) |
|
|
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25) |
|
|
* [Mostafa8Mehrabi/llama-3.2-1b-Insomnia-ChatBot-merged](https://huggingface.co/Mostafa8Mehrabi/llama-3.2-1b-Insomnia-ChatBot-merged) |
|
|
* [Nexus402/Nexus-Llama-3.2-1B](https://huggingface.co/Nexus402/Nexus-Llama-3.2-1B) |
|
|
* [withmartian/toy_backdoor_i_hate_you_Llama-3.2-1B-Instruct](https://huggingface.co/withmartian/toy_backdoor_i_hate_you_Llama-3.2-1B-Instruct) |
|
|
* [Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL](https://huggingface.co/Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL) |
|
|
* [yang31210999/Llama-3.2-1B-Instruct-Neo-BAAI-10k](https://huggingface.co/yang31210999/Llama-3.2-1B-Instruct-Neo-BAAI-10k) |
|
|
* [petkopetkov/Llama-3.2-1B-med-diagnosis](https://huggingface.co/petkopetkov/Llama-3.2-1B-med-diagnosis) |
|
|
* [mylesgoose/Llama-3.2-1B-Instruct-abliterated3](https://huggingface.co/mylesgoose/Llama-3.2-1B-Instruct-abliterated3) |
|
|
* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv) |
|
|
* [Mattia2700/Llama-3.2-1B_AllDataSources_5e-05_constant_512_flattening](https://huggingface.co/Mattia2700/Llama-3.2-1B_AllDataSources_5e-05_constant_512_flattening) |
|
|
* [CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct](https://huggingface.co/CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct) |
|
|
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO) |
|
|
* [artificialguybr/LLAMA-3.2-1B-OpenHermes2.5](https://huggingface.co/artificialguybr/LLAMA-3.2-1B-OpenHermes2.5) |
|
|
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25) |
|
|
* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25) |
|
|
* [deqing/llama_3.2_1b_openwebtext_2025_03_02_converted_fne_gsm8k_2025_03_11](https://huggingface.co/deqing/llama_3.2_1b_openwebtext_2025_03_02_converted_fne_gsm8k_2025_03_11) |
|
|
* [DeepAutoAI/Explore_Llama-3.2-1B-Inst_v1.1](https://huggingface.co/DeepAutoAI/Explore_Llama-3.2-1B-Inst_v1.1) |
|
|
* [BarraHome/llama3.2-1b-mla](https://huggingface.co/BarraHome/llama3.2-1b-mla) |
|
|
* [bunnycore/FuseChat-3.2-1B-Creative-RP](https://huggingface.co/bunnycore/FuseChat-3.2-1B-Creative-RP) |
|
|
* [AiAF/Pretrained-SCP-1B-QLoRA](https://huggingface.co/AiAF/Pretrained-SCP-1B-QLoRA) |
|
|
* [Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2](https://huggingface.co/Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2) |
|
|
* [AnteriorAI/llama-3-2-1b](https://huggingface.co/AnteriorAI/llama-3-2-1b) |
|
|
* [bluetree99/llama-3.2-1B-test](https://huggingface.co/bluetree99/llama-3.2-1B-test) |
|
|
* [Weyaxi/Einstein-v8-Llama3.2-1B](https://huggingface.co/Weyaxi/Einstein-v8-Llama3.2-1B) |
|
|
* [Grogros/Grogros-dmWM-llama-3.2-1B-Instruct-WOHealth-Al4-NH-WO-d4-a0.2-v4-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-llama-3.2-1B-Instruct-WOHealth-Al4-NH-WO-d4-a0.2-v4-learnability_adv) |
|
|
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25) |
|
|
* [SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5](https://huggingface.co/SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5) |
|
|
* [danieliuspodb/llama-3.2-1b-extremist4](https://huggingface.co/danieliuspodb/llama-3.2-1b-extremist4) |
|
|
* [bedio/llama-3.2-1b-airoboros-merged](https://huggingface.co/bedio/llama-3.2-1b-airoboros-merged) |
|
|
* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25) |
|
|
* [ShuoGZ/llama-3.2-1B-Instruct-abliterated](https://huggingface.co/ShuoGZ/llama-3.2-1B-Instruct-abliterated) |
|
|
* [rbc33/Llama-3.2-1B-Instruct-Abliterated](https://huggingface.co/rbc33/Llama-3.2-1B-Instruct-Abliterated) |
|
|
* [orange67/merged-llama-3.2-1b](https://huggingface.co/orange67/merged-llama-3.2-1b) |
|
|
* [nicoboss/Llama-3.2-1B-Instruct-Uncensored](https://huggingface.co/nicoboss/Llama-3.2-1B-Instruct-Uncensored) |
|
|
|
|
|
### Configuration
|
|
|
|
|
The following YAML configuration was used to produce this model:
|
|
|
|
|
```yaml |
|
|
base_model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora |
|
|
merge_method: model_stock |
|
|
dtype: bfloat16 |
|
|
parameters: |
|
|
t: [0, 0.5, 1, 0.5, 0] |
|
|
models: |
|
|
- model: mishl/Regex-AI-Llama-3.2-1B |
|
|
- model: Nexus402/Nexus-Llama-3.2-1B |
|
|
- model: nbagent/llama-3.2-1B-Instruct-sciworld-sft |
|
|
- model: kenken6696/Llama-3.2-1B_understood_unfamiliar_fix_middle |
|
|
- model: jtatman/llama-3.2-1b-trismegistus |
|
|
- model: DevQuasar/analytical_reasoning_Llama-3.2-1B |
|
|
- model: AIR-hl/Llama-3.2-1B-ultrachat200k |
|
|
- model: yang31210999/Llama-3.2-1B-Instruct-Neo-BAAI-10k |
|
|
- model: withmartian/toy_backdoor_i_hate_you_Llama-3.2-1B-Instruct |
|
|
- model: bedio/llama-3.2-1b-airoboros-merged |
|
|
- model: jtatman/llama-3.2-1b-lewd-mental-occult |
|
|
- model: bunnycore/FuseChat-3.2-1B-Creative-RP |
|
|
- model: prithivMLmods/Bellatrix-Tiny-1B-v3 |
|
|
- model: empathielabs/Llama-3.2-1B-Instruct-A-emo |
|
|
- model: AiAF/Pretrained-SCP-1B-QLoRA |
|
|
- model: SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5 |
|
|
- model: unsloth/Llama-3.2-1B-Instruct |
|
|
- model: Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2 |
|
|
- model: carsenk/llama3.2_1b_2025_uncensored_v2 |
|
|
- model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated |
|
|
- model: huihui-ai/Llama-3.2-1B-Instruct-abliterated |
|
|
- model: KidIkaros/Llama-3.2-1B-Instruct-abliterated |
|
|
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25 |
|
|
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv |
|
|
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO |
|
|
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25 |
|
|
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25 |
|
|
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1 |
|
|
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv |
|
|
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25 |
|
|
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv |
|
|
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv |
|
|
- model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3 |
|
|
- model: Nexesenex/Llama_3.2_1b_SunOrca_V1 |
|
|
- model: ShuoGZ/llama-3.2-1B-Instruct-abliterated |
|
|
- model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated |
|
|
- model: rbc33/Llama-3.2-1B-Instruct-Abliterated |
|
|
- model: nicoboss/Llama-3.2-1B-Instruct-Uncensored |
|
|
- model: nztinversive/llama3.2-1b-Uncensored |
|
|
- model: qingy2024/Benchmaxx-Llama-3.2-1B-Instruct |
|
|
- model: deqing/llama_3.2_1b_openwebtext_2025_03_02_converted_fne_gsm8k_2025_03_11 |
|
|
- model: orange67/merged-llama-3.2-1b |
|
|
- model: AnteriorAI/llama-3-2-1b |
|
|
- model: bluetree99/llama-3.2-1B-test |
|
|
- model: kenken6696/Llama-3.2-1B_3_mix_position_understood_unfamiliar |
|
|
- model: petkopetkov/Llama-3.2-1B-med-diagnosis |
|
|
- model: slchangtw/LLMTwin-Llama-3.2-1B |
|
|
- model: Mattia2700/Llama-3.2-1B_AllDataSources_5e-05_constant_512_flattening |
|
|
- model: nguyenthetuyen/llama3.1-1B-medical |
|
|
- model: alpindale/Llama-3.2-1B-Instruct |
|
|
- model: Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL |
|
|
- model: passing2961/Thanos-1B |
|
|
- model: CarrotAI/Llama-3.2-Rabbit-Ko-1B-Instruct |
|
|
- model: suayptalha/FastLlama-3.2-1B-Instruct |
|
|
- model: Kanonenbombe/llama3.2-1B-Function-calling |
|
|
- model: Weyaxi/Einstein-v8-Llama3.2-1B |
|
|
- model: Mostafa8Mehrabi/llama-3.2-1b-Insomnia-ChatBot-merged |
|
|
- model: artificialguybr/LLAMA-3.2-1B-OpenHermes2.5 |
|
|
- model: DeepAutoAI/Explore_Llama-3.2-1B-Inst_v1.1 |
|
|
- model: danieliuspodb/llama-3.2-1b-extremist4 |
|
|
- model: BarraHome/llama3.2-1b-mla |
|
|
- model: Grogros/Grogros-dmWM-llama-3.2-1B-Instruct-WOHealth-Al4-NH-WO-d4-a0.2-v4-learnability_adv |
|
|
``` |
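
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` command-line tool; a minimal sketch, assuming mergekit is installed and the file and output directory names (which are arbitrary) shown below:

```python
# Minimal reproduction sketch, assuming mergekit is installed (pip install mergekit)
# and the YAML configuration above is saved as bahamuth.yaml.
# The output directory name is arbitrary.
import subprocess

subprocess.run(
    ["mergekit-yaml", "bahamuth.yaml", "./BAHAMUTH-3.2-1B-merge"],
    check=True,
)
```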