---
license: apache-2.0
base_model: kakaocorp/kanana-safeguard-prompt-2.1b
quantized_by: Arc1el
quantization_method: bitsandbytes
tags:
- quantized
- 4bit
- bitsandbytes
- safeguard
- korean
---
# Kanana Safeguard Prompt 2.1B - 4bit Quantized
A 4-bit quantized version of kakaocorp/kanana-safeguard-prompt-2.1b.
## Model Information
- **Base model**: [kakaocorp/kanana-safeguard-prompt-2.1b](https://huggingface.co/kakaocorp/kanana-safeguard-prompt-2.1b)
- **Quantization method**: BitsAndBytes 4-bit (NF4)
- **Parameters**: 2.1B
- **Intended use**: prompt safety verification
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nxtcloud-org/kanana-safeguard-prompt-2.1b-4bit"

# 4-bit NF4 quantization with double quantization; compute runs in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available GPU(s) automatically
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
## System Requirements
- GPU memory: 2 GB+
- RAM: 4 GB+
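As a rough sanity check on the 2 GB figure, the 4-bit weights alone come to roughly one gibibyte; the remaining headroom covers activations, the KV cache, and CUDA overhead at inference time. A minimal back-of-the-envelope estimate, using only the 2.1B parameter count stated above:

```python
# Approximate GPU memory for the 4-bit quantized weights alone.
num_params = 2.1e9       # 2.1B parameters (from the model card)
bytes_per_param = 4 / 8  # 4 bits = 0.5 bytes per parameter

weights_gib = num_params * bytes_per_param / 1024**3
print(f"~{weights_gib:.2f} GiB for weights alone")  # → ~0.98 GiB for weights alone
```

The actual on-disk and in-memory size is somewhat larger, since NF4 quantization stores per-block scaling constants and some layers (e.g. embeddings) may remain in higher precision.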
## License
Apache License 2.0 (same as the base model)