---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
library_name: transformers
pipeline_tag: text-generation
tags:
- gguf
- fine-tuned
- lima
language:
- en
license: apache-2.0
---

# Llama-3.2-1B-Instruct-bnb-4bit-lima - GGUF Format

GGUF-format quantizations of Llama 3.2 1B Instruct fine-tuned on the LIMA dataset, for use with llama.cpp and Ollama.

## Model Details

- **Base Model**: [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
- **Format**: GGUF
- **Dataset**: [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
- **Size**: 0.75 GB to 2.31 GB, depending on quantization
- **Usage**: llama.cpp / Ollama (see the quickstart under "Available Quantizations" below)

## Related Models

- **LoRA Adapters**: [fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima-lora](https://huggingface.co/fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima-lora) - Smaller LoRA-only adapters
- **Merged FP16 Model**: [fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima](https://huggingface.co/fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima) - Original unquantized model in FP16


## Prompt Format

This model uses the **Llama 3.2** chat template.
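
For reference, a rendered single-turn prompt in this format looks like the following (the system and user messages are illustrative):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

What is GGUF?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```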

### Ollama Template Format

```
{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}
```
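
To use this template with Ollama, wrap it in a Modelfile. A minimal sketch, assuming the Q4_K_M file has been downloaded to the current directory (the model name `llama32-lima` and the local filename are placeholders; adjust to the file you actually use):

```
# Modelfile: build a local Ollama model from a downloaded GGUF file
FROM ./Llama-3.2-1B-Instruct-bnb-4bit-lima-Q4_K_M.gguf

# Paste the full template from the "Ollama Template Format" section above:
TEMPLATE """..."""

# Stop generation at the end-of-turn token
PARAMETER stop "<|eot_id|>"
```

Build and chat with it via `ollama create llama32-lima -f Modelfile` followed by `ollama run llama32-lima`.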


## Training Details

- **LoRA Rank**: 16
- **Training Steps**: 129
- **Training Loss**: 2.3025
- **Max Seq Length**: 4086
- **Training Scope**: 1,030 samples (1 epoch, full dataset)

For the complete training configuration, see the [LoRA adapters repository](https://huggingface.co/fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima-lora).

## Available Quantizations

| Quantization | File | Size | Quality |
|--------------|------|------|---------|
| **F16** | [Llama-3.2-1B-Instruct-bnb-4bit-lima-F16.gguf](Llama-3.2-1B-Instruct-bnb-4bit-lima-F16.gguf) | 2.31 GB | Full precision (largest) |
| **Q4_K_M** | [Llama-3.2-1B-Instruct-bnb-4bit-lima-Q4_K_M.gguf](Llama-3.2-1B-Instruct-bnb-4bit-lima-Q4_K_M.gguf) | 0.75 GB | Good balance (recommended) |
| **Q6_K** | [Llama-3.2-1B-Instruct-bnb-4bit-lima-Q6_K.gguf](Llama-3.2-1B-Instruct-bnb-4bit-lima-Q6_K.gguf) | 0.95 GB | High quality |
| **Q8_0** | [Llama-3.2-1B-Instruct-bnb-4bit-lima-Q8_0.gguf](Llama-3.2-1B-Instruct-bnb-4bit-lima-Q8_0.gguf) | 1.23 GB | Very high quality, near original |

**Usage:** Download a single quantization file from the table above; Q4_K_M is the recommended starting point. A minimal quickstart follows.
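
A command-line sketch, assuming this repository's id follows the sibling repositories' naming (`fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima-gguf`; substitute the actual repository id if it differs) and that llama.cpp is installed:

```
# Download one quantization (Q4_K_M shown; pick any file from the table)
huggingface-cli download fs90/Llama-3.2-1B-Instruct-bnb-4bit-lima-gguf \
  Llama-3.2-1B-Instruct-bnb-4bit-lima-Q4_K_M.gguf --local-dir .

# Interactive chat with llama.cpp; -cnv applies the chat template
# embedded in the GGUF metadata
llama-cli -m Llama-3.2-1B-Instruct-bnb-4bit-lima-Q4_K_M.gguf -cnv
```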

## License

This model is based on unsloth/Llama-3.2-1B-Instruct-bnb-4bit and was trained on GAIR/lima.
Please refer to the original model and dataset licenses.

## Credits

**Trained by:** Farhan Syah

**Training pipeline:**
- [unsloth-finetuning](https://github.com/farhan-syah/unsloth-finetuning) by [@farhan-syah](https://github.com/farhan-syah)
- [Unsloth](https://github.com/unslothai/unsloth) - 2x faster LLM fine-tuning

**Base components:**
- Base model: [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
- Training dataset: [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) by GAIR