geoffmunn committed on
Commit bd5d0b3 · verified · 1 Parent(s): ce9162a

Model info updated

Files changed (1):
1. Qwen3-0.6B-Q5_K_S/README.md (+2 −2)
@@ -20,7 +20,7 @@ Quantized version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) a
 ## Model Info
 
 - **Format**: GGUF (for llama.cpp and compatible runtimes)
-- **Size**: 519M
+- **Size**: 544 MB
 - **Precision**: Q5_K_S
 - **Base Model**: [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
 - **Conversion Tool**: [llama.cpp](https://github.com/ggerganov/llama.cpp)
@@ -97,7 +97,7 @@ Here’s how you can query this model via API using `curl` and `jq`. Replace the
 
 ```bash
 curl http://localhost:11434/api/generate -s -N -d '{
-"model": "hf.co/geoffmunn/Qwen3-0.6B:Q5_K_S;2D",
+"model": "hf.co/geoffmunn/Qwen3-0.6B:Q5_K_S",
 "prompt": "Respond exactly as follows: Write a short joke about cats.",
 "temperature": 0.8,
 "top_p": 0.95,